This week’s hardware news recap primarily focuses on Intel’s Minix implementation and creator Andrew Tanenbaum’s thoughts on Intel’s unannounced adoption of the OS, along with some new information on the AMD + Intel multi-chip module (MCM) that’s coming to market. Supporting news items for the week include some GN-style commentary on a new “gaming” chair with case fans in it, updates on nVidia’s quarterly earnings, Corsair’s new “fastest” memory, and EK’s 560mm radiators.
Find the show notes after the embedded video.
Everyone’s been asking why the GTX 1070 Ti exists, noting that the flanking GTX 1080 and GTX 1070 cards largely invalidate its narrow price positioning. In a span of $100-$150, nVidia manages to segment three products, thus spurring the questions. We think the opposite: the 1070 Ti has plenty of reason to exist, but the 1080 is now the less desirable of the options. Regardless of which (largely irrelevant) viewpoint you take, there is now a 1070, a 1070 Ti, and a 1080, and they’re all close enough that not all of them need to live. One should die – it’s just a matter of which. It doesn’t make sense to kill the 1070 – it’s too far from the GTX 1080, at 1920 vs. 2560 cores, and fills a lower-end market. The 1070 Ti is brand new, so that’s not dying today. The 1080, though, has been encroached upon by the 1070 Ti, which sits just one SM and some Micron memory shy of being a full ten digits higher in numerical nomenclature.
For the basics, the GTX 1070 Ti is functionally a GTX 1080, just with one SM neutered. NVidia has disabled a single streaming multiprocessor, which contains 128 CUDA cores and 8 texture map units, dropping the total to 2432 CUDA cores. That compares to 2560 cores on the 1080 and 1920 cores on the 1070. The GTX 1070 Ti is much closer in relation to a 1080 than a 1070, and its $450-$480 average list price reinforces that, as GTX 1080s were available in that range before the mining explosion (when on sale, granted).
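The core counts above fall directly out of the per-SM arithmetic; as a quick sketch (the 128-cores-per-SM figure for GP104 is from the text, and the SM counts per card are the ones discussed here):

```python
# GP104 (Pascal) packs 128 CUDA cores into each streaming multiprocessor (SM),
# so the total core count is simply the enabled SM count times 128.
CORES_PER_SM = 128

def cuda_cores(sm_count: int) -> int:
    """Total CUDA cores for a GP104 part with the given enabled SM count."""
    return sm_count * CORES_PER_SM

print(cuda_cores(20))  # GTX 1080:    2560 cores
print(cuda_cores(19))  # GTX 1070 Ti: 2432 cores (one SM disabled)
print(cuda_cores(15))  # GTX 1070:    1920 cores
```

This makes the positioning obvious: the 1070 Ti is one SM (128 cores) shy of a 1080, but four SMs (512 cores) above a 1070.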
Buildzoid returns with an analysis of the Colorful GTX 1070 Ti Vulcan X PCB and VRM, including some brief discussion on shorting the shunts of the new 1070 Ti card. Colorful is attempting to get into the Western market, and the GTX 1070 Ti launch will be their maiden voyage in that attempt. We received the Vulcan X card first -- for which we presently have no MSRP -- and tore it down a few days ago. Our PCB analysis, embedded below, takes an XOCer's look at the VRM quality and implementation.
Learn more below:
NVIDIA just posted its 388.10 drivers for Wolfenstein II, building on the earlier-launched 388.00 driver update for Destiny 2. Aside from hotfixes, the driver package does not change any core functionality or performance of nVidia GTX cards. This is similar to AMD's latest hotfix for its Vega cards in Destiny 2: only download and install 388.10 if you are actively running into issues with the game at hand.
On its forums, an nVidia representative posted:
Along with the announcement of the nVidia GTX 1070 Ti, AMD officially announced its Raven Ridge and Ryzen Mobile products, shortly after a positive quarterly earnings report. This week has been a busy one for hardware news, as these announcements were accompanied by the finalization of the PCIe 4.0 specification v1.0 document, with PCI-SIG now ramping the PCIe 5.0 spec into design.
Show notes are listed below, with the video here:
NVidia’s much-rumored GTX 1070 Ti will launch on November 2, 2017, with initial information disseminated today. The 1070 Ti uses a GP104-300 GPU, slotted between the GP104-400 and GP104-200 of the 1080 and 1070 (respectively), and therefore uses the same silicon as we’ve seen before. This is likely the final Pascal launch before leading into Volta, and is seemingly the response to AMD’s Vega 56, which challenged the GTX 1070 non-Ti.
The 1070 Ti is slightly cut-down from the 1080, the former running 19 SMs for 2432 CUDA cores (at 128 shaders per SM), with the latter running 20 SMs. The practical differences will likely come down primarily to clocks: the 1070 Ti operates at 1607/1683MHz base/boost, and AIB partners are not permitted to offer pre-overclocked versions. For all intents and purposes, outside of the usual cooling, VRM, and silicon quality differences (random, at best), all AIB partner cards will perform identically in out-of-box states. Silicon quality will account for the biggest differences, with cooler quality – anything with an exceptionally bad cooler, primarily – differentiating the rest.
As we understand it now, users will be able to manually overclock the 1070 Ti with software. See the specs below:
As stated in the video intro, this benchmark contains some cool data that was exciting to work with. We don’t normally accumulate enough data to run historical trend plots across various driver or game revisions, but our initial Destiny 2 pre-launch benchmarks enabled us to compare that data against the game’s official launch. Bridging our pre-launch beta benchmarks with similar testing methods for the Destiny 2 PC launch, including driver changes, makes it easier to analyze the deviation between CPU, driver, and game code optimizations.
Recapping the previous tests, we already ran a wide suite of Destiny 2 benchmarks that included performance scaling tests in PvP multiplayer, campaign/co-op multiplayer, and various levels/worlds in the game. Find some of that content below:
- Destiny 2 Beta GPU Benchmark (+ graphics optimization guide, PvP scalability)
- Destiny 2 Beta CPU Benchmark (soon replaced by our Destiny 2 launch CPU bench)
- Destiny 2 texture comparison
NOTE: Our Destiny 2 CPU benchmark is now live.
Some of our original graphics optimization work also carried forward, allowing us to better pinpoint Depth of Field on Highest as one of the major culprits in AMD’s lower performance. This has changed somewhat with launch, as you’ll find below.
We’re sticking with FXAA for testing. Bungie ended up removing MSAA entirely, as the technique had been buggy since the beta, leaving only SMAA and FXAA in its place.
Following up on our tear-down of the ASUS ROG Strix Vega 64 graphics card, Buildzoid of Actually Hardcore Overclocking now visits the PCB for an in-depth VRM & PCB analysis. The big question was whether ASUS could reasonably outdo AMD's reference design, which is shockingly good for a card with such a bad cooler. "Reasonably," in this sentence, means "within reasonable cost" -- there's not much price-to-performance headroom with Vega, so any custom cards will have to keep MSRP as low as possible while still iterating on the cooler.
The PCB & VRM analysis is below, but we're still on hold for performance testing. As of right now, we are waiting on ASUS to finalize its VBIOS for best compatibility with AMD's drivers. It seems that there is some more discussion between AIB partners and AMD for this generation, which is introducing a bit of latency on launches. For now, here's the PCB analysis -- timestamps are on the left-side of the video:
We’ve already sent off the information contained in this video to Buildzoid, who has produced a PCB & VRM analysis of the ROG Strix Vega 64 by ASUS. That content will go live within the next few days, and will talk about whether the Strix card manages to outmatch AMD’s already-excellent reference PCB design for Vega. Stay tuned for that.
In the meantime, the below is a discussion of the cooling solution and disassembly process for the ASUS ROG Strix Vega 64 card. For cooling, ASUS is using a triple-fan solution similar to the one we highly praised on its 1080 Ti Strix model (remarkable for its noise-normalized cooling performance), along with a similar heatsink layout.
Learn more here:
We’re winding down coverage of Vega at this point, but we’ve got a couple more curiosities to explore. This content piece looks at clock scalability for Vega across a few key clocks (for core and HBM2), and attempts to control for the CU difference, to some extent. We obviously can’t fully control down to the shader level (as CUs carry more than just shaders), but we can get close to it. Note that the video content does generally refer to the V56 & V64 difference as one of shaders, but the CUs contain more than just shaders.
In our initial AMD shader comparison between Vega 56 and Vega 64, we saw nearly identical performance between the cards when clock-matched to roughly 1580~1590MHz core and 945MHz HBM2. We’re now exploring performance across a range of frequency settings, from ~1400MHz core to ~1660MHz core, and from 800MHz HBM2 to ~1050MHz HBM2.
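For context on what those HBM2 clocks mean in practice, memory bandwidth scales linearly with memory clock on Vega's 2048-bit HBM2 bus (double data rate); a minimal sketch using the clocks tested here:

```python
# Vega 56/64 use a 2048-bit HBM2 interface; HBM2 is double data rate,
# so bandwidth = clock * 2 transfers/cycle * bus width in bytes.
BUS_WIDTH_BITS = 2048

def hbm2_bandwidth_gbps(mem_clock_mhz: float) -> float:
    """Approximate peak memory bandwidth in GB/s for a given HBM2 clock."""
    return mem_clock_mhz * 1e6 * 2 * (BUS_WIDTH_BITS / 8) / 1e9

print(round(hbm2_bandwidth_gbps(945), 1))   # V64 stock: 483.8 GB/s
print(round(hbm2_bandwidth_gbps(800), 1))   # V56 stock: 409.6 GB/s
print(round(hbm2_bandwidth_gbps(1020), 1))  # OC tested here: 522.2 GB/s
```

This is why HBM2 overclocks tend to matter so much on Vega: moving V56's memory from 800MHz to 1020MHz is a ~27% bandwidth uplift on its own.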
This content piece was originally written and filmed about ten days ago, ready to go live, but we then decided to put a hold on the content and update it. Against initial plans, we ended up flashing V64 VBIOS onto the V56 to give us more voltage headroom for HBM2 clocks, allowing us to get up to 1020MHz easily on V56. There might be room in there for a bit more of an OC, but 1020MHz proved stable on both our V64 and V56 cards, making it easy to test the two comparatively.