NVIDIA’s Battlefront II Game Ready driver, version 388.31, shipped this week in preparation for the game’s worldwide launch. In possibly more positive news for the many redditors enraged by EA’s defense of grinding, the driver is also updated for Injustice 2 compatibility and boasts double-digit percentage performance increases in Destiny 2 at higher resolutions.

Battlefront II is the headliner for this driver release, but the chart below is about all NVIDIA has to say on the subject for now:

This week’s hardware news recap primarily focuses on Intel’s Minix implementation and creator Andrew Tanenbaum’s thoughts on the OS’s adoption without his knowledge, along with some new information on the AMD + Intel multi-chip module (MCM) that’s coming to market. Supporting news items for the week include some GN-style commentary on a new “gaming” chair with case fans in it, updates on NVIDIA’s quarterly earnings, Corsair’s new “fastest” memory, and EK’s 560mm radiators.

Find the show notes after the embedded video.

Everyone’s been asking why the GTX 1070 Ti exists, noting that the flanking GTX 1080 and GTX 1070 cards largely invalidate its narrow price positioning. In a span of $100-$150, NVIDIA manages to segment three products, thus spurring the questions. We think the opposite: the 1070 Ti has plenty of reason to exist, but the 1080 is now the less desirable of the options. Regardless of which (largely irrelevant) viewpoint you take, there is now a 1070, a 1070 Ti, and a 1080, and they’re all close enough that not all of them need to live. One should die; it’s just a matter of which. The 1070 doesn’t make sense to kill: at 1920 cores against the GTX 1080’s 2560, it’s too far removed and fills a lower-end market. The 1070 Ti is brand new, so it’s not dying today. The 1080, though, has been encroached upon by the 1070 Ti, which sits just one SM and some Micron memory shy of being a full ten higher in numerical nomenclature.

For the basics, the GTX 1070 Ti is functionally a GTX 1080 with one SM neutered. NVIDIA has disabled a single streaming multiprocessor, which contains 128 CUDA cores and 8 texture mapping units, dropping the total to 2432 CUDA cores. That compares to 2560 cores on the 1080 and 1920 cores on the 1070. The GTX 1070 Ti is much closer to a 1080 than to a 1070, and its $450-$480 average list price reinforces that, as GTX 1080s were available in that range before the mining explosion (when on sale, granted).
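As a quick sanity check on that arithmetic, here’s a minimal sketch (the 128-cores-per-SM figure is from the paragraph above; the 15-SM count for the GTX 1070 is inferred from 1920 ÷ 128):

```python
# Deriving Pascal GP104 CUDA core counts from SM counts.
# 128 cores per SM is per the article; SM counts follow from the core totals.
CORES_PER_SM = 128

sm_counts = {
    "GTX 1070": 15,
    "GTX 1070 Ti": 19,
    "GTX 1080": 20,
}

for name, sms in sm_counts.items():
    print(f"{name}: {sms} SMs x {CORES_PER_SM} = {sms * CORES_PER_SM} CUDA cores")
# GTX 1070: 15 SMs x 128 = 1920 CUDA cores
# GTX 1070 Ti: 19 SMs x 128 = 2432 CUDA cores
# GTX 1080: 20 SMs x 128 = 2560 CUDA cores
```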

Buildzoid returns with an analysis of the Colorful GTX 1070 Ti Vulcan X PCB and VRM, including some brief discussion on shorting the shunts of the new 1070 Ti card. Colorful is attempting to get into the Western market, and the GTX 1070 Ti launch will be their maiden voyage in that attempt. We received the Vulcan X card first -- for which we presently have no MSRP -- and tore it down a few days ago. Our PCB analysis, embedded below, takes an XOCer's look at the VRM quality and implementation.

Learn more below:

NVIDIA just posted its 388.10 drivers for Wolfenstein II, building on the earlier 388.00 driver update for Destiny 2. Aside from hotfixes, the driver package does not change any core functionality or performance of NVIDIA GTX cards. This is similar to AMD's latest hotfix for its Vega cards in Destiny 2: only download and install 388.10 if you are actively running into issues with the game at hand.

On its forums, an NVIDIA representative posted:

Alongside the announcement of the NVIDIA GTX 1070 Ti, AMD officially announced its Raven Ridge and Ryzen Mobile products, shortly after a positive quarterly earnings report. This week has been a busy one for hardware news: these announcements were accompanied by finalization of the PCIe 4.0 specification's v1.0 document, as PCI-SIG now moves the PCIe 5.0 spec into design.

Show notes are listed below, with the video here:

NVIDIA’s much-rumored GTX 1070 Ti will launch on November 2, 2017, with initial information disseminated today. The 1070 Ti uses a GP104-300 GPU, slotted between the GP104-400 and GP104-200 of the 1080 and 1070 (respectively), and therefore uses the same silicon we’ve seen before. This is likely the final Pascal launch before leading into Volta, and is seemingly a response to AMD’s Vega 56, which challenged the GTX 1070 non-Ti.

The 1070 Ti is slightly cut down from the 1080: the 1070 Ti runs 19 SMs for 2432 CUDA cores (at 128 shaders per SM), while the 1080 runs 20 SMs. The practical result will likely come down primarily to clock differences, as the 1070 Ti operates at 1607MHz base and 1683MHz boost clocks, and AIB partners are not permitted to offer pre-overclocked versions. For all intents and purposes, outside of the usual cooling, VRM, and silicon quality differences (random, at best), all AIB partner cards will perform identically in out-of-box states. Silicon quality will account for the biggest differences, with cooler quality differentiating the rest, primarily where a cooler is exceptionally bad.
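For a sense of what that one-SM cut means on paper, here’s a back-of-envelope estimate using the conventional peak-FP32 approximation (cores × 2 FMA ops per clock × boost clock); the 2-ops-per-clock factor is the standard assumption for this kind of estimate, not a figure from the article:

```python
# Theoretical peak FP32 throughput estimate for the GTX 1070 Ti.
# Core count and clocks are from the article; the factor of 2 assumes one
# fused multiply-add (2 FLOPs) per core per clock, the usual convention.
cuda_cores = 2432
boost_clock_ghz = 1.683

tflops = cuda_cores * 2 * boost_clock_ghz / 1000
print(f"GTX 1070 Ti theoretical FP32: ~{tflops:.2f} TFLOPS")  # ~8.19 TFLOPS
```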

As we understand it now, users will be able to manually overclock the 1070 Ti with software. See the specs below:

As stated in the video intro, this benchmark contains some cool data that was exciting to work with. We don’t normally accumulate enough data to run historical trend plots across various driver or game revisions, but our initial Destiny 2 pre-launch benchmarks enabled us to compare that data against the game’s official launch. Bridging our pre-launch beta benchmarks with similar testing methods for the Destiny 2 PC launch, including driver changes, makes it easier to analyze the deviation between CPU, driver, and game code optimizations.
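As an illustration of the kind of deviation analysis described above, here’s a minimal sketch comparing average FPS for the same test pass across two builds; the numbers are placeholders, not our measured results:

```python
# Percent deviation between pre-launch (beta) and launch benchmark averages.
# FPS values below are hypothetical placeholders for illustration only.
prelaunch_avg_fps = {"GTX 1080": 120.0, "Vega 64": 98.0}
launch_avg_fps = {"GTX 1080": 126.0, "Vega 64": 112.0}

for gpu, beta_fps in prelaunch_avg_fps.items():
    delta_pct = (launch_avg_fps[gpu] - beta_fps) / beta_fps * 100
    print(f"{gpu}: {delta_pct:+.1f}% from beta to launch")
```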

Recapping the previous tests: we’ve already run a wide suite of Destiny 2 benchmarks, including performance scaling tests in PvP multiplayer, campaign/co-op multiplayer, and various levels/worlds in the game. Find some of that content below:

NOTE: Our Destiny 2 CPU benchmark is now live.

Some of our original graphics optimization work also carried forward, allowing us to better pinpoint Depth of Field on Highest as one of the major culprits behind AMD’s performance deficit. This has changed somewhat with launch, as you’ll find below.

We’re sticking with FXAA for testing. Bungie ended up removing MSAA entirely, as the technique had been buggy since the beta, leaving only SMAA and FXAA in its place.

Following up on our tear-down of the ASUS ROG Strix Vega 64 graphics card, Buildzoid of Actually Hardcore Overclocking now visits the PCB for an in-depth VRM & PCB analysis. The big question was whether ASUS could reasonably outdo AMD's reference design, which is shockingly good for a card with such a bad stock cooler. "Reasonably," in this sentence, means "within reasonable cost" -- there's not much price-to-performance headroom with Vega, so any custom cards will have to keep MSRP as low as possible while still iterating on the cooler.

The PCB & VRM analysis is below, but we're still on hold for performance testing. As of right now, we are waiting on ASUS to finalize its VBIOS for best compatibility with AMD's drivers. It seems there is more discussion between AIB partners and AMD this generation, which is introducing a bit of latency on launches. For now, here's the PCB analysis -- timestamps are on the left side of the video:

We’ve already sent off the information contained in this video to Buildzoid, who has produced a PCB & VRM analysis of the ROG Strix Vega 64 by ASUS. That content will go live within the next few days, and will talk about whether the Strix card manages to outmatch AMD’s already-excellent reference PCB design for Vega. Stay tuned for that.

In the meantime, the below is a discussion of the cooling solution and disassembly process for the ASUS ROG Strix Vega 64 card. For cooling, ASUS is using a triple-fan solution similar to the one we highly praised on its 1080 Ti Strix model (remarkable for its noise-normalized cooling performance), along with a similar heatsink layout.

Learn more here:
