Buildzoid returns with an analysis of the Colorful GTX 1070 Ti Vulcan X PCB and VRM, including some brief discussion of shorting the shunts on the new 1070 Ti card. Colorful is attempting to break into the Western market, and the GTX 1070 Ti launch will be its maiden voyage. We received the Vulcan X card first -- we presently have no MSRP for it -- and tore it down a few days ago. Our PCB analysis, embedded below, takes an extreme overclocker's (XOC) look at the VRM quality and implementation.

Learn more below:

NVIDIA just posted its 388.10 drivers for Wolfenstein II, building on the earlier-launched 388.00 driver update for Destiny 2. Aside from hotfixes, the driver package does not change any core functionality or performance of nVidia GTX cards. This is similar to AMD's latest hotfix for its Vega cards in Destiny 2: only download and install 388.10 if you are actively running into issues with the game at hand.

On its forums, an nVidia representative posted:

Along with the announcement of the nVidia GTX 1070 Ti, AMD officially announced its Raven Ridge and Ryzen Mobile products, shortly after a profitable quarterly earnings report. This week has been a busy one for hardware news: these announcements were accompanied by the finalization of the PCIe 4.0 specification's v1.0 document, with PCI-SIG now ramping the PCIe 5.0 spec into design.

Show notes are listed below, with the video here:

NVidia’s much-rumored GTX 1070 Ti will launch on November 2, 2017, with initial information disseminated today. The 1070 Ti uses a GP104-300 GPU, slotted between the GP104-400 and GP104-200 of the GTX 1080 and GTX 1070 (respectively), and therefore uses the same silicon we’ve seen before. This is likely the final Pascal launch before Volta, and is seemingly a response to AMD’s Vega 56, which challenges the GTX 1070 non-Ti.

The 1070 Ti is slightly cut down from the 1080: the former runs 19 SMs for 2432 CUDA cores (at 128 shaders per SM), while the latter runs 20 SMs. The difference will likely come down primarily to clocks, as the 1070 Ti operates at 1607MHz base and 1683MHz boost, and AIB partners are not permitted to ship pre-overclocked versions. For all intents and purposes, outside of the usual cooling, VRM, and silicon quality differences (random, at best), all AIB partner cards will perform identically in out-of-box states. Silicon quality will account for the biggest differences, with cooler quality – anything with an exceptionally bad cooler, primarily – differentiating the rest.
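As a quick sanity check on those numbers, the core counts fall straight out of the SM counts. Here is a minimal arithmetic sketch (Python, purely illustrative -- not derived from any nVidia tooling):

    # GP104 (Pascal): 128 CUDA cores (shaders) per SM.
    SHADERS_PER_SM = 128

    sms_1070_ti = 19
    sms_1080 = 20

    cores_1070_ti = sms_1070_ti * SHADERS_PER_SM  # 19 * 128 = 2432
    cores_1080 = sms_1080 * SHADERS_PER_SM        # 20 * 128 = 2560

    # The 1070 Ti gives up one SM, i.e. 5% of the 1080's shaders.
    deficit = 1 - cores_1070_ti / cores_1080
    print(f"1070 Ti: {cores_1070_ti} | 1080: {cores_1080} | deficit: {deficit:.1%}")

That 5% shader deficit is why, clocks being locked to the same 1607/1683MHz across all partner cards, silicon and cooling quality become the main differentiators.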

As we understand it now, users will be able to manually overclock the 1070 Ti with software. See the specs below:

As stated in the video intro, this benchmark contains some cool data that was exciting to work with. We don’t normally accumulate enough data to run historical trend plots across driver or game revisions, but our initial Destiny 2 pre-launch benchmarks enabled us to compare that data against the game’s official launch. Bridging our pre-launch beta benchmarks with similar testing methods for the Destiny 2 PC launch, including driver changes, makes it easier to analyze how much of the performance deviation comes from CPU, driver, and game code optimizations.
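To make “deviation” concrete: for a fixed hardware configuration, the comparison reduces to the percent change in average framerate between the two test passes. A minimal sketch of that calculation (Python; the FPS figures are placeholders, not our measured results):

    # Percent change in average FPS between two passes on the same config
    # (e.g., pre-launch beta vs. retail launch, on different drivers).
    def percent_change(avg_fps_before: float, avg_fps_after: float) -> float:
        return (avg_fps_after / avg_fps_before - 1) * 100

    # Placeholder numbers only -- illustrative, not benchmark data.
    beta_avg, launch_avg = 100.0, 110.0
    print(f"Launch vs. beta: {percent_change(beta_avg, launch_avg):+.1f}%")  # +10.0%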

Recapping the previous tests, we already ran a wide suite of Destiny 2 benchmarks that included performance scaling tests in PvP multiplayer, campaign/co-op multiplayer, and various levels/worlds in the game. Find some of that content below:

NOTE: Our Destiny 2 CPU benchmark is now live.

Some of our original graphics optimization work also carried forward, allowing us to better pinpoint Depth of Field on Highest as one of the major culprits behind AMD’s performance deficit. This has changed somewhat with launch, as you’ll find below.

We’re sticking with FXAA for testing. Bungie ended up removing MSAA entirely, as the technique has been buggy since the beta, and left only SMAA and FXAA in its place.

Following up on our tear-down of the ASUS ROG Strix Vega 64 graphics card, Buildzoid of Actually Hardcore Overclocking now visits the PCB for an in-depth VRM & PCB analysis. The big question was whether ASUS could reasonably outdo AMD's reference design, which is shockingly good for a card with such a bad cooler. "Reasonably," in this sentence, means "within reasonable cost" -- there's not much price-to-performance headroom with Vega, so any custom cards will have to keep MSRP as low as possible while still iterating on the cooler.

The PCB & VRM analysis is below, but performance testing remains on hold. As of right now, we are waiting on ASUS to finalize its VBIOS for best compatibility with AMD's drivers. It seems there is more discussion between AIB partners and AMD this generation, which is introducing some latency on launches. For now, here's the PCB analysis -- timestamps are on the left side of the video:

We’ve already sent off the information contained in this video to Buildzoid, who has produced a PCB & VRM analysis of the ROG Strix Vega 64 by ASUS. That content will go live within the next few days, and will talk about whether the Strix card manages to outmatch AMD’s already-excellent reference PCB design for Vega. Stay tuned for that.

In the meantime, the below is a discussion of the cooling solution and disassembly process for the ASUS ROG Strix Vega 64 card. For cooling, ASUS is using a triple-fan solution similar to the one we highly praised on its 1080 Ti Strix model (remarkable for its noise-normalized cooling performance), along with a similar heatsink layout.

Learn more here:

We’re winding down our Vega coverage at this point, but we’ve got a couple more curiosities to explore. This content piece looks at clock scalability for Vega across a few key clocks (for core and HBM2), and aims to control for the CU difference, to some extent. We obviously can’t fully control down to the shader level (as CUs carry more than just shaders), but we can get close. Note that the video content generally refers to the V56 & V64 difference as one of shaders, but the CUs contain more than shaders alone.

In our initial AMD shader comparison between Vega 56 and Vega 64, we saw nearly identical performance between the cards when clock-matched to roughly 1580-1590MHz core and 945MHz HBM2. We’re now exploring performance across a range of frequency settings, from ~1400MHz to ~1660MHz core, and from 800MHz to ~1050MHz HBM2.

This content piece was originally written and filmed about ten days ago, ready to go live, but we then decided to put a hold on the content and update it. Against initial plans, we ended up flashing V64 VBIOS onto the V56 to give us more voltage headroom for HBM2 clocks, allowing us to get up to 1020MHz easily on V56. There might be room in there for a bit more of an OC, but 1020MHz proved stable on both our V64 and V56 cards, making it easy to test the two comparatively.

AMD’s architecture hasn’t generally shown large gains from the increased CU count between top-tier and second-to-top cards. The Fury and Fury X, for instance, could be made to match with an overclock on the lower-tiered card. Additional gains on the higher-tiered card often stem from the increased power limit and clocks, not from the straight shader increase. We’re putting that knowledge to the test on the Vega architecture, equalizing the Vega 56 & Vega 64 core clocks (and 945MHz HBM2 clocks) to determine how much of a difference emerges between V64’s 4096 shaders and V56’s 3584. Purely counting shaders, that’s a ~14% advantage for V64, but like most such metrics, it won’t translate into a linear performance increase.
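For the math behind that figure: each GCN CU carries 64 shaders, so the counts and the ~14% delta fall out directly. A minimal arithmetic sketch (Python, purely illustrative):

    # GCN: 64 shaders (stream processors) per CU.
    SHADERS_PER_CU = 64

    v64_shaders = 64 * SHADERS_PER_CU  # 64 CUs -> 4096 shaders
    v56_shaders = 56 * SHADERS_PER_CU  # 56 CUs -> 3584 shaders

    advantage = v64_shaders / v56_shaders - 1
    print(f"V64 shader advantage over V56: {advantage:.1%}")  # ~14.3%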

We were able to crush Vega 64’s performance with our heavily modded Vega 56 card, using powerplay tables and liquid cooling to reach 1742MHz clock speeds. That's with modding, though, and isn't out-of-box performance -- it also doesn't tell us anything about shader differences. By backing off the overclocking and limiting both cards to matched clocks, we can isolate the shader count difference.

It’s illegal to outright fix prices of products. Manufacturers have varying levels of sway when establishing cost to distribution partners and suggested retail prices – the latter acted on much lower in the chain – and must produce supply based on expectations of demand. We’ve previously talked about how MDF or other exchanges can be used to encourage retailers to work within certain guidelines, but there are limits to the financial and legal reach of those means.

With this context in mind, it makes sense that the undertone of discussion pertaining to video card prices – not just AMD’s, but nVidia’s – plants much of the blame squarely on retailers. There’s only so much that AMD and nVidia can do to drive prices at least somewhat close to MSRP. One of those actions is to put out more supply to sate demand but, as we saw during the last mining boom & bust (with emergent ASIC miners), there’s reason for manufacturers to remain hesitant about a major supply commitment. If AMD or nVidia were to place a large order with their fabs, there had better be some confidence that the product will sell. Factory-to-shelf turnaround is a period of months, weeks of which can be shipping (unless opting for prohibitively expensive air freight). A period of months is a wide window: we’ve seen mining markets “crash” and recover in days, or even hours, with often-unpredictable frequency and intensity. That would explain why AMD might be hesitant to issue large orders of older product, like the RX 500 series, to try to meet demand.

