AMD’s architecture hasn’t generally shown a large gain from increasing CU count between top-tier and second-to-top cards. The Fury and Fury X, for instance, could be made to match with an overclock on the lower-tiered card. Additional gains on the higher-tiered card often stem from the increased power limit and clock, not from the shader increase alone. We’re putting that knowledge to the test on Vega architecture, equalizing the Vega 56 & Vega 64 clocks (including 945MHz HBM2 clocks) to determine how much of a difference emerges from V64’s 4096 shaders versus V56’s 3584 shaders. Counting shaders alone, that’s a 14% advantage for V64 but, as with most such specs, it won’t translate into a linear performance increase.
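For reference, the quick arithmetic behind that figure:

```python
# Shader counts for Vega 56 vs. Vega 64, per AMD's published specs.
v56_shaders = 3584
v64_shaders = 4096

uplift_pct = (v64_shaders / v56_shaders - 1) * 100
print(f"V64 carries {uplift_pct:.1f}% more shaders")  # ~14.3%, rounded to 14% above
```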

We were able to crush Vega 64’s performance with our heavily modded Vega 56 card, using powerplay tables and liquid cooling to reach 1742MHz clock speeds. That's with modding, though, and isn't out-of-box performance -- it also tells us nothing about the shader difference. By backing off that overclock and locking both cards to matched speeds, we can isolate the shader count difference.

It’s illegal to outright fix prices of products. Manufacturers have varying levels of sway when establishing cost to distribution partners and suggested retail prices, both of which are acted on much lower in the chain, and they have to produce supply based on expected demand. We’ve previously talked about how MDF or other exchanges can be used to inspire retailers to work within some guidelines, but there are limits to the financial and legal reach of those means.

With this context in mind, it makes sense that the undertone of discussion pertaining to video card prices – not just AMD’s, but nVidia’s – plants much of the blame squarely on retailers. There’s only so much that AMD and nVidia can do to drive prices at least somewhat close to MSRP. One of those actions is to put out more supply to sate demand but, as we saw during the last mining boom & bust (with emergent ASIC miners), there’s reason for manufacturers to remain hesitant about a major supply commitment. If AMD or nVidia were to place a large order with their fabs, there’d better be some level of confidence that the product will sell. Factory-to-shelf turn-around is a period of months, weeks of which can be shipping (unless opting for prohibitively expensive air freight). A period of months is a wide window. We’ve seen mining markets “crash” and recover in a period of days, or hours, with often-unpredictable frequency and intensity. That’d explain why AMD might be hesitant to issue large orders of older product, like the RX 500 series, to try to meet demand.

While we were traveling, the major story that unfolded – and then folded – pertained to the alleged unlocking of Vega 56 shaders, permitting the cards to turn into a “Vega 58” or “Vega 57,” depending on the sample. This ultimately came down to a GPU-Z reporting bug, and users claiming performance increases hadn’t normalized for the clock change or higher power budget. Still, the BIOS flash does modify the DPM tables to adjust for higher clocks and permit greater HBM2 voltage to the memory. Of these changes, the latter is the only real, relevant one – clocks can be manually increased on V56, and the core voltage remains the same after a flash. Powerplay tables can be used to bypass BIOS power limits on V56, though a flash to a V64 BIOS permits a higher power budget.

Even with all this, it’s still impossible (presently) to flash a modified, custom BIOS onto Vega. We tried this during our Vega 56 review, finding that the card is locked down to prevent modding. Vega uses an on-die security coprocessor that rejects unsigned BIOS images, relegating our efforts to powerplay tables. Those powerplay tables did ultimately prove successful, as we recently published.

Everyone talks a big game about how they don’t care about power consumption. We took that comment to the extreme, using a registry hack to give Vega 56 enough extra power to kill the card, if we wanted, and a Floe 360mm CLC to keep temperatures low enough that GPU diode reporting inaccuracies emerge. “I don’t care about power consumption, I just want performance” is now met with that – 100% more power and an overclock to 1742MHz core. We've got room to do 200% power, but things would start popping at that point. The Vega 56 Hybrid mod is our most modded version of the Hybrid series to date, and leverages powerplay table registry changes to provide that additional power headroom. This is an alternative to BIOS flashing, which is limited to signed BIOS images (like V64 on V56, though we had issues flashing the V64 Liquid BIOS onto V56). Last we attempted it, a modified BIOS did not work. Powerplay tables do, though, and let us raise the power target beyond V56’s artificial limitation.
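For those wondering what the “registry hack” actually touches: the Radeon driver will consume a soft powerplay table from the Windows registry at startup, overriding the copy in the BIOS. Below is a minimal, read-only sketch that locates and backs up that value. The PP_PhmSoftPowerPlayTable name and display-adapter class key reflect common community tooling; actual power-limit edits (rewriting specific byte offsets in the table) are deliberately left out here.

```python
# Hedged sketch: locate and back up the soft PowerPlay table that the
# Radeon driver reads from the registry. Windows only; run as admin.
# The adapter subkey index ("0000", "0001", ...) varies per system.
import winreg

CLASS_KEY = (r"SYSTEM\CurrentControlSet\Control\Class"
             r"\{4d36e968-e325-11ce-bfc1-08002be10318}")  # display adapters

def find_soft_ppt():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CLASS_KEY) as cls:
        index = 0
        while True:
            try:
                sub = winreg.EnumKey(cls, index)
            except OSError:                       # no more subkeys
                return None, None
            index += 1
            try:
                with winreg.OpenKey(cls, sub) as adapter:
                    data, _ = winreg.QueryValueEx(
                        adapter, "PP_PhmSoftPowerPlayTable")
                    return sub, data
            except OSError:                       # value absent on this subkey
                continue

sub, table = find_soft_ppt()
if table is None:
    print("No soft PowerPlay table present (stock registry)")
else:
    with open(f"ppt_backup_{sub}.bin", "wb") as f:
        f.write(table)
    print(f"Backed up {len(table)}-byte table from subkey {sub}")
```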

The limitation on power provisioned to the V56 core exists, we believe, entirely to prevent V56 from too easily outmatching V64 in performance. The card’s BIOS won’t natively allow more than 300-308W down the PCIe cables, even though official BIOS versions for V64 cards support 350~360W. The VRM itself easily sustains 360W, and we’ve tested it handling 406W without a FET popping. 400W is probably pushing what’s reasonable, but limiting V56 to ~300W, when an additional 60W is fully within the capabilities of the VRM & GPU, is a means of capping V56 performance to the point that it doesn’t compete with V64.

We fixed that.

AMD’s CU scaling has never had that much impact on performance – clock speed closes most gaps with AMD hardware. Even without V64’s extra shaders, we can outperform V64’s stock performance, and we’ll soon find out how we fare against V64’s overclocked performance. That’ll have to wait until after PAX, but it’s something we’re hoping to study further.

Our Destiny 2 GPU benchmark was conducted alongside our CPU benchmark, and the research behind it informed both content pieces. For GPU testing, we found Destiny 2 to be remarkably consistent between multiplayer and campaign performance, scaling all the way down to a 1050 Ti. This held across the campaign, which performed largely identically across all levels, aside from a single level with high geometric complexity and heavy combat. We’ll recap some of that below.

For CPU benchmarking, GN’s Patrick Lathan used this research (starting one hour after the GPU bench began) to begin his testing. We ultimately found more test variance between CPUs – particularly at the low-end – when switching between campaign and multiplayer, so much of this content piece is dedicated to the research behind our Destiny 2 CPU testing. We cannot yet publish this as a definitive “X vs. Y CPU” benchmark, as we don’t have full confidence in the comparative data given Destiny 2’s sometimes nebulous behaviors.
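For reference on how variance like this gets reduced to chart values, below is a minimal sketch of turning a frametime log into AVG, 1% low, and 0.1% low FPS figures. The log format is a placeholder and exact metric definitions vary between outlets, but averaging the slowest 1% and 0.1% of frames is the common approach.

```python
# Hedged sketch: reduce a list of frametimes (in ms) to average FPS and
# 1% / 0.1% low FPS by averaging the slowest slices of the run.
def fps_metrics(frametimes_ms):
    slowest_first = sorted(frametimes_ms, reverse=True)
    n = len(slowest_first)

    def avg_fps(samples):
        return 1000.0 / (sum(samples) / len(samples))

    return {
        "avg": avg_fps(slowest_first),
        "1% low": avg_fps(slowest_first[: max(1, n // 100)]),
        "0.1% low": avg_fps(slowest_first[: max(1, n // 1000)]),
    }

# Example: mostly 16.7ms frames (60FPS) with occasional 40ms spikes.
log = [16.7] * 990 + [40.0] * 10
print(fps_metrics(log))  # avg ~59FPS, lows ~25FPS
```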

As one example, Destiny 2 doesn’t utilize SMT with Ryzen, producing utilization charts like this:

Since AMD’s high-core-count Ryzen lineup has entered the market, there seems to be an argument in every comment thread about multitasking and which CPUs handle it better. Our clean, controlled benchmarks don’t account for the demands of eighty browser tabs and Spotify running, and so we get constant requests to do in-depth testing on the subject. The general belief is that more threads are better able to handle more processes, a hypothesis that would increasingly favor AMD.

There are a couple of reasons we haven’t included tests like these all along: first, “multitasking” means something completely different to every individual, and second, adding uncontrolled variables (like bloatware and network-attached software) makes tests less scientific. Originally, we hoped this article would reveal any hidden advantages that might emerge between CPUs when adding “multitasking” to the mix, but it’s ended up as a thorough explanation of why we don’t do benchmarks like this. We’re primarily using the R3 1200 and G4560 to run these trials.

This is the kind of testing we do behind the scenes when building a new test plan, but often don’t publish. This time, however, we’re publishing our trials of finding a multitasking benchmark that works. The point of publishing them is to demonstrate why “multitasking” tests are hard to trust, and why they’re hard to conduct in a manner that’s representative of actual differences.

In listening to our community, we’ve learned that a lot of people seem to think Discord is multitasking, or that a Skype window is multitasking. Here’s the thing: if you’re running Discord and a game and you’re seeing an impact on “smoothness,” there’s something seriously wrong with that environment. That’s not even remotely close to enough of a workload to trouble even a G4560. We’re not looking at such a lightweight workload here, and we’re also not looking at the “I keep 100 tabs of Chrome open” scenario, as that’s wholly unreliable given Chrome’s unpredictable caching and behaviors. What we are looking at is 4K video playback while gaming and bloatware while gaming.

In this piece, the word “multitasking” will be used to describe “running background software while gaming.” The term "bloatware" is being used loosely to easily describe an unclean operating system with several user applications running in the background.
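To make that definition testable, the background load has to launch identically on every pass. The snippet below is a hypothetical harness – the executable names and arguments are placeholders, not our actual test scripts – showing the general shape of a controlled pass:

```python
# Hypothetical harness: launch a fixed background load, run a benchmark
# pass, then tear the background processes down. All paths/commands here
# are placeholders for illustration only.
import subprocess
import time

BACKGROUND = [
    ["vlc", "--loop", "4k_test_clip.mkv"],  # stand-in for 4K video playback
    ["some_bloatware.exe"],                 # stand-in for OEM background apps
]

def run_pass(benchmark_cmd, warmup_s=30):
    procs = [subprocess.Popen(cmd) for cmd in BACKGROUND]
    try:
        time.sleep(warmup_s)                # let the background load settle
        subprocess.run(benchmark_cmd, check=True)
    finally:
        for p in procs:                     # identical teardown every pass
            p.terminate()

run_pass(["game_benchmark.exe", "-log", "pass1.csv"])
```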

Variations of “HBM2 is expensive” have floated around the web since well before Vega’s launch – since Fiji, really, with the first wave of HBM – without many concrete numbers behind the claim. AMD isn’t using HBM2 just because it’s “shiny” and sounds good in marketing, but because Vega’s architecture is bandwidth-starved to the point that HBM is necessary. That’s an expensive necessity, unfortunately, and it chews away at margins, but AMD really had no choice in the matter. The company’s standalone MSRP structure for Vega 56 positions it competitively with the GTX 1070, carrying comparable performance, memory capacity, and target retail price, assuming the GPU market calms down at some point. Given HBM2’s higher cost and Vega 56’s bigger die, that leaves little room for AMD to profit compared to GDDR5 solutions. That’s what we’re exploring today, alongside why AMD had to use HBM2.

There are reasons that AMD went with HBM2, of course – we’ll talk about those later in the content. A lot of folks have asked why AMD can’t “just” use GDDR5 with Vega instead of HBM2, thinking you can simply swap modules, but there are complications that make this impossible without a redesign of the memory controller. Vega is also bandwidth-starved to the point of complication, which we’ll walk through momentarily.
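To put rough numbers on “bandwidth-starved” before we dig in: memory bandwidth is bus width times per-pin data rate. The HBM2 figures below match Vega’s published specs; the GDDR5 line is our own illustration of what an equivalent bus would require.

```python
# Memory bandwidth (GB/s) = bus width (bits) x per-pin data rate (Gbps) / 8.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

# Vega 56 stock HBM2: 2048-bit bus at 800MHz (1.6Gbps per pin)
print(bandwidth_gbs(2048, 1.6))    # 409.6 GB/s
# At the 945MHz (1.89Gbps) HBM2 clocks used in our testing:
print(bandwidth_gbs(2048, 1.89))   # ~483.8 GB/s
# Matching that with 8Gbps GDDR5 would take a ~512-bit bus:
print(bandwidth_gbs(512, 8.0))     # 512 GB/s
```

A 512-bit GDDR5 bus has shipped before (Hawaii had one), but it demands a wider memory controller on the die and more traces on the PCB – part of why “just use GDDR5” isn’t a drop-in swap.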

Let’s start with prices, then talk architectural requirements.

Before Vega buried Threadripper, we noted interest in conducting a simple A/B comparison between Noctua’s new TR4-sized coldplate (the full-coverage plate) and their older LGA115X-sized coldplate. Clearly, the LGA115X cooler isn’t meant to be used with Threadripper – but it offered a unique opportunity, as the two units are largely the same aside from coldplate coverage. This grants an easy means of A/B comparison; although we can’t draw conclusions about all coldplates and coolers, we can at least see what Noctua’s efforts did for them on the Threadripper front.

Noctua’s NH-U14S cooler possesses the same heatpipe count and arrangement, the same (or remarkably similar) fin stack, and the same fan – though we controlled for that by using the same fan on each unit. The only difference, as far as we can tell, is the coldplate, so we’re able to more easily measure performance deltas resulting primarily from the coldplate coverage change. Noctua’s LGA115X version, clearly not made for TR4, wouldn’t cover the entire die area of even one module under the IHS. Eyeballing it, the smaller plate covers at most about 30% of the die area and doesn’t make direct contact with the rest. This is less coverage than the Asetek CLCs, which at least make contact with the entire TR4 die area, if not the entire IHS. In response, Noctua modified their unit to equip a full-coverage plate, including the unique mounting hardware that TR4 needs.

The LGA115X NH-U14S doesn’t natively mount to Threadripper motherboards. We modded the NH-U14S TR4 cooler’s mounting hardware with a couple of holes, aligned those with the LGA115X cooler’s holes, then routed screws and nuts through them. A rubber bumper was placed between the mounting hardware and the base of the cooler to help ensure even and adequate mounting pressure. We show a short clip of the modding process in the video above.

Vega’s partnership with the Samsung CF791, announced prior to the card even launching, was met with unrelenting criticism of the monitor’s placement in bundles. Consumer reports on the monitor have mentioned flickering with Ultimate Engine as far back as January, now leveraged as a counter to the CF791’s inclusion in AMD’s bundle. These consumer reports and complaints largely hinged on Polaris or Fiji products, not Vega (which didn’t exist yet), so we thought it’d be worth a revisit with the bundled card. Besides, if it’s the bundling of the CF791 with Vega that caused the resurgence in flickering concerns, then it seems we should test the CF791 with Vega. That’s the most relevant comparison.

And so we did: Using Vega 56, Vega: FE, and an RX 580 Gaming X (Polaris refresh), we tested Samsung’s CF791 34” UltraWide display, running through permutations of FreeSync. Some such permutations include “Standard Engine” (OSD), “Ultimate Engine” (OSD), and simple on/off toggles (drivers + OSD).

Following questions regarding the alleged expiry of MDF and rebates pertaining to Vega’s launch, AMD responded to GN’s inquiries about pricing allegations with a form statement. We attempted to engage in further conversation, but received replies of limited usefulness as the discussion fell into the inevitable “I’m not allowed to discuss this” territory.

Regardless, if you’ve seen the story, AMD’s official statement on Vega price increases is as follows:
