AMD’s architectures haven’t generally shown large gains from increased CU counts between top-tier and second-to-top cards. The Fury and Fury X, for instance, could be made to match with an overclock on the lower-tiered card. Additional gains on the higher-tiered card often stem from the increased power limit and clock, not from a straight shader increase. We’re putting that knowledge to the test on the Vega architecture, equalizing the Vega 56 & Vega 64 core clocks (and 945MHz HBM2 clocks) to determine how much of a difference emerges from V64’s 4096 shaders versus V56’s 3584 shaders. Purely counting shaders, that’s a 14.3% advantage for V64 but, like most specs, that won’t translate into a linear performance increase.
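For reference, the raw shader math is straightforward; here’s a quick sketch (Python) of the theoretical ceiling at matched clocks:

```python
# Theoretical shader-count advantage of Vega 64 over Vega 56,
# assuming equalized core and HBM2 clocks (as in this test).
v64_shaders = 4096
v56_shaders = 3584

uplift = v64_shaders / v56_shaders - 1
print(f"Shader advantage: {uplift:.1%}")  # ~14.3%

# At matched clocks, this is the ceiling for V64's lead; real games
# land well below it, since shaders aren't the only bottleneck
# (geometry throughput, memory bandwidth, ROPs, etc.).
```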

We were able to crush Vega 64’s performance with our heavily modded Vega 56 card, using powerplay tables and liquid cooling to hit 1742MHz core clocks. That's with modding, though, and isn't out-of-box performance -- it also doesn't tell us anything about shader differences. Backing off the extreme overclocking and limiting both cards to matched clocks, we can isolate the shader count difference.

It’s illegal to outright fix prices of products. Manufacturers have varying levels of sway when establishing costs to distribution partners and suggested retail prices, which are acted on much lower in the chain, and they have to produce supply based on expectations of demand. We’ve previously talked about how MDF or other exchanges can be used to encourage retailers to work within some guidelines, but there are limits to the financial and legal reach of those means.

With this context in mind, it makes sense that the undertone of discussion pertaining to video card prices – not just AMD’s, but nVidia’s – plants much of the blame squarely on retailers. There’s only so much that AMD and nVidia can do to drive prices at least somewhat close to MSRP. One of those actions is to put out more supply to sate demand but, as we saw during the last mining boom & bust (with emergent ASIC miners), there’s reason for manufacturers to remain hesitant about a major supply commitment. If AMD or nVidia were to place a large order with their fabs, there’d better be some level of confidence that the product will sell. Factory-to-shelf turnaround is a period of months, weeks of which can be shipping (unless opting for prohibitively expensive air freight). A period of months is a wide window: we’ve seen mining markets “crash” and recover in a period of days, or hours, with often unpredictable frequency and intensity. That’d explain why AMD might be hesitant to issue large orders of older product, like the RX 500 series, to try and meet demand.

Everyone talks a big game about not caring about power consumption. We took that comment to the extreme, using a registry hack to give Vega 56 enough extra power to kill the card, if we wanted, and a Floe 360mm CLC to keep temperatures low enough that GPU diode reporting inaccuracies emerge. “I don’t care about power consumption, I just want performance” is now met with exactly that – 100% more power and an overclock to 1742MHz core. We've got room to do 200% power, but things would start popping at that point. The Vega 56 Hybrid mod is our most modded version of the Hybrid series to date, and leverages powerplay table registry changes to provide that additional power headroom. This is an alternative to BIOS flashing, which is limited to signed BIOS images (like flashing V64 onto V56, though we had issues flashing the V64 Liquid BIOS onto V56). Last we attempted it, a modified BIOS did not work. Powerplay tables do, though, and mean that we can modify the power target to surpass V56’s artificial power limitation.
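For the curious, below is a minimal sketch of how a soft powerplay table gets applied on Windows. The registry path and value name reflect the commonly documented method for Vega; treat the specifics (the adapter subkey index in particular) as assumptions to verify against your own system:

```python
# Hedged sketch: installing a soft powerplay table for an AMD GPU on Windows.
# The class GUID below is the standard display-adapter device class; "0000"
# is assumed to be the subkey for the target GPU -- verify yours in regedit.
# Requires an elevated (administrator) Python process.
import winreg

ADAPTER_KEY = (r"SYSTEM\CurrentControlSet\Control\Class"
               r"\{4d36e968-e325-11ce-bfc1-08002be10318}\0000")

def write_powerplay_table(table_bytes: bytes) -> None:
    """Write a modified powerplay table; takes effect after a driver restart/reboot."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, ADAPTER_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        # PP_PhmSoftPowerPlayTable is the value name commonly cited as the
        # driver's soft (registry-based) powerplay override.
        winreg.SetValueEx(key, "PP_PhmSoftPowerPlayTable", 0,
                          winreg.REG_BINARY, table_bytes)

# Usage (hypothetical): table_bytes would be a dump of the stock table with
# the power-limit fields edited -- not something to author by hand.
```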

The limitation on power provisioned to the V56 core is, we believe, purely there to prevent V56 from too easily outmatching V64 in performance. The card’s BIOS won’t natively allow greater than 300-308W down the PCIe cables, even though official BIOS versions for V64 cards can support 350~360W. The VRM itself easily sustains 360W, and we’ve tested it as handling 406W without a FET popping. 400W is probably pushing what’s reasonable, but limiting V56 to ~300W, when an additional 60W is fully within the capabilities of the VRM & GPU, is a means of capping V56 performance so it doesn’t compete with V64.

We fixed that.

AMD’s CU scaling has never had that much impact on performance – clock speed closes most gaps with AMD hardware. Even without V64’s extra shaders, we can outperform V64’s stock performance, and we’ll soon find out how we do versus V64’s overclocked performance. That’ll have to wait until after PAX, but it’s something we’re hoping to study further.

The Destiny 2 beta’s arrival on PC provides a new benchmarking opportunity for GPUs and CPUs, and will allow us to plot performance uplift once the final game ships. Aside from benchmarking a popular beta, we also want to know whether Bungie, AMD, and nVidia work to further improve performance in the final stretch before the official October 24 launch date. For now, we’re conducting an exploratory benchmark of Destiny 2’s multiplayer versus campaign test patterns, quality settings, and multiple resolutions.

A few notes before beginning: This is a beta, first off, and everything is subject to change. We’re ultimately testing this as it pertains to the beta, but using that experience to learn more about how Destiny 2 behaves so that we’re not surprised at its release. Some of this testing is to learn about the impact of settings on performance (including some unique behavior between “High” and “Highest”), multiplayer vs. campaign performance, and level performance. Note also that drivers will iterate and, although nVidia and AMD both recommended their respective drivers for this test (385.41 and 17.8.2), those will likely change for the final release. AMD in particular needs a more Destiny-specific driver, based on our testing, so keep in mind that performance metrics are in flux ahead of the final launch.

Note also: Our Destiny 2 CPU benchmark will be up not long after this content piece. Keep an eye out for that one.

Variations of “HBM2 is expensive” have floated around the web since well before Vega’s launch – since Fiji, really, with the first wave of HBM – without many concrete numbers behind the expression. AMD isn’t using HBM2 just because it’s “shiny” and sounds good in marketing, but because the Vega architecture is bandwidth-starved to the point that HBM is necessary. That’s an expensive necessity, unfortunately, and chews away at margins, but AMD really had no choice in the matter. The company’s standalone MSRP structure for Vega 56 positions it competitively against the GTX 1070, carrying comparable performance, memory capacity, and target retail price, assuming things calm down for the entire GPU market at some point. Given HBM2’s higher cost and Vega 56’s bigger die, that leaves little room for AMD to profit compared to GDDR5 solutions. That’s what we’re exploring today, alongside why AMD had to use HBM2.

There are reasons that AMD went with HBM2, of course – we’ll talk about those later in the content. A lot of folks have asked why AMD can’t “just” use GDDR5 with Vega instead of HBM2, thinking that you just swap modules, but there are complications that make this impossible without a redesign of the memory controller. Vega is also bandwidth-starved to a point of complication, which we’ll walk through momentarily.
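Before we do, a quick sketch of the bandwidth math makes the architectural argument concrete (bus widths and data rates below are the published figures for these cards):

```python
# Memory bandwidth = bus width (bytes) * effective data rate (GT/s).
def bandwidth_gbps(bus_bits: int, data_rate_gtps: float) -> float:
    return bus_bits / 8 * data_rate_gtps

# Vega 64: 2048-bit HBM2 at 945MHz (1.89GT/s effective, double data rate)
print(bandwidth_gbps(2048, 1.89))  # ~484 GB/s
# Vega 56: 2048-bit HBM2 at 800MHz (1.6GT/s effective)
print(bandwidth_gbps(2048, 1.60))  # ~410 GB/s
# GTX 1070: 256-bit GDDR5 at 8GT/s
print(bandwidth_gbps(256, 8.0))    # 256 GB/s

# Matching Vega 64's bandwidth with 8GT/s GDDR5 would need a ~484-bit
# bus: far more pins, more board area, more power, and a redesigned
# memory controller -- hence no "just swap the modules" option.
```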

Let’s start with prices, then talk architectural requirements.

Jon Peddie Research reports that the add-in board GPU market has increased 30.9% over last quarter and 34.9% year-to-year, largely thanks to the recent cryptocurrency mining craze.

Regardless of the exact numbers, it’s obvious to anyone who’s checked graphics card prices recently that something unusual is happening. JPR states that Q2 usually sees a “significant drop” in the market (averaging -9.8%), with most of the action happening around the holiday season. This Q2, the market has increased for the first time in nine years – despite a general PC market decline, as demand for the industry’s bread-and-butter general purpose (non-gaming) PCs has dropped.

Vega’s bundling with the Samsung CF791, announced before the card even launched, was met with unrelenting criticism of the monitor’s placement in bundles. Consumer reports on the monitor mention flickering with Ultimate Engine as far back as January, now leveraged as a counter to the CF791’s inclusion in AMD’s bundle. Those consumer reports and complaints largely hinged on Polaris or Fiji products, not Vega (which didn’t exist yet), so we thought it worth a revisit with the bundled card. Besides, if it’s the bundling of the CF791 with Vega that caused the resurgence in flickering concerns, then it seems we should test the CF791 with Vega. That’s the most relevant comparison.

And so we did: Using Vega 56, Vega: FE, and an RX 580 Gaming X (Polaris refresh), we tested Samsung’s CF791 34” UltraWide display, running through permutations of FreeSync settings: “Standard Engine” (OSD), “Ultimate Engine” (OSD), and simple on/off toggles (drivers + OSD).

This is just a quick PSA.

We shot an off-the-cuff video about software misreporting Vega’s frequency, to the extent that a “1980MHz overclock” is possible under the misreported conditions. The entire point of the video was to bring awareness to a bug in either the software or the drivers – not to point blame at AMD – explicitly to ensure consumers understand that the numbers may be inaccurate. Some reviews even cited overclocks of “1980MHz,” but overlooked the fact that performance scaling ceases around the threshold where the reporting bugs out.
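As a rough illustration (not our formal methodology, and with hypothetical numbers), one sanity check against clock misreporting is to compare measured FPS scaling to the scaling the reported clock implies:

```python
# Sketch: if FPS stops scaling while the reported clock keeps climbing,
# the reported clock is suspect. All values here are hypothetical.
def scaling_efficiency(fps_base, fps_oc, clk_base, clk_oc):
    """Ratio of observed FPS gain to the gain the reported clocks imply."""
    return (fps_oc / fps_base - 1) / (clk_oc / clk_base - 1)

# Plausible: a +7.5% reported clock yields most of its theoretical gain.
print(scaling_efficiency(100, 107, 1590, 1710))  # ~0.93

# Suspect: a reported jump to "1980MHz" yields almost nothing extra,
# suggesting the readout moved, not the silicon.
print(scaling_efficiency(100, 108, 1590, 1980))  # ~0.33
```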

Following questions regarding the alleged expiry of MDF and rebates pertaining to Vega’s launch, AMD responded to GN’s inquiries about pricing allegations with a form statement. We attempted to engage in further conversation, but received replies of limited usefulness as the discussion fell into the inevitable “I’m not allowed to discuss this” territory.

Regardless, if you’ve seen the story, AMD’s official statement on Vega price increases is as follows:

As exciting as it is to see “+242% power offset” in overclocking tools, it’s equally deflating to see that offset only partly work. Partly is still something, though, so we’ve at least managed to increase our overclocking headroom beyond the stock +50% offset. The liquid cooler helps, considering we attached a 360mm radiator, two Corsair 120mm maglev fans, a Noctua NF-F12 fan, and a fourth fan for VRM cooling. Individual heatsinks were also added to the hotter VRM components, leaving two sets unsinked, but heavily cooled with direct airflow.

This mod is our coolest-running Hybrid mod yet, in large part thanks to the 360mm radiator. There’s reason for that, too – we’re now able to push peak power of roughly 370-380W through the card, up from our previous limitation of ~308W. We were gunning for 400W, but it’s just not happening right now. We’re still working on BIOS mods and further powerplay table mods.
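For rough context on those offsets, here’s the back-of-envelope math, assuming a stock GPU-core power limit of ~165W for Vega 56 (treat that baseline as our working assumption; total board power is higher):

```python
# Rough offset math, assuming a ~165W stock GPU-core power limit
# for Vega 56 (the WattMan-style figure, not total board power).
base_w = 165

stock_cap = base_w * (1 + 0.50)    # stock +50% offset  -> ~248W
modded_cap = base_w * (1 + 2.42)   # advertised +242%   -> ~564W (theoretical)

print(round(stock_cap), round(modded_cap))

# We observed ~370-380W peak through the card: well past the stock cap,
# but far short of what +242% implies -- i.e., the offset only partly
# works in practice.
```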
