UPDATE: We have run new CPU benchmarks for the launch of this game. Please view the Destiny 2 launch CPU benchmarks here.

Our Destiny 2 CPU benchmark was conducted alongside our GPU benchmark, using many of the same learnings from our research for the GPU bench. For GPU testing, we found Destiny 2 to be remarkably consistent between multiplayer and campaign performance, scaling all the way down to a 1050 Ti. This remained true across the campaign, which performed largely identically across all levels, aside from a single level with high geometric complexity and heavy combat. We’ll recap some of that below.

For CPU benchmarking, GN’s Patrick Lathan used this research (starting one hour after the GPU bench began) to begin CPU tests. We ultimately found more test variance between CPUs – particularly at the low-end – when switching between campaign and multiplayer, and so much of this content piece will be dedicated to the research portion behind our Destiny 2 CPU testing. We cannot yet publish this as a definitive “X vs. Y CPU” benchmark, as we don’t have full confidence in the comparative data given Destiny 2’s sometimes nebulous behaviors.

As one example, Destiny 2 doesn’t utilize SMT with Ryzen, producing utilization charts where the second logical thread on each core sits mostly idle while the physical cores do the work.
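To see that behavior for yourself, per-thread utilization can be logged with something as simple as the sketch below. This uses psutil for illustration rather than our actual capture tooling, and it assumes the typical Windows enumeration where each core’s SMT sibling shows up as the adjacent odd-numbered logical CPU:

```python
# Illustrative only (not our logging tooling): sample per-logical-CPU utilization
# while the game runs. If SMT isn't being used, the SMT siblings (typically the
# odd-numbered logical CPUs on a Windows/Ryzen system) will average near idle.
import psutil

SAMPLES = 60  # roughly a one-minute capture at 1-second intervals

totals = [0.0] * psutil.cpu_count(logical=True)
for _ in range(SAMPLES):
    # Blocks for 1 second, then returns one utilization value per logical CPU.
    for i, pct in enumerate(psutil.cpu_percent(interval=1, percpu=True)):
        totals[i] += pct

for i, total in enumerate(totals):
    print(f"logical CPU {i:2d}: {total / SAMPLES:5.1f}% average utilization")
```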

Since AMD’s high-core-count Ryzen lineup has entered the market, there seems to be an argument in every comment thread about multitasking and which CPUs handle it better. Our clean, controlled benchmarks don’t account for the demands of eighty browser tabs and Spotify running, and so we get constant requests to do in-depth testing on the subject. The general belief is that more threads are better able to handle more processes, a hypothesis that would increasingly favor AMD.

There are a couple of reasons we haven’t included tests like these all along: first, “multitasking” means something completely different to every individual, and second, adding uncontrolled variables (like bloatware and network-attached software) makes tests less scientific. Originally, we hoped this article would reveal any hidden advantages that might emerge between CPUs when adding “multitasking” to the mix, but it has ended up as a thorough explanation of why we don’t do benchmarks like this. We’re primarily using the R3 1200 and G4560 to run these trials.

This is the kind of testing we do behind the scenes to build a new test plan, but often don’t publish. This time, however, we’re publishing the trials of finding a multitasking benchmark that works. The point of publishing these trials is to demonstrate why “multitasking” tests are hard to trust, and why they’re hard to conduct in a manner that’s representative of actual differences.

In listening to our community, we’ve learned that a lot of people seem to think Discord is multitasking, or that a Skype window is multitasking. Here’s the thing: If you’re running Discord and a game and you’re seeing an impact to “smoothness,” there’s something seriously wrong with the environment. That’s not even remotely close to enough of a workload to trouble even a G4560. We’re not looking at such a lightweight workload here, and we’re also not looking at the “I keep 100 tabs of Chrome open” scenarios, as that’s wholly unreliable given Chrome’s unpredictable caching and behaviors. What we are looking at is 4K video playback while gaming and bloatware while gaming.

In this piece, the word “multitasking” will be used to describe “running background software while gaming.” The term "bloatware" is used loosely to describe an unclean operating system with several user applications running in the background.
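To keep that kind of background load identical from run to run, it helps to script it. The sketch below is purely illustrative – the paths and applications are placeholders, not our exact test configuration:

```python
# Purely illustrative: launch a fixed background load (4K video playback plus a
# set of "bloatware"-style applications) before each benchmark pass, so every
# CPU sees the same environment. Paths and apps below are placeholders.
import subprocess
import time

BACKGROUND_LOAD = [
    # 4K60 playback looping in a media player (placeholder path and file)
    [r"C:\Program Files\VideoLAN\VLC\vlc.exe", r"D:\media\4k60_sample.mkv", "--loop"],
    # Placeholder background applications
    [r"C:\Program Files (x86)\ExampleVendor\ExampleUpdater.exe"],
]

procs = [subprocess.Popen(cmd) for cmd in BACKGROUND_LOAD]
time.sleep(120)  # let the background load reach a steady state

# ... launch the game and frametime capture here, identically for every pass ...

for p in procs:
    p.terminate()  # tear down so the next pass starts from the same state
```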

Variations of “HBM2 is expensive” have floated around the web since well before Vega’s launch – since Fiji, really, with the first wave of HBM – without many concrete numbers behind the claim. AMD isn’t just using HBM2 because it’s “shiny” and sounds good in marketing, but because the Vega architecture is bandwidth-starved to the point that HBM is necessary. That’s an expensive necessity, unfortunately, and it chews away at margins, but AMD really had no choice in the matter. The company’s standalone MSRP structure for Vega 56 positions it competitively with the GTX 1070, carrying comparable performance, memory capacity, and target retail price, assuming things calm down for the entire GPU market at some point. Given HBM2’s higher cost and Vega 56’s bigger die, that leaves little room for AMD to profit when compared to GDDR5 solutions. That’s what we’re exploring today, alongside why AMD had to use HBM2.

There are reasons that AMD went with HBM2, of course – we’ll talk about those later in the content. A lot of folks have asked why AMD can’t “just” use GDDR5 with Vega instead of HBM2, thinking that you just swap modules, but there are complications that make this impossible without a redesign of the memory controller. Vega is also bandwidth-starved to a point of complication, which we’ll walk through momentarily.
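For a rough sense of that bandwidth gap, here’s back-of-the-envelope math using publicly listed memory specs – a sketch for illustration, not AMD’s internal figures:

```python
# Peak theoretical memory bandwidth from public specs (approximate):
# bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)

def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(2048, 1.6))   # Vega 56, two HBM2 stacks: ~410 GB/s
print(peak_bandwidth_gbs(2048, 1.89))  # Vega 64 HBM2: ~484 GB/s
print(peak_bandwidth_gbs(256, 8.0))    # GTX 1070, 8Gbps GDDR5: 256 GB/s

# Matching Vega's bandwidth with 8Gbps GDDR5 would take a bus in the roughly
# 384- to 512-bit range -- a wider memory controller, more PCB routing, and
# more power, not a drop-in module swap.
```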

Let’s start with prices, then talk architectural requirements.

Before Vega buried Threadripper, we noted interest in conducting a simple A/B comparison between Noctua’s new TR4-sized coldplate (the full-coverage plate) and their older LGA115X-sized coldplate. Clearly, the LGA115X cooler isn’t meant to be used with Threadripper – but it offered a unique opportunity, as the two units are largely the same aside from coldplate coverage. This grants an easy means to run an A/B comparison; although we can’t draw conclusions about all coldplates and coolers, we can at least see what Noctua’s efforts did for them on the Threadripper front.

Noctua’s NH-U14S cooler possesses the same heatpipe count and arrangement, the same (or remarkably similar) fin stack, and the same fan – though we controlled for that by using the same fan for each unit. The only difference is the coldplate, as far as we can tell, and so we’re able to more easily measure performance deltas resulting primarily from the coldplate coverage change. Noctua’s LGA115X version, clearly not made for TR4, wouldn’t cover the entire die area of even one module under the IHS. The smaller plate maximally covers about 30% of the die area, just eyeballing it, and doesn’t make direct contact with the rest. This is less coverage than the Asetek CLCs, which at least make contact with the entire TR4 die area, if not the entire IHS. Noctua modified their unit to equip a full-coverage plate in response, including the unique mounting hardware that TR4 needs.

The LGA115X NH-U14S doesn’t natively mount to Threadripper motherboards. We modded the NH-U14S TR4 cooler’s mounting hardware by drilling a couple of holes, aligning those with the LGA115X cooler’s holes, then routing screws and nuts through them. A rubber bumper was placed between the mounting hardware and the base of the cooler to help ensure even and adequate mounting pressure. We show a short clip of the modding process in the video above.

Vega’s partnership with the Samsung CF791, prior to the card even launching, was met with unrelenting criticism of the monitor’s placement in bundles. Consumer reports on the monitor mention flickering with Ultimate Engine as far back as January, now leveraged as a counter to the CF791’s inclusion in AMD’s bundle. All these consumer reports and complaints largely hinged on Polaris or Fiji products, not Vega (which didn’t exist yet), so we thought it’d be worth a revisit with the bundled card. Besides, if it’s the bundle of the CF791 with Vega that caused the resurgence in flickering concerns, it seems that we should test the CF791 with Vega. That’s the most relevant comparison.

And so we did: Using Vega 56, Vega: FE, and an RX 580 Gaming X (Polaris refresh), we tested Samsung’s CF791 34” UltraWide display, running through permutations of FreeSync. Some such permutations include “Standard Engine” (OSD), “Ultimate Engine” (OSD), and simple on/off toggles (drivers + OSD).

Following questions regarding the alleged expiry of MDF (market development funds) and rebates pertaining to Vega’s launch, AMD responded to GN’s inquiries about pricing allegations with a form statement. We attempted to engage in further conversation, but received replies of limited usefulness as the discussion fell into the inevitable “I’m not allowed to discuss this” territory.

Regardless, if you’ve seen the story, AMD’s official statement on Vega price increases is as follows:

As exciting as it is to see “+242% power offset” in overclocking tools, it’s equally deflating to see that offset only partly work. It does partly work, though, and so we’ve at least managed to increase our overclocking headroom beyond the stock +50% offset. The liquid cooler helps, considering we attached a 360mm radiator, two Corsair 120mm maglev fans, a Noctua NF-F12 fan, and a fourth fan for VRM cooling. Individual heatsinks were also added to the hotter VRM components, leaving two sets unsinked, but cooled heavily with direct airflow.

This mod is our coolest-running hybrid mod yet, thanks in large part to the 360mm radiator. There’s reason for that, too – we’re now able to push peak power of about 370-380W through the card, up from our previous limitation of ~308W. We were gunning for 400W, but it’s just not happening right now. We’re still working on BIOS mods and powerplay table mods.
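As a back-of-the-envelope read on those numbers – assuming the old ~308W ceiling corresponded to the stock +50% offset, which is our inference rather than a published figure:

```python
# Rough math only, built on the assumption that the previous ~308W ceiling was
# the stock +50% power offset at work.
implied_base_w = 308 / 1.5                    # implied baseline power target: ~205W
full_242_offset_w = implied_base_w * (1 + 2.42)

print(round(implied_base_w))      # ~205W
print(round(full_242_offset_w))   # ~702W -- what a fully functional +242% offset would allow
# Observed peak draw after the mod is roughly 370-380W, which is why we say the
# +242% offset only partly works.
```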

Following the initial rumors stemming from an Overclockers.co.uk post about Vega pricing soon changing, multiple AIB partners reached out to GamersNexus – and vice versa – to discuss the truth of the content. The post by Gibbo of Overclockers suggested that launch rebates and MDF from AMD would be expiring for Vega, which would drive pricing upward as retailers scramble to make a profit on the new GPU. Launch pricing of Vega 64 was supposed to be $500, but quickly shot to $600 USD in the wake of immediate inventory selling out. This is also why the packs exist – they enable AMD to “lower” the pricing of Vega by making return on other components.

In speaking with different sources from different companies that work with AMD, GamersNexus learned that “Gibbo is right” regarding the AMD rebate expiry and subsequent price jump. AMD purportedly provided the top retailers and etailers with a $499 price on Vega 64, coupling sale of the card with a rebate to reduce retailer spend, thereby using that leverage to hold the lower price. The $100 rebate from AMD is already expiring, hence the price jump by retailers who need to make a return. Rebates were included as a means to encourage retailers to try to sell at the lower $499 price. With those expiring, the leverage is gone and retailers/etailers return to their own price structures, as margins are exceptionally low on this product.

Tearing open the RX Vega 56 card revealed more of what we expected: a Vega Frontier Edition card, which is the same as Vega 64, which is the same as Vega 56. It seems as if AMD took the same PCB & VRM run and increased volume to apply across all of these cards, thereby ensuring MOQ (minimum order quantity) is met and theoretically lowering cost for all devices combined. That said, the price also increases in unnecessary ways for the likes of Vega 56, which has one of the most overkill VRMs a card of its ilk possibly could have -- especially given the native current and power constraints enforced by BIOS. Still, we're working on powerplay table mods to bypass these constraints, despite the alleged Secure Boot compliance by AMD.

We posted a tear-down of the card earlier today, though it is much the same as the Vega: Frontier Edition -- and by "much the same," we mean "exactly the same." Though, to be fair, V56 does lack the TR6 & TR5 screws of FE.

Here's the tear-down:

“Indecision” isn’t something we’ve ever titled a review, or felt in general about hardware. The thing is, though, that Vega is launching in the midst of a market that behaves completely unpredictably. We review products as a value proposition, looking at performance to dollars and coming to some sort of unwavering conclusion. Turns out, that’s sort of hard to do when the price is “who knows” and availability is uncertain. Mining is behind all of this, of course; AMD is launching a card in the middle of boosted demand, and so prices won’t stick for long. The question is whether the inevitable price hike will match or exceed the price of competing cards. NVidia's GTX 1070 should be selling below $400 (a few months ago, it did), the GTX 1080 should be ~$500, and the RX Vega 56 should be $400.

Conclusiveness would be easier with at least one unchanging value.
