Steve Burke

Steve started GamersNexus back when it was just a cool name, and it has since grown into an expansive website with an overwhelming number of features. He recalls his first difficult decision with GN's direction: "I didn't know whether or not I wanted 'Gamers' to have a possessive apostrophe -- I mean, grammatically it should, but I didn't like it in the name. It was ugly. I also had people who were typing apostrophes into the address bar - sigh. It made sense to just leave it as 'Gamers.'"

First world problems, Steve. First world problems.

Ask GN serves as a consistent format in our video series, now 57 episodes strong. We are still filming at a pace of roughly one Ask GN episode every one to two weeks, so if you’ve got questions, be sure to submit them in the YouTube comments section.

This week’s episode offers a brief break from the deeper overclocking, undervolting, and benchmarking topics of late. We briefly visit Pascal temperature response observations and a user’s chipped GPU (along with our own tech battle scars), then talk monitor overclocking. The article referenced during the monitor OC section can be found here.

Variations of “HBM2 is expensive” have floated around the web since well before Vega’s launch – since Fiji, really, with the first wave of HBM – without many concrete numbers behind the claim. AMD isn’t using HBM2 just because it’s “shiny” and sounds good in marketing, but because the Vega architecture is bandwidth-starved to the point that HBM is necessary. That’s an expensive necessity, unfortunately, and it chews away at margins, but AMD really had no choice in the matter. The company’s standalone MSRP structure for Vega 56 positions it competitively with the GTX 1070, carrying comparable performance, memory capacity, and target retail price, assuming things calm down for the entire GPU market at some point. Given HBM2’s higher cost and Vega 56’s larger die, that leaves little room for AMD to profit when compared to GDDR5 solutions. That’s what we’re exploring today, alongside why AMD had to use HBM2.

There are reasons that AMD went with HBM2, of course – we’ll talk about those later in the content. A lot of folks have asked why AMD can’t “just” use GDDR5 with Vega instead of HBM2, thinking you could simply swap memory modules, but complications make that impossible without a redesign of the memory controller. Vega is also bandwidth-starved to a point that complicates any GDDR5 approach, which we’ll walk through momentarily.
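To put rough numbers to the bandwidth argument, here’s a minimal sketch of the peak-bandwidth math. The bus widths and per-pin data rates below come from public spec sheets, not from anything we measured, and the comparison is illustrative only:

```python
# Peak theoretical memory bandwidth: (bus width in bytes) * per-pin data rate.
# Figures are public spec-sheet values, treated here as approximations.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus_width / 8 bytes) * per-pin rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

# Vega 56: 2048-bit HBM2 at 800MHz (1.6Gbps effective, double data rate)
print(bandwidth_gbs(2048, 1.6))   # ~409.6 GB/s

# GTX 1070: 256-bit GDDR5 at 8Gbps
print(bandwidth_gbs(256, 8.0))    # 256.0 GB/s

# A hypothetical 256-bit GDDR5 Vega would need ~12.8Gbps per pin to match
# Vega 56's HBM2 bandwidth -- beyond what shipping GDDR5 offers.
print(409.6 / (256 / 8))          # ~12.8 Gbps required
```

The takeaway: matching HBM2’s bandwidth on a conventional GDDR5 bus would require either per-pin speeds GDDR5 doesn’t ship at or a much wider bus, either of which means redesigning the memory subsystem.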

Let’s start with prices, then talk architectural requirements.

Before Vega news buried Threadripper, we noted interest in conducting a simple A/B comparison between Noctua’s new TR4-sized coldplate (the full-coverage plate) and the company’s older LGA115X-sized coldplate. Clearly, the LGA115X cooler isn’t meant to be used with Threadripper – but it offered a unique opportunity, as the two units are largely the same aside from coldplate coverage. This grants an easy means of running an A/B comparison; although we can’t draw conclusions about all coldplates and coolers, we can at least see what Noctua’s efforts did for them on the Threadripper front.

Noctua’s NH-U14S cooler has the same heatpipe count and arrangement, the same (or remarkably similar) fin stack, and the same fan – though we controlled for that by using the same fan on each unit. The only difference is the coldplate, as far as we can tell, so we’re able to more easily measure performance deltas resulting primarily from the coldplate coverage change. Noctua’s LGA115X version, clearly not made for TR4, wouldn’t cover the entire die area of even one module under the IHS. Eyeballing it, the smaller plate covers at most about 30% of the die area and doesn’t make direct contact with the rest. This is less coverage than the Asetek CLCs, which at least make contact with the entire TR4 die area, if not the entire IHS. In response, Noctua modified its unit to equip a full-coverage plate, including the unique mounting hardware that TR4 needs.
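As a rough sanity check on that eyeball figure, the ratio lands in the right ballpark. Aside from the published Zeppelin die size, every dimension below is an assumption for illustration – we did not measure contact patches:

```python
# Back-of-envelope check on the ~30% coverage eyeball. The per-die overlap
# is an ASSUMED value for illustration, not a measurement.

ZEPPELIN_DIE_MM2 = 213.0   # published Zeppelin die size, mm^2
DIES_ON_PACKAGE = 4        # Threadripper packages carry four dies (two active)
total_die_mm2 = ZEPPELIN_DIE_MM2 * DIES_ON_PACKAGE   # 852 mm^2

# The dies sit toward the corners of the package, so only the slice of each
# die that falls under the centered plate counts. Hypothetically ~60mm^2 each:
contacted_mm2 = DIES_ON_PACKAGE * 60.0

print(f"coverage ~ {contacted_mm2 / total_die_mm2:.0%}")  # ~28%
```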

The LGA115X NH-U14S doesn’t natively mount to Threadripper motherboards. We modded the NH-U14S TR4 cooler’s mounting hardware with a couple of holes, aligned those with the LGA115X holes, then routed screws and nuts through. A rubber bumper was placed between the mounting hardware and the base of the cooler to help ensure even, adequate mounting pressure. We show a short clip of the modding process in the video above.

AMD’s pairing of Vega with the Samsung CF791, announced before the card even launched, was met with unrelenting criticism of the monitor’s placement in the bundles. Consumer reports on the monitor mention flickering with Ultimate Engine as far back as January, and those reports are now leveraged as a counter to the CF791’s inclusion in AMD’s bundle. These consumer reports and complaints largely hinged on Polaris or Fiji products, not Vega (which didn’t exist yet), so we thought it’d be worth a revisit with the bundled card. Besides, if it’s the bundling of the CF791 with Vega that caused the resurgence in flickering concerns, it seems we should test the CF791 with Vega. That’s the most relevant comparison.

And so we did: Using Vega 56, Vega: Frontier Edition, and an RX 580 Gaming X (Polaris refresh), we tested Samsung’s CF791 34” UltraWide display, running through permutations of FreeSync settings. Those permutations include “Standard Engine” (OSD), “Ultimate Engine” (OSD), and simple on/off toggles (drivers + OSD).
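For clarity, here is a hypothetical enumeration of that test matrix – the exact set of permutations we ran may differ slightly, but the combinatorics look like this:

```python
# Hypothetical test-matrix enumeration for the FreeSync flicker passes.
from itertools import product

cards = ["Vega 56", "Vega: Frontier Edition", "RX 580 Gaming X"]
osd_modes = ["FreeSync Off", "Standard Engine", "Ultimate Engine"]
driver_freesync = ["On", "Off"]

for card, osd, drv in product(cards, osd_modes, driver_freesync):
    print(f"{card:24s} | OSD: {osd:16s} | Driver FreeSync: {drv}")

# 3 cards x 3 OSD settings x 2 driver toggles = 18 test passes
```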

This is just a quick PSA.

We shot an off-the-cuff video about software misreporting Vega’s frequency, to the extent that a “1980MHz overclock” is possible under the misreported conditions. The entire point of the video was to bring awareness to a bug in either software or drivers – not to point blame at AMD – explicitly to ensure consumers understand that the numbers may be inaccurate. Some reviews even cited overclocks of “1980MHz,” but overlooked the fact that scaling ceases around the threshold where the reporting bugs out.
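For illustration, here’s a sketch of the sanity check we’re describing: if the reported clock keeps climbing while the benchmark result stops scaling, the reported clock is suspect. The numbers below are made up for demonstration, not our measurements:

```python
# Made-up data illustrating how to spot a misreported clock: performance
# should scale roughly with frequency, so a "rising" clock with flat fps
# suggests the reported number is bogus.

reported_mhz = [1590, 1660, 1750, 1850, 1980]
bench_fps    = [95.0, 99.0, 103.5, 103.9, 104.0]

for i in range(1, len(reported_mhz)):
    clk_gain = reported_mhz[i] / reported_mhz[i - 1] - 1
    fps_gain = bench_fps[i] / bench_fps[i - 1] - 1
    # Flag steps where the clock "rises" >3% but performance moves <1%.
    flag = "  <-- reporting suspect" if clk_gain > 0.03 and fps_gain < 0.01 else ""
    print(f"{reported_mhz[i]}MHz: +{clk_gain:.1%} clock, +{fps_gain:.1%} fps{flag}")
```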

We moderate comments on a ~24-48 hour cycle. There will be some delay after submitting a comment.
