Variations of “HBM2 is expensive” have floated around the web since well before Vega’s launch – since Fiji, really, with the first wave of HBM – but rarely with concrete numbers attached. AMD isn’t using HBM2 just because it’s “shiny” and sounds good in marketing; the Vega architecture is bandwidth-starved to the point that HBM is necessary. That’s an expensive necessity, unfortunately, and it chews away at margins, but AMD really had no choice in the matter. The company’s standalone MSRP structure for Vega 56 positions it competitively with the GTX 1070, carrying comparable performance, memory capacity, and target retail price – assuming things calm down for the entire GPU market at some point. Given HBM2’s higher cost and Vega 56’s bigger die, that leaves AMD little room for profit compared to GDDR5 solutions. That’s what we’re exploring today, alongside why AMD had to use HBM2.

There are reasons that AMD went with HBM2, of course – we’ll get to those later in the content. A lot of folks have asked why AMD can’t “just” use GDDR5 with Vega instead of HBM2, thinking it’s a matter of swapping modules, but complications make this impossible without a redesign of the memory controller. Vega is also bandwidth-starved to a point that complicates any GDDR5 approach, which we’ll walk through momentarily.
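To put rough numbers on the bandwidth argument, peak memory bandwidth follows from bus width and per-pin data rate. The figures below are the commonly cited reference specs for each card (treat the per-pin rates as approximate); the point is that HBM2’s very wide bus buys Vega 56 far more bandwidth than a GTX 1070-class GDDR5 configuration, despite HBM2’s lower per-pin speed:

```python
# Peak theoretical bandwidth: (bus width in bits / 8 bytes) * per-pin rate (Gbps)
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# Vega 56: 2048-bit HBM2 interface at roughly 1.6 Gbps per pin
vega56 = peak_bandwidth_gbs(2048, 1.6)   # ~410 GB/s
# GTX 1070: 256-bit GDDR5 interface at 8 Gbps per pin
gtx1070 = peak_bandwidth_gbs(256, 8.0)   # 256 GB/s

print(f"Vega 56:  {vega56:.1f} GB/s")
print(f"GTX 1070: {gtx1070:.1f} GB/s")
```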

Let’s start with prices, then talk architectural requirements.

Where video cards have had to deal with mining-driven cost, memory and SSD products have had to deal with NAND supply and cost. It looks like video cards may soon join the party: according to DigiTimes and sources familiar with SK Hynix and Samsung supply, quotes to manufacturers increased 30.8% in August – a jump from $6.50 in July to $8.50.
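The reported jump checks out arithmetically:

```python
# Month-over-month quote increase reported by DigiTimes
july, august = 6.50, 8.50
increase_pct = (august - july) / july * 100
print(f"{increase_pct:.1f}%")  # prints "30.8%"
```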

It sounds as if this stems from a supply-side deficit, based on initial reporting, and that’d indicate that products with a higher count of memory modules should see a bigger price hike. From what we’ve read, mobile devices (like gaming notebooks) may be more immediately impacted, with discrete cards facing indeterminate impact at this time.

We’ve been writing about the latest memory and Flash price increases for a bit now – and this does seem to happen every few years – but relief remains distant. Memory supply is limited for a few reasons right now, including the big suppliers (Samsung, Toshiba, SK Hynix, Micron) attempting to move toward new process technology. More immediate and critical, the phone industry’s launch cycle is on the horizon, and that means drastically increased memory sales to phone vendors. Supply is finite – it has to come out of inventory somewhere, and that tends to be components. As enthusiasts, that’s where we see the increased prices come into play.

Professional overclocker Toppc recently set another world record for DDR4 SDRAM frequency. Using a set of G.SKILL DDR4 sticks (an unidentified kit from the Trident Z RGB line) on an MSI X299 Gaming Pro Carbon AC motherboard, Toppc achieved a 5.5 GHz DDR4 frequency—approximately a 500 MHz improvement over his record from last year.

Toppc’s new record is verified by HWBot, accompanied by a screenshot of CPU-Z and Toppc’s extreme cooling setup, which involved LN2. Although an exact temperature was not provided, and details on the aforementioned G.SKILL kit are scant, we do know that the modules used Samsung 8GB ICs. Based on the limited information, we can reasonably infer that this is a new product from G.SKILL, as the company announced new memory kits at Computex.
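One note on the headline figure: DDR transfers data on both clock edges, so the marketed “frequency” is twice the real I/O clock that tools like CPU-Z display. The exact CPU-Z reading for this run wasn’t published alongside the record, but a “5.5 GHz” DDR4 result corresponds to a real clock in the neighborhood of 2750 MHz:

```python
# DDR = double data rate: two transfers per clock cycle, so the
# advertised rate is twice the real clock shown in CPU-Z.
real_clock_mhz = 2750            # approximate CPU-Z-style reading
effective_mts = real_clock_mhz * 2
print(effective_mts)             # prints 5500, marketed as "5.5 GHz DDR4"
```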

We recently covered Intel’s DC P4800X data center drive, with takes on the technology from two editors in video and article form. Those content pieces served as a technology overview for 3D Xpoint and Intel Optane (and should be referenced as primer material), but both indicated a distinct lack of any consumer-focused launch for the new half-memory, half-storage amalgam.

Today, we’re back to discuss Intel’s Optane Memory modules, which will ship April 24 in the form of M.2 sticks.

As Intel’s platform for 3D Xpoint (Micron also has one: QuantX), Optane will be deployed on standardized interfaces like PCI-e AICs, M.2, and eventually DIMM form factors. This means no special “Optane port,” so to speak, and should make adoption at least somewhat more likely. There’s still a challenging road ahead for Intel, of course, as Optane has big goals to somewhat unify memory and storage by creating a device with storage-like capacities and memory-like latencies. For more of a technology overview, check out Patrick Stone’s article on the DC P4800X.
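To visualize where Optane is trying to sit, the figures below are rough, order-of-magnitude device-level read latencies only – actual numbers vary considerably by product and workload, and these specific values are our ballpark assumptions, not Intel’s claims:

```python
# Approximate device-level read latencies (orders of magnitude only),
# illustrating the gap Optane aims to fill between DRAM and NAND.
approx_latency_us = {
    "DRAM":               0.1,   # ~100 ns
    "Optane (3D Xpoint)": 10,    # ~10 us
    "NAND SSD":           100,   # ~100 us
}
for tier, us in approx_latency_us.items():
    print(f"{tier:<20} ~{us} us")
```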

The finer distinctions between DDR and GDDR can easily be masked by the impressive on-paper specs of the newer GDDR5 standards, often inviting an obvious question with a not-so-obvious answer: Why can’t GDDR5 serve as system memory?

In a simple response, it’s analogous to why a GPU cannot suffice as a CPU. More incisively: CPUs are composed of complex cores using complex instruction sets, in addition to on-die cache and integrated graphics. This makes the CPU suitable for the multitude of latency-sensitive tasks often thrust upon it; that aptness, however, comes at a cost—a cost paid in silicon. Conversely, GPUs can apportion more chip space by using simpler cores with reduced instruction sets. As such, GPUs can feature hundreds, if not thousands, of cores designed to process huge amounts of data in parallel. Whereas CPUs are optimized to process tasks serially with as little latency as possible, GPUs have a parallel architecture optimized for raw throughput.
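The distinction can be sketched in miniature. This is a toy illustration, not real GPU code: the same per-element workload expressed once as a CPU-style serial loop, and once as a GPU-style kernel mapped over every element – real graphics hardware runs thousands of such lanes concurrently:

```python
# Toy illustration of execution models (not actual GPU code).
data = list(range(8))

# CPU-style: one complex core walks the elements in order.
serial = []
for x in data:
    serial.append(x * x + 1)

# GPU-style: express the work as one kernel applied to every element;
# hardware with thousands of simple cores runs these lanes in parallel.
def kernel(x):
    return x * x + 1

parallel = list(map(kernel, data))

# Identical results; only the execution model differs.
assert serial == parallel
```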

While the above doesn’t exactly explicate any differences between DDR and GDDR, the analogy is fitting. CPUs and GPUs both have access to temporary pools of memory, and just like both processors are highly specialized in how they handle data and workloads, so too is their associated memory.
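The same throughput-versus-latency split shows up directly in the bandwidth math. As a hedged sketch using typical configurations (a dual-channel DDR4-2400 system versus a 256-bit GDDR5 card at 8 Gbps per pin – representative examples, not figures from any specific product review):

```python
def gbs(transfer_rate_mts, bus_width_bits):
    """Peak bandwidth in GB/s from transfer rate (MT/s) and bus width (bits)."""
    return transfer_rate_mts * (bus_width_bits / 8) / 1000

# DDR4-2400, dual channel (2 x 64-bit): latency-optimized system memory.
ddr4 = gbs(2400, 128)     # 38.4 GB/s
# GDDR5 at 8000 MT/s on a 256-bit bus: throughput-optimized graphics memory.
gddr5 = gbs(8000, 256)    # 256.0 GB/s

print(f"DDR4 dual-channel: {ddr4} GB/s")
print(f"GDDR5 256-bit:     {gddr5} GB/s")
```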

At the tail-end of a one-day trip across the country, this episode of Ask GN tides us over until our weekend burst of further content production. We’re currently working on turning around a few case reviews, some game benchmarks, and implementing new thermal calibrators and high-end equipment.

In the meantime, this episode addresses questions involving “doubled” DRAM prices, delidding plans for the i7-7700K, contact between a heatsink and the back of a video card, and a few other topics. Check back posthaste as we’ll ramp into publication of our i5-7600K review within the next day.


As predicted, DRAM-dependent components continue to grow more expensive as demand outpaces supply. Nanya Technology president Pei-Ing Lee confirmed that the company’s average DRAM price will increase in the first and second quarters of 2017.

When we published our “Why Are RAM Prices So High” article in 2014, DRAM was transitioning to 25nm wafers—and now it’s transitioning again, this time to 20nm. Prices in the second half of 2017 are expected to stabilize, but depend largely on how quickly manufacturers gear up for the move to smaller dies—Nanya Technology will be simultaneously increasing 20nm production while cutting down on 30nm going into 2018.
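The supply incentive behind the shrink is straightforward. As a rough, first-order estimate only – assuming die area scaled with the square of the quoted node, which real processes don’t do exactly, and ignoring edge effects and yield – a 25nm-to-20nm transition would look like:

```python
# First-order estimate: dies per wafer scale inversely with die area,
# and die area (very roughly) with the square of the feature size.
old_nm, new_nm = 25, 20
scaling = (old_nm / new_nm) ** 2          # 1.5625x the dies per wafer
print(f"~{(scaling - 1) * 100:.0f}% more dies per wafer")
```

This is why manufacturers eat short-term cost to retool: once yields mature, each wafer goes further.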

We recently reported on G.SKILL’s announcement of the new Trident Z RGB series of memory. G.SKILL has now announced their high-end Trident Z DDR4 DIMMs designed for Kaby Lake CPUs and Z270 chipset motherboards.

For the launch of Kaby Lake and the Z270 chipset, G.SKILL will offer various dual-channel kits in 16GB, 32GB, and 64GB options. The modules themselves will only come in 8GB or 16GB densities – in keeping with G.SKILL’s claim that 16GB and 32GB memory kits are now mainstream. As seen below, kits clocked at 4000MHz and beyond will only be offered in configurations of 8GB modules.

As the pre-CES hardware news keeps pouring in, HyperX has announced new products today that further its peripheral and component lineup aimed at the gaming market. HyperX has introduced two new Alloy keyboards, a Pulsefire gaming mouse, a new Cloud Revolver S headset, and HyperX Predator DDR4 RGB LED memory.

We will be visiting HyperX and Kingston this week, and hope to have more in-depth, on-site coverage from the show floor. For now, we’ve got the basic specs and introductory information for each new peripheral and memory kit.

Starting with the audio gear, HyperX has announced the new headset that will be showcased at CES 2017 -- the Cloud Revolver S. The new gaming headset will feature plug-and-play Dolby 7.1 virtual surround sound via a connected USB dongle. HyperX claims no additional software or audiobox will be needed to get the Dolby 7.1 surround sound functional. This isn’t the first time we’ve seen an implementation of 7.1 surround in this fashion -- Turtle Beach and Plantronics have done this for ages -- but it’s the first major noise HyperX is making about Dolby Surround.

More unique to the unit, the HyperX Cloud Revolver S will have a condenser microphone with a bi-directional pattern; the condenser mic, although we’ll have to test it, could be promising for streamers and video casters who’d rather not use standalone input hardware. The HyperX Cloud Revolver S will be available February of 2017 for $150 USD.


