With the US Thanksgiving holiday right around the corner, sales and discounts have begun making it almost affordable to build a PC again after months of high prices. DRAM has seen some of the largest price increases of 2017, with little respite from month to month. We found some deals on DDR4 this week, so if you're in the market for a new kit or an upgrade, this is good news. Additionally, if you're looking for a CPU to pair with a new kit of RAM, consider checking out the recent AMD CPU sale article or the Best CPUs of 2017 article for more.
During a presentation at the UBS Global Technology Conference, Intel indicated that Optane DIMMs land on the company's roadmap somewhere in the second half of 2018. Thus far, we've seen the storage and caching side of Intel Optane and 3D XPoint; in 2018, it seems, we'll get to see 3D XPoint deployed as main memory.
The latest report out of TrendForce and DRAMeXchange indicates that already-high DRAM prices will continue to climb through 2018. Earlier shortages were blamed on fallout from this year's major Samsung and iPhone launches, but new information points toward a production slow-down at the big three memory manufacturers (Samsung, Micron, SK Hynix). The three companies attribute the slow-down to R&D efforts for future process technologies, but the coincidental timing also means that each can continue to enjoy exceptionally high margins for the foreseeable future.
Variations of "HBM2 is expensive" have floated around the web since well before Vega's launch – since Fiji, really, with the first wave of HBM – without many concrete numbers behind the claim. AMD isn't using HBM2 just because it's "shiny" and sounds good in marketing, but because the Vega architecture is bandwidth-starved to the point that HBM is necessary. That's an expensive necessity, unfortunately, and it chews away at margins, but AMD really had no choice in the matter. The company's standalone MSRP structure positions Vega 56 competitively against the GTX 1070, with comparable performance, memory capacity, and target retail price, assuming things calm down for the entire GPU market at some point. Given HBM2's higher cost and Vega 56's bigger die, that leaves AMD little room for profit compared to GDDR5 solutions. That's what we're exploring today, alongside why AMD had to use HBM2.
There are reasons that AMD went with HBM2, of course – we'll talk about those later in the content. A lot of folks have asked why AMD can't "just" use GDDR5 with Vega instead of HBM2, thinking the modules can simply be swapped, but complications make this impossible without a redesign of the memory controller. Vega is also bandwidth-starved enough that GDDR5 poses real problems, which we'll walk through momentarily.
Let’s start with prices, then talk architectural requirements.
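Before the prices, a quick illustration of "bandwidth-starved." The sketch below runs basic peak-bandwidth arithmetic using publicly listed specs for Vega 56 and the GTX 1070; the figures are on-paper specifications, not measured numbers:

```python
# Peak-bandwidth arithmetic: Vega 56 (HBM2) vs. GTX 1070 (GDDR5).
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

vega_56 = peak_bandwidth_gbs(2048, 1.6)   # 2048-bit HBM2 at 1.6 Gbps/pin -> 409.6 GB/s
gtx_1070 = peak_bandwidth_gbs(256, 8.0)   # 256-bit GDDR5 at 8 Gbps/pin   -> 256.0 GB/s

print(f"Vega 56 (HBM2):   {vega_56:.1f} GB/s")
print(f"GTX 1070 (GDDR5): {gtx_1070:.1f} GB/s")

# Per-pin rate GDDR5 would need on a 256-bit bus to match Vega 56:
print(f"Required GDDR5 rate: {vega_56 / (256 / 8):.1f} Gbps/pin")  # ~12.8 Gbps
```

Matching Vega 56's ~410GB/s with GDDR5 would demand either per-pin rates the standard doesn't reach or a much wider bus, and wider buses cost die area and board power – part of why HBM2 was the pragmatic choice here.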
Where video cards have had to deal with mining-driven cost increases, memory and SSD products have had to deal with NAND supply and cost. It looks like video cards may soon join the party: according to DigiTimes and sources familiar with SK Hynix and Samsung supply, GDDR5 quotes to manufacturers increased 30.8% in August – a jump from $6.50 per module in July to $8.50 in August.
Based on initial reporting, the increase appears to stem from a supply-side deficit, which would indicate that products with a higher count of memory modules should see a bigger price hike. From what we've read, mobile devices (like gaming notebooks) may be more immediately impacted, with discrete cards facing indeterminate impact at this time.
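To put the reported numbers in perspective, here's a minimal sketch of how that quote increase scales with module count; the per-module quotes come from the report above, while the module counts per board are hypothetical:

```python
# Reported GDDR5 quote jump, and how it scales with per-board module count.
july, august = 6.50, 8.50                               # USD per module, per the report
print(f"Quote increase: {(august - july) / july:.1%}")  # ~30.8%

for modules in (4, 8, 12):                              # hypothetical module counts
    print(f"{modules} modules -> +${modules * (august - july):.2f} per board")
```

Under that math, a board carrying eight modules absorbs an extra $16 in memory cost alone, which is why module-dense products would feel the hike first.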
We've been writing about the latest memory and flash price increases for a while now – this does seem to happen every few years – but relief remains distant. Memory supply is limited for a few reasons right now, including R&D efforts by the big suppliers (Samsung, Toshiba, SK Hynix, Micron) as some of them attempt to move toward new process technology. More immediate and critical, the phone industry's launch cycle is on the horizon, and that means drastically increased memory sales to phone vendors. Supply is finite – the memory has to come out of inventory somewhere, and that somewhere tends to be components. As enthusiasts, that's where we see the increased prices come into play.
Professional overclocker Toppc recently set another world record for DDR4 SDRAM frequency. Using a set of G.SKILL DDR4 sticks (an unidentified kit from the Trident Z RGB line) on an MSI X299 Gaming Pro Carbon AC motherboard, Toppc achieved a 5.5 GHz DDR4 frequency – approximately a 500 MHz improvement over his record from last year.
Toppc's new record is verified by HWBot, accompanied by a CPU-Z screenshot and a shot of Toppc's LN2-based extreme cooling setup. Although an exact temperature was not provided, and details on the aforementioned G.SKILL kit are scant, we do know that the modules used Samsung 8Gb ICs. Based on the limited information, we can reasonably infer that this is a new product from G.SKILL, as the company announced new memory kits at Computex.
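As a quick aside on the numbers: DDR transfers data on both clock edges, so the advertised "5.5 GHz" is an effective rate, and CPU-Z would show roughly half that as the real memory clock. A minimal sketch of the conversion:

```python
# DDR ("double data rate") transfers on both clock edges, so the
# advertised/effective frequency is twice the real memory clock.
effective_mts = 5500                   # Toppc's record: "5.5 GHz" DDR4, i.e., 5500 MT/s
real_clock_mhz = effective_mts / 2     # the figure CPU-Z reports
print(f"Real memory clock: ~{real_clock_mhz:.0f} MHz")  # ~2750 MHz
```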
We recently covered Intel's DC P4800X data center drive, with takes on the technology from two editors in video and article form. Those content pieces served as a technology overview for 3D XPoint and Intel Optane (and should be referenced as primer material), but both indicated a distinct lack of any consumer-focused launch for the new half-memory, half-storage amalgam.
Today, we’re back to discuss Intel’s Optane Memory modules, which will ship April 24 in the form of M.2 sticks.
As Intel's platform for 3D XPoint (Micron also has one: QuantX), Optane will be deployed on standardized interfaces like PCI-e add-in cards (AICs), M.2, and eventually DIMM form factors. This means no special "Optane port," so to speak, and should make adoption at least somewhat more likely. There's still a challenging road ahead for Intel, of course, as Optane has big goals: to unify memory and storage by creating a device with storage-like capacities and memory-like latencies. For more of a technology overview, check out Patrick Stone's article on the DC P4800X.
The finer distinctions between DDR and GDDR can easily be masked by the impressive on-paper specs of the newer GDDR5 standard, often inviting an obvious question with a not-so-obvious answer: why can't GDDR5 serve as system memory?
The simple answer is that it's analogous to why a GPU cannot suffice as a CPU. More incisively: CPUs are built around a few complex cores using complex instruction sets, alongside on-die cache and, often, integrated graphics. This makes the CPU suitable for the multitude of latency-sensitive tasks thrown at it; that aptitude, however, comes at a cost paid in silicon. Conversely, GPUs can apportion more chip space to compute by using simpler cores with reduced instruction sets. As such, GPUs can feature hundreds, if not thousands, of cores designed to process huge amounts of data in parallel. Whereas CPUs are optimized to process tasks serially, with as little latency as possible, GPUs have a parallel architecture optimized for raw throughput.
While the above doesn't directly explain the differences between DDR and GDDR, the analogy is fitting. CPUs and GPUs both have access to temporary pools of memory, and just as both processors are highly specialized in how they handle data and workloads, so too is their associated memory.
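To put rough numbers to the analogy, here's a minimal sketch comparing peak bandwidth for a common dual-channel DDR4 desktop configuration against a 256-bit GDDR5 card; both configurations are illustrative examples rather than any specific system:

```python
# Peak bandwidth: desktop DDR4 system memory vs. a GDDR5 graphics card.
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: int) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * transfers per second."""
    return bus_width_bits / 8 * transfer_rate_mts / 1000

ddr4 = peak_bandwidth_gbs(128, 2400)    # dual-channel (2x64-bit) DDR4-2400 -> 38.4 GB/s
gddr5 = peak_bandwidth_gbs(256, 8000)   # 256-bit GDDR5 at 8 GT/s           -> 256.0 GB/s

print(f"DDR4-2400, dual channel:  {ddr4:.1f} GB/s")
print(f"GDDR5, 256-bit at 8 GT/s: {gddr5:.1f} GB/s")
```

Raw bandwidth clearly favors GDDR5, but it gets there with wide buses and looser timings – higher latency that a CPU's serial, latency-sensitive workloads can't tolerate, and that a GPU's throughput-oriented workloads readily hide.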
At the tail-end of a one-day trip across the country, this episode of Ask GN tides us over until our weekend burst of further content production. We’re currently working on turning around a few case reviews, some game benchmarks, and implementing new thermal calibrators and high-end equipment.
In the meantime, this episode addresses questions involving "doubled" DRAM prices, delidding plans for the i7-7700K, contact between a heatsink and the back of a video card, and a few other topics. Check back soon, as we'll ramp into publication of our i5-7600K review within the next day.
Video below, timestamps below that: