HW News - Definitely-Real Intel Arc GPU, Right to Repair Laws, & Apple M1 Vulnerabilities
Hardware news this week is flooded with silicon updates, but also features good news on the Right to Repair front.
We'll be recapping AMD's delidded Zen 4 CPU (Ryzen 7000), Intel's singular Arc GPU that "shipped," NVIDIA's first HBM3 GPU, and Apple's M1 vulnerability. Plenty more below!
Ask GN 73: NVidia 'Competing' with Board Partners? GDDR5 vs. HBM2?
We're back with Ask GN! It's been a long week of testing: Patrick has been working on FFXV, an Elgato 4K60 review, and other pieces; I've been working on managing the upcoming travel schedule, primarily for Computex and other tradeshows, and also have a whole slew of in-depth content coming up. One of our biggest endeavors for the week will be our upcoming livestream, where we intend to battle the LinusTechTips team for a top-10 spot in the 3DMark benchmark rankings. It's a bit of a friendly rivalry, and we think you all will enjoy tuning in. We'll talk about that more soon.
We also have a news video going up tomorrow, as usual. The video will include several major news items for the past week, including some discussion of the nVidia GPP story that's been going around. Stay tuned for all of that.
In the meantime, Ask GN is below, and the timestamps are below that. Our Patreon bonus episode is here.
Timestamps
01:37 - David Watson: “Hey Steve , do you think we will end up seeing Nvidia competing with the aftermarket cards more directly by releasing their own non founders edition cards with new custom cooling solutions and heatsinks with double or triple fan designs ? i think its a fascinating prospect and one i feel Nvidia has had a lot of thought bent on and is surely bound to use much sooner rather than later because if they can get more marketshare then they surely will and i honestly feel its coming from Nvidia and could be massive for them , because i know one thing , if they released a badass new aftermarket card design that not only performed really well but looked really cool and actually ran cool then i for one would certainly be very tempted by it as many surely would Steve , do you feel this is on the horizon ? thanks man”
06:47 - Stank Buddha: “Quick question, regarding the 200fps limit in (newer?) games. Is this applied per monitor if you were using multi monitor or is it the actual game that is locked to to pushing 200fps total??? Like if the game is locked itself then a theoretical 3 60hz monitors would be maxing it out(60*3=180). Or can you do a 3 monitor 240hz and max em out each at 200fps. just wondering.”
08:57 - Michael Morgan: “Can you demonstrate the end user benefit of HBM memory over GDDR5 or GDDR5X on GPU's please?”
13:12 - vishal bobde: “#askgn-questions Why do CPU don't have different manufacturers like GPU. If there were more manufacturers we might get more enthusiasts features from factory like LM tim and better IHS.”
17:23 - Satoshi_Nakamoto: “@GN Staff Hey guys could you reach out to Thermaltake and ask them if they have any idea for the arrival of the Level 20 Case?”
18:14 – defenestrationize: “Steve, a massive limit limit for APUs is their need to use system memory. Do you think, APUs will remain on the low end or end up high end (is there not actually a limit to DDR for APUs) , More memory channels on APUs (possibly separate for CPU/GPU so 2 sticks DDR 2 sticks GDDR) or on chip memory (hbm) will appear in the near future? Given how board partners operate and push for chip consolidation , do you think we might see a MB, RAM, GPU, CPU as a single pcb ? Feel free to cut this question as needed to perhaps a simpler version.”
22:36 - Dayne_ofStarfall: “@Steve Burke Hello Steve, I’m a bit confused by case fans lately, specifically RPM and in relation to voltage. If I understand correctly different fans have different MAX and MIN RPM at a given voltage. But what happens when you connect two fans with different MIN/MAX RPM to a single header on the motherboard using a Y-splitter? Do the fans spin at different RPMs? And how does this work when they’re connected to a SATA-powered PWM Hub (like the one that comes with most Phanteks cases)? Also what is the amount of fans that can be safely connected to one header? I’ve read on forums that the cable or port can catch fire if they draw too much power, is this true? Thank you.”
24:25 - Ash_Borer: “#askgn-questions how do delidded (with LM) temperatures compare to soldered CPU temperatures? Do you ever plan to delid a ryzen and test the results? Im under the assumption that delidding provides better temps, so i dont mind that intel doesnt solder anymore - as an enthusiast i want to delid anyway and if its not soldered it is easier to delid.”
25:29 - Armand B.: “Modmat out of stock ? DAMN MINERS !”
25:53 - Nory The Explorer.exe: “@GN Staff What is an important fact, viewers should know about GamersNexus? And the opposite, what is a big misconception viewers have expressed about GamersNexus?”
Host: Steve Burke
Video: Andrew Coleman
AMD’s High-Bandwidth Cache Controller protocol is one of the keystones of the Vega architecture, marked by RTG lead Raja Koduri as a personal favorite feature of Vega, and highlighted in previous marketing materials as offering a potential 50% uplift in average FPS in VRAM-constrained scenarios. With a few driver revisions now behind us, we’re revisiting our Vega 56 hybrid card to benchmark HBCC in A/B fashion, testing in memory-constrained scenarios to determine efficacy in real gaming workloads.
Variations of “HBM2 is expensive” have floated around the web since well before Vega’s launch – since Fiji, really, with the first wave of HBM – without many concrete numbers behind the statement. AMD isn’t using HBM2 just because it’s “shiny” and sounds good in marketing, but because the Vega architecture is bandwidth-starved to the point that HBM is necessary. That’s an expensive necessity, unfortunately, and it chews away at margins, but AMD really had no choice in the matter. The company’s standalone MSRP structure for Vega 56 positions it competitively with the GTX 1070, carrying comparable performance, memory capacity, and target retail price, assuming things calm down for the GPU market at some point. Given HBM2’s higher cost and Vega 56’s bigger die, that leaves little room for AMD to profit compared to GDDR5 solutions. That’s what we’re exploring today, alongside why AMD had to use HBM2.
There are reasons that AMD went with HBM2, of course – we’ll talk about those later in the content. A lot of folks have asked why AMD can’t “just” use GDDR5 with Vega instead of HBM2, thinking it’s a simple module swap, but there are complications that make this impossible without a redesign of the memory controller. Vega is also bandwidth-starved to a degree that complicates a GDDR5 design, which we’ll walk through momentarily.
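As a preview of that bandwidth argument, here's a rough back-of-envelope comparison. This is a sketch using publicly listed specs (2048-bit HBM2 at 1.6Gbps for Vega 56, 256-bit GDDR5 at 8Gbps for the GTX 1070), not AMD-provided math:

```python
# Rough memory bandwidth comparison: Vega 56 (HBM2) vs. GTX 1070 (GDDR5).
# Figures are publicly listed specs; treat this as an illustration, not AMD's math.

def bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

vega_56 = bandwidth_gbps(2048, 1.6)   # 2 HBM2 stacks, 1024-bit each, 1.6Gbps -> ~410 GB/s
gtx_1070 = bandwidth_gbps(256, 8.0)   # 256-bit GDDR5 at 8Gbps -> 256 GB/s

print(f"Vega 56 (HBM2):   {vega_56:.0f} GB/s")
print(f"GTX 1070 (GDDR5): {gtx_1070:.0f} GB/s")
```

Matching that ~410GB/s with 8Gbps GDDR5 would take a bus wider than 384-bit, and wider buses carry their own die-area and power costs – part of the bandwidth argument we walk through below.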
Let’s start with prices, then talk architectural requirements.
This week's hardware news recap covers rumors of Corsair's partial acquisition, HBM2 production ramping, Threadripper preparation, and a few other miscellaneous topics. Core industry topics largely revolve around cooler prep for Threadripper this week, though HBM2's increasing production output (via Samsung) is also a critical item of note. Both nVidia and AMD now deploy HBM2 in their products, and other device makers are beginning to eye use cases for HBM2 more heavily.
The video is embedded below. As usual, the show notes rest below that.
Ask GN 28: HBM on CPUs, GPU Boost 3.0 Curiosities, & More Test Methods
This episode of Ask GN (#28) addresses the concept of HBM in non-GPU applications, primarily concerning its imminent deployment on CPUs. We also explore GPU Boost 3.0 and the variance it introduced into testing when working on the new GTX 1080 cards. The question of Boost's functionality arose in response to our EVGA GTX 1080 FTW Hybrid vs. MSI Sea Hawk 1080 coverage, asking why one 1080 dropped clocks differently from another. We talk about that in this episode.
Discussion begins with proof that the Cullinan finally exists and has been sent to us – because it was impossible to find after Computex – and carries into Knights Landing (Intel) coverage of MCDRAM, or “CPU HBM.” Testing methods are slotted in between, with an explanation of why some hardware choices are made when building a test environment.
HW News: Samsung GDDR6, HBM3 R&D, PCI-e Gen4 Power, & Zen CCX Arch
In addition to the hardware news we published yesterday -- a look at Intel's Kaby Lake (7600K, 7700K, etc.), the X2 Empire unique enclosure, and Logitech's G Pro mouse -- we are today visiting topics of Samsung's GDDR6, SK Hynix's HBM3 R&D, the PCI-e Gen4 power budget, and Zen's CCX architecture.
The biggest news here is Samsung's GDDR6, due for 2018, but it's all important stuff. PCI-e Gen4 is expected to be fully ratified by EOY 2016, HBM3 is in R&D, and Zen is imminent and architecturally finalized. We'll talk about it all more specifically in our reviews.
Update: Tom's Hardware misreported on PCI-e power draw. The PCI-e Gen4 slot will still be limited to 75W.
Anyway, here's the news recap:
Transcript
Memory manufacturer Samsung is developing GDDR6 as a successor to Micron's brand new GDDR5X, presently only found in the GTX 1080 and Titan XP cards. GDDR6 may feel like a more meaningful successor to GDDR5, though, which has been in production use since 2008.
In its present, fully matured form, GDDR5 operates at a maximum of 8Gbps, including on the RX 480 and GTX 10-series GPUs. Micron has demonstrated GDDR5X as capable of approaching 12-13Gbps given time to mature the technology, but is presently shipping the memory at 10Gbps for nVidia's devices.
Samsung indicates an operating range of approximately 14-16Gbps for GDDR6 at 1.35V. The company also points to lower voltages than even GDDR5X by way of LP4X, indicating a power reduction upwards of 20% with its post-LP4 memory technology.
Samsung is looking toward 2018 for GDDR6 production, giving GDDR5X some breathing room yet. As for HBM, SK Hynix is already looking toward HBM3, with HBM2 presently only available in GP100 accelerator cards. HBM3 will theoretically run a 4096-bit interface with upwards of 2TB/s of throughput, at 512GB/s per stack. We'll talk more about this tech in the semi-distant future.
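For context on how those throughput figures fall out of the per-pin speeds, here's the arithmetic. This is a sketch: the per-stack HBM3 figure is the one quoted above, while the 256-bit bus for the GDDR6 example is our own assumption for illustration:

```python
# Peak bandwidth arithmetic for the figures quoted above.
# GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.

def bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# HBM3 as described: 512GB/s per 1024-bit stack, 4 stacks for a 4096-bit interface.
hbm3_per_stack = 512                   # GB/s, quoted per-stack figure
hbm3_total = hbm3_per_stack * 4
print(f"HBM3, 4 stacks: {hbm3_total} GB/s (~2TB/s)")

# GDDR6 at 16Gbps on a hypothetical 256-bit bus (bus width is our assumption).
gddr6 = bandwidth_gbps(256, 16)
print(f"GDDR6, 256-bit @ 16Gbps: {gddr6:.0f} GB/s")
```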
PCIe
Tom's Hardware this week reported on the new PCI Express 4.0 specification, primarily detailing a push toward a minimum spec of 300W of power delivery through the slot, potentially upwards of 500W. Without even talking about the bandwidth promises – moving to nearly 2GB/s for a single lane – the increased power budget would mean that the industry could begin shifting away from PCI-e power cables. The power would obviously still come from the power supply, but would be delivered through pins in the PCI-e slot rather than through an extra cable.
This same setup is what allows cards like the 750 Ti to run solely off the PCI-e slot, because the existing spec allows for 75W to be delivered through the PCI-e bus. PCI-e 4.0 should be ratified by the end of 2016 by the PCI-SIG, but we don't yet know the roll-out plans for consumer platforms.
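For reference, the existing 75W slot budget breaks down across the slot's 12V and 3.3V rails. Here's a quick sketch of that math, using the per-rail current limits as they're commonly cited for an x16 graphics slot (treat the exact figures as approximate):

```python
# Approximate power budget of a PCI-e x16 graphics slot under the existing spec.
# Per-rail current limits are as commonly cited; treat them as approximate.

rails = {
    "12V":  (12.0, 5.5),   # (volts, max amps)
    "3.3V": (3.3, 3.0),
}

total_w = 0.0
for name, (volts, amps) in rails.items():
    watts = volts * amps
    total_w += watts
    print(f"{name}: {watts:.1f} W")

print(f"Slot total: ~{total_w:.0f} W")  # ~76 W, commonly rounded to the 75 W figure
```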
Zen
AMD also detailed more of its Zen CPU architecture, something we talked about last week when the company camped out near IDF for an unveil event. The Summit Ridge chips have primarily been on display thus far, showing an 8C/16T demo with AMD's implementation of SMT, but we haven't heard much about other processors.
AMD is ditching modules in favor of CPU Complexes, or CCXes, each of which will host four CPU cores. Each CCX runs 512KB of L2 cache per core, as seen in this block diagram, with L3 sliced into four pieces for 8MB of low-order address interleaved cache in total. AMD says that each core can communicate with all cache on the CCX, and promises the same latency for all accesses.
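Working out the cache totals from those per-core numbers is straightforward. A quick sketch follows; the 8C/16T line assumes two CCXes per chip, which is our read of AMD's Summit Ridge demo rather than a confirmed configuration:

```python
# Cache arithmetic for a Zen CCX as AMD describes it.
cores_per_ccx = 4
l2_per_core_kb = 512
l3_per_ccx_mb = 8                       # sliced into four 2MB pieces

l2_per_ccx_mb = cores_per_ccx * l2_per_core_kb / 1024
print(f"L2 per CCX: {l2_per_ccx_mb:.0f} MB")          # 2 MB
print(f"L3 slice size: {l3_per_ccx_mb / 4:.0f} MB")   # 2 MB per slice

# An 8C/16T Summit Ridge part would pair two CCXes (our assumption from AMD's demo).
ccx_count = 2
print(f"8-core totals: {l2_per_ccx_mb * ccx_count:.0f} MB L2, {l3_per_ccx_mb * ccx_count} MB L3")
```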
It looks like the lowest SKU chips will still be quad-cores at a minimum.
Host: Steve "Lelldorianx" Burke
Video: Andrew "ColossalCake" Coleman
New video cards are coming out at a furious pace, bringing with them new manufacturing processes and better price-to-performance ratios.
One of the newest memory technologies on the market is HBM (High Bandwidth Memory), introduced on the R9 Fury X. HBM stacks four memory dies atop an interposer (packaged on the substrate) to achieve higher-density modules, while also bringing down power consumption and reducing the physical distance of transactions. HBM is not located on the GPU die itself, but on the GPU package – much closer than PCB-bound GDDR5/5X memory modules.
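As a concrete example of what stacking buys in raw throughput, here's the first-generation HBM math on the R9 Fury X, set against a wide-bus GDDR5 card of the same era. This is a sketch using publicly listed specs, for illustration only:

```python
# First-gen HBM bandwidth on the R9 Fury X versus a 512-bit GDDR5 card (R9 390X).
def bandwidth_gbps(bus_width_bits, data_rate_gbps):
    # GB/s = (bus width in bits / 8) * per-pin data rate in Gbps
    return bus_width_bits / 8 * data_rate_gbps

fury_x = bandwidth_gbps(4096, 1.0)   # 4 HBM stacks x 1024-bit each, 1Gbps -> 512 GB/s
r9_390x = bandwidth_gbps(512, 6.0)   # 512-bit GDDR5 at 6Gbps -> 384 GB/s

print(f"R9 Fury X (HBM):  {fury_x:.0f} GB/s")
print(f"R9 390X (GDDR5):  {r9_390x:.0f} GB/s")
```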
AMD's GPU architecture roadmap from its Capsaicin event revealed the new “Vega” and “Navi” architectures, which have effectively moved the company to a stellar naming system – a reasonable move away from names associated with heat, at least: Volcanic Islands, Hawaii, and Capsaicin included.