HW News - AMD AGESA 1003ABA Bugs, Arcturus GPU, & Intel Rumors for October
By Eric Hamilton

Hardware news is a bit sluggish this week, with Amazon’s Prime Day -- and the ensuing unrepentant consumerism -- seeming to occupy more than its share of headlines. Still, we’ve curated some of the more interesting stories, including the latest report from Digitimes and an elucidating interview in which Intel CEO Robert Swan cites being “too aggressive” as a key factor in Intel’s CPU shortage. Other topics include information on AMD’s Arcturus GPUs and what form they could take, a Toshiba Memory rebrand, and NZXT adding to its pre-built machines catalog.
In recent GN news, we’ve delved ever further into Ryzen 3000 and the Zen 2 architecture, including a deep dive into AMD’s Precision Boost Overdrive algorithm, looking at how Ryzen 3000 frequencies scale with temperature, and our R9 3900X overclocking stream.
Coolers & Cases Really Matter for Ryzen 3000 CPUs | Thermal Scaling & Frequency
By Steve Burke

In some ways, AMD has become NVIDIA, and it’s not necessarily a bad thing. The way new Ryzen CPUs scale is behaviorally similar to the way GPU Boost 4.0 scales on GPUs, where simply lowering the silicon operating temperature will directly affect performance and clock speeds. Under completely stock settings, a CPU running colder will actually boost higher now; alternatively, if you’re a glass-half-empty type, you could view it such that a CPU running hotter will thermally throttle. Either way, frequency is contingent upon thermals, and that’s important for users who want to maximize performance or pick the right case and CPU cooling combination. If you’re new to the space, the way it has traditionally worked is that CPUs perform at one spec, with one set of frequencies, until hitting TjMax, the maximum junction temperature. Ryzen 3000 is significantly different from past CPUs in this regard. Some excursions from the traditional behavior do exist, but they are distinct and well-known. One example is Turbo Boost duration, which is explicitly set by the motherboard to limit how long an Intel CPU can sustain its all-core Turbo. That is a different matter entirely from frequency scaling with temperature.
An Intel CPU is probably the easiest example to use for pre-Ryzen 3000 behavior. With Intel, there are only two real parameters to consider: The Turbo boost duration limit, which we have a separate content piece on (linked above), and the power limit. If operating within spec, outside of the turbo duration limit of roughly 90-120 seconds, the CPU will stick to one all-core clock speed for the entirety of its workload. You could be running at 90 degrees or 40 degrees, it’ll be the same frequency. Once you hit TjMax, let’s say it’s 95 or 100 degrees Celsius, there’s either a multiplier throttle or a thermal shutdown, the choice between which will hinge upon how the motherboard is configured to respond to TjMax.
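The contrast described above can be sketched as a toy model. All the specific numbers here (clocks, TjMax, the scaling slope) are illustrative assumptions, not measured values from either vendor:

```python
# Hypothetical sketch contrasting the two boost behaviors described above.
# All clocks, temperatures, and slopes are invented for illustration.

def intel_style_clock(temp_c, all_core_mhz=4700, throttled_mhz=3600, tjmax_c=100):
    """Pre-Ryzen-3000 behavior: flat frequency until TjMax, then a multiplier throttle."""
    return all_core_mhz if temp_c < tjmax_c else throttled_mhz

def ryzen3000_style_clock(temp_c, max_boost_mhz=4600, mhz_per_deg=5, ref_temp_c=40):
    """Ryzen 3000 behavior: boost clock degrades continuously as temperature rises."""
    if temp_c <= ref_temp_c:
        return max_boost_mhz
    return max_boost_mhz - mhz_per_deg * (temp_c - ref_temp_c)

# Intel-style: identical frequency at 40 C and 90 C; throttles only at TjMax.
print(intel_style_clock(40), intel_style_clock(90), intel_style_clock(100))
# Ryzen-style: every degree above the reference temperature costs some frequency.
print(ryzen3000_style_clock(40), ryzen3000_style_clock(70))
```

The point of the sketch is the shape of the curves, not the values: one is a step function at TjMax, the other slopes downward with temperature, which is why cooler and case choice now matter for stock Ryzen 3000 performance.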
Explaining AMD Ryzen Precision Boost Overdrive (PBO), AutoOC, & Benchmarks
By Patrick Lathan Published July 15, 2019 at 10:45 pm

With the launch of the Ryzen 3000 series processors, we’ve noticed a distinct confusion among readers and viewers when it comes to the phrases “Precision Boost 2,” “XFR,” “Precision Boost Overdrive” (which is different from Precision Boost), and “AutoOC.” There is also a lot of confusion about what’s considered stock, what PBO even does or whether it works at all, and how thermals impact the frequency of Ryzen CPUs. Today, we’re demystifying these names and demonstrating the basic behaviors of each solution as tested on two motherboards.
Precision Boost Overdrive is a technology new to Ryzen desktop processors, having first been introduced in Threadripper chips; technically, Ryzen 3000 uses Precision Boost 2. PBO is explicitly different from Precision Boost and Precision Boost 2, which is where a lot of people get confused. “Precision Boost” is not an abbreviation for “Precision Boost Overdrive”; it’s actually a different thing. Precision Boost is akin to XFR, AMD’s Extended Frequency Range boosting table for boosting a limited number of cores when possible, which was introduced with the first Ryzen series CPUs. In deciding how many cores can boost and when, Precision Boost weighs three limits, PPT, TDC, and EDC, alongside temperature and the chip’s max boost clock. Precision Boost is enabled on a stock CPU; Precision Boost Overdrive is not. What PBO never does is boost the frequency beyond the advertised CPU clocks, which is a major point that people have confused. We’ll quote directly from AMD’s review documentation so that there is no room for confusion:
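Documentation quote aside, the inputs listed above can be sketched as a toy arbitration model. The limit names (PPT, TDC, EDC) are AMD’s; the specific limit values, the min() arbitration, and the linear scaling are simplifying assumptions for illustration, not AMD’s actual algorithm:

```python
# Illustrative sketch of the inputs Precision Boost weighs. Limit values and
# the scaling rule are invented assumptions, not AMD's real boost algorithm.

def precision_boost_mhz(ppt_w, tdc_a, edc_a, temp_c,
                        limits={"ppt": 142, "tdc": 95, "edc": 140, "temp": 95},
                        max_boost=4600, base=3800):
    """Return an allowed clock: full boost only when every limit has headroom."""
    headroom = min(
        limits["ppt"] - ppt_w,    # Package Power Tracking (watts)
        limits["tdc"] - tdc_a,    # Thermal Design Current (amps, sustained)
        limits["edc"] - edc_a,    # Electrical Design Current (amps, peak)
        limits["temp"] - temp_c,  # junction temperature (degrees C)
    )
    if headroom <= 0:
        return base               # a limit is hit: hold base clock
    # Scale boost with the tightest remaining headroom, capped at max boost.
    return min(max_boost, base + headroom * 20)

print(precision_boost_mhz(80, 50, 90, 50))    # plenty of headroom everywhere
print(precision_boost_mhz(140, 94, 139, 94))  # nearly at every limit
```

The takeaway matches the text: the boost decision is gated by whichever of power, current, or temperature has the least headroom, and the result never exceeds the advertised max boost clock.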
HW News - Fake Intel "Leaks" for Comet Lake, Ryzen 2000 Price Cuts
By Eric Hamilton

GN just notched one of its busiest weeks ever, thanks to relentless product launches from AMD and Nvidia. We’ve recently reviewed Nvidia's RTX 2070 Super and RTX 2060 Super, in addition to AMD’s Ryzen 5 3600, Ryzen 9 3900X, and Radeon RX 5700 XT. We also have multiple videos further analyzing Ryzen 3000 boost clocks and the RX 5700 XT cooling solution.
If you’ve enjoyed this coverage, please consider supporting our focused efforts through a GN store purchase.
For mostly non-AMD related news this week, Intel has announced multiple new technologies focused on chip packaging, in addition to hiring a new CCO in Claire Dixon. MSI is updating its AM4 400-series motherboards to include a larger BIOS chip, there’s a new PCIe 4.0 SSD coming, with a presumably cheaper 500GB capacity, and we’re expecting custom Navi cards in August. The news stories follow the video embed, per the usual.
AMD Ryzen 5 3600 CPU Review & Benchmarks: Strong Recommendation from GN
By Steve Burke & Patrick Lathan

Alongside the 3900X and 3700X that we’re also reviewing, AMD launched its R5 3600 today to the public. We got a production sample of one of the R5 3600 CPUs through a third party and, after seeing its performance, we wanted to focus first on this one for our initial Ryzen 3000 review. We’ve been recommending AMD’s R5 CPUs since the first generation, as Intel’s i5 CPUs have struggled lately with frametime consistency and are often close enough to AMD that the versatility, frametime consistency, and close-enough gaming performance have warranted R5 purchases. Today, we’re revisiting the series with the R5 3600 6-core, 12-thread CPU to look at gaming, production workloads with Premiere, Blender, V-Ray, and more, power consumption, and overclocking.
This week has been the busiest in our careers at GN. The editorial/testing team was two people, working in shifts literally around the clock for 24/7 bench coverage, and the video production team was three people (all credited at article's end, as always). We invested all we could into getting multiple reviews ready for launch day and will stagger publication throughout the day due to YouTube's distribution of content. We don't focus on ad revenue on the site these days and instead focus on our GN Store products and Patreon for revenue, plus ad revenue on YouTube. If you would like to support these colossal efforts, please consider buying one of our new GN Toolkits (custom-made for video card disassembly and system building, using high-quality CRV metals and our own molds) or one of our system building modmats. We also sell t-shirts, mousepads, video card anatomy posters, and more.
Notable changes to our testing methods, other than overhauling literally everything (workstation overhaul, gaming overhaul) a few months ago, would include the following:
- Windows has all updates applied on all platforms, up to version 1903
- All BIOS updates and mitigations have been applied
- For new AMD Ryzen CPU testing, we are using a Gigabyte X570 Master motherboard with BIOS version FC5 installed, per manufacturer recommendations
- We have changed to GSkill Trident Z RGB memory at 4x8GB and 3200MHz. The 32GB capacity is needed for our Photoshop and Premiere benchmarks, which are memory-intensive and would throttle without the capacity.
The memory kit is an important change for us. Starting with these new reviews, we are now manually controlling every timing surfaced. That includes secondary and tertiary timings. Previously, we worked to control critical timings, like primary and RFC, but we are now controlling all timings manually. This has tightened our margin of error considerably and has reduced concern of “unfair” timings being auto-applied by the various motherboards we have to use for CPU reviews. “Unfair” in this instance typically means “uncharacteristically bad” as a result of poor tuning by the motherboard maker. By controlling this ourselves, we eliminate this variable. Some of our error margins have been reduced to 0.1FPS AVG as a result, which is fantastic.
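To illustrate why manually controlled timings tighten the margin of error, here is a minimal sketch of how an error margin on an FPS average might be computed across repeated benchmark passes. The pass values below are invented for illustration and are not GN’s data or exact methodology:

```python
# Minimal sketch: mean FPS plus a +/- margin across repeated bench passes.
# The sample values are made up; they only illustrate the loose-vs-controlled
# timing scenario described in the text.
import statistics

def fps_average_with_margin(passes):
    """Return (mean FPS, half the sample range) rounded to one decimal."""
    mean = statistics.mean(passes)
    margin = (max(passes) - min(passes)) / 2
    return round(mean, 1), round(margin, 1)

loose_timings = [143.8, 145.2, 144.1, 146.0]  # board auto-applies timings
controlled    = [144.9, 145.1, 145.0, 145.1]  # every timing set manually

print(fps_average_with_margin(loose_timings))
print(fps_average_with_margin(controlled))
```

With run-to-run variance from auto-applied timings removed, the spread between passes shrinks, which is how a margin on the order of 0.1 FPS AVG becomes achievable.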
HW News - AMD Navi GPU Price Cut Pre-Launch, Ray Tracing & Peltier Cooling Patents
By Eric Hamilton

Although we're at the tail end of the hardest testing cycle we've ever had, with many nights spent sleeping in the office (if sleeping at all), we're not even close to done. There'll be follow-up and additional product testing throughout the next week, and that's all because of the joint launches of NVIDIA Super and AMD Navi GPUs, mixed in most importantly with AMD Ryzen 3000-series CPUs. New architectures take the longest to test, predictably, as everything we know has to be re-benchmarked to establish new behaviors in the processors. Anyway, with all of that, there's still news to cover. Show notes are after the embed.
GPU Silicon Quality & OC Lottery Test: Differences of Each Video Card's Frequency
By Steve Burke Published July 04, 2019 at 11:51 pm

Silicon quality and the so-called silicon lottery are often discussed in the industry, but it’s rare for anyone to have enough sample size to actually demonstrate what those phrases mean in practice. We asked Gigabyte to loan us as many of a single model of video card as they could so that we could demonstrate the frequency variance card-to-card at stock, the variations in overclocking headroom, and actual gaming performance differences from one card to the next. This helps to more definitively strike at the question of how much silicon quality can impact a GPU’s performance, particularly when stock, and also looks at memory overclocking and range of FPS in gaming benchmarks with a highly controlled bench and a ton of test passes per device. Finally, we can see the theory of how much one reviewer’s GPU might vary from another’s when running initial review testing.
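The card-to-card variance described above lends itself to a simple summary. The sample clocks below are hypothetical, not measurements from the loaned Gigabyte cards:

```python
# Hypothetical sketch of summarizing the silicon lottery across many samples
# of one video card model. The clock values are invented for illustration.
import statistics

def lottery_summary(stock_clocks_mhz):
    """Best, worst, spread, and median of observed stock boost clocks."""
    return {
        "best": max(stock_clocks_mhz),
        "worst": min(stock_clocks_mhz),
        "spread": max(stock_clocks_mhz) - min(stock_clocks_mhz),
        "median": statistics.median(stock_clocks_mhz),
    }

# e.g., average observed boost clock per card, one model, same test bench
cards = [1905, 1890, 1920, 1875, 1910, 1895]
print(lottery_summary(cards))
```

The spread figure is the interesting one: it bounds how far one reviewer’s stock results could plausibly land from another’s purely due to the silicon lottery.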
NVIDIA RTX 2060 Super & 2070 Super Review: Killing Radeon VII
By Steve Burke

AMD’s biggest ally for the RTX launch was NVIDIA, as the company crippled itself with unimplemented features and generational price creep out the gate. With RTX Super, NVIDIA demonstrates that it has gotten the memo from the community and has driven down its pricing while increasing performance. Parts of the current RTX line will be phased out, with the newer, better parts coming into play and pre-empting AMD’s Navi launch. The 2070 Super is priced at $500, $50 above the 5700 XT, and the on-paper specs should put it about equivalent to an RTX 2080 in performance; it’s even using the TU104 RTX 2080 die, further reinforcing this likely position. The 2060 Super sees a better bin with more unlocked SMs on the GPU, improving compute capabilities and framebuffer capacity beyond the initial 2060. Both of these things spell an embarrassing scenario about to unfold for AMD’s Radeon VII card, but we’ll have to wait another week to see how it plays out for the yet-unreleased Navi RX cards. There may be hope yet for AMD’s new lineup, but the existing lineup will face existential challenges from the freshly priced and face-lifted RTX Super cards. Today, we’re reviewing the new RTX Super cards with a fully revamped GPU testing methodology.
The first question is this: Why the name “Super?” Well, we asked, and nobody knows the answer. Some say that Jensen burned an effigy of a leather jacket and the word “SUPER” appeared in the toxic fumes above, others say that it’s a self-contained acronym. All we know is that it’s called “Super.”
Rather than pushing out Ti updates that co-exist with the original SKUs, NVIDIA is replacing the 2080 and 2070 SKUs with the new Super SKUs, while keeping the 2060 at $350. This is a “damned if you do, damned if you don’t” scenario. By pushing this update, NVIDIA shows that it’s listening – either to consumers or to competition – by bringing higher performing parts to lower price categories. At the same time, people who recently bought RTX cards may feel burned or feel buyer’s remorse. This isn’t just a price cut, which is common, but a fundamental change to the hardware. The RTX 2070 Super uses TU104 for the GPU rather than TU106, bumping it to a modified 2080 status. The 2060 stays on TU106, but also sees changes to SMs active and memory capacity.
HW News - Intel Asks: "Is Intel Screwed?", DisplayPort 2.0 & 16K Monitor Support
By Eric Hamilton

Leading into the busiest hardware launch week of our careers, we talk about Intel's internal competitive analysis document leaking, DisplayPort 2.0 specifications being detailed, and Ubuntu dropping and re-adding 32-bit support. We also follow up on Huawei news (and how Microsoft and Intel are still supporting it) and trade tensions.
Show notes continue after the embedded video.
HW News - Wrong X590 Rumors, Intel CPU Price Reduction, B550 Chipset
By Eric Hamilton

Hardware news this week has a few more AMD rumors -- one of which we're debunking (X590) and another we're re-highlighting (B550) -- with additional coverage of the US tariffs and their impact on consumer pricing. On the topic of pricing, aside from an overall increase as a result of tariffs, Intel has expressed interest in reducing its desktop CPU prices by 10-15% with the launch of Ryzen.
The show notes continue below the embedded video.