Alongside the 3900X and 3700X that we’re also reviewing, AMD launched its R5 3600 to the public today. We got a production sample of the R5 3600 through a third party and, after seeing its performance, wanted to focus on it first for our initial Ryzen 3000 review. We’ve been recommending AMD’s R5 CPUs since the first generation: Intel’s i5 CPUs have struggled lately with frametime consistency, and AMD’s gaming performance is close enough that the R5’s versatility has warranted the purchase. Today, we’re revisiting that recommendation with the R5 3600 6-core, 12-thread CPU, looking at gaming, production workloads (Premiere, Blender, V-Ray, and more), power consumption, and overclocking.
This week has been the busiest in our careers at GN. The editorial/testing team was two people, working in shifts literally around the clock for 24/7 bench coverage, and the video production team was three people (all credited at article's end, as always). We invested all we could into getting multiple reviews ready for launch day and will stagger publication throughout the day due to YouTube's distribution of content. We don't focus on ad revenue on the site these days and instead focus on our GN Store products and Patreon for revenue, plus ad revenue on YouTube. If you would like to support these colossal efforts, please consider buying one of our new GN Toolkits (custom-made for video card disassembly and system building, using high-quality CRV metals and our own molds) or one of our system building modmats. We also sell t-shirts, mousepads, video card anatomy posters, and more.
- Windows has all updates applied on all platforms, up to version 1903
- All BIOS updates and mitigations have been applied
- For new AMD Ryzen CPU testing, we are using a Gigabyte X570 Master motherboard with BIOS version FC5 installed, per manufacturer recommendations
- We have changed to a G.Skill Trident Z RGB memory kit at 4x8GB and 3200MHz. The 32GB capacity is needed for our Photoshop and Premiere benchmarks, which are memory-intensive and would be bottlenecked without it.
The memory kit is an important change for us. Starting with these new reviews, we are manually controlling every timing the motherboard surfaces, including secondary and tertiary timings. Previously, we controlled only the critical timings, like the primaries and tRFC. This has tightened our margin of error considerably and has reduced the risk of “unfair” timings being auto-applied by the various motherboards we have to use for CPU reviews. “Unfair” in this instance typically means “uncharacteristically bad” as a result of poor tuning by the motherboard maker. By controlling the timings ourselves, we eliminate this variable. Some of our error margins have shrunk to 0.1FPS AVG as a result, which is fantastic.
Although we're at the end of the hardest testing cycle we've ever had, with many nights spent sleeping in the office (if sleeping at all), we're not even close to done. There'll be follow-up and additional product testing throughout the next week, all because of the joint launches of NVIDIA's Super and AMD's Navi GPUs, mixed in, most importantly, with AMD's Ryzen 3000-series CPUs. New architectures take the longest to test, predictably, as everything we know has to be rebenchmarked to establish the new processors' behaviors. Anyway, with all of that, there's still news to cover. Show notes are after the embed.
Silicon quality and the so-called silicon lottery are often discussed in the industry, but it’s rare for anyone to have a large enough sample size to demonstrate what those phrases mean in practice. We asked Gigabyte to loan us as many units of a single video card model as it could, so that we could demonstrate the card-to-card frequency variance at stock, the variation in overclocking headroom, and the actual gaming performance differences from one card to the next. This helps answer more definitively how much silicon quality can impact a GPU’s performance, particularly at stock, and also looks at memory overclocking and the range of FPS in gaming benchmarks, using a highly controlled bench and a ton of test passes per device. Finally, we can test the theory of how much one reviewer’s GPU might vary from another’s when running initial review testing.
AMD’s biggest ally for the RTX launch was NVIDIA, as the company crippled itself with unimplemented features and generational price creep out the gate. With RTX Super, NVIDIA demonstrates that it has gotten the memo from the community and has driven down its pricing while increasing performance. Parts of the current RTX line will be phased out, with newer, better parts coming into play and pre-empting AMD’s Navi launch. The 2070 Super is priced at $500, $50 above the 5700 XT, and its on-paper specs should put it roughly equivalent to an RTX 2080; it even uses the RTX 2080’s TU104 die, further reinforcing that likely position. The 2060 Super gets a better bin with more SMs unlocked on the GPU, improving compute capability and framebuffer capacity beyond the initial 2060. Both of these things spell an embarrassing scenario about to unfold for AMD’s Radeon VII card, but we’ll have to wait another week to see how it plays out for the still-unreleased Navi RX cards. There may be hope yet for AMD’s new lineup, but the existing lineup will face existential challenges from the freshly priced and face-lifted RTX Super cards. Today, we’re reviewing the new RTX Super cards with a fully revamped GPU testing methodology.
The first question is this: Why the name “Super?” Well, we asked, and nobody knows the answer. Some say that Jensen burned an effigy of a leather jacket and the word “SUPER” appeared in the toxic fumes above, others say that it’s a self-contained acronym. All we know is that it’s called “Super.”
Rather than pushing out Ti updates that co-exist with the original SKUs, NVIDIA is replacing the 2080 and 2070 SKUs with the new Super SKUs, while keeping the 2060 at $350. This is a “damned if you do, damned if you don’t” scenario: by pushing this update, NVIDIA shows that it’s listening – either to consumers or to competition – by bringing higher-performing parts to lower price categories. At the same time, people who recently bought RTX cards may feel burned or experience buyer’s remorse. This isn’t just a price cut, which is common, but a fundamental change to the hardware. The RTX 2070 Super uses TU104 rather than TU106 for its GPU, bumping it to modified 2080 status. The 2060 Super stays on TU106, but sees changes to active SM count and memory capacity.
Leading into the busiest hardware launch week of our careers, we talk about Intel's internal competitive analysis document leaking, DisplayPort 2.0 specifications being detailed, and Ubuntu dropping and re-adding 32-bit support. We also follow-up on Huawei news (and how Microsoft and Intel are still supporting it) and trade tensions.
Show notes continue after the embedded video.
Hardware news this week has a few more AMD rumors -- one of which we're debunking (X590) and another we're re-highlighting (B550) -- with additional news coverage of the US tariffs and their impact on consumer pricing. On the topic of pricing, aside from an overall increase as a result of tariffs, Intel has expressed interest in reducing its desktop CPU prices by 10-15% in response to the launch of Ryzen.
The show notes continue below the embedded video.
Most of last week's hardware news revolved around AMD and its Navi and Ryzen product disclosures from the tech day, but plenty still happened during E3 week: Microsoft, for instance, announced its Scarlett console and rediscovered virtual memory, Comcast was caught violating the Consumer Protection Act over 445,000 times, USB 4.0 got lightly detailed for a 2020 launch, and more.
Show notes after the video embed.
AMD’s X570 chipset marks the arrival of technology first deployed on Epyc, although Epyc handles that I/O through the CPU, as it has no traditional chipset. With the shift to PCIe 4.0, X570 motherboards have grown more complex than X370 and X470 boards, and cooling X570’s higher power consumption adds further difficulty. All of these changes mean it’s time to compare the X370, X470, and X570 motherboard chipsets, hopefully helping newcomers to Ryzen understand the differences.
The persistence of AMD’s AM4 socket, still slated for life through 2020, means that new CPUs are compatible with older chipsets (provided the motherboard makers update BIOS for detection). It also means that older CPUs (like the reduced-price R5 2600X) are compatible with new motherboards, if for some reason you ended up with that combination. The only real downside, aside from the potential cost of the latter option, is that new CPUs on old motherboards will have no PCIe Gen4 support. AMD is disabling it in AGESA at launch, and unless a motherboard manufacturer finds the binary switch to flip in AGESA, it’ll be off for good. Realistically, this isn’t all that relevant: most users will never touch Gen4’s bandwidth with this round of products (in the future, maybe), so the loss from running a new CPU on an old motherboard may be outweighed by the cost savings of keeping an already known-good board, provided the VRM is sufficient.
AMD’s technical press event bore information for both AMD Ryzen and AMD Navi, including overclocking information for Ryzen, Navi base, boost, and average clocks, architectural information and block diagrams, product-level specifications, and extreme overclocking information for Ryzen with liquid nitrogen. We understand both lines better now than before and can brief you on what AMD is working on. We’ll start with Navi specs, die size, and top-level architectural information, then move on to Ryzen. AMD also talked about ray tracing during its tech day, throwing some casual shade at NVIDIA in so doing, and we’ll also cover that here.
First, note that AMD did not give pricing to the press ahead of its livestream at E3, so this content will be live right around when the prices are announced. We’ll try to update with pricing information as soon as we see it, although we anticipate our video’s comments section will have the information immediately. UPDATE: Prices are $450 for the RX 5700 XT, $380 for the RX 5700.
AMD’s press event yielded a ton of interesting, useful information, especially on the architecture side. There was some marketing screwery in there, but a surprisingly small amount for this type of event. The biggest example was a thermographic image of two heatsinks meant to show comparative CPU temperature: the scale ranged only from 23 to 27 degrees, which makes the delta look astronomically large despite falling within common measurement error, and the one 28-degree result that exited that already-silly range looked like a massive overheat. Beyond the scale, the heatsink actually should be hot -- that means it’s working -- and taking a thermographic image of a shiny metal object mostly shows reflected room temperature or runs into emissivity issues. Ultimately, they should just be showing junction temperature anyway. This was our only major gripe with the event -- otherwise, the information was technical, detailed, and generally free of marketing BS. Not completely free of it, but mostly.
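As a toy illustration of the scale-compression problem (hypothetical numbers, not AMD's actual readings): the fraction of the color scale a temperature delta occupies is what your eye perceives, so a tiny delta on a 4-degree scale looks dramatic, while the same delta on a typical wide thermographic scale is nearly invisible.

```python
def scale_fraction(temp_c, scale_lo, scale_hi):
    """Fraction of the displayed color scale that a temperature occupies."""
    return (temp_c - scale_lo) / (scale_hi - scale_lo)

# A 2-degree delta (24 C vs 26 C) displayed on a compressed 23-27 C scale
# spans half the entire color range -- it looks like a huge difference.
narrow = scale_fraction(26.0, 23.0, 27.0) - scale_fraction(24.0, 23.0, 27.0)
print(narrow)  # 0.5

# The same 2-degree delta on a more typical 20-90 C scale spans under 3%
# of the color range -- barely distinguishable to the eye.
wide = scale_fraction(26.0, 20.0, 90.0) - scale_fraction(24.0, 20.0, 90.0)
print(round(wide, 3))
```

The physical difference is identical in both cases; only the presentation changes, which is why the 23-27 degree range made an in-error-margin delta look astronomical.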
Let’s start with the GPU side.
As we board another plane, just five days since landing home from Taipei, we're recapping news leading into next week's E3 event, positioned exhaustingly close to Computex. This recap talks AMD and Samsung partnerships on GPUs, Apple's $1000 monitor stand and accompanying cheese grater, and the Radeon Vega II dual-GPUs located therein. We also talk tariff impact on pricing in PC hardware and, as an exclusive story for the video version, we talk about the fake "X499" motherboard at Computex 2019.
Show notes below the video embed.