Hardware Guides

Memory speed on Ryzen has always been a hot subject, with AMD’s 1000 and 2000 series CPUs responding favorably to fast memory while also having difficulty getting past 3200MHz in Gen1. The new Ryzen 3000 chips officially support memory speeds up to 3200MHz and can reliably run kits up to 3600MHz, with extreme overclocks reaching 5100MHz. For most people, that kind of clock isn’t achievable, but frequencies in the range of 3200 to 4000MHz are reached relatively easily; the question then becomes whether the looser timings required are worth it. Today, we’re benchmarking various memory kits at XMP settings, with Ryzen DRAM Calculator timings, and with manual override overclocking. We’ll look at the trade-off of higher frequencies versus tighter timings to help establish the best memory solutions for Ryzen.
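
For a rough sense of why looser timings can offset a frequency bump, first-word latency is often approximated as the CAS latency divided by the memory clock (half the data rate on DDR). The sketch below runs that arithmetic on a few illustrative configurations; these are example numbers, not the kits we tested.

```python
# A minimal sketch of the frequency-versus-timings trade-off. First-word latency
# is approximated as CAS cycles divided by the memory clock; the kits and timings
# below are illustrative examples, not our tested configurations.

def first_word_latency_ns(data_rate_mts: int, cas_latency: int) -> float:
    """Approximate first-word latency in nanoseconds."""
    memory_clock_mhz = data_rate_mts / 2          # DDR: two transfers per clock
    return cas_latency / memory_clock_mhz * 1000  # cycles / MHz -> nanoseconds

for rate, cl in [(3200, 14), (3600, 16), (4000, 18)]:
    print(f"DDR4-{rate} CL{cl}: ~{first_word_latency_ns(rate, cl):.2f} ns")
```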

One of the biggest points to remember during all of this -- and any other memory testing published by other outlets -- is that the motherboard matters almost more than the memory kit itself. Motherboards are responsible for most of the timings auto-configured on memory kits, even when using XMP, as XMP can only store so much data per kit. The rest, including unsurfaced timings that the user never sees, are set during memory training by the motherboard. Motherboard manufacturers maintain a QVL (Qualified Vendor List) of kits tested and approved on each board, and we strongly encourage system builders to check these lists rather than buying a random kit of memory. Motherboard makers will even tune timings for specific kits, so there’s potentially a lot of performance lost by mismatching boards and memory.

We’ve gotten into the habit of fixing video cards lately, a sad necessity in an era plagued with incomplete, penny-pinching designs that overlook the basics, like screw tension, coldplate levelness, and thermal pads that are about 60% smaller than they should be. MSI’s RX 5700 XT Evoke OC (review here) is, unfortunately, the newest addition to this growing list of cards that any user could fix, for reasons we illustrated best in our tear-down of the card. Our testing showed that its cooling is sub-par compared to the Sapphire Pulse, and that its memory temperatures are concerningly high when noise-normalized in our benchmarks. Today, we’re fixing that with properly sized thermal pads.

This is a quick and straightforward piece inspired by a Reddit post from about a week ago. The Reddit post was itself a response to a video where a YouTuber claimed to be lowering temperatures and boosting performance on Ryzen 3000 CPUs by lowering the vcore value in BIOS; we never did catch the video, as it has since been retracted and followed up by the creator and community with new information. Even though the original content was too good to be true, it was still based on a completely valid idea: lowering voltage, which is half of the equation for power, will theoretically reduce thermals and power load. The content ended up indirectly demonstrating some unique AMD Ryzen 3000 behaviors that we thought worth testing for ourselves. In this video, we’ll demonstrate how to know when undervolting is working versus not working, talk about the gains or losses, and get some hard numbers for the Master and Godlike motherboards.
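
As a quick refresher on that point: package power is voltage times current, and dynamic switching power additionally scales roughly with voltage squared, so a lower vcore should reduce power and thermals if all else holds. The arithmetic below uses hypothetical numbers purely for illustration, not measurements from our bench.

```python
# Illustrative arithmetic only -- hypothetical numbers, not measured values.
# Package power is P = V * I; dynamic switching power scales roughly with V^2 * f,
# so voltage reductions pay off on both fronts (if the chip doesn't claw back boost).

voltage_stock, voltage_undervolt = 1.35, 1.25   # volts (hypothetical)
current = 90.0                                  # amps (hypothetical, held constant)

p_stock = voltage_stock * current
p_undervolt = voltage_undervolt * current
print(f"Stock: {p_stock:.1f} W, undervolted: {p_undervolt:.1f} W "
      f"({(1 - p_undervolt / p_stock) * 100:.1f}% lower)")
```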

With the launch of the Ryzen 3000 series processors, we’ve noticed distinct confusion among readers and viewers when it comes to the phrases “Precision Boost 2,” “XFR,” “Precision Boost Overdrive” (which is different from Precision Boost), and “AutoOC.” There is also a lot of confusion about what’s considered stock, what PBO even does or whether it works at all, and how thermals impact the frequency of Ryzen CPUs. Today, we’re demystifying these names and demonstrating the basic behaviors of each solution as tested on two motherboards.

Precision Boost Overdrive is a technology new to Ryzen desktop processors, having first been introduced on Threadripper chips; technically, Ryzen 3000 uses Precision Boost 2. PBO is explicitly different from Precision Boost and Precision Boost 2, which is where a lot of people get confused. “Precision Boost” is not an abbreviation for “Precision Boost Overdrive;” it’s a different thing entirely: Precision Boost is like XFR, AMD’s Extended Frequency Range boosting table that allows a limited number of cores to boost when possible. XFR was introduced with the first Ryzen series CPUs. Precision Boost takes three numbers into account when deciding how many cores can boost and when -- PPT, TDC, and EDC -- alongside temperature and the chip’s maximum boost clock. Precision Boost is enabled on a stock CPU; Precision Boost Overdrive is not. What PBO never does is boost frequency beyond the advertised CPU clocks, which is a major point of confusion. We’ll quote directly from AMD’s review documentation so that there is no room for confusion.
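
As a rough mental model of how those inputs interact, here’s a simplified, hypothetical sketch; the limit values are example figures and the scaling logic is ours, not AMD’s actual firmware algorithm.

```python
# Simplified, hypothetical sketch of the inputs Precision Boost weighs -- the
# scaling logic here is illustrative, not AMD's firmware. PPT (package power),
# TDC (sustained current), EDC (peak current), and temperature all gate boost,
# capped at the chip's advertised maximum boost clock.

def allowed_boost_mhz(ppt_w, tdc_a, edc_a, temp_c, limits, base_mhz, max_boost_mhz):
    """Scale boost headroom by the tightest of the four constraints."""
    headroom = min(
        limits["ppt_w"] / ppt_w,
        limits["tdc_a"] / tdc_a,
        limits["edc_a"] / edc_a,
        limits["temp_c"] / temp_c,
    )
    if headroom <= 1.0:
        return base_mhz  # at or past a limit: no boost headroom left
    return base_mhz + (max_boost_mhz - base_mhz) * min(headroom - 1.0, 1.0)

limits = {"ppt_w": 142, "tdc_a": 95, "edc_a": 140, "temp_c": 95}  # example limits
print(allowed_boost_mhz(ppt_w=120, tdc_a=80, edc_a=110, temp_c=70,
                        limits=limits, base_mhz=3800, max_boost_mhz=4600))
```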

Silicon quality and the so-called silicon lottery are often discussed in the industry, but it’s rare for anyone to have a large enough sample size to demonstrate what those phrases mean in practice. We asked Gigabyte to loan us as many units of a single video card model as they could so that we could demonstrate the card-to-card frequency variance at stock, the variation in overclocking headroom, and the actual gaming performance differences from one card to the next. This helps more definitively answer the question of how much silicon quality can impact a GPU’s performance, particularly at stock, and also looks at memory overclocking and the range of FPS in gaming benchmarks, using a highly controlled bench and a ton of test passes per device. Finally, we can test the theory of how much one reviewer’s GPU might vary from another’s when running initial review testing.

AMD’s X570 chipset marks the arrival of some technology that was first deployed on Epyc, although there it was done through the CPU, as Epyc has no traditional chipset. With the shift to PCIe 4.0, X570 motherboards have grown more complex than X370 and X470 boards, furthered by the difficulty of cooling X570’s higher power consumption. All of these changes mean it’s time to compare the differences between the X370, X470, and X570 motherboard chipsets, hopefully helping newcomers to Ryzen understand what has changed.

The persistence of AMD’s AM4 socket, still slated for life through 2020, means that new CPUs are compatible with older chipsets (provided the motherboard makers update BIOS for detection). It also means that older CPUs (like the reduced-price R5 2600X) are compatible with new motherboards, if you for some reason ended up with that combination. The only real downside, aside from the potential cost of the latter option, is that new CPUs on old motherboards won’t get PCIe Gen4 support. AMD is disabling it in AGESA at launch, and unless a motherboard manufacturer finds the binary switch to flip in AGESA, it’ll be off for good. Realistically, this isn’t all that relevant: most users will never touch the bandwidth of Gen4 for this round of products (in the future, maybe), and so losing Gen4 by running a new CPU on an old motherboard may be outweighed by the cost savings of keeping an already known-good board, provided the VRM is sufficient.
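
For context on what’s actually being given up, theoretical per-direction PCIe bandwidth works out to the transfer rate times the 128b/130b encoding efficiency times the lane count. The quick calculation below uses published spec rates; it’s an illustration of the headroom, not a measurement.

```python
# Theoretical per-direction PCIe bandwidth -- spec math for illustration,
# not measured throughput. Gen3 and Gen4 both use 128b/130b encoding.

def pcie_bandwidth_gbs(transfer_rate_gts: float, lanes: int) -> float:
    """GT/s per lane -> GB/s per direction for the whole link."""
    return transfer_rate_gts * (128 / 130) * lanes / 8  # 8 bits per byte

print(f"PCIe Gen3 x16: ~{pcie_bandwidth_gbs(8, 16):.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe Gen4 x16: ~{pcie_bandwidth_gbs(16, 16):.1f} GB/s")  # ~31.5 GB/s
```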

Our viewers have long requested that we add standardized case fan placement testing to our PC case reviews. We’ve previously talked about why this is difficult -- largely logistically, as it’s neither free in cost nor free in time -- but we are finally in a good position to add the testing. The tests clearly must offer some value, we think, because fan placement has been one of our most-requested test items over the past two years. We ultimately want to act on community interests and explore what the audience is curious about, and so we’ve added tests for standardized case fan benchmarking and for noise-normalized thermal testing.

Normalizing for noise and running thermal tests has been our main, go-to benchmark for PC cooler testing for about 2-3 years now, and we’ve grown to really appreciate the approach. Coolers are simpler than cases, as there’s not really much in the way of “fan placement,” and normalizing to a 40dBA level has allowed us to determine which coolers cool most efficiently under identical noise conditions. As we’ve shown in our cooler reviews, this bypasses the issue where a cooler with significantly higher RPM always tops the charts. It’s not exactly fair if a cooler at 60dBA “wins” the thermal charts versus a bunch of coolers at, say, 35-40dBA, and so normalizing the noise level lets us see whether meaningful differences emerge when the user is subjected to the same “volume” from their PC cooling products. We have also long used this approach for GPU cooler reviews. It’s time to introduce it to case reviews, we think, and we’ll be doing that by sticking with the stock case fan configuration and reducing case fan RPMs equally to meet the target noise level (CPU and GPU cooler fans remain unchanged, as these most heavily dictate CPU and GPU thermals; they are held at fixed speeds throughout).
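
For those curious what that normalization looks like mechanically, it amounts to stepping the case fan duty cycle down and re-measuring until the meter reads the target level. The sketch below is a simplified, hypothetical illustration with a stand-in noise model; it is not our actual bench tooling.

```python
# Hypothetical sketch of a noise-normalization loop -- not our bench tooling.
# Case fan duty is stepped down until the measured level meets the target dBA;
# CPU and GPU cooler fans stay at fixed speeds throughout.

TARGET_DBA = 40.0
STEP_PCT = 2        # duty reduction per step, in percent
MIN_DUTY = 20       # don't stall the fans entirely

def measured_dba(duty_pct: float) -> float:
    """Stand-in for a sound level meter reading; crude model for illustration only."""
    return 30.0 + 18.0 * (duty_pct / 100.0)

def normalize_noise(start_duty: float = 100.0) -> float:
    duty = start_duty
    while measured_dba(duty) > TARGET_DBA and duty > MIN_DUTY:
        duty -= STEP_PCT    # in practice: set the fan controller, wait, re-measure
    return duty             # case fan duty used for the thermal test passes

print(f"Normalized case fan duty: {normalize_noise():.0f}%")
```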

One of our most popular videos of yore talks about the GTX 960 4GB vs. GTX 960 2GB cards and the value of choosing one over the other. The discussion continues today, though now it’s more focused on 3GB vs. 6GB or 4GB vs. 8GB comparisons. Looking back at 2015’s GTX 960, we’re revisiting it with locked frequencies to compare memory capacities. The goal is to look at both framerate and image quality to determine how well the 2GB card has aged versus the 4GB card.

A lot of things have changed for us since our 2015 GTX 960 comparison, so these results will obviously not be directly comparable to those from back then. We’re using different graphics settings, different test methods, a different OS, and much different test hardware. We’ve also improved our testing accuracy significantly, and so it’s time to take all of this new hardware and knowledge and re-apply it to the GTX 960 2GB vs. 4GB debate, looking into whether there was really a “longevity” argument to be made.

ASUS grew impatient waiting for Samsung to reach volume production on its 32GB DDR4 UDIMMs, and so the company instead designed a new double-capacity DIMM standard. This isn’t a JEDEC standard, but it has gotten some attention from ZADAK and G.Skill, both of whom have made some of the tallest memory modules the world has seen. These DIMMs are 32GB per stick, so two of them give us 64GB at 3200MHz and, with some overclocking effort, pretty good timings. Two of these sticks would cost you about $1000, with the 3600MHz options at $1300. Today, we’ll be looking into when they can be used and how well they overclock.

These are double-capacity DIMMs, achieved by making the PCB significantly taller than ordinary RAM. More memory fits on a single stick, making it theoretically possible to approach the maximum capacity of the CPU’s memory controller. This is difficult to do, as signal integrity becomes harder to maintain as the PCB grows larger and more complex.

This is an exciting milestone for us: we’ve completely overhauled our CPU testing methodology for 2019, something we first detailed in our GamersNexus 2019 Roadmap video. The new testing includes more games than before, tested at two resolutions, alongside workstation benchmarks. The workstation tests are new for us and include program compile workloads, Adobe Premiere, Photoshop, compression and decompression, V-Ray, and more. Today we’re unveiling half of the new methodology, with the games to be unveiled separately. We’re starting with a small list of popular CPUs and will add more as we go.

We don’t yet have a “full” list of CPUs, naturally, as this is a pilot of our new testing procedures for workstation benchmarks. As new CPUs launch, we’ll continue adding their most immediate competitors (and the new CPUs themselves) to our list of tested devices. We’ve had a lot of requests to add specific tests to our CPU suite, like program compile testing, and today marks our delivery of those requests. We understand that many of you have other requests still awaiting fulfillment, and want you to know that, as long as you tweet them at us or post them on YouTube, there is a good chance we’ll see them. It takes us about six months to a year to change our testing methodology, as we try to stick with a trusted set of tests before introducing potential new variables. This test suite has gone through a few months of validation, so it’s time to try it out in the real world.


