Hardware Guides

AMD’s X570 chipset marks the arrival of technology first deployed on Epyc, although Epyc handled it through the CPU, as that platform has no traditional chipset. With the shift to PCIe 4.0, X570 motherboards have grown more complex than X370 and X470 boards, a complexity furthered by the difficulty of cooling X570’s higher power consumption. All of these changes mean that it’s time to compare the differences between the X370, X470, and X570 motherboard chipsets, hopefully helping newcomers to Ryzen understand what has changed.

The persistence of AMD’s AM4 socket, still slated for life through 2020, means that new CPUs are compatible with older chipsets (provided the motherboard makers update BIOS for detection). It also means that older CPUs (like the reduced-price R5 2600X) are compatible with new motherboards, if you for some reason ended up with that combination. The only real downside, aside from the potential cost of the latter option, is that new CPUs on old motherboards will have no PCIe Gen4 support. AMD is disabling it in AGESA at launch, and unless a motherboard manufacturer finds the binary switch to flip in AGESA, it’ll be off for good. Realistically, this isn’t all that relevant: Most users will never touch the bandwidth of Gen4 with this round of products (in the future, maybe), and so the loss from running a new CPU on an old motherboard may be outweighed by the cost savings of keeping an already known-good board, provided the VRM is sufficient.
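To put the Gen4 bandwidth point in perspective, here is a rough back-of-envelope calculation of theoretical x16 link throughput, a minimal sketch using standard PCIe spec figures rather than data from our testing:

```python
# Rough theoretical one-direction bandwidth of a x16 link, per PCIe spec figures.
# PCIe 3.0: 8 GT/s per lane; PCIe 4.0: 16 GT/s per lane; both use 128b/130b coding.

def x16_bandwidth_gbs(transfer_rate_gt_s: float) -> float:
    """Approximate one-direction bandwidth in GB/s for a x16 link."""
    encoding_efficiency = 128 / 130  # 128b/130b line coding overhead
    lanes = 16
    return transfer_rate_gt_s * encoding_efficiency * lanes / 8  # bits -> bytes

print(f"PCIe 3.0 x16: ~{x16_bandwidth_gbs(8):.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe 4.0 x16: ~{x16_bandwidth_gbs(16):.1f} GB/s")  # ~31.5 GB/s
```

Even the Gen3 figure is rarely saturated by current graphics cards, which is the point above.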

Our viewers have long requested that we add standardized case fan placement testing to our PC case reviews. We’ve previously talked about why this is difficult – largely logistically, as it’s neither free in cost nor free in time – but we are finally in a good position to add the testing. The tests clearly offer some value, we think, given that they have been among our most-requested test items over the past two years. We ultimately want to act on community interests and explore what the audience is curious about, and so we’ve added tests for standardized case fan benchmarking and for noise-normalized thermal testing.

Normalizing for noise and then running thermal tests has been our main, go-to benchmark for PC cooler testing for about 2-3 years now, and we’ve grown to really appreciate the approach. Coolers are simpler than cases, as there’s not much in the way of “fan placement,” and normalizing to a 40dBA level has allowed us to determine which coolers cool most efficiently under identical noise conditions. As we’ve shown in our cooler reviews, this bypasses the issue where a cooler with significantly higher RPM always tops the charts. It’s not exactly fair if a cooler at 60dBA “wins” the thermal charts versus a bunch of coolers at, say, 35-40dBA, and so normalizing the noise level allows us to see whether proper differences emerge when the user is subjected to the same “volume” from their PC cooling products. We have also long used this approach for GPU cooler reviews. It’s time to introduce it to case reviews, we think, and we’ll be doing that by sticking with the stock case fan configuration and reducing case fan RPMs equally to meet the target noise level (CPU and GPU cooler fans remain at fixed, constant speeds, as they most heavily dictate CPU and GPU temperatures).
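As a rough illustration only, here is a minimal sketch of what that normalization loop looks like in practice, assuming hypothetical helpers set_case_fan_duty() and read_noise_dba() standing in for whatever fan controller and SPL meter are actually used:

```python
# Minimal sketch of the noise-normalization step. set_case_fan_duty() and
# read_noise_dba() are hypothetical placeholders for the real fan controller
# and SPL meter; the target level shown is an example, not the exact spec.
import time

TARGET_DBA = 40.0    # example target noise floor
SETTLE_SECONDS = 10  # time to let fan speed and the SPL reading stabilize

def normalize_case_fans(set_case_fan_duty, read_noise_dba) -> float:
    """Lower all case fans in lockstep until the measured level reaches target.

    CPU and GPU cooler fans are not touched; they stay at fixed speeds.
    """
    duty = 100.0
    set_case_fan_duty(duty)
    time.sleep(SETTLE_SECONDS)
    while read_noise_dba() > TARGET_DBA and duty > 0:
        duty -= 1.0                     # step every case fan down equally
        set_case_fan_duty(duty)
        time.sleep(SETTLE_SECONDS)
    return duty                         # thermal test runs at this duty cycle
```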

One of our most popular videos of yore covers the GTX 960 4GB vs. GTX 960 2GB cards and the value of choosing one over the other. The discussion continues today, but is now more focused on 3GB vs. 6GB or 4GB vs. 8GB comparisons. Looking back at 2015’s GTX 960, we’re revisiting the card with locked frequencies to compare memory capacities. The goal is to look at both framerate and image quality to determine how well the 2GB card has aged versus the 4GB card.

A lot of things have changed for us since our 2015 GTX 960 comparison, so these results will obviously not be directly comparable to those from that time. We’re using different graphics settings, different test methods, a different OS, and much different test hardware. We’ve also improved our testing accuracy significantly, and so it’s time to take all of this new hardware and knowledge and re-apply it to the GTX 960 2GB vs. 4GB debate, looking into whether there was really a “longevity” argument to be made.

ASUS grew impatient waiting for Samsung to reach volume production on its 32GB DDR4 UDIMMs, and so the company instead designed a new double-capacity DIMM standard. This isn’t a JEDEC standard, but it has gotten some attention from ZADAK and G.Skill, both of whom have made some of the tallest memory modules the world has seen. These DIMMs are 32GB per stick, so two of them give us 64GB at 3200MHz and, after some overclocking effort, some pretty good timings. Two of these sticks would cost you about $1000, with the 3600MHz options at $1300. Today, we’ll be looking into when they can be used and how well they overclock.

These are double-capacity DIMMs, achieved by making the PCB significantly taller than ordinary RAM. More memory fits on a single stick, making it theoretically possible to approach the maximum capacity of the CPU’s memory controller. This is difficult to do, as signal integrity becomes harder to maintain as the PCB grows larger and more complex.

This is an exciting milestone for us: We’ve completely overhauled our CPU testing methodology for 2019, something we first detailed in our GamersNexus 2019 Roadmap video. The new suite includes more games than before, tested at two resolutions, alongside workstation benchmarks. New additions include program compile workloads, Adobe Premiere, Photoshop, compression and decompression, V-Ray, and more. Today is the unveiling of half of our new testing methodology, with the games getting unveiled separately. We’re starting with a small list of popular CPUs and will add more as we go.

We don’t yet have a “full” list of CPUs, naturally, as this is a pilot of our new testing procedures for workstation benchmarks. As new CPUs launch, we’ll continue adding their most immediate competitors (and the new CPUs themselves) to our list of tested devices. We’ve had a lot of requests to add some specific testing to our CPU suite, like program compile testing, and today marks our delivery of those requests. We understand that many of you have other requests still awaiting fulfillment, and want you to know that, as long as you tweet them at us or post them on YouTube, there is a good chance we see them. It takes us about 6 months to a year to change our testing methodology, as we try to stick with a very trustworthy set of tests before introducing potential new variables. This test suite has gone through a few months of validation, so it’s time to try it out in the real world.

This GN Special Report looks at years of sales data covering which CPUs our viewers and readers have purchased. The focus is our audience, and so we’re looking at Intel versus AMD sales volume and, to some extent, marketshare in the enthusiast segment of GN content consumers. Our data covers the average selling price (ASP) of CPUs, the most popular CPU models and how they changed over a 3.5-year period, and the overall sales volume between Intel and AMD across 4Q16 to 1Q19.
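For anyone unfamiliar with the metric, ASP is simply total revenue divided by total units sold; below is a minimal sketch of the calculation, using a hypothetical record shape rather than our actual dataset:

```python
# Average selling price (ASP) = total revenue / total units sold.
# The record shape and numbers below are hypothetical, not our sales data.
from dataclasses import dataclass

@dataclass
class Sale:
    model: str
    price: float  # selling price in USD
    units: int

def average_selling_price(sales: list[Sale]) -> float:
    revenue = sum(s.price * s.units for s in sales)
    units = sum(s.units for s in sales)
    return revenue / units if units else 0.0

example = [Sale("R5 2600X", 179.99, 120), Sale("i7-8700K", 349.99, 80)]
print(f"ASP: ${average_selling_price(example):.2f}")
```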

AMD has undoubtedly gained marketshare over the past two years. Multiple factors have aligned for AMD, the most obvious of which is its own architectural innovation with the Zen family of processors. Secondary to this, Intel’s inability to keep up with 14nm demand has crippled its DIY processor availability, with a third hit to Intel being the unexpected and continual delays to its 10nm process. It was the perfect storm for AMD: Just one of these things would have helped, but all three together have allowed the company to claw itself back from functionally zero sales volume in the DIY enthusiast space.

This is the article version of our recent tour of a cable factory in Dongguan, China. The factory is SanDian, used by Cooler Master (and other companies you know) to manufacture front panel connectors, USB cables, Type-C cables, and more. This script was written for the video that's embedded below, but we have also pulled screenshots to make a written version. Note that references to "on screen" will be referring to the video portion.

USB 3.1 Type-C front panel cables are between 4x and 10x more expensive than USB 2.0 front panel cables, which explains why Type-C is still somewhat rare in PC cases. For USB 3.1 Gen2 Type-C connectors with fully validated speeds, the cost is about 7x that of the original USB 3.0 cables. That cost comes down to how the cables are made: Raw materials carry an expense, but there’s also a tremendous time cost to manufacture and assemble USB 3.1 Type-C cables. Today’s tour of SanDian, a cable factory that partners with Cooler Master, shows how cables are made, including USB 3.1 Type-C, USB 2.0, and front panel connectors. Note that USB 3.1 is being rebranded to USB 3.2 going forward, but the process is the same.

Our initial AMD Radeon VII liquid cooling mod was revised after the coverage went live. We ended up switching to a Thermaltake Floe 360 radiator (with different fans) due to uneven contact and manufacturing defects in the Alphacool GPX coldplate. Going with the Asetek cooler worked much better, dropping our thermals significantly and allowing increased overclocking and stock boosting headroom. The new drivers (19.2.3) also fixed most of the overclocking defects we originally found, making it possible to actually progress with this mod.

As an important foreword, note that overclocking with AMD’s drivers must be validated with performance at every step of the way. Configured frequencies are not the same as actual frequencies, so you might type “2030MHz” for core and get, for instance, 1950-2000MHz out. For this reason, and because frequency regularly misreports (e.g. “16000MHz”), it is critical that any overclock be validated with performance. Without validation, some “overclocks” can actually bring performance below stock while appearing to boost frequency. This is very important for overclocking Radeon VII properly.
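A minimal sketch of that step-and-validate loop follows, assuming hypothetical helpers apply_core_clock(), run_benchmark(), and read_average_core_clock() in place of the actual driver tools and benchmark:

```python
# Sketch of performance-validated overclocking: a configured clock only counts
# if the benchmark score actually improves. apply_core_clock(), run_benchmark(),
# and read_average_core_clock() are hypothetical placeholders for real tools.

def validated_overclock(apply_core_clock, run_benchmark, read_average_core_clock,
                        stock_mhz: int, stop_mhz: int, step_mhz: int):
    best_clock, best_score = stock_mhz, run_benchmark()  # stock baseline
    for configured in range(stock_mhz + step_mhz, stop_mhz + step_mhz, step_mhz):
        apply_core_clock(configured)
        score = run_benchmark()
        actual = read_average_core_clock()  # often well below the configured value
        print(f"configured {configured} MHz -> actual ~{actual} MHz, score {score}")
        if score <= best_score:
            break  # regression despite a "higher" clock; keep the last good step
        best_clock, best_score = configured, score
    return best_clock, best_score
```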

We recently revisited the AMD R9 290X from October of 2013, and now it’s time to look back at the GTX 780 Ti from November of 2013. The 780 Ti shipped at a $700 MSRP and landed as NVIDIA’s flagship against AMD’s freshly launched flagship. It was a different era: Memory capacity was limited to 3GB on the 780 Ti, memory frequency was a blazing 7Gbps, and core clock was 875MHz stock or 928MHz boost, using the old Boost 2.0 algorithm that held a fixed clock in gaming. Overclocking headroom was also greater, giving us a bigger upward punch than modern NVIDIA overclocking might permit. Our overclocks on the reference 780 Ti (with the fan set to 93%) allowed it to exceed the expected performance of the average partner-model board, so we have a fairly full range of performance for the 780 Ti.

NVIDIA’s architecture has undergone significant changes since Kepler and the 780 Ti, one of which is a change in CUDA core efficiency. When NVIDIA moved from Kepler to Maxwell, there was nearly a 40% efficiency gain in the work done per CUDA core. A 1:1 Maxwell versus Kepler comparison, were such a thing possible, would position Maxwell as superior in efficiency and performance-per-watt, if not just outright performance. It is no surprise, then, that the 780 Ti’s 2880 CUDA cores, although a high count even by today’s standards (an RTX 2060 has 1920, but outperforms the 780 Ti), underperform when compared to modern architectures. This is amplified by significant memory changes, capacity being the most notable, where the GTX 780 Ti’s standard configuration was limited to 3GB and ~7Gbps GDDR5.
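As a rough back-of-envelope illustration of that per-core gap, using only the ~40% figure above:

```python
# Illustrative arithmetic only: normalize Kepler's core count by the roughly
# 40% per-core efficiency gain NVIDIA realized moving to Maxwell.
kepler_cores = 2880        # GTX 780 Ti
maxwell_gain = 1.4         # ~40% more work per core than Kepler

maxwell_equivalent = kepler_cores / maxwell_gain
print(f"2880 Kepler cores ~= {maxwell_equivalent:.0f} Maxwell-class cores")  # ~2057
# Later architectures widen the per-core gap further, which is part of why an
# RTX 2060 with 1920 cores can outperform the 780 Ti despite the lower count.
```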

Apex Legends is one of the most-watched games right now and sits among the top titles in the Battle Royale genre. Running on the Titanfall engine with some revamped Titanfall assets, the game is a fast-paced FPS with relatively high-poly-count models and long view distances. For this reason, we’re benchmarking a series of GPUs to find the “best” video card for Apex Legends at each price category.

Our testing first included some discovery and research benchmarks, where we dug into various multiplayer zones and practice mode to try to find the most heavily loaded areas of the game. We also unlocked FPS for this, so we won’t bump against the 144FPS cap or any other limitation. This will help find which cards can play the game at max settings – or near-max, anyway.
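For readers reproducing this kind of pass at home, here is a minimal sketch of reducing a frametime capture to summary numbers, assuming a PresentMon-style CSV with an MsBetweenPresents column; the 1% low definition here (99th-percentile frametime) is one common convention, not necessarily the exact method behind our charts:

```python
# Minimal sketch of reducing a frametime log to average FPS and a 1% low figure,
# assuming a PresentMon-style CSV with an "MsBetweenPresents" column.
import csv
import statistics

def fps_summary(csv_path: str) -> tuple[float, float]:
    with open(csv_path, newline="") as f:
        frametimes_ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    avg_fps = 1000.0 / statistics.mean(frametimes_ms)
    p99_ms = statistics.quantiles(frametimes_ms, n=100)[98]  # 99th-percentile frametime
    return avg_fps, 1000.0 / p99_ms

# Example (hypothetical capture file): avg, low_1pct = fps_summary("apex_pass1.csv")
```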
