Steve started GamersNexus back when it was just a cool name, and now it's grown into an expansive website with an overwhelming number of features. He recalls his first difficult decision with GN's direction: "I didn't know whether or not I wanted 'Gamers' to have a possessive apostrophe -- I mean, grammatically it should, but I didn't like it in the name. It was ugly. I also had people who were typing apostrophes into the address bar - sigh. It made sense to just leave it as 'Gamers.'"
First world problems, Steve. First world problems.
Today’s review has been the most-requested review from our commenters for about six months now, and it’s not even a piece of silicon. The Arctic Liquid Freezer II series has drawn heavy community interest because of strong performance reported by enthusiast forum users. We wanted to look at it with the new testing methodology that we’ve spent six months revising, to see how the Liquid Freezer performs against incumbents, including the NZXT Kraken X62 (similar to the X63 we reviewed), the Noctua NH-D15, and a growing list of others. The Liquid Freezer’s biggest marketing point, currently wedged between gimmick and useful feature, is its included VRM fan on the coldplate housing. Our review includes benchmarks of VRM thermal performance with and without this fan, tested in A/B fashion, along with tests of surface levelness, CPU core thermals on the 3950X and 3800X at 120W and 200W, noise tests, and time-to-max temperature.
The biggest rule in testing coolers is to never trust anything: Don’t trust the numbers, don’t trust the software, don’t trust the firmware, and don’t trust the test bench. Every step of the way is a trap lying in wait to sabotage data accuracy. We’ve spent the last three years refining our liquid cooler bench and the last six months refining our new methodology, which covers air coolers and liquid coolers alike. With millions of cells of data, we now know enough to have identified nearly every hidden pitfall in testing, and we finally feel confident in providing a full picture of CPU cooler performance. The downside is that we’ll never trust anyone else’s numbers again, but the upside is that we can finally start really collecting data. This dissertation will cover the most common and most obscure landmines in cooler testing, laying out a plan for our CPU cooler reviews and helping establish a baseline for quality and data accuracy. We promised a CPU air cooler round-up back at the end of 2016 or 2017, and we’re finally getting around to it: we'll be publishing a lot of cooler content over the next month or so. We’ll start with an A500 review after this methodology piece goes live, then break for our factory tour series, then come back to coolers.
This content is detailed and specific to CPU cooler testing methodology and processes. We will be using this as a reference piece for years, as it establishes the testing practices that ensure accurate data. Most CPU cooler data out there is flawed in one way or another, especially the numbers posted in random Reddit comments; the trick is minimizing flaws to the extent possible while remaining real-world, because total elimination of variables and pitfalls is impossible on PC hardware. Users will often post a lone temperature number and say something like, “my Spire is at 70 degrees,” as if that actually means anything to anyone. Temperature isn’t a 3DMark score: it is completely dependent on each configuration, so unless you’re looking at relative performance by swapping coolers in a controlled environment, you’re not providing useful data to the discussion.
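To illustrate the point about relative versus absolute numbers, here is a minimal sketch of the most basic normalization step a controlled comparison needs: expressing each reading as a delta over room ambient, so sessions run at different room temperatures can be compared at all. The cooler names and figures below are hypothetical examples, not GN data, and real methodology involves far more control than this.

```python
def delta_over_ambient(cpu_temp_c: float, ambient_c: float) -> float:
    """Return a CPU temperature normalized as a delta over room ambient."""
    return cpu_temp_c - ambient_c

# Hypothetical sessions: A was run in a 21C room, B in a 26C room.
# (name: (raw CPU temp in C, room ambient in C))
sessions = {
    "Cooler X": (62.0, 21.0),
    "Cooler Y": (66.0, 26.0),
}

# Raw numbers suggest Cooler X is 4C better, but normalized to
# ambient, the two results land within 1C of each other.
for name, (temp, ambient) in sessions.items():
    print(f"{name}: {temp:.1f}C raw, "
          f"{delta_over_ambient(temp, ambient):.1f}C over ambient")
```

Even this only makes the numbers comparable across room temperatures; it does nothing about differing CPUs, cases, fan curves, or mounting pressure, which is why swapping coolers on one controlled bench remains the only meaningful comparison.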
In this content, we’re going to show you six months of rigorous testing adventures, including several months’ worth of discovering testing flaws, common and uncommon errors, and bad data that could invalidate most reviews without the reviewer ever knowing. We know because we’ve spent months catching them, hence the long wait on publishing this content. Several of these issues will exist in other reviewers’ configurations without the technician’s knowledge; the trick is to have the right tools to flag errant testing. These concepts range from extremely basic to advanced. We wanted to skip some of the basics, but realized that there’s so much bad information out there that we’d better just cover it all in one big dissertation.
The AMD RX 5600 XT Jebaited Edition video cards launched yesterday, and the company created a mess by completely changing what the video card was meant to do before launch. Basically, it initially shipped as more of a 1660 Super competitor, but ended up being overhauled to become a 2060 competitor. This is a good thing from a price-competition standpoint, but a horrible mess for buyers and manufacturers of the cards. The update came in the form of a VBIOS flash that can increase performance by upwards of 11%, but not all shipped cards have the VBIOS applied, meaning customers will be buying cards that perform worse than what reviews show. Worse still, some cards will never have that VBIOS available, with some partners splitting their 5600 XT into two SKUs. It’d be a bit like selling the 1660 and 1660 Super under a single name, but with two completely different performance classes. In today’s content, we’re going to help you flash 5600 XT cards to unlock the full performance, assuming your card has such a VBIOS available. This will also apply to other AMD video cards.
This isn’t a revisit of the old AMD Ryzen 5 1600 – it’s a review of the new variant, named the AMD Ryzen 5 1600 “AF” by the community, dubbed as such for its SKU change from AE to AF. The AMD R5 1600 AF is a brand new CPU with an old, old name from 2017. It’s mostly an R5 2600, in that it’s a slower variant of the Zen+ CPU from the 2000 series, but with a 1000-series name. AMD silently released the 1600 AF as an $85 option, but it’s on 12nm instead of 14nm and carries other 2nd-Gen Ryzen features. In today’s review of the new $85 processor, we’ll look at performance versus the original R5 1600 and the R5 2600, and at overclocking, since a 12nm 1600 AF should overclock about the same as 12nm Ryzen 2000 parts, which typically clocked 100-200MHz higher than the 1000 series.
The R5 1600 AF is a weird, weird refresh. It’s mostly odd that AMD didn’t just name it Ryzen 3 3300X or Ryzen 5 3550. The company already has a 3000-family part with Zen+ architecture and a 3000G with Zen 1 architecture, so a 3000-series name wouldn’t dilute the naming, and it’d be a much more successful, higher-selling product with a lot of media fanfare. Instead, it just sounds like a two-year-old part, but it’s really not. We can’t fault AMD for its naming, and it doesn’t particularly bother us; it’s just a bit odd from a marketing standpoint. Maybe AMD doesn’t want to sell a lot of these.
Back when Ryzen 3000 launched, there was reasonable speculation, founded in basic physics, that the asymmetrical die arrangement of the CPUs with fewer chiplets could have implications for cooler performance. The idea was that, at the root of it, a cooler whose heatpipes aligned to make full contact above the die would perform better than one whose heatpipes only partially overlapped the die. We still see a lot of online commentary about this and some threads about which orientation of a cooler is “best,” so we thought we’d bust a few of the myths that popped up, and also do some testing on the base idea.
This is pretty old news by now, with much of the original discussion starting about two months ago. Noctua revived the issue at the end of October by stating that it believed there to be no meaningful difference between the two possible heatpipe orientations on AM4 motherboards, but not everyone has seen that, because we’re still getting weekly emails asking us to test this hypothesis.