EATX is a bullshit, wannabe half-specification, not a real form factor. At least, not the way it’s being treated right now. It doesn’t mean anything. The name “EATX” implies a standard, but it’s not a standard, it’s a free-for-all. That’s not even getting into EE-ATX, or “Enhanced Extended Advanced Technology eXtended,” which is, somehow, a real name. Things would be a lot easier for everyone if motherboard manufacturers stuck to the dimensions of SSI-EEB without trying to wedge custom form factors in between, or correctly referred to 12”x10.5” boards as SSI-CEB, but that’d require actually trying to follow a spec. Then case manufacturers would have no reason to write “EATX (up to 11 inches)” in every single spec sheet for normal-sized mid towers, and customers would know at a glance exactly what they were getting. We’ve had a hell of a time lately trying to find cases that fit our “E-ATX” motherboards, which range in size from “basically ATX” to “doesn’t fit in any case that says it supports E-ATX, but is still called E-ATX.” We took that frustration and dug into the matter.
Other than technical discussion, we’ll also get the fun of unrolling the acronyms used everywhere in the industry, and talking about how stupid form factors like XL-ATX have three different sizes despite having one name, or how E-ATX has been split into “True E-ATX” and “Full Size E-ATX,” which also don’t mean anything to anyone.
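To illustrate just how little “E-ATX” pins down, here’s a rough sketch comparing board dimensions against the actual named standards. The ATX, SSI-CEB, and SSI-EEB dimensions below come from their published specifications; the “E-ATX” fallback label is our own, reflecting the free-for-all described above rather than any real spec:

```python
# Named standards with fixed dimensions (width x depth, in inches).
# ATX and the SSI sizes are from their published specs; "E-ATX" has
# no single authoritative size, which is the whole problem.
STANDARDS = {
    "ATX": (12.0, 9.6),
    "SSI-CEB": (12.0, 10.5),
    "SSI-EEB": (12.0, 13.0),
}

def classify(width_in, depth_in):
    """Return the named standard matching a board's dimensions, or a
    warning label for boards marketed as 'E-ATX' with no real spec."""
    for name, (w, d) in STANDARDS.items():
        if abs(width_in - w) < 0.1 and abs(depth_in - d) < 0.1:
            return name
    if width_in >= 12.0 and depth_in > 9.6:
        return "'E-ATX' (no standard; check case clearance in inches)"
    return "unknown"
```

Run against a 12”x10.5” board, this correctly returns “SSI-CEB” — the name manufacturers should be using — while anything oversized and nonstandard gets flagged as the clearance gamble it actually is.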
The biggest rule in testing coolers is to never trust anything: Don’t trust the numbers, don’t trust the software, don’t trust firmware, and don’t trust the test bench. Every step of the way is a trap lying in wait to sabotage data accuracy. We’ve spent the last 3 years refining our liquid cooler bench and the last 6 months refining our new testing methodology, which will feature air coolers and liquid coolers alike. With millions of cells of data, we now know enough to have identified nearly every hidden pitfall in testing and finally feel confident in providing a full picture for accurate CPU cooler performance. The downside is that we’ll never trust anyone else’s numbers again, but the upside is that we can finally start really collecting data. This dissertation will cover the most common and the most obscure landmines for testing, laying a plan for our CPU cooler reviews and helping establish a baseline for quality and data accuracy. We promised a CPU air cooler round-up back at the end of 2016 or 2017, and we’re finally getting around to it and will be publishing a lot of cooler content over the next month or so. We’ll start with an A500 review after this methodology piece goes live, then we’ll break for our factory tour series, then we’ll be back to coolers.
This content is detailed and specific to CPU cooler testing methodology and processes. We will be using this as a reference piece for years, as it will establish testing practices to ensure accurate data. Most data out there regarding CPU coolers is flawed in one way or another, especially the stuff posted in random Reddit comments, but the trick is minimizing flaws to the extent possible while remaining real-world, because total elimination of variables and pitfalls is impossible on PC hardware. Users will often randomly post a temperature number and say something like, “my Spire is at 70 degrees,” as if that actually means anything to anyone. Temperature isn’t a 3DMark score – it is completely dependent on each configuration, so unless you’re looking at relative performance by swapping coolers in a controlled environment, you’re not really providing useful data to the discussion.
In this content, we’re going to show you 6 months of rigorous testing adventures that we’ve embarked on, including several months’ worth of discovering flaws in testing, common and uncommon errors, and bad data that could invalidate most reviews without the reviewer ever even knowing. We know because we’ve spent months catching them, hence our long wait time on publishing this content. Several of these issues will exist in other reviewers’ configurations without the technician’s knowledge, but the trick is to have the right tools to flag errant testing. These concepts will range from extremely basic to advanced. We wanted to skip some basics, but realized that there’s so much bad information out there that we’d better just cover it all in one big dissertation.
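One of the most basic safeguards against the “my Spire is at 70 degrees” problem is reporting temperature as a delta over a controlled ambient rather than as an absolute number, and invalidating runs where ambient drifted. A minimal sketch of that idea (the drift tolerance here is a hypothetical value for illustration, not our actual lab tolerance):

```python
def delta_t_over_ambient(cpu_temp_c, ambient_c):
    """Normalize an absolute CPU temperature to delta-T over ambient,
    so results stay comparable across runs with different room temps."""
    return cpu_temp_c - ambient_c

def ambient_is_controlled(ambient_samples_c, max_swing_c=1.0):
    """Flag a test pass as invalid if ambient drifted more than the
    allowed swing during the run (tolerance is hypothetical)."""
    return (max(ambient_samples_c) - min(ambient_samples_c)) <= max_swing_c
```

With this framing, a 70C reading in a 21C room and a 70C reading in a 28C room are correctly treated as very different results, rather than the same meaningless number.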
The AMD RX 5600 XT Jebaited Edition video cards launched yesterday, and the company created a mess by completely changing what the video card was meant to do before launch. Basically, it initially shipped as more of a 1660 Super competitor, but ended up being overhauled to become a 2060 competitor. This is overall a good thing from a price competition standpoint, but a horrible mess for buyers and manufacturers of the cards. The update came in the form of a VBIOS flash that can increase performance upwards of 11%, but not all the shipped cards have the VBIOS applied, meaning customers will be buying cards that perform worse than what reviews show. Worse still, some cards will never have that VBIOS available, with some partners splitting their 5600 XT into two SKUs. It’d sort of be like if the 1660 and 1660 Super were sold under a single name, but with two completely different performance classes. In today’s content, we’re going to help you flash 5600 XT cards to unlock the full performance, assuming your card has made such a VBIOS available. This will also apply to other AMD video cards.
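As a rough outline of the flashing process, assuming the AMD command-line flashing utility (amdvbflash) and that your card’s partner has actually published an updated VBIOS for your exact SKU — the filenames and adapter index below are hypothetical, and flashing a VBIOS not built for your exact board can brick the card:

```shell
# List detected AMD adapters and their index numbers
# (adapter 0 is assumed in the commands below).
amdvbflash -i

# Back up the card's current VBIOS before touching anything,
# so there's a known-good image to restore if needed.
amdvbflash -s 0 original_vbios.rom

# Program the partner-supplied VBIOS onto adapter 0,
# then reboot and re-verify clocks and power limits.
amdvbflash -p 0 new_vbios.rom
```

The backup step is not optional: if the flash fails or the new image misbehaves, re-programming the saved original is the recovery path.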
Back when Ryzen 3000 launched, there was reasonable speculation founded in basic physics that the asymmetrical die arrangement of the CPUs with fewer chiplets could have implications for cooler performance. The idea was that, at the root of it, a cooler whose heatpipes aligned to fully contact above the die would perform better, as opposed to one with two heatpipes sharing vertical contact with the die. We still see a lot of online commentary about this and some threads about which orientation of a cooler is “best,” so we thought we’d bust a few of the myths that popped up, but also do some testing on the base idea.
This is pretty old news by now, with much of the original discussion starting about two months ago. Noctua revived the issue at the end of October by stating that it believed there to be no meaningful impact between the two possible orientations of heatpipes on AM4 motherboards, but not everyone has seen that, because we’re still getting weekly emails asking us to test this hypothesis.
Our latest GN Special Report is looking at sales data to determine the popularity of both AMD and Intel CPUs amongst our readers, with dive-down data on average selling price, popularity by series (R5, R7, R9, or i7, i9, and so on), and Intel vs. AMD monthly sales volume. We ran a similar report in April of this year, but with Ryzen 3000 behind us, we now have a lot more data to look at. We’ll be comparing 3 full years of affiliate purchases through retail partners to analyze product popularity among the GamersNexus readers and viewers.
This year’s busy launch cadence has meant nearly non-stop CPU and GPU reviews for the past 6 months, but that also gives us a lot of renewed data to work with for market analysis. Intel’s supply troubles have been nearly a weekly news item for us throughout this year, with a few months of reprieve that soon lapsed. With Intel’s ongoing supply shortages and 10nm delays, and with its only launch being refreshes of existing parts, the company was barely present in the enthusiast segment for 2019. Even still, it’s dominating in pre-built computer sales, and ultimately, the DIY enthusiast space is an incredibly small portion of Intel’s total market share and volume. AMD, meanwhile, has had back-to-back launches in rapid succession, which have managed to dominate media coverage for the better part of this year.
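For reference, the two headline metrics in a report like this — average selling price and monthly vendor share — reduce to simple aggregation over purchase records. A sketch with entirely hypothetical rows (the record format and values are invented for illustration, not our actual affiliate data):

```python
from collections import defaultdict

# Hypothetical affiliate purchase records: (month, vendor, price_usd).
SALES = [
    ("2019-07", "AMD", 329), ("2019-07", "AMD", 199),
    ("2019-07", "Intel", 485), ("2019-08", "AMD", 499),
]

def average_selling_price(rows, vendor):
    """Mean price across all of one vendor's purchases."""
    prices = [price for _, v, price in rows if v == vendor]
    return sum(prices) / len(prices)

def monthly_share(rows, vendor):
    """Fraction of each month's unit volume belonging to one vendor."""
    totals, hits = defaultdict(int), defaultdict(int)
    for month, v, _ in rows:
        totals[month] += 1
        hits[month] += (v == vendor)
    return {m: hits[m] / totals[m] for m in totals}
```

The real report layers retailer, SKU, and series (R5/R7/R9, i5/i7/i9) on top of this, but the underlying arithmetic is the same grouping-and-averaging shown here.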
We return again to our annual Awards Show series, where we recap a year’s worth of content to distill our opinions on the best-of-the-best hardware that we’ve tested. We also like to talk about some of the worst trends and biggest disappointments in these pieces, hopefully shaming some of the industry into doing better things next year. This episode focuses on the Best Gaming GPUs of 2019, with categories like Best Overall, Most Well-Rounded, Best Modding Support, Best Budget, and more. NVIDIA and AMD have flooded warehouse shelves with cards over the past 11 months, but it’s finally calming down and coming to a close. Time to recap the Best GPUs of 2019, with links below for each card.
We’ve already posted two of our end-of-year recaps, one for Best Cases of 2019, the other for Best CPUs of 2019, and now we’re back with Best GPUs. As a reminder for this content type, our focus is to help people building systems with a quick-reference recap for a year’s worth of testing. We’re not going to deep-dive with a data-centric approach, but rather cover the stack in a quicker fashion. If you want deep-dive analytics or test data for any one of these devices, check our reviews throughout the year. Note also that, although we will talk about partner models a bit, the “Best X” coverage will focus on the GPU itself (made by AMD or NVIDIA). For our most recent partner recap, check out our “Best RX 5700 XT” coverage.
It’s that time of year again where we decide which case manufacturers deserve our praise and a GN Teardown Crystal, and which deserve eternal shame and have to pay $19.99 for their own Teardown Crystal from store.gamersnexus.net. Last year, the Lian Li O11 Dynamic took the prize for Best All-Around, and the Silverstone PM02 and Fractal Define S2 took home the “Best Worst” Trend award for the unforgivable sin of being pointless refreshes. Also, the PM02 is just a bad case. This year’s award nominees pick up from where we left off, starting with the lackluster Thermaltake Level 20 MT in December of 2018. Spoilers: it didn’t win anything.
With over 220 rows of case data now -- or maybe more, we haven’t really checked too recently -- there’s a lot to consider in our round-up of the best cases for 2019. Fortunately, that list instantly gets whittled down to, well, just 2019’s data, which is still populous. Given how many bad cases shipped this year, we can narrow the list further to focus on only the most deserving of recognition. This article will continue after the embedded video.
With over a dozen CPU reviews behind us this year, it’s time to round up the Best of 2019 with the first installment of our annual GN Awards show. In this series, we’ll pick the best products for categories like performance, overall quality, gaming, overclocking, and more. Our goal today is to help you parse the best CPUs in each category so that you can pick the right parts for PC build purchases during Black Friday, Cyber Monday, and other holiday sales.
At the end of this content, one of the two companies will walk away with a GN Award Crystal for its efforts this year. Our award crystals are 3D laser-engraved glass cubes that feature a GN tear-down logo, replete with easter eggs like MOSFETs, inductors, VRMs, PCIe slots, fans, and even screws, all in 3D.
Thermal Design Power, or TDP, is a term used by AMD and Intel to refer in an extremely broad sense to the rate at which a CPU cooler must dissipate heat from the chip to allow it to perform as advertised. Sort of. Depending on the specific formula and product, this number often ends up a combination of science-y variables and voodoo mysticism, ultimately culminating in a figure that’s used to beat down forum users over which processor has a lower advertised “TDP”. With the push of Ryzen 3000, we’re focusing today on how AMD defines TDP and what its formula actually breaks down into, and how that differs from the way cooler manufacturers define it. Buying a 95W TDP processor and a 95W TDP CPU cooler doesn’t mean they’re perfectly matched, and TDP is a much looser calculation than most would expect. There’s also contention between cooler manufacturers and CPU manufacturers over how this should be accurately calculated versus calculated for marketing, something we’ll explore in today’s content.
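As a preview of why the number is looser than it sounds: the general form AMD has described is a thermal-resistance calculation, not a measurement of electrical power draw. A sketch of that form, with hypothetical input values chosen only for illustration:

```python
def amd_style_tdp(t_case_max_c, t_ambient_c, theta_ca):
    """TDP in the thermal-resistance form AMD has described:
    (max allowed case/lid temperature - assumed ambient) divided by
    the assumed heatsink thermal resistance (theta_ca, in C/W).
    Note that no term here is actual measured power consumption."""
    return (t_case_max_c - t_ambient_c) / theta_ca

# Hypothetical inputs: 62C lid limit, 42C assumed ambient,
# 0.21 C/W assumed heatsink -> roughly a 95W "TDP" figure,
# produced without ever measuring watts at the EPS12V cables.
```

Because two of the three inputs are assumptions about the cooler and environment rather than properties of the silicon, the marketing number can be steered by tweaking those assumptions — which is exactly the contention between CPU and cooler manufacturers described above.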
This content comes from an earlier-published feature-length video we made. We don’t really make any profit on the articles, but maintain them anyway as a point of reference. If you’d like to support deep-dive, long-form content like this, please consider supporting us the following ways:
- Watching the video is a great way, but we know that you’re here because you prefer reading! It’s faster, after all
- Grabbing a GamersNexus GPU Disassembly Toolkit, an anti-static GN Modmat (available in medium & classic large), T-shirt, GPU anatomy poster, glassware, or other merch to support us
- Contributing to our efforts on Patreon
The article continues after the embedded video. Please note that some off-the-cuff/unscripted commentary will not be ported to the article, so you may miss out on some commentary, but most of it is here.
Memory speed on Ryzen has always been a hot subject, with AMD’s 1000 and 2000 series CPUs responding favorably to fast memory while at the same time having difficulty getting past 3200MHz in Gen1. The new Ryzen 3000 chips officially support memory speeds up to 3200MHz and can reliably run kits up to 3600MHz, with extreme overclocks up to 5100MHz. For most people, this type of clock isn’t achievable, but frequencies in the range of 3200 to 4000MHz are relatively easy to reach, at which point looser timings become the concern. Today, we’re benchmarking various memory kits at XMP settings, with the Ryzen DRAM Calculator, and with manual override overclocking. We’ll look at the trade-off of higher frequencies versus tighter timings to help establish the best memory solutions for Ryzen.
One of the biggest points to remember during all of this -- and any other memory testing published by other outlets -- is that the motherboard matters nearly as much as, and sometimes more than, the memory kit itself. Motherboards are responsible for most of the timings auto-configured on memory kits, even when using XMP, as XMP can only store so much data per kit. The rest, including timings that are never surfaced to the user, are set during memory training by the motherboard. Motherboard manufacturers maintain a QVL (Qualified Vendor List) of kits tested and approved on each board, and we strongly encourage system builders to check these lists rather than just buying a random kit of memory. Motherboard makers will even tune timings for some kits, so there’s potentially a lot of performance lost by using mismatched boards and memory.
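A quick way to reason about the frequency-versus-timings trade-off is first-word latency in nanoseconds, which collapses CAS latency and transfer rate into a single comparable number. This is only a rough proxy -- as noted above, the board-managed secondary and tertiary timings matter too -- but the arithmetic is straightforward:

```python
def first_word_latency_ns(cas_latency, transfer_rate_mts):
    """First-word latency in nanoseconds: CAS cycles divided by the
    memory clock (half the transfer rate, since DDR transfers twice
    per clock), converted to ns. Lower is better."""
    return cas_latency * 2000 / transfer_rate_mts

# DDR4-3200 CL16 and DDR4-3600 CL18 both land at 10 ns, so the
# 3600 kit offers more bandwidth at the same effective latency.
```

By this measure, a 3200MHz CL14 kit (8.75ns) actually beats a 3600MHz CL18 kit (10ns) on latency, which is exactly why chasing raw frequency without regard for timings can backfire.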