Hardware Guides

We’re back for our annual “Best Of” series. We’ve already published a buyer’s guide for the Best CPUs of 2020 (for gaming, workstation tasks, video editing, and more), and now it’s time for the Best & Worst PC Cases of 2020. This coverage provides a flyby overview of the best cases we’ve reviewed or worked on in the past year, but keep in mind that cases don’t age like CPUs or GPUs -- many good cases from 2019 are still available, and in some instances, the pricing has improved. We’ll talk about some of those, too, like the Phanteks P400A.

Each case will be accompanied by a link to our review and to the product listings. We often earn a commission from the retailer (not from the manufacturer and not from your purchase) if you click on the links. This does not influence our decision to choose one case over another -- we’re choosing based on our empirical testing data from the last year or so.

We’ll embed a few charts occasionally, but to get the full charts, you’ll want to check the individual case reviews for each enclosure. The target audience for this piece is either people returning to PC building for the first time in a while -- those who might be out of the loop -- or people who haven’t had time to watch or read every single one of our case reviews over the past year. We don’t blame you, if so. 

Today’s round-up covers the best airflow-focused cases currently available, which can also be tuned into good acoustic performers thanks to their unrestricted intakes. Over all the years that we’ve been doing case reviews, we’ve advocated for high-airflow designs. That generally implies lots of mesh and lots of fans, like the classic Cooler Master HAF cases that adopted “high airflow” as a brand name. As those cases aged and optical drives fell out of favor, front panel designs became increasingly clean and minimalistic, and therefore increasingly closed-off. Now the tide has turned again, and in 2020, we have more airflow cases than we know what to do with. Today, we’ll be covering some of our top choices -- this isn’t our yearly best-and-worst cases round-up, it’s just a selection of airflow-focused cases with good value. We have almost 300 rows of test data multiplied across about 7 sheets, so although we’ll be limiting ourselves to cases we’ve reviewed, that’s still a big list. As always, let us know if there’s another case we should check out in the comments below.

Hardware-accelerated GPU scheduling is a feature new to Microsoft’s May 2020 update, Windows 10 version 2004, and is now supported by both NVIDIA and AMD via driver updates. This feature is not to be confused with DirectX 12 Ultimate, which was delivered in the same Windows update. Hardware-accelerated GPU scheduling is supported on Pascal and Turing cards from NVIDIA, as well as AMD’s 5600 and 5700 series of cards. In today’s content, we’ll first walk through what exactly this feature does and what it’s supposed to mean, then we’ll show some performance testing to see how it actually affects behavior.

Enabling hardware-accelerated GPU scheduling requires Windows 10 2004, a supported GPU, and the latest drivers for that GPU (NVIDIA version 451.48, AMD version 20.5.1 Beta). With those requirements satisfied, a switch labelled “Hardware-accelerated GPU Scheduling” should appear in the Windows 10 “Graphics Settings” menu, off by default. Enabling the feature requires a reboot. This switch is the only visible sign of the new feature.
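For reference, the Graphics Settings toggle corresponds to a registry value, shown below as a registry fragment. This is a sketch based on how the setting is commonly documented, not something the article above specifies -- verify the key on your own system before touching it, and prefer the Graphics Settings UI where possible.

```
; HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers
; "HwSchMode" (DWORD): 2 = hardware-accelerated GPU scheduling on, 1 = off.
; A reboot is still required after changing the value.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"HwSchMode"=dword:00000002
```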

It’s difficult to differentiate motherboards, at least from a marketing perspective. There are definitely better and worse boards, and you can check any of the roundups or reviews Buildzoid has produced for this channel for explanations as to why, but “better” doesn’t mean “higher FPS in games” here. Using higher-quality or more expensive components doesn’t always translate directly into running Fortnite at a higher framerate, which makes it harder to communicate to consumers why they should spend $200 on board X instead of $100 on board Y if both can run the same CPUs. This has led to motherboard manufacturers playing games with numbers for boost duration, voltages, BCLK, and other settings in order to differentiate their boards from the competition with tangible performance increases.

We’ve talked about Intel turbo and “Multi-Core Enhancement” many, many times in the past. This serves as a companion piece to the most recent of these, our “Intel i9-10900K ‘High’ Power Consumption Explained” video. To reiterate, Intel’s specification defines turbo limits--the multipliers for boosting on one core, two cores, etc, all the way up to an all-core turbo boost. Here are some examples from Coffee Lake’s launch (8700K) and before:
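As a concrete sketch of what a per-core turbo limit table looks like, here’s a minimal Python example using the i7-8700K’s published turbo bins (4.7GHz at one active core down to 4.3GHz all-core). The table and helper function are illustrative only; bins for other SKUs differ.

```python
# Intel i7-8700K per-core turbo bins (multipliers on a 100MHz BCLK),
# as published at Coffee Lake's launch: 47/46/45/44/44/43.
TURBO_BINS_8700K = {1: 47, 2: 46, 3: 45, 4: 44, 5: 44, 6: 43}

def turbo_ghz(active_cores: int, bins: dict = TURBO_BINS_8700K,
              bclk_mhz: int = 100) -> float:
    """Return the in-spec turbo frequency in GHz for N active cores."""
    multiplier = bins[active_cores]
    return multiplier * bclk_mhz / 1000

print(turbo_ghz(1))  # single-core boost: 4.7
print(turbo_ghz(6))  # all-core boost: 4.3
```

Multi-Core Enhancement, in effect, applies the single-core bin to all cores, which is why it runs outside Intel’s specification.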

With the new influx of CPUs from AMD and Intel, and more rumored on the horizon, we wanted to round-up all of our recent testing into one concise piece for people looking for recommendations on the best CPU for different tasks. We’ve published several hours’ worth of content in the form of reviews, tuning, and follow-up coverage, so if you want the full details and depth for anything, check those pieces. We’ll be focusing more on firm recommendations for each category in this video and less on the deeper details, with our categories including: Best gaming CPU, best budget gaming CPU, best small business or hobbyist production CPU, best workstation CPU, best overall, most fun to overclock, and most disappointing.

In this content, we’re going to be breaking down the AMD B550 vs. X570, B450, X470, X370, and A320 chipset specifications number-by-number. Our goal is to look at this purely from a facts-based angle of what the differences are, and those differences will include both numerical specification differences (number and type of lanes afforded) and forward or backwards compatibility differences. This includes the intent of the 500-series chipsets to support the Zen 3 architecture (reminder: that’s not the same as Ryzen 4000 mobile, nor is it the same as Ryzen 3000 desktop), while the existing B450 and X470 boards are left to cap out at Ryzen 3000 series (Zen 2) parts.
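The compatibility picture described above (as stated at the time of writing, before any later BIOS policy changes) can be sketched as a simple lookup; the table below only encodes what the paragraph above claims and is purely illustrative:

```python
# Maximum supported Zen architecture generation per AMD chipset, per the
# compatibility intent described above: 500-series boards are slated for
# Zen 3, while 400-series boards cap out at Zen 2 (Ryzen 3000 desktop).
MAX_ZEN = {
    "X570": 3, "B550": 3,   # 500-series: Zen 3 intended
    "X470": 2, "B450": 2,   # 400-series: capped at Zen 2
}

def supports(chipset: str, zen_generation: int) -> bool:
    """Check whether a chipset supports a given Zen architecture generation."""
    return zen_generation <= MAX_ZEN[chipset]

print(supports("B550", 3))  # True
print(supports("B450", 3))  # False
```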

We have some additional discussion of the basics of naming, including CPU naming distinctions, in the video component that accompanies this article. You can find more information on the differences between AMD Zen generations and Ryzen generations in that content.

It’s time again for our CPU testing methodology to be updated, alongside the test bench. We’ve done some significant streamlining behind the scenes that make these tests easier to run and the results easier and more accurate to process, but on the public side, we’ve completely overhauled the software suite we’re using. Last time we updated our testing methodology, we added a code compile benchmark that was short-lived. The test featured GCC, Cygwin, some other environments, and ended up being a top-to-bottom sort by cache. We ditched that test (and consulted Wendell of Level1 Techs on it in this video), and we’re just now replacing it. New code compile benchmarking (with more usefulness) has been added for 2020, alongside the addition of Handbrake H.264 to H.265 transcoding (ranked by time), updated Adobe Premiere video rendering and Adobe Photoshop benchmarks, updated file compression and decompression benchmarks, and more. Gaming gets a total overhaul, too, with a big suite of new games added.
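For the “ranked by time” tests like the Handbrake transcode, lower is better and results sort ascending. Here’s a trivial sketch of that ranking logic; the CPU names and times are hypothetical placeholders, not our test data:

```python
# Hypothetical H.264 -> H.265 transcode times in seconds (NOT real results).
# "Ranked by time" means ascending sort: the fastest part ranks first.
results = {
    "CPU A": 412.0,
    "CPU B": 388.5,
    "CPU C": 455.2,
}

ranked = sorted(results.items(), key=lambda kv: kv[1])
for rank, (cpu, seconds) in enumerate(ranked, start=1):
    print(f"{rank}. {cpu}: {seconds:.1f}s")
```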

Additionally, we’ve updated several existing game and production benchmarks from last year’s suite, with a few left unchanged. This is to keep producing data that we can still compare to old data, which is useful for rapid analysis of parts that may not have been re-tested in the current year. For example, if we were testing a 10700K and wanted to reference its performance vs. a 2600K, but didn’t have a fresh retest, we could reference data from GTA V, Tomb Raider, Civilization, and ACO to form an understanding without fully retesting. We try to limit this, but time often gets the better of us, and it’s good to have reference points to ensure ongoing accuracy.

ATX12VO is a new-ish power supply spec published by Intel in July of 2019 that eliminates the 3.3V and 5V rails from power supplies, leaving only the 12V rail. The spec has become a hot buzzword lately because Tier 2 of the California Energy Commission’s Title 20 goes into effect on July 1st, 2021, and these stricter energy regulations were a large part of why the ATX12VO spec was written. We’ve spoken to Intel, a major power supply manufacturer, and a power supply factory on the subject, the latter two off the record, and today we’ll be reporting their thoughts. We’ll also be defining the ATX12VO spec and what it means for computing, along with Intel’s goals for the specification.

EATX is a bullshit wannabe half-specification, not a real form factor. At least, not the way it’s being treated right now. It doesn’t mean anything. The name “EATX” implies a standard, but it’s not a standard, it’s a free-for-all. That’s not even getting into EE-ATX, or Enhanced Extended Advanced Technology eXtended, which is actually a name. Things would be a lot easier for everyone if motherboard manufacturers stuck to the dimensions of SSI-EEB without trying to wedge custom form factors in between, or correctly referred to 12”x10.5” boards as SSI-CEB, but that’d require actually trying to follow a spec. Then case manufacturers would have no reason to write “EATX (up to 11 inches)” in every single spec sheet for normal-sized mid towers, and customers would know at a glance exactly what they were getting. We’ve had a hell of a time lately trying to find cases that fit our “E-ATX” motherboards, which range in size from “basically ATX” to “doesn’t fit in any case that says it supports E-ATX, but is still called E-ATX.” We took that frustration and dug into the matter.
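To illustrate why a clearance figure like “up to 11 inches” actually communicates something while “E-ATX” doesn’t, here’s a minimal Python sketch using the nominal board dimensions from the real specs (ATX 12”x9.6”, SSI-CEB 12”x10.5”, SSI-EEB 12”x13”); the fit check is a simplification that ignores standoff and cable-cutout placement:

```python
# Nominal motherboard dimensions in inches (width x depth). Note that
# "E-ATX" is deliberately absent: it has no single canonical size,
# which is exactly the problem.
FORM_FACTORS = {
    "ATX":     (12.0, 9.6),
    "SSI-CEB": (12.0, 10.5),
    "SSI-EEB": (12.0, 13.0),
}

def fits(board: str, max_depth_in: float) -> bool:
    """Check a board's depth against a case spec like '(up to 11 inches)'."""
    _width, depth = FORM_FACTORS[board]
    return depth <= max_depth_in

print(fits("SSI-CEB", 11.0))  # True: a 10.5"-deep board clears an 11" limit
print(fits("SSI-EEB", 11.0))  # False: a 13"-deep EEB board won't fit
```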

Other than technical discussion, we’ll also get the fun of unrolling the acronyms used everywhere in the industry, and talking about how stupid form factors like XL-ATX have three different sizes despite having one name, or how E-ATX has been split into “True E-ATX” and “Full Size E-ATX,” which also don’t mean anything to anyone.

The biggest rule in testing coolers is to never trust anything: Don’t trust the numbers, don’t trust the software, don’t trust firmware, and don’t trust the test bench. Every step of the way is a trap lying in wait to sabotage data accuracy. We’ve spent the last 3 years refining our liquid cooler bench and the last 6 months refining our new testing that will feature air coolers and liquid coolers alike. With millions of cells of data, we now know enough to have identified nearly every hidden pitfall in testing and finally feel confident in providing a full picture for accurate CPU cooler performance. The downside is that we’ll never trust anyone else’s numbers again, but the upside is that we can finally start really collecting data. This dissertation will be on the most common and the most obscure landmines for testing, laying a plan for our CPU cooler reviews and helping establish a baseline for quality and data accuracy. We promised a CPU air cooler round-up back at the end of 2016 or 2017, and we’re finally getting around to it and will be publishing a lot of cooler content over the next month or so. We’ll start with an A500 review after this methodology piece goes live, then we’ll break for our factory tour series, then we’ll be back to coolers.

This content is detailed and specific to CPU cooler testing methodology and processes. We will be using this as a reference piece for years, as it will establish testing practices to ensure accurate data. Most data out there regarding CPU coolers is flawed in some way or another, especially the stuff posted in random reddit comments, but the trick is minimizing flaws to the extent possible while remaining real-world, because total elimination of variables and pitfalls is impossible on PC hardware. Users will often randomly post a temperature number and say something like, “my Spire is at 70 degrees,” as if that actually means anything to anyone. Temperature isn’t a 3DMark score – it is completely dependent on each configuration, and so unless you’re looking at relative performance by swapping coolers in a controlled environment, you’re not really providing useful data to the discussion.
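One minimal step toward making a standalone temperature meaningful is normalizing it to ambient, i.e., reporting delta-T over ambient rather than an absolute number. This sketch shows the idea with hypothetical values; it doesn’t make configurations comparable on its own (coolers, cases, and loads still differ), but it removes room temperature as a variable:

```python
# Report cooler results as delta-T over ambient rather than absolute
# degrees: an absolute "70C" is meaningless without the room temperature.
def delta_t_over_ambient(cpu_temp_c: float, ambient_c: float) -> float:
    """Normalize an absolute CPU temperature to a delta over ambient."""
    return cpu_temp_c - ambient_c

# Two hypothetical rooms, same cooler and load, different ambients:
print(delta_t_over_ambient(70.0, 21.0))  # 49.0
print(delta_t_over_ambient(74.0, 25.0))  # 49.0 -- same cooler performance
```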

In this content, we’re going to show you 6 months of rigorous testing adventures that we’ve embarked on, including several months’ worth of discovering flaws in testing, common and uncommon errors, and bad data that could invalidate most reviews without the reviewer ever even knowing. We know because we’ve spent months catching them, hence our long wait time on publishing this content. Several of these issues will exist in other reviewer configurations without technician knowledge, but the trick is to have the right tools to flag errant testing. These concepts will range from extremely basic to advanced. We wanted to skip some basics, but realized that there’s so much bad information out there that we’d better just cover it all in one big dissertation.
