Silicon quality and the so-called silicon lottery are often discussed in the industry, but it’s rare for anyone to have a large enough sample size to actually demonstrate what those phrases mean in practice. We asked Gigabyte to loan us as many units of a single model of video card as it could so that we could demonstrate the card-to-card frequency variance at stock, the variations in overclocking headroom, and the actual gaming performance differences from one card to the next. This helps to more definitively answer the question of how much silicon quality can impact a GPU’s performance, particularly at stock, and also looks at memory overclocking and the range of FPS in gaming benchmarks, using a highly controlled bench and a ton of test passes per device. Finally, we can put to the test the theory of how much one reviewer’s GPU might vary from another’s when running initial review testing.
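To illustrate what card-to-card variance looks like in numbers, here is a minimal sketch of how one might summarize stock frequency spread across a batch of identical cards. The clock values are hypothetical placeholders, not our measured data:

```python
import statistics

# Hypothetical peak stock boost clocks (MHz) logged from several samples
# of the same card model -- placeholder numbers, not measured results.
stock_clocks_mhz = [1980, 1995, 1965, 2010, 1972, 1988]

mean_clock = statistics.mean(stock_clocks_mhz)    # average across the batch
stdev_clock = statistics.stdev(stock_clocks_mhz)  # sample standard deviation
spread = max(stock_clocks_mhz) - min(stock_clocks_mhz)  # best vs. worst card

print(f"mean: {mean_clock:.1f} MHz")
print(f"stdev: {stdev_clock:.1f} MHz")
print(f"min-to-max spread: {spread} MHz")
```

The min-to-max spread is what the "silicon lottery" phrase is really about: two reviewers with the best and worst cards in a batch like this would see clocks tens of MHz apart before touching any settings.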
AMD’s biggest ally for the RTX launch was NVIDIA, as the company crippled itself with unimplemented features and generational price creep out the gate. With RTX Super, NVIDIA demonstrates that it has gotten the memo from the community and has driven down its pricing while increasing performance. Parts of the current RTX line will be phased out, with the newer, better parts coming into play and pre-empting AMD’s Navi launch. The 2070 Super is priced at $500, $50 above the 5700 XT, and the on-paper specs should put it about equivalent to an RTX 2080 in performance; it’s even using the TU104 RTX 2080 die, further reinforcing this likely position. The 2060 Super sees a better bin with more unlocked SMs on the GPU, improving compute capabilities and framebuffer capacity beyond the initial 2060. Both of these things spell an embarrassing scenario about to unfold for AMD’s Radeon VII card, but we’ll have to wait another week to see how it plays out for the as-yet-unreleased Navi RX cards. There may be hope yet for AMD’s new lineup, but the existing lineup will face existential challenges from the freshly priced and face-lifted RTX Super cards. Today, we’re reviewing the new RTX Super cards with a fully revamped GPU testing methodology.
The first question is this: Why the name “Super?” Well, we asked, and nobody knows the answer. Some say that Jensen burned an effigy of a leather jacket and the word “SUPER” appeared in the toxic fumes above, others say that it’s a self-contained acronym. All we know is that it’s called “Super.”
Rather than pushing out Ti updates that co-exist with the original SKUs, NVIDIA is replacing the 2080 and 2070 SKUs with the new Super SKUs, while keeping the 2060 at $350. This is a “damned if you do, damned if you don’t” scenario. By pushing this update, NVIDIA shows that it’s listening – either to consumers or to competition – by bringing higher-performing parts to lower price categories. At the same time, people who recently bought RTX cards may feel burned or feel buyer’s remorse. This isn’t just a price cut, which is common, but a fundamental change to the hardware. The RTX 2070 Super uses TU104 for the GPU rather than TU106, bumping it to a modified 2080 status. The 2060 Super stays on TU106, but also sees changes to active SM count and memory capacity.
Leading into the busiest hardware launch week of our careers, we talk about Intel's internal competitive analysis document leaking, DisplayPort 2.0 specifications being detailed, and Ubuntu dropping and re-adding 32-bit support. We also follow-up on Huawei news (and how Microsoft and Intel are still supporting it) and trade tensions.
Show notes continue after the embedded video.
Hardware news this week has a few more AMD rumors -- one of which we're debunking (X590) and another we're re-highlighting (B550) -- with additional news coverage of the US tariffs and their impact on consumer pricing. On the topic of pricing, aside from an overall increase as a result of tariffs, Intel has expressed interest in reducing its desktop CPU prices by 10-15% with the launch of Ryzen.
The show notes continue below the embedded video.
Most of last week's hardware news revolved around AMD and its Navi and Ryzen product disclosures from the tech day, but plenty still happened during E3 week: Microsoft, for instance, announced its Scarlett console and rediscovered virtual memory, Comcast was caught violating the Consumer Protection Act over 445,000 times, USB 4.0 got lightly detailed for a 2020 launch, and more.
Show notes after the video embed.
AMD’s X570 chipset marks the arrival of some technology that was first deployed on Epyc, although that was done through the CPU as there isn’t a traditional chipset. With the shift to PCIe 4, X570 motherboards have grown more complex than X370 and X470, furthered by difficulties cooling the higher power consumption of X570. All of these changes mean that it’s time to compare the differences between X370, X470, and X570 motherboard chipsets, hopefully helping newcomers to Ryzen understand the changes.
The persistence of AMD’s AM4 socket, still slated for life through 2020, means that new CPUs are compatible with older chipsets (provided the motherboard makers update BIOS for detection). It also means that older CPUs (like the reduced-price R5 2600X) are compatible with new motherboards, if you for some reason ended up with that combination. The only real downside, aside from the potential cost of the latter option, is that new CPUs on old motherboards will mean no PCIe Gen4 support. AMD is disabling it in AGESA at launch, and unless a motherboard manufacturer finds the binary switch to flip in AGESA, it’ll be off for good. Realistically, this isn’t all that relevant: Most users will never touch the bandwidth of Gen4 for this round of products (in the future, maybe), and so the loss of running a new CPU on an old motherboard may be outweighed by the cost savings of keeping an already known-good board, provided the VRM is sufficient.
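For context on how much bandwidth is actually at stake, PCIe Gen4 doubles the per-lane transfer rate of Gen3 (16 GT/s vs. 8 GT/s, both using 128b/130b encoding). A quick back-of-envelope calculation shows why most users won't miss it:

```python
# Approximate usable PCIe bandwidth per direction.
# Gen3: 8 GT/s per lane, Gen4: 16 GT/s per lane; both use 128b/130b encoding.
def pcie_bandwidth_gbs(transfer_rate_gt, lanes):
    # 128b/130b encoding carries 128 payload bits per 130 line bits;
    # divide by 8 to convert gigabits to gigabytes.
    return transfer_rate_gt * lanes * (128 / 130) / 8

gen3_x16 = pcie_bandwidth_gbs(8, 16)   # ~15.75 GB/s
gen4_x16 = pcie_bandwidth_gbs(16, 16)  # ~31.51 GB/s
print(f"Gen3 x16: {gen3_x16:.2f} GB/s")
print(f"Gen4 x16: {gen4_x16:.2f} GB/s")
```

Current GPUs rarely saturate even a Gen3 x16 link in games, so doubling the ceiling to ~31.5 GB/s is headroom for future devices (and Gen4 NVMe storage) more than a benefit today.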
AMD’s technical press event bore information for both AMD Ryzen and AMD Navi, including overclocking information for Ryzen, Navi base, boost, and average clocks, architectural information and block diagrams, product-level specifications, and extreme overclocking information for Ryzen with liquid nitrogen. We understand both lines better now than before and can brief you on what AMD is working on. We’ll start with Navi specs, die size, and top-level architectural information, then move on to Ryzen. AMD also talked about ray tracing during its tech day, throwing some casual shade at NVIDIA in so doing, and we’ll also cover that here.
First, note that AMD did not give pricing to the press ahead of its livestream at E3, so this content will be live right around when the prices are announced. We’ll try to update with pricing information as soon as we see it, although we anticipate our video’s comments section will have the information immediately. UPDATE: Prices are $450 for the RX 5700 XT, $380 for the RX 5700.
AMD’s press event yielded a ton of interesting, useful information, especially on the architecture side. There was some marketing screwery in there, but a surprisingly low amount for this type of event. The biggest example was a thermographic image of two heatsinks meant to show comparative CPU temperature, even though the chart’s range spanned only 23 to 27 degrees -- a delta within common measurement error that the scale made look astronomically large. Beyond that, a heatsink should be hot, because that means it’s working, and a thermographic image of a shiny metal object mostly shows reflected room temperature or runs into emissivity issues; they should really just show junction temperature, anyway. This was our only major gripe with the event -- otherwise, the information was technical, detailed, and generally free of marketing BS. Not completely free of it, but mostly. The biggest issue with the comparison was a 28-degree result that fell outside the already silly 23-27 degree range, making 28 degrees look like massive overheating.
Let’s start with the GPU side.
As we board another plane, just five days since landing home from Taipei, we're recapping news leading into next week's E3 event, positioned exhaustingly close to Computex. This recap talks AMD and Samsung partnerships on GPUs, Apple's $1000 monitor stand and accompanying cheese grater, and the Radeon Vega II dual-GPUs located therein. We also talk tariff impact on pricing in PC hardware and, as an exclusive story for the video version, we talk about the fake "X499" motherboard at Computex 2019.
Show notes below the video embed.
As we’ve been inundated with Computex 2019 coverage, this HW News episode will focus on some of the smaller news items that have slipped through the cracks, so to speak. It’s mostly a helping of smaller hardware announcements from big vendors like Corsair, NZXT, and SteelSeries, with a side of the usual industry news.
Be sure to stay tuned to our YouTube channel for Computex 2019 news.
Our viewers have long requested that we add standardized case fan placement testing in our PC case reviews. We’ve previously talked about why this is difficult – largely logistically, as it’s neither free in cost nor free in time – but we are finally in a good position to add the testing. The tests, we think, clearly must offer some value, because they have been among our most-requested test items over the past two years. We ultimately want to act on community interests and explore what the audience is curious about, and so we’ve added tests for standardized case fan benchmarking and for noise-normalized thermal testing.
Normalizing for noise and running thermal tests has been our main, go-to benchmark for PC cooler testing for about 2-3 years now, and we’ve grown to really appreciate the approach to benchmarking. Coolers are simpler than cases, as there’s not really much in the way of “fan placement,” and normalizing to a 40dBA level has allowed us to determine which coolers cool most efficiently under identical noise conditions. As we’ve shown in our cooler reviews, this bypasses the issue where a cooler with significantly higher RPM always chart-tops. It’s not exactly fair if a cooler at 60dBA “wins” the thermal charts versus a bunch of coolers at, say, 35-40dBA, and so normalizing the noise level allows us to see if any proper differences emerge when the user is subjected to the same “volume” from their PC cooling products. We have also long used this approach for GPU cooler reviews. It’s time to introduce it to case reviews, we think, and we’ll be doing that by sticking with the stock case fan configuration and reducing case fan RPMs equally to meet the target noise level (CPU and GPU cooler fans remain unchanged, as these most heavily dictate CPU and GPU temperatures; they are held at fixed speeds throughout).
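The RPM-reduction step described above can be sketched as a simple search: lower the case fan speed until the measured noise lands on the target. This is a simplified illustration, not our actual test harness; `read_noise_dba` stands in for a real dBA meter reading taken at a fixed distance:

```python
# Simplified sketch of noise normalization: bisect case fan RPM until the
# system measures within tolerance of the target noise level.
TARGET_DBA = 40.0
TOLERANCE = 0.2  # acceptable dBA deviation from target

def normalize_fan_rpm(read_noise_dba, rpm_min=600, rpm_max=2000):
    """Find a case fan RPM whose measured noise is near TARGET_DBA."""
    lo, hi = rpm_min, rpm_max
    while hi - lo > 10:  # stop once the RPM search window is narrow
        mid = (lo + hi) // 2
        dba = read_noise_dba(mid)
        if abs(dba - TARGET_DBA) <= TOLERANCE:
            return mid
        if dba > TARGET_DBA:
            hi = mid  # too loud: lower RPM
        else:
            lo = mid  # too quiet: raise RPM
    return (lo + hi) // 2

# Toy stand-in model: noise rises roughly linearly with RPM here.
rpm = normalize_fan_rpm(lambda r: 30.0 + r * 0.008)
```

In practice the noise-vs-RPM curve isn't linear and readings are noisy, so real testing involves averaging multiple meter samples per step, but the principle is the same: equalize the noise, then compare thermals.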
We moderate comments on a ~24-48 hour cycle. There will be some delay after submitting a comment.