We're still in Taiwan this week for factory tours, but that's given us a unique vantage point for firsthand information on how COVID-19 is impacting the computer hardware industry. In particular, we've been able to glean information on how companies in the US and Taiwan are handling risk mitigation and limiting the spread of the virus. This has wider implications for consumers, as production will be limited over the next month or two and product delays are inevitable. There are also implications for Computex -- namely, whether it happens or not. In addition to this specific news, we have reporting on new AMD vulnerabilities, the death of the blower fan, and more.
We return to our annual Awards Show series, where we recap a year’s worth of content to distill our opinions on the best-of-the-best hardware that we’ve tested. We also like to talk about some of the worst trends and biggest disappointments in these pieces, hopefully shaming some of the industry into doing better next year. This episode focuses on the Best Gaming GPUs of 2019, with categories like Best Overall, Most Well-Rounded, Best Modding Support, Best Budget, and more. NVIDIA and AMD have flooded warehouse shelves with cards over the past 11 months, but the flood is finally calming down and coming to a close. Time to recap the Best GPUs of 2019, with links in the description below for each card.
We’ve already posted two of our end-of-year recaps, one for Best Cases of 2019, the other for Best CPUs of 2019, and now we’re back with Best GPUs. As a reminder for this content type, our focus is to help people building systems with a quick-reference recap for a year’s worth of testing. We’re not going to take a deep-dive, data-centric approach, but rather cover the stack in quick, summary fashion. If you want deep-dive analysis or test data for any one of these devices, check our reviews throughout the year. Note also that, although we will talk about partner models a bit, the “Best X” coverage will focus on the GPU itself (made by AMD or NVIDIA). For our most recent partner recap, check out our “Best RX 5700 XT” coverage.
After a slight lapse in news coverage due to a crowded content schedule, we’re back this week with highlights from the last couple of weeks. The news beat has been somewhat sluggish as we settle into the fourth quarter and move ever closer to the unrepentant shopping season. The crowning news item is the arrival of AMD’s remaining 2019 CPUs, including the highly-anticipated 16C/32T Ryzen 9 3950X.
There’s also fresh news on AMD’s continued encroachment on Intel’s x86 market share, Seagate keeping HDD development alive, and Samsung ending its custom CPU designs. Elsewhere within GN, we’ve recently — and exhaustively — detailed CPU and GPU recommendations for Red Dead Redemption 2 and pursued a 6GHz overclock on our i9-9900KS.
This week’s hardware news talks about NVIDIA’s reported revival of the RTX 2070, Intel’s ongoing 14nm shortage issues, AMD and Intel earnings reports, and more. Among the hardware items, GN also discusses its new ongoing partnership with the Eden Reforestation Projects to contribute 10 trees planted for each item sold via the GN store through November.
Show notes after the embedded video.
"Integer scaling" has been a buzz phrase for a few weeks now, with Intel first adding integer scaling support to its driver set, and now NVIDIA following. This week, we'll be talking more about what that even means (and where it's useful), news on AMD's RDNA whitepaper and CrossFire support, Intel's Comet Lake CPUs (and naming), and a few minor topics.
Show notes continue after the embedded video, as always.
AMD’s biggest ally for the RTX launch was NVIDIA itself, as the company crippled its own lineup with unimplemented features and generational price creep out of the gate. With RTX Super, NVIDIA demonstrates that it has gotten the memo from the community and has driven down its pricing while increasing performance. Parts of the current RTX line will be phased out, with the newer, better parts coming into play and pre-empting AMD’s Navi launch. The 2070 Super is priced at $500, $50 above the 5700 XT, and the on-paper specs should put it roughly equivalent to an RTX 2080 in performance; it’s even using the TU104 RTX 2080 die, further reinforcing this likely position. The 2060 Super sees a better bin with more SMs unlocked on the GPU, improving compute capabilities and framebuffer capacity beyond the initial 2060. Both of these things spell an embarrassing scenario about to unfold for AMD’s Radeon VII card, but we’ll have to wait another week to see how it plays out for the as-yet-unreleased Navi RX cards. There may be hope yet for AMD’s new lineup, but the existing lineup will face existential challenges from the freshly priced and face-lifted RTX Super cards. Today, we’re reviewing the new RTX Super cards with a fully revamped GPU testing methodology.
The first question is this: Why the name “Super?” Well, we asked, and nobody knows the answer. Some say that Jensen burned an effigy of a leather jacket and the word “SUPER” appeared in the toxic fumes above, others say that it’s a self-contained acronym. All we know is that it’s called “Super.”
Rather than pushing out Ti updates that co-exist with the original SKUs, NVIDIA is replacing the 2080 and 2070 SKUs with the new Super SKUs, while keeping the 2060 at $350. This is a “damned if you do, damned if you don’t” scenario. By pushing this update, NVIDIA shows that it’s listening – either to consumers or to competition – by bringing higher performing parts to lower price categories. At the same time, people who recently bought RTX cards may feel burned or feel buyer’s remorse. This isn’t just a price cut, which is common, but a fundamental change to the hardware. The RTX 2070 Super uses TU104 for the GPU rather than TU106, bumping it to a modified 2080 status. The 2060 stays on TU106, but also sees changes to SMs active and memory capacity.
As we’ve been inundated with Computex 2019 coverage, this HW News episode will focus on some of the smaller news items that have slipped through the cracks, so to speak. It’s mostly a helping of smaller hardware announcements from big vendors like Corsair, NZXT, and SteelSeries, with a side of the usual industry news.
Be sure to stay tuned to our YouTube channel for Computex 2019 news.
Computex is just a few weeks away. Mark calendars for May 28 to June 1 (and surrounding dates) -- we're expecting AMD Ryzen 3000 discussion, Navi unveils or teases, and X570 motherboards in the CPU category, with potential Intel news on 10nm for mobile and notebook devices. This week's news cycle was still busy pre-show, though, including discussion of an impending end to Intel's CPU shortage, the AMD supercomputer collaboration with Cray, NVIDIA's move away from binned GPUs, and more.
As always, the show notes are below the embedded video.
One of our most popular videos of yore talks about the GTX 960 4GB vs. GTX 960 2GB cards and the value of choosing one over the other. The discussion continues today, but is now more focused on 3GB vs. 6GB or 4GB vs. 8GB comparisons. Looking back at 2015’s GTX 960, we’re revisiting with locked frequencies to compare memory capacities. The goal is to look at both framerate and image quality to determine how well the 2GB card has aged versus the 4GB card.
A lot of things have changed for us since our 2015 GTX 960 comparison, so these results will obviously not be directly comparable to the time. We’re using different graphics settings, different test methods, a different OS, and much different test hardware. We’ve also improved our testing accuracy significantly, and so it’s time to take all of this new hardware and knowledge and re-apply it to the GTX 960 2GB vs. 4GB debate, looking into whether there was really a “longevity” argument to be made.
NVIDIA’s GTX 1650 was sworn to secrecy, with drivers held for “unification” reasons up until the actual launch date. The GTX 1650 comes in variants ranging from 75W to 90W and above, meaning that some options will run without a power connector while others will focus on boosted clocks and a higher power target, and so require a 6-pin connector. GTX 1650s start at $150, with this model costing $170, running a higher power target, offering more overclocking headroom, and potentially better challenging some of NVIDIA’s past-generation products. We’ll see how far we can push the 1650 in today’s benchmarks, including overclock testing to look at maximum potential versus a GTX 1660. We’re using the official, unmodified GTX 1650 430.39 public driver from NVIDIA for this review.
We got our card two hours before product launch and got the drivers at launch, but noticed that NVIDIA tried to push drivers heavily through GeForce Experience. We pulled them standalone instead.
We moderate comments on a ~24-48 hour cycle. There will be some delay after submitting a comment.