"Integer scaling" has been a buzz phrase for a few weeks now, with Intel first adding integer scaling support to its driver set, and now NVIDIA following. This week, we'll be talking more about what that even means (and where it's useful), news on AMD's RDNA whitepaper and CrossFire support, Intel's Comet Lake CPUs (and naming), and a few minor topics.

Show notes continue after the embedded video, as always.

AMD’s biggest ally for the RTX launch was NVIDIA, as the company crippled itself with unimplemented features and generational price creep out of the gate. With RTX Super, NVIDIA demonstrates that it has gotten the memo from the community, driving down pricing while increasing performance. Parts of the current RTX line will be phased out, with the newer, better parts coming into play and pre-empting AMD’s Navi launch. The 2070 Super is priced at $500, $50 above the 5700 XT, and its on-paper specs should put it roughly equivalent to an RTX 2080; it even uses the TU104 RTX 2080 die, further reinforcing that likely position. The 2060 Super gets a better bin with more SMs unlocked on the GPU, improving compute capability, and also gains framebuffer capacity over the initial 2060. Both of these changes spell an embarrassing scenario about to unfold for AMD’s Radeon VII card, but we’ll have to wait another week to see how it plays out for the as-yet unreleased Navi RX cards. There may be hope yet for AMD’s new lineup, but the existing lineup will face existential challenges from the freshly priced and face-lifted RTX Super cards. Today, we’re reviewing the new RTX Super cards with a fully revamped GPU testing methodology.

The first question is this: Why the name “Super”? Well, we asked, and nobody knows the answer. Some say that Jensen burned an effigy of a leather jacket and the word “SUPER” appeared in the toxic fumes above; others say that it’s a self-contained acronym. All we know is that it’s called “Super.”

Rather than pushing out Ti updates that coexist with the original SKUs, NVIDIA is replacing the 2080 and 2070 SKUs with the new Super SKUs, while keeping the 2060 at $350. This is a “damned if you do, damned if you don’t” scenario. By pushing this update, NVIDIA shows that it’s listening – either to consumers or to competition – by bringing higher-performing parts to lower price categories. At the same time, people who recently bought RTX cards may feel burned or experience buyer’s remorse. This isn’t just a price cut, which is common, but a fundamental change to the hardware. The RTX 2070 Super uses TU104 for the GPU rather than TU106, bumping it to modified-2080 status. The 2060 Super stays on TU106, but sees changes to active SM count and memory capacity.

As we’ve been inundated with Computex 2019 coverage, this HW News episode will focus on some of the smaller news items that have slipped through the cracks, so to speak. It’s mostly a helping of smaller hardware announcements from big vendors like Corsair, NZXT, and SteelSeries, with a side of the usual industry news.

Be sure to stay tuned to our YouTube channel for Computex 2019 news.

Computex is just a few weeks away. Mark calendars for May 28 to June 1 (and surrounding dates) -- we're expecting AMD Ryzen 3000 and X570 motherboard discussion on the CPU side, Navi unveils or teases, and potential Intel news on 10nm for mobile and notebook devices. This week's news cycle was still busy pre-show, though, including discussion of an impending end to Intel's CPU shortage, the AMD supercomputer collaboration with Cray, NVIDIA's move away from binned GPUs, and more.

As always, the show notes are below the embedded video.

One of our most popular videos of yore talks about the GTX 960 4GB vs. GTX 960 2GB cards and the value of choosing one over the other. The discussion continues today, but is more focused on 3GB vs. 6GB or 4GB vs. 8GB comparisons. Now, looking back at 2015’s GTX 960, we’re revisiting the card with locked frequencies to compare memory capacities. The goal is to look at both framerate and image quality to determine how well the 2GB card has aged versus the 4GB card.

A lot of things have changed for us since our 2015 GTX 960 comparison, so these results will obviously not be directly comparable to those from that time. We’re using different graphics settings, different test methods, a different OS, and much different test hardware. We’ve also improved our testing accuracy significantly, so it’s time to take all of this new hardware and knowledge and re-apply it to the GTX 960 2GB vs. 4GB debate, looking into whether there was really a “longevity” argument to be made.

NVIDIA’s GTX 1650 was sworn to secrecy, with drivers held for “unification” reasons up until the actual launch date. The GTX 1650 comes in variants ranging from 75W to 90W and above, meaning that some options will run without a power connector (the PCIe slot itself supplies up to 75W), while others focus on boosted clocks and higher power targets and require a 6-pin connector. GTX 1650s start at $150, with this model costing $170, running a higher power target, offering more overclocking headroom, and potentially better challenging some of NVIDIA’s past-generation products. We’ll see how far we can push the 1650 in today’s benchmarks, including overclock testing to look at maximum potential versus a GTX 1660. We’re using the official, unmodified GTX 1650 430.39 public driver from NVIDIA for this review.

We received our card two hours before product launch and got the drivers at launch, but noticed that NVIDIA tried to push the drivers heavily through GeForce Experience. We pulled them standalone instead.

EA's Origin launcher has recently gained attention for hosting Apex Legends, currently one of the top Battle Royale shooters, but is getting renewed focus as an easy attack vector for malware. Fortunately, an update has already resolved this issue, so the pertinent action would be to update Origin (especially if you haven't opened it in a while). Further news this week features the GTX 1650's rumored specs and price, allegedly due out on April 23. We also follow up on Sony PlayStation 5 news, now officially confirmed to be using a new AMD Ryzen APU and customized Navi GPU solution.

Show notes below the embedded video, for those preferring reading.

Industry news isn't always as appealing as product news for some of our audience, but this week's industry news is interesting: For one, Tom Petersen, Distinguished Engineer at NVIDIA, will be moving to Intel; for two, ASUS accidentally infected its users with malware after previously being called out for poor security practices. Show notes for the news are below the video embed, for those who prefer the written format.

Hardware news is busy this week, as it always is, but we also have some news of our own. Part of GN's team will be in Taiwan and China over the next few weeks, with the rest at home base taking care of testing. For the Taiwan and China trip, we'll be visiting numerous factories for tour videos, walkthroughs, and showcases of how products are made at a lower level. We also have several excursions to tech landmarks planned, so you'll want to check back regularly as we make this special trip. Check our YT channel daily for uploads. Broadcast of the Asia trip will likely start around 3/6 for us.

We recently revisited the AMD R9 290X from October of 2013, and now it’s time to look back at the GTX 780 Ti from November of 2013. The 780 Ti shipped at a $700 MSRP and landed as NVIDIA’s flagship against AMD’s freshly launched flagship. It was a different era: Memory capacity was limited to 3GB on the 780 Ti, the memory ran at a blazing 7Gbps, and core clock was 875MHz stock or 928MHz boost, using the old Boost 2.0 algorithm that kept a fixed clock in gaming. Overclocking headroom was also greater, giving us a bigger upward punch than modern NVIDIA overclocking might permit. Our overclocks on the 780 Ti reference card (with the fan set to 93%) allowed it to exceed the expected performance of an average partner model board, so we have a fairly full range of performance on the 780 Ti.

NVIDIA’s architecture has undergone significant changes since Kepler and the 780 Ti, one of which has been a change in CUDA core efficiency. When NVIDIA moved from Kepler to Maxwell, there was nearly a 40% efficiency gain in how CUDA cores process input. A 1:1 Maxwell versus Kepler comparison, were such a thing possible, would position Maxwell as superior in efficiency and performance per watt, if not just outright performance. It is no surprise, then, that the 780 Ti’s 2880 CUDA cores, a high count even by today’s standards (an RTX 2060 has 1920, but outperforms the 780 Ti), will underperform compared to modern architectures. This is amplified by significant memory changes, capacity being the most notable, where the GTX 780 Ti’s standard configuration was limited to 3GB and ~7Gbps GDDR5.
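For a rough sense of why core count alone misleads, here is a back-of-the-envelope sketch (our own illustration, using published reference boost clocks rather than measured sustained clocks): the on-paper FP32 throughput gap between the two cards is far smaller than their real-world gaming gap, which is where per-core architectural efficiency gains show up.

```python
def fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Theoretical peak FP32 throughput: cores x 2 ops/clock (FMA) x clock."""
    return cuda_cores * 2 * boost_clock_ghz / 1000.0

# Reference boost clocks from spec sheets; sustained clocks in games differ.
gtx_780_ti = fp32_tflops(2880, 0.928)  # ~5.3 TFLOPS (Kepler)
rtx_2060 = fp32_tflops(1920, 1.680)    # ~6.5 TFLOPS (Turing)

print(f"GTX 780 Ti: {gtx_780_ti:.1f} TFLOPS")
print(f"RTX 2060:   {rtx_2060:.1f} TFLOPS")
# The ~20% on-paper gap understates the actual gaming gap, since Turing
# extracts far more performance per core, per clock, than Kepler.
```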
