Today we’re reviewing the RTX 2060, with additional tests of whether an RTX 2060 has enough performance to really run games with ray tracing – basically Battlefield V, at this point – on the TU106 GPU. We have a separate tear-down going live showing the even more insane cooler assembly of the RTX 2060, besting the previous complexity of the RTX 2080 Ti, but today’s focus will be on gaming performance, thermals, RTX performance, power consumption, and acoustics of the Founders Edition cooler.

The RTX 2060 Founders Edition card is priced at $350 and, unlike previous FE launches in this generation, it is also the price floor. Cards will start at $350 – no more special FE pricing – and scale based upon partner cost. We will primarily be judging price-to-performance based upon the $350 point, so more expensive cards would need to be judged independently.

Our content outline for this RTX 2060 review looks like this:

  • Games: DX12, DX11
  • RTX in BF V
  • Thermals
  • Noise
  • Power

We’re putting more effort into the written conclusion for this one than usual, so be sure to check that as well. Note that we have a separate video upload on the YouTube channel for a tear-down of the card. The PCB, for the record, is an RTX 2070 FE PCB. Same thing.

CES is next week, beginning roughly on Monday (with some Sunday press conferences), and so it's next week that will really be abuzz with hardware news. That'll be true to the extent that most of our coverage will be news rather than reviews (with some exceptions), so we'd encourage checking back regularly to stay updated on 2019's biggest planned product launches. Most of our news coverage will go up on the YouTube channel, but we are still working on revamping the site here to improve our ability to post news quickly and in written format.

Anyway, the past two weeks still deserve some catching-up. Of major note, NVIDIA is dealing with a class action complaint, Intel is dropping its IGP for some SKUs, and OLED gaming monitors are coming.

We already reviewed an individual NVIDIA Titan RTX over here, used first for gaming, overclocking, thermal, power, and acoustic testing. We may look at production workloads later, but that’ll wait. We’re primarily waiting for our go-to applications to add RT and Tensor Core support for 3D art. After replacing our bugged Titan RTX (the one that was clock-locked), we were able to proceed with SLI (NVLink) testing for the dual Titan RTX cards. Keep in mind that NVLink is no different from SLI when using these gaming bridges, aside from increased bandwidth, and so we still rely upon AFR and independent, per-card resources (memory does not pool for gaming).

As a reminder, these cards really aren’t built for the way we’re testing them. You’d want a Titan RTX card as a cheaper alternative to Quadros, but with the memory capacity to handle heavy ML/DL or rendering workloads. For games, that extra (expensive) memory goes unused, thus diminishing the value of the Titan RTX cards in the face of a single 2080 Ti.

This is really just for fun, in all honesty. We’ll look at a theoretical “best” gaming GPU setup today, then talk about what you should buy instead.

Today, we’re reviewing the NVIDIA Titan RTX for overclocking, gaming, thermal, and acoustic performance, looking at the first of two cards in the lab. We have a third card arriving to trade for one defective unit, working around the 1350MHz clock lock we discovered, but that won’t be until after this review goes live. The Titan RTX costs $2500, roughly double the price of the RTX 2080 Ti, but only enables an additional 4 streaming multiprocessors. With 4 more SMs and 256 more CUDA cores, there’s not much performance to be gained in gaming scenarios. The big gains are in memory-bound applications, as the Titan RTX has 24GB of GDDR6, a marked climb from the 11GB on an RTX 2080 Ti.

An example of a use case could be machine learning or deep learning, or more traditionally, 3D graphics rendering. Some of our in-house Blender project files use so much VRAM that we have to render instead with the slower CPU (rather than CUDA acceleration), as we’ll run out of the 11GB framebuffer too quickly. The same is true for some of our Adobe Premiere video editing projects, where our graph overlays become so complex and high-resolution that they exceed the memory allowance of a 1080 Ti. We are not testing either of these use cases today, though, and are instead focusing our efforts on the gaming and enthusiast market. We know that this is also a big market, and plenty of people want to buy these cards simply because “it’s the best,” or because “most expensive = most best.” We’ll be looking at how much the difference really gets you, with particular interest in thermal performance pursuant to the removal of the blower cooler.
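As an aside on the Blender point above: Cycles of that era generally won’t drop to CPU on its own when a scene overruns the GPU’s framebuffer – the CUDA render just fails, and the compute device has to be switched by hand or by script. Below is a minimal sketch of that switch via Blender’s Python API, assuming Blender 2.80-era bpy preference paths and a CUDA-capable card; the `use_cpu_fallback` flag is our own illustrative toggle, not a Blender setting.

```python
# Minimal sketch, assuming Blender 2.80-era bpy preference paths and a CUDA card.
# "use_cpu_fallback" is an illustrative flag for this example, not a Blender setting.
import bpy

use_cpu_fallback = True  # set True for scenes known to exceed the GPU framebuffer

scene = bpy.context.scene
scene.render.engine = "CYCLES"

if use_cpu_fallback:
    scene.cycles.device = "CPU"              # slower, but only bound by system RAM
else:
    prefs = bpy.context.preferences.addons["cycles"].preferences
    prefs.compute_device_type = "CUDA"
    prefs.get_devices()                      # refresh the detected device list
    for dev in prefs.devices:
        dev.use = (dev.type == "CUDA")       # enable only CUDA devices
    scene.cycles.device = "GPU"

bpy.ops.render.render(write_still=True)      # render the current frame
```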

Finally, note that we were stuck at 1350MHz with one of our two samples, something that we’ve worked with NVIDIA to research. The company now has our defective card and has traded us a working one. We bought the defective Titan RTX, so it was a “real” retail sample. We just wanted to help NVIDIA troubleshoot the issue, and so the company is now working with it.

Despite end-of-year slowdowns in the news cycle, we still spotted several major industry topics and engineering advancements worthy of recap. Aside from Intel's recent announcements, the most noteworthy news items came out of MIT for engineering efforts on 2.5nm-wide transistors, out of Intel for acquiring more AMD talent, and out of the rumor mill for the RTX 2060, which is mostly confirmed at this point.

As always, show notes are below the embedded video:

The memory supplier price-fixing investigation has been going on for months now, something we spoke about in June (and before then, too). The Chinese government has been leading an investigation into SK Hynix, Samsung, and Micron regarding memory price fixing, pursuant to seemingly endless record-setting profits at higher costs per bit than previous years. That investigation has made some headway, as you'll read in today's news recap, but the "massive evidence" claimed to be found by the Chinese government has not yet been made public. In addition to RAM price-fixing news, the Intel CPU shortage looks to be continuing through March, alongside rumors of a 10-core desktop CPU.

Show notes below the video for our weekly recap, as always.

Finding the “best” workstation GPU isn't as straightforward as finding the best case, best gaming CPU, or best gaming GPU. While games typically scale reliably from one title to the next, professional applications can deliver wildly varying performance. Those gains and losses could be chalked up to architecture, drivers, and whether we're dealing with a true workstation GPU versus a gaming GPU trying to fill in for workstation purposes.

In this content, we're going to be taking a look at current workstation GPU performance across a range of tests to figure out if there is such a thing as a champion among them all. Or, at the very least, we'll figure out how AMD differs from NVIDIA, and how the gaming cards differ from their workstation counterparts. Part of this will look at Quadro vs. RTX or GTX cards, for instance, and WX vs. RX cards for workstation applications. We have GPU benchmarks for video editing (Adobe Premiere), 3D modeling and rendering (Blender, V-Ray, 3ds Max, Maya), AutoCAD, SolidWorks, Redshift, Octane Bench, and more.

Though NVIDIA's Quadro RTX lineup has been available for a few months, review samples have been slow to escape NVIDIA's grasp, and if we had to guess why, it's likely because few software solutions can take advantage of the new features right now. That excludes deep-learning tests, which can benefit from the Tensor cores, but for optimizations derived from the RT cores, we're still waiting. It seems likely that Chaos Group's V-Ray will be one of the first plugins to hit the market with support for NVIDIA's RTX, though Redshift, Octane, Arnold, RenderMan, and many others have planned support.

The great thing for those planning to go with a gaming GPU for workstation use is that, where rendering is concerned, performance between gaming and workstation cards is going to be largely equivalent. Where workstation cards can pull ahead is in viewport performance optimizations; ultimately, the smoother the viewport, the less tedious it is to manipulate a scene.

Across all of the results ahead, you'll see that there are many angles from which to view workstation GPUs, and that there isn't really such a thing as a one-size-fits-all choice - not like there is on the gaming side. There is such a thing as an ultimate choice, though, so if you're not afraid of spending substantially more than the gaming equivalents for the best performance, there are models vying for your attention.

The RTX 2080 Ti failures aren’t as widespread as they might have seemed from initial reddit threads, but they are absolutely real. When discussing internally whether we thought the issue of artifacting and dying RTX cards had been blown out of proportion by the internet, we had two frames of mind: On one side, the level of attention did seem disproportionate to the size of the issue, particularly as RMA rates are within the norm. Partners are still often under 1% and retailers are under 3.5%, which is standard. The other frame of mind is that, actually, nothing was blown out of proportion for people who spent $1250 and received a brick in return. For those affected buyers, the artifacting is absolutely a real issue, and it deserves real attention.

This content marks the closing of a storyline for us. We published previous videos detailing a few of the failures on our viewers’ cards (on loan to GN), including an unrelated 1350MHz clock lock and BSOD issue. We also tested cards on our livestream to show what the artifacting looks like, seen here. Today, we’re mostly looking at thermals, firmware, the OS, and downclocking impact, and concluding what the problem isn’t (rather than what it 100% is).

With over a dozen cards mailed in to us, we had a lot to sort through over the past week. This issue certainly exists in a very real way for those who spent $1200+ on an unusable video card, but it isn’t affecting everyone. It’s far from “widespread,” fortunately, and our present understanding is that RMA rates remain within reason for most of the industry. That said, NVIDIA’s response times to some RMA requests have been slow, from what our viewers have expressed, and replacements can take upwards of a month given supply constraints in some regions. That’s a problem.

This content stars our viewers and readers. We charted the most popular video cards purchased over the launch period for NVIDIA’s RTX devices, as we were curious whether GTX or RTX gained more sales in that window. We’ve also got some AMD data toward the end, but the focus here is on the shifting momentum between the Pascal and Turing architectures and what consumers want.

We’re looking exclusively at what our viewers and readers have purchased over the two-month launch window since RTX was announced. This samples several hundred purchases, but it is by no means a representative sample of the whole market. Keep in mind that we have a lot of sampling biases here, the primary one being that it’s our audience – these are people who are more enthusiast-leaning, likely buy higher-end, and probably follow at least some of our suggestions. You can’t extrapolate this data market-wide, but it is an interesting cross-section of our audience.
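For transparency on how a tally like this comes together, here’s a minimal sketch of the counting step; the purchase entries shown are hypothetical placeholders, not our actual reader/viewer data.

```python
# Minimal sketch of tallying reported purchases by model; the entries below are
# hypothetical placeholders, not actual survey data.
from collections import Counter

purchases = [
    "GTX 1080 Ti", "RTX 2080", "GTX 1070 Ti", "RTX 2080 Ti", "GTX 1080 Ti",
    # ...one entry per purchase reported during the two-month window
]

counts = Counter(purchases)
total = sum(counts.values())

for model, n in counts.most_common():
    print(f"{model:>12}: {n:3d} ({n / total:.1%} of sampled purchases)")
```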

Although the year is winding down, hardware announcements are still heavy through the midpoint of November: NVIDIA pushed a major driver update and has done well to address BSOD issues, the company has added new suppliers to its memory list (a good thing), and RTX should start getting support once Windows updates roll out. On the flip side, AMD is pushing 7nm CPU and GPU discussion as high-end server parts hit the market.

Show notes below the embedded video.


