Our AMD Radeon VII review is one of our most in-depth in a while. The new $700 AMD flagship is a repurposed Instinct card, cost-reduced for gaming and some productivity tasks and positioned to battle the RTX 2080 head-to-head. In today’s benchmarks, we’ll look at Radeon VII cooler mounting pressure, graphite thermal pad versus paste performance, gaming benchmarks, overclocking, noise, power consumption, Luxmark OpenCL performance, and more.

We already took apart AMD’s Radeon VII card, remarking on its interesting Hitachi HM03 graphite thermal pad and vapor chamber. We also analyzed its VRM and PCB, showing impressive build quality from AMD. These are only part of the story, though – the more important aspect is the silicon, which we’re looking at today. At $700, Radeon VII is positioned against the RTX 2080 and the now-discontinued GTX 1080 Ti (the two perform nearly identically). Radeon VII has some interesting use cases in “content creation” (or Adobe Premiere, mostly), where GPU memory becomes a limiting factor. Due to time constraints following significant driver-related setbacks in testing, we will be revisiting the card with a heavier focus on these “content creator” tests. For now, we are focusing primarily on the tests outlined above.

News for this week primarily focused on the industry, as opposed to products, and so highlighted AMD earnings, Microsoft earnings, and NVIDIA earnings. There are interesting stories within each of these topics: For Microsoft, the company indirectly pointed to Intel's CPU shortage as a drag on its growth projections for Windows 10; for NVIDIA, GPU sales slow-downs are still impacting the bottom line, and the company has adjusted its revenue projections accordingly; for AMD, the company saw an uptick for 4Q18, but is facing a slow quarter for 1Q19.

Beyond these stories, areas of interest include an AI white-hat hacking machine (named "Mayhem," a water-cooled supercomputer), Intel expansions and investments, and Intel's sort-of-new CEO.

Show notes below the embedded video, as always.

For this hardware news episode, we compiled more information gathered at CES, where we tried to validate or invalidate swirling rumors about Ryzen 3000, GTX 1660 parts, and Ice Lake. The show gave us a good opportunity, as always, to talk with people in the know and learn more about the goings-on in the industry. There was plenty of "normal" news, too, like DRAM price declines, surges in AMD notebook interest, and more.

The show notes are below the video. This time, we have a few stories in the notes below that didn't make the cut for the video.

Today we’re reviewing the RTX 2060, with additional tests on whether the RTX 2060 has enough performance to really run games with ray tracing – basically Battlefield, at this point – on the TU106 GPU. We have a separate tear-down going live showing the even more insane cooler assembly of the RTX 2060, besting the previous complexity of the RTX 2080 Ti, but today’s focus will be on gaming performance, thermals, RTX performance, power consumption, and acoustics of the Founders Edition cooler.

The RTX 2060 Founders Edition card is priced at $350 and, unlike previous FE launches in this generation, it is also the price floor. Cards will start at $350 – no more special FE pricing – and scale based upon partner cost. We will primarily be judging price-to-performance based upon the $350 point, so more expensive cards would need to be judged independently.
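As a trivial illustration of how we frame that judgment, here is the cost-per-frame arithmetic we lean on. This is a hypothetical sketch – the card names, prices beyond $350, and FPS figures below are placeholders, not benchmark results:

```python
# Hypothetical cost-per-frame comparison; FPS values are placeholders,
# not measured results. Lower dollars-per-frame is better value.
cards = {
    'RTX 2060 FE':      (350, 100.0),  # (price in USD, average FPS)
    'Partner RTX 2060': (420, 104.0),  # hypothetical pricier partner card
}

for name, (price, fps) in cards.items():
    print(f'{name}: ${price / fps:.2f} per frame')
```

A pricier partner card has to deliver proportionally more performance (or meaningfully better thermals and acoustics) to justify its premium over the $350 floor.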

Our content outline for this RTX 2060 review looks like this:

  • Games: DX12, DX11
  • RTX in BF V
  • Thermals
  • Noise
  • Power

We’re putting more effort into the written conclusion for this one than usual, so be sure to check that as well. Note that we have a separate video upload on the YouTube channel for a tear-down of the card. The PCB, for the record, is an RTX 2070 FE PCB. Same thing.

CES is next week, beginning roughly on Monday (with some Sunday press conferences), and so it's next week that will really be abuzz with hardware news. That'll be true to the extent that most of our coverage will be news, not reviews (some exceptions), and so we'd encourage checking back regularly to stay updated on 2019's biggest planned product launches. Most of our news coverage will go up on the YouTube channel, but we are still working on revamping the site here to improve our ability to post news quickly and in written format.

Anyway, the past two weeks still deserve some catching-up. Of major note, NVIDIA is dealing with a class action complaint, Intel is dropping its IGP for some SKUs, and OLED gaming monitors are coming.

We already reviewed an individual NVIDIA Titan RTX over here, used first for gaming, overclocking, thermal, power, and acoustic testing. We may look at production workloads later, but that’ll wait. We’re primarily waiting for our go-to applications to add RT and Tensor Core support for 3D art. After replacing our bugged Titan RTX (the one that was clock-locked), we were able to proceed with SLI (NVLink) testing for the dual Titan RTX cards. Keep in mind that NVLink is no different from SLI when using these gaming bridges, aside from increased bandwidth, and so we still rely upon AFR and independent resources.

As a reminder, these cards really aren’t built for the way we’re testing them. You’d want a Titan RTX card as a cheaper alternative to Quadros, but with the memory capacity to handle heavy ML/DL or rendering workloads. For games, that extra (expensive) memory goes unused, thus diminishing the value of the Titan RTX cards in the face of a single 2080 Ti.

This is really just for fun, in all honesty. We’ll look at a theoretical “best” gaming GPU setup today, then talk about what you should buy instead.

Today, we’re reviewing the NVIDIA Titan RTX for overclocking, gaming, thermal, and acoustic performance, looking at the first of two cards in the lab. We have a third card arriving to trade for one defective unit, working around the 1350MHz clock lock we discovered, but that won’t be until after this review goes live. The Titan RTX costs $2,500 – about twice the price of an RTX 2080 Ti – but only enables an additional 4 streaming multiprocessors. With 4 more SMs and 256 more CUDA cores, there’s not much performance to be gained in gaming scenarios. The big gains are in memory-bound applications, as the Titan RTX has 24GB of GDDR6, a marked climb from the 11GB on an RTX 2080 Ti.
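For perspective, the shader math is simple back-of-envelope arithmetic, assuming Turing's published 64 CUDA cores per SM:

```python
# Back-of-envelope Turing shader math (64 CUDA cores per SM).
CORES_PER_SM = 64

titan_rtx_sms = 72    # Titan RTX (full TU102)
rtx_2080_ti_sms = 68  # RTX 2080 Ti

extra_cores = (titan_rtx_sms - rtx_2080_ti_sms) * CORES_PER_SM
uplift = extra_cores / (rtx_2080_ti_sms * CORES_PER_SM)
print(extra_cores, f"{uplift:.1%}")  # 256 cores, ~5.9% more shaders
```

A roughly 6% shader increase for a 2x price increase is why the gaming value proposition falls apart unless the 24GB framebuffer is the actual draw.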

An example of a use case could be machine learning or deep learning or, more traditionally, 3D graphics rendering. Some of our in-house Blender project files use so much VRAM that we have to render on the slower CPU (rather than with CUDA acceleration), as we run out of the 11GB framebuffer too quickly. The same is true for some of our Adobe Premiere video editing projects, where our graph overlays become so complex and high-resolution that they exceed the memory allowance of a 1080 Ti. We are not testing either of these use cases today, though, and are instead focusing our efforts on the gaming and enthusiast market. We know that this is also a big market, and plenty of people want to buy these cards simply because “it’s the best,” or because “most expensive = most best.” We’ll be looking at how much the difference really gets you, with particular interest in thermal performance now that the blower cooler has been dropped.
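For anyone scripting their own renders, that CPU fallback is easy to automate. Below is a rough, hypothetical sketch using Blender's Python API (bpy, 2.8x-style preferences paths); the retry-on-failure logic is our own illustration of the workflow, not a built-in Blender feature:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

def render_with_fallback():
    """Attempt a CUDA GPU render; retry on the CPU if it fails,
    e.g. when the scene exceeds the card's framebuffer."""
    prefs = bpy.context.preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'CUDA'
    scene.cycles.device = 'GPU'
    try:
        bpy.ops.render.render(write_still=True)
    except RuntimeError:
        # A failed (e.g. out-of-memory) render aborts the operator;
        # fall back to the much slower CPU path and try again.
        scene.cycles.device = 'CPU'
        bpy.ops.render.render(write_still=True)

render_with_fallback()
```

With 24GB on a Titan RTX, scenes like ours simply stay on the GPU, which is the entire appeal for rendering workloads.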

Finally, note that we were stuck at 1350MHz with one of our two samples, something we’ve worked with NVIDIA to research. The company now has our defective card and has traded us a working one. We bought the defective Titan RTX, so it was a “real” retail sample. We just wanted to help NVIDIA troubleshoot the issue, and so the company is now working with it.

Despite EOY slow-downs in the news cycle, we still spotted several major industry topics and engineering advancements worthy of recap. Aside from Intel's recent announcements, the most noteworthy news items came out of MIT for engineering efforts on 2.5nm-wide transistors, out of Intel for acquiring more AMD talent, and out of the rumor mill for the RTX 2060, which is mostly confirmed at this point.

As always, show notes are below the embedded video:

The memory supplier price-fixing investigation has been going on for months now, something we spoke about in June (and before then, too). The Chinese government has been leading an investigation into SK Hynix, Samsung, and Micron regarding memory price fixing, pursuant to seemingly endless record-setting profits at higher costs per bit than previous years. That investigation has made some headway, as you'll read in today's news recap, but the "massive evidence" the Chinese government claims to have found has not yet been made public. In addition to RAM price-fixing news, the Intel CPU shortage looks to be continuing through March, coupled with rumors of a 10-core desktop CPU.

Show notes below the video for our weekly recap, as always.

Finding the “best" workstation GPU isn't as straightforward as finding the best case, best gaming CPU, or best gaming GPU. While games typically scale reliably from one to the next, applications can deliver wildly varying performance. Those gains and losses can be chalked up to architecture, drivers, and whether we're dealing with a true workstation GPU or a gaming GPU trying to fill in for workstation purposes.

In this content, we're going to be taking a look at current workstation GPU performance across a range of tests to figure out if there is such thing as a champion among them all. Or, in the very least, we'll figure out how AMD differs from NVIDIA, and how the gaming cards differ from the workstation counterparts. Part of this will look at Quadro vs. RTX or GTX cards, for instance, and WX vs. RX cards for workstation applications. We have GPU benchmarks for video editing (Adobe Premiere), 3D modeling and rendering (Blender, V-Ray, 3ds Max, Maya), AutoCAD, SolidWorks, Redshift, Octane Bench, and more.

Though NVIDIA's Quadro RTX lineup has been available for a few months, review samples have been slow to escape NVIDIA's grasp, and if we had to guess why, it's likely because few software solutions can currently take advantage of the new features. That excludes deep-learning tests, which can benefit from the Tensor cores, but for optimizations derived from the RT core, we're still waiting. It seems likely that Chaos Group's V-Ray will be one of the first plugins on the market to support NVIDIA's RTX, though Redshift, Octane, Arnold, Renderman, and many others have planned support.

The great thing for those planning to go with a gaming GPU for workstation use is that where rendering is concerned, the performance between gaming and workstation cards is going to be largely equivalent. Where performance can improve on workstation cards is with viewport performance optimizations; ultimately, the smoother the viewport, the less tedious it is to manipulate a scene.

Across all of the results ahead, you'll see that there are many angles from which to view workstation GPUs, and that there isn't really a one-size-fits-all option - not like there is on the gaming side. There are still top choices, though, so if you're not afraid of spending substantially more than the gaming equivalents cost, there are models vying for your attention.
