Metro Exodus is the next title to include NVIDIA RTX technology, leveraging Microsoft’s DXR. We already looked at the RTX implementation from a qualitative standpoint (in video), talking about the pros and cons of global illumination via RTX, and now we’re back to benchmark the performance from a quantitative standpoint.
The Metro series has long been used as a benchmarking standard. As always with a built-in benchmark, one of the most important things to look at is how accurately that benchmark represents the “real” game. Inconsistency with in-game performance doesn’t necessarily invalidate a benchmark’s usefulness, though; it just means that the light in which the benchmark is viewed must be kept in mind. Without accuracy to in-game performance, benchmark tools mostly become synthetic benchmarks: They’re good for relative performance measurements between cards, but not necessarily absolute performance. That’s completely fine, too, as relative performance is mostly what we look for in reviews. The only (really) important thing is that performance scaling is consistent between cards in both pre-built benchmarks and in-game benchmarks.
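The relative-versus-absolute distinction above can be sketched with a few lines of arithmetic. The FPS numbers here are hypothetical placeholders (not our benchmark data); the point is only that a built-in benchmark can overstate absolute framerates while still preserving the ranking between cards:

```python
# Hypothetical FPS results (illustration only) from a built-in benchmark
# and a manual in-game pass, for two unnamed cards.
built_in = {"card_a": 120.0, "card_b": 100.0}
in_game  = {"card_a":  90.0, "card_b":  75.0}

def relative_scaling(results, baseline):
    """Express each card's result relative to a chosen baseline card."""
    return {card: fps / results[baseline] for card, fps in results.items()}

built_in_rel = relative_scaling(built_in, "card_b")
in_game_rel  = relative_scaling(in_game, "card_b")

# Absolute FPS differs between the two tests (120 vs. 90 for card_a),
# but the relative stack is identical: card_a leads by 20% in both, so
# the built-in benchmark remains useful for ranking cards even if it
# overstates real-world framerates.
print(built_in_rel["card_a"], in_game_rel["card_a"])
```

If the two relative numbers diverged, that would be the scaling inconsistency that actually undermines a built-in benchmark.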
Finding something to actually leverage the increased memory bandwidth of Radeon VII is a challenge. Few games will genuinely use more memory than what’s found on an RTX 2080, let alone 16GB on the Radeon VII, and most VRAM capacity utilization reporting is wildly inaccurate as it only reports allocated memory and not necessarily used memory. To best benchmark the potential advantages of Radeon VII, which would primarily be relegated to memory bandwidth, we set up a targeted feature test to look at anti-aliasing and high-resolution benchmarks. Consider this an academic exercise on Radeon VII’s capabilities.
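For context on why memory bandwidth is the angle worth testing, peak bandwidth is simple to derive from the published bus width and per-pin data rate (pins × per-pin rate ÷ 8 bits per byte). The specs below are the public figures for each card; this is a back-of-envelope sketch, not a measured result:

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin):
    """Peak memory bandwidth in GB/s: bus pins * per-pin rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# Radeon VII: 16GB HBM2 on a 4096-bit bus at 2.0Gbps effective per pin.
radeon_vii = bandwidth_gb_s(4096, 2.0)
# RTX 2080: 8GB GDDR6 on a 256-bit bus at 14Gbps per pin.
rtx_2080 = bandwidth_gb_s(256, 14.0)

print(f"Radeon VII: {radeon_vii:.0f} GB/s")  # 1024 GB/s
print(f"RTX 2080:   {rtx_2080:.0f} GB/s")    # 448 GB/s
```

That roughly 2.3x bandwidth advantage is the property our anti-aliasing and high-resolution feature tests are trying to exercise.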
Our AMD Radeon VII review is one of our most in-depth in a while. The new $700 AMD flagship is a repurposed Instinct card, down-costed for gaming and some productivity tasks and positioned to battle the RTX 2080 head-to-head. In today’s benchmarks, we’ll look uniquely at Radeon VII cooler mounting pressure, graphite thermal pad versus paste performance, gaming benchmarks, overclocking, noise, power consumption, Luxmark OpenCL performance, and more.
We already took apart AMD’s Radeon VII card, remarking on its interesting Hitachi HM03 graphite thermal pad and vapor chamber. We also analyzed its VRM and PCB, showing impressive build quality from AMD. These are only part of the story, though – the more important aspect is the silicon, which we’re looking at today. At $700, Radeon VII is positioned against the RTX 2080 and now-discontinued GTX 1080 Ti (the two were tested identically). Radeon VII has some interesting use cases in “content creation” (or Adobe Premiere, mostly) where GPU memory becomes a limiting factor. Due to time constraints following significant driver-related setbacks in testing, we will be revisiting the card with a heavier focus on these “content creator” tests. For now, we are focusing primarily on the following:
The AMD Radeon VII embargo for “unboxings” has lifted and, although we don’t participate in the marketing that is a content-filtered “unboxing,” a regular part of our box-opening process involves taking the product apart. For today, restrictions remain on performance discussion and product review, but we are free to show the product and handle it physically. You’ll have to check back for the review, which will likely coincide with the release date of February 7.
This content is primarily video, as our tear-downs show the experience of taking the product apart (and discoveries as we go), but we’ll recap the main point of interest here. Text continues after the embedded video:
GPU manufacturer Visiontek is old enough to have accumulated a warehouse of unsold, refurbished cards. Once in a while, they’ll clear stock by selling them off in cheap mystery boxes. It’s been a long time since we last reported on these boxes, and GPU development has moved forward quite a bit, so we wanted to see what we could get for our money. PCIe cards were $10 for higher-end and $5 for lower, and AGP and PCI cards were both $5. On the off chance that Visiontek would recognize Steve’s name and send him better-than-average cards, we placed two identical orders, one in Steve’s name and one in mine (Patrick). Each order was for one better PCIe card, one worse, one PCI, and one AGP.
The AMD R9 290X, a 2013 release, was the once-flagship of the 200 series, later superseded by the 390X refresh, (sort of) the Fury X, and eventually the RX-series cards. The R9 290X typically ran with 4GB of memory, although the 390X made 8GB somewhat commonplace, and was a strong performer for early 1440p gaming and high-quality 1080p gaming. The goal posts have moved, of course, as time has mandated that games get more difficult to render, but the 290X is still a strong enough card to warrant a revisit in 2019.
The R9 290X still has some impressive traits today, and those influence results to a point of being clearly visible at certain resolutions. One of the most noteworthy features is its set of 64 ROPs – the units where output is converted into a bitmapped image – alongside its 176 TMUs. The ROPs help performance scaling as resolution increases, something that also correlates with higher anti-aliasing values (same idea – sampling more times per pixel or drawing more pixels). For this reason, we’ll want to pay careful attention to performance scaling at 1080p, 1440p, and 4K versus some other device, like the RX 580. The RX 580 is a powerful card for its price-point, often managing comparable performance to the 290X while running half the ROPs (32) and 144 TMUs, but the 290X can close the gap (mildly) at higher resolutions. This isn’t particularly useful to know, but it is interesting, and it illustrates how specific parts of the GPU can change the performance stack under different rendering conditions.
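The ROP advantage can be made concrete with peak pixel fill rate, conventionally estimated as one pixel per ROP per clock. Using the reference clocks for each card (1000MHz for the reference 290X, ~1340MHz boost for the RX 580), a quick sketch:

```python
def pixel_fillrate_gpix_s(rops, clock_ghz):
    """Peak pixel fill rate (Gpixels/s), assuming one pixel per ROP per clock."""
    return rops * clock_ghz

# Reference R9 290X: 64 ROPs at 1000MHz.
r9_290x = pixel_fillrate_gpix_s(64, 1.0)
# RX 580: 32 ROPs at ~1340MHz boost.
rx_580 = pixel_fillrate_gpix_s(32, 1.34)

print(f"R9 290X: {r9_290x:.1f} Gpix/s")  # 64.0 Gpix/s
print(f"RX 580:  {rx_580:.1f} Gpix/s")   # 42.9 Gpix/s
```

Despite the RX 580’s much higher clock, the 290X retains roughly a 1.5x theoretical fill rate lead, which is the mechanism behind its (mild) gap-closing at 1440p and 4K.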
Today, we’re testing with a reference R9 290X that’s been run both stock and overclocked, giving us a look at bottom-end performance and average partner model or OC performance. This should cover most of the spectrum of R9 290X cards.
Today we’re reviewing the RTX 2060, with additional tests on whether an RTX 2060 has enough performance to really run games with ray-tracing – basically Battlefield, at this point – on the TU106 GPU. We have a separate tear-down going live showing the even more insane cooler assembly of the RTX 2060, besting the previous complexity of the RTX 2080 Ti, but today’s focus will be on performance in gaming, thermals, RTX performance, power consumption, and acoustics of the Founders Edition cooler.
The RTX 2060 Founders Edition card is priced at $350 and, unlike previous FE launches in this generation, it is also the price floor. Cards will start at $350 – no more special FE pricing – and scale based upon partner cost. We will primarily be judging price-to-performance based upon the $350 point, so more expensive cards would need to be judged independently.
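Judging price-to-performance from the $350 floor reduces to simple division, and it shows why more expensive partner cards need independent judgment. The FPS figures below are hypothetical placeholders, not our benchmark data:

```python
# Hypothetical average FPS values (illustration only, not measured results).
cards = {
    "RTX 2060 at $350": (350, 100.0),
    "Partner RTX 2060 at $400": (400, 104.0),
}

for name, (price_usd, avg_fps) in cards.items():
    # Normalize to FPS per $100 spent so cards at different prices compare.
    value = avg_fps / price_usd * 100
    print(f"{name}: {value:.1f} FPS per $100")
```

In this made-up example, a small partner-card performance uplift does not keep pace with its price premium, which is exactly the case where per-card value judgment matters.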
Our content outline for this RTX 2060 review looks like this:
- Games: DX12, DX11
- RTX in BF V
We’re putting more effort into the written conclusion for this one than we typically do, so be sure to check that as well. Note that we have a separate video upload on the YouTube channel for a tear-down of the card. The PCB, for the record, is an RTX 2070 FE PCB. Same thing.
The XFX RX 590 Fatboy is a card we tore down a few months ago, whereupon we complained about its thermal solution and noted inefficiencies in the design. Those inefficiencies proved deficient in today’s testing, as expected, but the silicon itself – AMD’s GPU – remained a bit of a variable for us. The RX 590 GPU, ignoring XFX and its component of the review (momentarily), potentially makes a stronger argument for the space between the GTX 1060 and GTX 1070. It’s a pre-pre-overclocked RX 480 – or a pre-overclocked RX 580 – and, to AMD’s credit, it has pushed this silicon about as far as it can go.
Today, we’re benchmarking the RX 590 (the “Fatboy” model, specifically) against the GTX 1060, RX 580 overclocked, GTX 1070, and more.
Today’s benchmark is a case study by the truest definition of the phrase: We are benchmarking a single, overweight video card sample to test the performance impact of its severe sag. The Gigabyte GTX 1080 Ti Xtreme was poorly received by our outlet when we reviewed it in 2017, primarily for its needlessly large size that amounted to worse thermal and acoustic performance than smaller, cheaper competitors. The card is heavy and constructed using through-bolts and complicated assortments of hardware, whereas the competition achieved smaller, more effective designs that didn’t sag.
As is tradition, we put the GTX 1080 Ti Xtreme in one of our production machines alongside all of the other worst hardware we worked with, and so the 1080 Ti Xtreme was in use in a “real” system for about a year. That amount of time has allowed nature – mostly gravity – to take its course, and so the passage of time has slowly pulled the 1080 Ti Xtreme apart. Now, after a year of forced labor in our oldest rendering rig, we get to see the real side-effects of a needlessly heavy card that’s poorly reinforced internally. We’ll be testing the impact of GPU sag in today’s content.
We already reviewed an individual NVIDIA Titan RTX over here, used first for gaming, overclocking, thermal, power, and acoustic testing. We may look at production workloads later, but that’ll wait. We’re primarily waiting for our go-to applications to add RT and Tensor Core support for 3D art. After replacing our bugged Titan RTX (the one that was clock-locked), we were able to proceed with SLI (NVLink) testing for the dual Titan RTX cards. Keep in mind that NVLink is no different from SLI when using these gaming bridges, aside from increased bandwidth, and so we still rely upon AFR and independent resources.
As a reminder, these cards really aren’t built for the way we’re testing them. You’d want a Titan RTX card as a cheaper alternative to Quadros, but with the memory capacity to handle heavy ML/DL or rendering workloads. For games, that extra (expensive) memory goes unused, thus diminishing the value of the Titan RTX cards in the face of a single 2080 Ti.
This is really just for fun, in all honesty. We’ll look at a theoretical “best” gaming GPU setup today, then talk about what you should buy instead.
We moderate comments on a ~24-48 hour cycle. There will be some delay after submitting a comment.