Hardware Guides

Metro: Exodus is the next title to include NVIDIA RTX technology, leveraging Microsoft’s DXR. We already looked at the RTX implementation from a qualitative standpoint (in video), talking about the pros and cons of global illumination via RTX, and now we’re back to benchmark the performance from a quantitative standpoint.

The Metro series has long been used as a benchmarking standard. As always with a built-in benchmark, one of the most important things to look at is the accuracy of that benchmark as it pertains to the “real” game. Being inconsistent with in-game performance doesn’t necessarily invalidate a benchmark’s usefulness, though; it just means the light in which that benchmark is viewed must be kept in mind. Without accuracy to in-game performance, the benchmark tools mostly become synthetic benchmarks: they’re good for relative performance measurements between cards, but not necessarily absolute performance. That’s completely fine, too, as relative performance is mostly what we look for in reviews. The only really important thing is that performance scaling is consistent between cards in both pre-built benchmarks and in-game benchmarks.
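As a loose illustration of that last point, a minimal sanity check might compare the relative scaling between two cards in the built-in benchmark against a manual in-game pass. The card names and FPS numbers below are invented purely for demonstration:

```python
# Hypothetical sanity check: does relative scaling between two cards hold
# across the built-in benchmark and a manual in-game pass?
# All card names and FPS values below are invented for illustration.

builtin_fps = {"Card A": 90.0, "Card B": 72.0}   # built-in benchmark averages
ingame_fps  = {"Card A": 78.0, "Card B": 62.0}   # manual in-game pass averages

builtin_scaling = builtin_fps["Card A"] / builtin_fps["Card B"]
ingame_scaling  = ingame_fps["Card A"] / ingame_fps["Card B"]

print(f"Built-in benchmark: Card A is {builtin_scaling:.2f}x Card B")
print(f"In-game pass:       Card A is {ingame_scaling:.2f}x Card B")

# If the two ratios roughly agree, the built-in benchmark remains useful for
# relative (card-vs-card) comparisons even when its absolute FPS differs
# from real gameplay.
```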

Finding something to actually leverage the increased memory bandwidth of Radeon VII is a challenge. Few games will genuinely use more memory than what’s found on an RTX 2080, let alone 16GB on the Radeon VII, and most VRAM capacity utilization reporting is wildly inaccurate as it only reports allocated memory and not necessarily used memory. To best benchmark the potential advantages of Radeon VII, which would primarily be relegated to memory bandwidth, we set up a targeted feature test to look at anti-aliasing and high-resolution benchmarks. Consider this an academic exercise on Radeon VII’s capabilities.

The AMD Radeon VII embargo for “unboxings” has lifted and, although we don’t participate in the marketing that is a content-filtered “unboxing,” a regular part of our box-opening process involves taking the product apart. For today, restrictions remain on performance discussion and product review, but we are free to show the product and handle it physically. You’ll have to check back for the review, which will likely coincide with the February 7 release date.

This content is primarily video, as our tear-downs show the experience of taking the product apart (and discoveries as we go), but we’ll recap the main point of interest here. Text continues after the embedded video:

GPU manufacturer Visiontek is old enough to have accumulated a warehouse of unsold, refurbished cards. Once in a while, they’ll clear stock by selling them off in cheap mystery boxes. It’s been a long time since we last reported on these boxes, and GPU development has moved forward quite a bit, so we wanted to see what we could get for our money. PCIe cards were $10 for higher-end and $5 for lower, and AGP and PCI cards were both $5. On the off chance that Visiontek would recognize Steve’s name and send him better-than-average cards, we placed two identical orders, one in Steve’s name and one in mine (Patrick). Each order was for one better PCIe card, one worse, one PCI, and one AGP.

The AMD R9 290X, a 2013 release, was the once-flagship of the 200 series, later superseded by the 390X refresh, (sort of) the Fury X, and eventually the RX-series cards. The R9 290X typically shipped with 4GB of memory (the 390X later made 8GB somewhat commonplace) and was a strong performer for early 1440p gaming and high-quality 1080p gaming. The goal posts have moved, of course, as time has mandated that games get more difficult to render, but the 290X is still a strong enough card to warrant a revisit in 2019.

The R9 290X still has some impressive traits today, and those influence results to a point of being clearly visible at certain resolutions. One of the most noteworthy features is its 64 ROPs, where shaded output is converted into a bitmapped image, alongside its 176 TMUs. The ROPs help performance scaling as resolution increases, something that also correlates with higher anti-aliasing values (same idea – sampling more times per pixel or drawing more pixels). For this reason, we’ll want to pay careful attention to performance scaling at 1080p, 1440p, and 4K versus another device, like the RX 580. The RX 580 is a powerful card for its price-point, often managing comparable performance to the 290X while running half the ROPs (32) and 144 TMUs, but the 290X can close the gap (mildly) at higher resolutions. This isn’t particularly useful to know, but it is interesting, and it illustrates how specific parts of the GPU can change the performance stack under different rendering conditions.
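For a rough sense of why the ROP count matters as resolution climbs, a back-of-the-envelope theoretical pixel fill rate (ROPs × core clock) can be computed. The clocks below are approximate reference/boost values and are only an assumption for illustration, not measured results:

```python
# Back-of-the-envelope theoretical pixel fill rate: ROPs * core clock.
# Clocks are approximate reference/boost values, used here only for illustration.

cards = {
    "R9 290X": {"rops": 64, "clock_ghz": 1.00},   # ~1000 MHz reference (up to)
    "RX 580":  {"rops": 32, "clock_ghz": 1.34},   # ~1340 MHz reference boost
}

for name, spec in cards.items():
    fill_rate_gpix = spec["rops"] * spec["clock_ghz"]  # GPixels/s
    print(f"{name}: {fill_rate_gpix:.1f} GPixels/s theoretical fill rate")

# ~64 GPixels/s for the 290X vs. ~42.9 GPixels/s for the RX 580 -- one reason
# the older card can claw back some ground as pixel output (resolution) increases.
```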

Today, we’re testing with a reference R9 290X that’s been run both stock and overclocked, giving us a look at both bottom-end performance and average partner model or OC performance. This should cover most of the spectrum of R9 290X cards.

Today’s benchmark is a case study by the truest definition of the phrase: We are benchmarking a single sample of an overweight video card to test the performance impact of its severe sag. The Gigabyte GTX 1080 Ti Xtreme was poorly received by our outlet when we reviewed it in 2017, primarily for its needlessly large size, which amounted to worse thermal and acoustic performance than smaller, cheaper competitors. The card is heavy and constructed using through-bolts and complicated assortments of hardware, whereas the competition achieved smaller, more effective designs that didn’t sag.

As is tradition, we put the GTX 1080 Ti Xtreme in one of our production machines alongside the rest of the worst hardware we’ve worked with, and so the 1080 Ti Xtreme was in use in a “real” system for about a year. That amount of time has allowed nature – mostly gravity – to take its course, and the passage of time has slowly pulled the 1080 Ti Xtreme apart. Now, after a year of forced labor in our oldest rendering rig, we get to see the real side-effects of a needlessly heavy card that’s poorly reinforced internally. We’ll be testing the impact of GPU sag in today’s content.

We’re revisiting the Intel i7-7700K today, following its not-so-distant launch in January of 2017 for about $340 USD. The 7700K was shortly followed by the i7-8700K – still selling well – which launched later in the same year with an additional two cores and four threads. That was a big gain, and one which stacked atop the 7700K’s already relatively high overclocking potential and routine 4.9 to 5GHz OCs. This revisit looks at how the 7700K compares to modern Coffee Lake 8000 and 9000 CPUs (like the 9700K), alongside modern Ryzen CPUs from the Zen+ generation.

For a quick reminder of 7700K specs versus “modern” CPUs – or, at least, as much more “modern” as a launch one year later can be – remember that the 7700K was the last of the 4C/8T parts in the i7 line, still using hyper-threading to hit 8T. The 8700K was the next launch in the family, releasing at 6C/12T and changing the lineup substantially at a similar, albeit slightly higher, price-point. The 9900K was the next remarkable launch, but it exited the price category and became more of a low-end HEDT CPU. The 9700K is the truer follow-up to the 7700K, but it oddly regresses to an 8T configuration from the 8700K’s 12T configuration, except it uses 8 physical cores for all 8 threads rather than hyper-threading on 6. Separately, and critically, the 7700K operated with 8MB of total cache, as opposed to 12MB on the 9700K. The price also changed, with the 7700K closer to $340 and the 9700K at $400 to $430, depending. Even taking the $400 mark, that’s more than an adjustment for inflation.
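To put a rough number on that inflation remark, here is a minimal sketch assuming roughly 2% annual inflation over the two years between launches (a round-number assumption for illustration, not an exact CPI figure):

```python
# Rough inflation adjustment on the 7700K's ~$340 launch price.
# The ~2% annual rate is an assumed round number, not an exact CPI figure.

launch_price = 340.00       # i7-7700K, early 2017
annual_inflation = 0.02     # assumed
years = 2                   # 2017 -> 2019

adjusted = launch_price * (1 + annual_inflation) ** years
print(f"Inflation-adjusted 7700K price: ~${adjusted:.0f}")   # ~$354

# Even at the low end of the 9700K's pricing (~$400), the newer part costs
# meaningfully more than inflation alone would explain.
```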

We’re revisiting the 7700K today, looking at whether buyers truly got the short straw with the subsequent and uncharacteristically rapid release of the 8700K. Note also, however, that the 8700K didn’t really have a proper release at the end of 2017. That was more of a paper launch, with few products actually available. Regardless, the feeling is the same for the 7700K buyer.

We already reviewed an individual NVIDIA Titan RTX over here, used first for gaming, overclocking, thermal, power, and acoustic testing. We may look at production workloads later, but that’ll wait. We’re primarily waiting for our go-to applications to add RT and Tensor Core support for 3D art. After replacing our bugged Titan RTX (the one that was clock-locked), we were able to proceed with SLI (NVLink) testing for the dual Titan RTX cards. Keep in mind that NVLink is no different from SLI when using these gaming bridges, aside from increased bandwidth, and so we still rely upon AFR and independent resources.
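For anyone unfamiliar with how alternate frame rendering (AFR) splits the work, here is a minimal conceptual sketch – not driver or NVAPI code – of the scheduling idea: frames alternate between the two GPUs, and each GPU keeps its own copy of resources rather than pooling memory for gaming workloads.

```python
# Minimal conceptual sketch of alternate frame rendering (AFR).
# This is illustrative only -- not driver code. Frames alternate between GPUs,
# and each GPU holds its own copy of resources (memory does not pool for
# gaming workloads, even over an NVLink bridge).

NUM_GPUS = 2

def assign_gpu(frame_index: int) -> int:
    """Even frames go to GPU 0, odd frames to GPU 1."""
    return frame_index % NUM_GPUS

for frame in range(6):
    print(f"Frame {frame} -> GPU {assign_gpu(frame)}")
```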

As a reminder, these cards really aren’t built for the way we’re testing them. You’d want a Titan RTX card as a cheaper alternative to Quadros, but with the memory capacity to handle heavy ML/DL or rendering workloads. For games, that extra (expensive) memory goes unused, thus diminishing the value of the Titan RTX cards in the face of a single 2080 Ti.

This is really just for fun, in all honesty. We’ll look at a theoretical “best” gaming GPU setup today, then talk about what you should buy instead.

Finding the “best” workstation GPU isn’t as straightforward as finding the best case, best gaming CPU, or best gaming GPU. While games typically scale reliably from one to the next, applications can deliver wildly varying performance. Those gains and losses can be chalked up to architecture, drivers, and whether or not we’re dealing with a true workstation GPU versus a gaming GPU trying to fill in for workstation purposes.

In this content, we’re going to be taking a look at current workstation GPU performance across a range of tests to figure out if there is such a thing as a champion among them all. Or, at the very least, we’ll figure out how AMD differs from NVIDIA, and how the gaming cards differ from their workstation counterparts. Part of this will look at Quadro vs. RTX or GTX cards, for instance, and WX vs. RX cards for workstation applications. We have GPU benchmarks for video editing (Adobe Premiere), 3D modeling and rendering (Blender, V-Ray, 3ds Max, Maya), AutoCAD, SolidWorks, Redshift, Octane Bench, and more.

Though NVIDIA’s Quadro RTX lineup has been available for a few months, review samples have been slow to escape NVIDIA’s grasp and, if we had to guess why, it’s likely because few software solutions are currently available that can take advantage of the new features. That excludes deep-learning tests, which can benefit from the Tensor cores, but for optimizations derived from the RT core, we’re still waiting. It seems likely that Chaos Group’s V-Ray is going to be one of the first plugins on the market to support NVIDIA’s RTX, though Redshift, Octane, Arnold, Renderman, and many others have planned support.

The great thing for those planning to go with a gaming GPU for workstation use is that where rendering is concerned, the performance between gaming and workstation cards is going to be largely equivalent. Where performance can improve on workstation cards is with viewport performance optimizations; ultimately, the smoother the viewport, the less tedious it is to manipulate a scene.

Across all of the results ahead, you’ll see that there are many angles from which to view workstation GPUs, and that there isn’t really such a thing as a one-size-fits-all – not like there is on the gaming side. There is such a thing as an ultimate choice, though, so if you’re not afraid of spending substantially above the gaming equivalents for the best performance, there are models vying for your attention.

As we get into the holiday spirit here at GN, it’s time for our year-end round-ups and best of series—probably some of our favorite content. These guides provide a snapshot of what the year had to offer in certain spaces, like SSDs, for instance. You can check our most recent guides for the Best Cases of 2018 and Best CPUs of 2018.

These guides will also help users navigate the overwhelming amount of Black Friday and Cyber Monday marketing ahead of us all. SSD prices have been especially good lately, and the holidays should certainly net opportunities for even better deals.

That said, buying something just because it’s cheap isn’t ever a good idea, really; better to know what’s best first, then buy cheap—or cheaper than usual, anyway. This guide will take the legwork out of distinguishing what the year’s best SSDs are based on use case and price. Today, we're looking at the best SSDs for gaming PCs, workstations, budget PC builds, and for cheap, high-capacity storage. 1TB SSDs are more affordable than ever now, and we'll explore some of those listings.

