Steve Burke

Steve started GamersNexus back when it was just a cool name, and it has since grown into an expansive website with an overwhelming number of features. He recalls his first difficult decision with GN's direction: "I didn't know whether or not I wanted 'Gamers' to have a possessive apostrophe -- I mean, grammatically it should, but I didn't like it in the name. It was ugly. I also had people who were typing apostrophes into the address bar - sigh. It made sense to just leave it as 'Gamers.'"

First world problems, Steve. First world problems.

Today’s benchmark is a case study by the truest definition of the phrase: We are benchmarking a single sample of an overweight video card to test the performance impact of its severe sag. The Gigabyte GTX 1080 Ti Xtreme was poorly received by our outlet when we reviewed it in 2017, primarily for its needlessly large size, which amounted to worse thermal and acoustic performance than smaller, cheaper competitors. The card is heavy and constructed using through-bolts and a complicated assortment of hardware, whereas the competition achieved smaller, more effective designs that didn’t sag.

As is tradition, we put the GTX 1080 Ti Xtreme in one of our production machines alongside the rest of the worst hardware we’ve worked with, and so the 1080 Ti Xtreme was in use in a “real” system for about a year. That amount of time has allowed nature (mostly gravity) to take its course, and the passage of time has slowly pulled the 1080 Ti Xtreme apart. Now, after a year of forced labor in our oldest rendering rig, we get to see the real side effects of a needlessly heavy card that’s poorly reinforced internally. We’ll be testing the impact of GPU sag in today’s content.

We’re revisiting the Intel i7-7700K today, following its not-so-distant launch in January of 2017 at about $340 USD. The 7700K was shortly followed by the i7-8700K (still selling well today), which launched later in the same year with an additional two cores and four threads. That was a big gain, and one which stacked atop the 7700K’s already relatively high overclocking potential and regular 4.9 to 5GHz OCs. This revisit looks at how the 7700K compares to modern Coffee Lake 8000- and 9000-series CPUs (like the 9700K), alongside modern Ryzen CPUs from the Zen+ generation.

For a quick reminder of 7700K specs versus “modern” CPUs (or at least as “modern” as parts launched a year or two later can be), remember that the 7700K was the last of the 4C/8T parts in the i7 line, still using hyper-threading to hit 8T. The 8700K was the next launch in the family, releasing at 6C/12T and changing the lineup substantially at a similar, albeit slightly higher, price-point. The 9900K was the next notable launch, but it exited the price category and became more of a low-end HEDT CPU. The 9700K is the truer follow-up to the 7700K, but it oddly regresses from the 8700K’s 12T configuration back to 8T; the difference is that it runs all 8 threads on 8 physical cores, rather than on 6. Separately, the 7700K critically operated with 8MB of total cache, as opposed to 12MB on the 9700K. The price also changed, with the 7700K closer to $340 and the 9700K at $400 to $430, depending. Even taking the $400 mark, that’s more than an adjustment for inflation.
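To put a rough number on that last claim (assuming US CPI inflation of about 2.4% per year over the interval, which is our own ballpark figure, not anything from Intel’s pricing), carrying the 7700K’s launch price forward across the roughly two years between launches gives:

```latex
% $340 launch price carried forward ~2 years at an assumed ~2.4% annual inflation
\$340 \times (1.024)^2 \approx \$357
```

Even at the favorable $400 mark, then, the 9700K sits roughly $40 to $45 above an inflation-adjusted 7700K.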

We’re revisiting the 7700K today, looking at whether buyers truly got the short straw with the subsequent and uncharacteristically rapid release of the 8700K. Note also, however, that the 8700K didn’t really get a proper release at the end of 2017. That was more of a paper launch, with few products actually available at launch. Regardless, the feeling is the same for the 7700K buyer.

Intel’s new i7-9700K is available for about $400 to $430, which lands it between the 9900K – priced at around $550, on a good day – and the 8700K’s $370 price-point. We got ours for $400, looking to test the new 8C/8T CPU versus the not-that-old 8700K and the hyperthreaded 9900K of similar spec. Intel made a big move away from 4C/8T CPUs and the incumbent pricing structure, with the 9700K acting as the first K-SKU i7 to lack hyperthreading in some time.

The elimination of hyperthreading primarily calls into question whether the technology is even “worth it” once running on an 8C, high-frequency CPU. The trouble is that this is no longer a linear move. In years past, a move from 4C/8T to 8C/8T would be easier to discuss, but Intel has moved from a 6C/12T 8700K part at a lower price (in the $350-$370 range, on average) to an 8C/8T 9700K at a higher price. Two more physical cores come at the cost of four threads, a trade that can still post a benefit in some thread-bound workloads; we’ll look at those in this content.
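For readers who want to see how their own parts enumerate, a minimal sketch (using the third-party psutil package; this is our own illustration, not part of our benchmarking suite) shows the physical-versus-logical split this discussion hinges on:

```python
# Minimal sketch: show the physical-vs-logical core split discussed above.
# Requires the third-party psutil package (pip install psutil).
import os

import psutil

physical = psutil.cpu_count(logical=False)  # e.g., 8 on a 9700K, 6 on an 8700K
logical = os.cpu_count()                    # e.g., 8 on a 9700K, 12 on an 8700K

print(f"Physical cores: {physical}, logical threads: {logical}")
if physical == logical:
    print("No SMT exposed (9700K-style 8C/8T layout).")
else:
    print(f"SMT exposed: {logical // physical} threads per core (8700K-style 6C/12T).")
```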

We already reviewed an individual NVIDIA Titan RTX over here, used first for gaming, overclocking, thermal, power, and acoustic testing. We may look at production workloads later, but that’ll wait; we’re primarily waiting for our go-to applications to add RT and Tensor Core support for 3D art. After replacing our bugged Titan RTX (the one that was clock-locked), we were able to proceed with SLI (NVLink) testing for the dual Titan RTX cards. Keep in mind that NVLink is no different from SLI when using these gaming bridges, aside from increased bandwidth, and so we still rely upon alternate frame rendering (AFR) and independent per-card resources.
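For context on what AFR means in practice, here is a purely illustrative sketch (our own toy names, not NVIDIA’s API) of how alternate frame rendering distributes work:

```python
# Purely illustrative AFR sketch (hypothetical names, not NVIDIA's API).
# Under alternate frame rendering, GPUs never split a single frame:
# each card renders whole frames in turn and keeps its own full copy
# of scene resources (textures, buffers), so VRAM does not pool.

NUM_GPUS = 2  # two Titan RTX cards over an NVLink bridge

def gpu_for_frame(frame_index: int) -> int:
    """Frame N is rendered entirely by GPU (N mod NUM_GPUS)."""
    return frame_index % NUM_GPUS

for frame in range(6):
    print(f"Frame {frame} -> GPU {gpu_for_frame(frame)}")
```

This independent-resources model is also why two 24GB cards don’t behave like one 48GB card in games: each GPU must hold its own copy of everything.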

As a reminder, these cards really aren’t built for the way we’re testing them. You’d want a Titan RTX card as a cheaper alternative to Quadros, but with the memory capacity to handle heavy ML/DL or rendering workloads. For games, that extra (expensive) memory goes unused, thus diminishing the value of the Titan RTX cards in the face of a single 2080 Ti.

This is really just for fun, in all honesty. We’ll look at a theoretical “best” gaming GPU setup today, then talk about what you should buy instead.

Today, we’re reviewing the NVIDIA Titan RTX for overclocking, gaming, thermal, and acoustic performance, looking at the first of two cards in the lab. We have a third card arriving to trade for one defective unit, working around the 1350MHz clock lock we discovered, but that won’t be until after this review goes live. The Titan RTX costs $2500, about 2x the price of an RTX 2080 Ti, but only enables an additional 4 streaming multiprocessors. With 4 more SMs and 256 more CUDA cores, there’s not much performance to be gained in gaming scenarios. The big gains are in memory-bound applications, as the Titan RTX has 24GB of GDDR6, a marked climb from the 11GB on an RTX 2080 Ti.
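For scale, using Turing’s public specs (64 FP32 cores per SM, with 72 SMs on the Titan RTX versus 68 on the 2080 Ti), the shader uplift works out to only a few percent:

```latex
% 4 extra SMs at 64 FP32 cores/SM = 256 extra CUDA cores
\frac{72 \times 64 - 68 \times 64}{68 \times 64} = \frac{4608 - 4352}{4352} \approx 5.9\%
```

A roughly 6% bump in shader resources, before clocks and memory enter the picture, is why gaming gains over a 2080 Ti are minimal.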

An example of a use case could be machine learning or deep learning, or more traditionally, 3D graphics rendering. Some of our in-house Blender project files use so much VRAM that we have to render instead with the slower CPU (rather than CUDA acceleration), as we’ll run out of the 11GB framebuffer too quickly. The same is true for some of our Adobe Premiere video editing projects, where our graph overlays become so complex and high-resolution that they exceed the memory allowance of a 1080 Ti. We are not testing either of these use cases today, though, and are instead focusing our efforts on the gaming and enthusiast market. We know that this is also a big market, and plenty of people want to buy these cards simply because “it’s the best,” or because “most expensive = most best.” We’ll be looking at how much the difference really gets you, with particular interest in thermal performance pursuant to the removal of the blower cooler.
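As an aside for anyone hitting the same VRAM wall, here is a minimal sketch of the CPU fallback decision (assuming Blender’s bundled bpy API plus the pynvml package; the threshold below is a hypothetical stand-in for a project’s real memory needs, not a figure from our pipeline):

```python
# Minimal sketch: choose Cycles' render device based on free VRAM.
# Assumes Blender's bundled bpy API plus the pynvml package; the
# threshold is a hypothetical stand-in for a project's real needs.
import bpy
import pynvml

PROJECT_VRAM_ESTIMATE = 11 * 1024**3  # hypothetical per-project estimate, in bytes

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
free_vram = pynvml.nvmlDeviceGetMemoryInfo(handle).free
pynvml.nvmlShutdown()

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
# Fall back to the (slower) CPU when the scene won't fit in the framebuffer.
scene.cycles.device = 'GPU' if free_vram >= PROJECT_VRAM_ESTIMATE else 'CPU'
print(f"Rendering on {scene.cycles.device} ({free_vram / 1024**3:.1f} GB VRAM free)")
```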

Finally, note that we were stuck at 1350MHz with one of our two samples, something that we’ve worked with NVIDIA to research. The company now has our defective card and has traded us a working one. We bought the defective Titan RTX, so it was a “real” retail sample. We just wanted to help NVIDIA troubleshoot the issue, and so the company is now working with the card.
