Hardware news coverage has largely followed the RTX 2080 Ti story over the past week: one of dying cards in unknown quantities online. We have been investigating the issue further and have a few leads on what's going on, but are awaiting the arrival of some of the dead cards at our office before proceeding. We also learned about Pascal stock depletion, a curious topic given the slow uptake on RTX.
Further news items include industry discussion of Intel's outsourcing to TSMC, its hiring of former AMD graphics staff, and its handling of 14nm shortages. Only one rumor features this week: the all-but-confirmed RX 590.
EVGA’s RTX 2070 XC Ultra gave us an opportunity to compare the differences between NVIDIA’s varied RTX 2070 SKUs, including a lower-end TU106-400 and a higher-end TU106-400A. The difference between these, we’ve learned, is one of pre-selection for the ability to attain higher clocks. The XC Ultra runs significantly higher under Boost behavior than the 2070 Black does, which means that there’s now more to consider in the $70 price gap between the cards than just the cooler. This appears to be one of the tools available to board partners to reach the $500 MSRP floor, but it comes with a performance cost. With Pascal, that cost effectively boiled down to thermal and power headroom, not necessarily chip quality. Turing is different, and chip quality is now a potential limiter.
In this review of the EVGA RTX 2070 XC Ultra, we’ll also be discussing performance variability between the two 2070 GPU SKUs. These theories should extrapolate out to other NVIDIA cards with these sub-GPU options. Note that we are just going to focus on the 2070s today. If you want to see how we compare the 2070’s value versus Vega or Pascal, check our 2070 review and Vega 56 power mod content pieces.
The real discussion is going to be in overclocking and thermals, as gaming performance typically isn’t too varied intra-GPU. That said, the GPU changes between these two (technically), so that’ll make for an interesting data point.
Intel broke silence this week in response to media reports that its 10nm process "died," denying the claims outright and reaffirming its 2019 delivery target. This follows reports, led by SemiAccurate – a site that previously predicted 10nm production issues accurately – that development of the current 10nm process had been discontinued. We've also seen plenty of AMD news items this week, including a slumped earnings report, Vega 20 rumors, and RX 590 rumors.
The show notes are below the video, as always, for those favoring reading.
We’re resurrecting our AMD RX Vega 56 powerplay tables mod to challenge the RTX 2070, a card that competes in an entirely different price class. It’s a lightweight versus heavyweight boxing match, except the lightweight has a gun.
For our Vega 56 card, priced between $370 and $400 depending on sales, we will be shoving an extra 200W+ of power into the core to attempt to match the RTX 2070’s stock performance. We strongly praised Vega 56 at launch for its easily modded nature, but the card has since faced fierce competition from the 1070 Ti and 1070. It was also constantly out of stock or massively overpriced throughout the mining boom, which acted as a death knell for Vega during those months. With mining now dying down and Vega becoming available to normal people again, pricing is competitive and compelling, and nVidia’s own recent fumbles have created an opening in the market.
We will be working with a PowerColor RX Vega 56 Red Dragon card, a 242% power target, and matching it versus an EVGA RTX 2070 Black. The price difference is about $370-$400 vs. $500-$550, depending on where you buy your parts. We are using registry entries to trick the Vega 56 card into a power limit that exceeds the stock maximum of +50%, allowing us to go to +242%. This was done with the help of Buildzoid last year.
One final note: We must warn that we aren’t sure of the long-term impact of running Vega 56 with this much power going through it. If you want to do this yourself, be advised that long-term damage is a possibility for which we cannot account.
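For a rough sense of what the +242% power offset means, the arithmetic can be sketched as below. The ~165W figure for Vega 56's stock core power limit is an assumption for illustration (the actual limit depends on card and BIOS), and the percentages follow the Wattman-style "+%" offset convention described above.

```python
# Sketch of the power-target math behind the Vega 56 registry mod.
# ASSUMPTION: ~165W stock core power limit (core only, not total board power).

STOCK_CORE_POWER_W = 165.0  # assumed; varies by card/BIOS

def power_at_offset(base_w: float, offset_pct: float) -> float:
    """Total core power budget at a given Wattman-style +% power offset."""
    return base_w * (1 + offset_pct / 100)

stock_max = power_at_offset(STOCK_CORE_POWER_W, 50)   # stock slider cap: +50%
modded = power_at_offset(STOCK_CORE_POWER_W, 242)     # registry-modded: +242%
extra = modded - STOCK_CORE_POWER_W                   # headroom over stock

print(f"stock max: {stock_max:.1f}W, modded: {modded:.1f}W, extra: {extra:.1f}W")
```

Under that assumed base, the modded ceiling lands well past the "extra 200W+" figure quoted above, which is why the long-term-damage caveat matters.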
After the post-apocalyptic hellscape that was the RTX 2080 launch, NVIDIA is following it up with lessons learned for the RTX 2070 launch. By and large, technical media took issue with the 2080’s price hike without proper introduction to its namesake feature—that’d be “RTX”—which is still unused on the 2070. This time, however, the RTX 2070 launches at a much more tenable price of $500 to $600, putting it at rough price parity with the GTX 1080 hanger-on stock. It becomes easier to overlook missing features (provided the buyer isn’t purchasing for those features) when price and performance parity are achieved with existing products and rendering techniques. This is what the RTX 2070 looks forward to most.
Our EVGA RTX 2070 Black review will focus on gaming benchmarks vs. the GTX 1070, GTX 970, Vega 64, and other cards, as well as in-depth thermal testing and noise testing. We will not be recapping architecture in this content; instead, we recommend you check out our Turing architecture deep-dive from the RTX 2080 launch.
We've been working hard at building our second iteration of the RIPJAY bench, last featured in a livestream where we beat JayzTwoCents' score in TimeSpy Extreme, taking first place worldwide for a two-GPU system. Since then, Jay has beaten our score -- primarily with water and direct AC cooling -- and we have been revamping our setup to fire back at his score. More on that later this week.
In actual news, though, it's still been busy: RAM prices are behaving in a bipolar fashion, bouncing around based on a mix of supply, demand, and manufacturers trying to maintain high per-unit margins. Intel, meanwhile, is still combating limited supply of its now-strained 14nm process, resulting in some chipsets getting stepped back to 22nm. AMD is also facing shortages for its A320 and B450 chipsets, though this primarily affects China retail. We also received word of several upcoming launches from Intel, AMD, and NVIDIA -- the RTX 2070 and Polaris 30 news (the latter is presently a rumor) being the most interesting.
We always like to modify the reference cards – or “Founders Edition,” by nVidia’s new naming – to determine to what extent a cooler might be holding it back. In this instance, we suspected that the power limitations may be a harder limit than cooling, which is rather sad, as the power delivery on nVidia’s RTX 2080 Ti reference board is world-class.
We recently published a video showing the process, step-by-step, for disassembling the Founders Edition cards (in preparation for water blocks). Following this, we posted another piece wherein we built up a “Hybrid” cooling version of the card, using a mix of high-RPM fans and a be quiet! Silent Loop 280 CLC for cooling the GPU core on a 2080 Ti FE card. Today, we’re summarizing the results of the mod.
NVidia’s support of its multi-GPU technology has followed a tumultuous course over the years. Following a heavy push for adoption (that landed flat with developers), the company shunted its own SLI tech with Pascal, where multi-GPU support was cut down to two concurrent devices. Even in press briefings, the company acknowledged waning interest in and support for multi-GPU, and so the marketing efforts died entirely with Pascal. Come Turing, a renewed interest in creating multi-card purchasers has spurred development effort to coincide with NVLink, a 100GB/s symmetrical interface for the 2080 Ti. On the 2080, this is cut to a 50GB/s bus. It seems that nVidia may be pushing again for multi-GPU, and NVLink could further enable actual performance scaling with 2x RTX 2080 Tis or RTX 2080s (conclusions notwithstanding). Today, we're benchmarking the RTX 2080 Ti with NVLink (two-way), including tests for PCIe 3.0 bandwidth limitations when using x16/x8 or x8/x8 vs. x16/x16. The GTX 1080 Ti in SLI is also featured.
Note that we most recently visited the topic of PCIe bandwidth limitations in this post, featuring two Titan Vs, and must now revisit it. We have to determine whether an 8086K and Z370 platform – i.e. x8/x8 – will be sufficient for multi-GPU benchmarking, and answering that requires a platform capable of x16/x16 for comparison: the 7980XE and X299 DARK that we previously used to take a top-three world record.
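For context on the interfaces being tested, the theoretical numbers can be sketched as follows. The NVLink figures come from the bidirectional 100GB/s (2080 Ti) and 50GB/s (2080) totals cited above; the PCIe figures follow the PCIe 3.0 spec (8 GT/s per lane, 128b/130b encoding).

```python
# Back-of-envelope theoretical bandwidth comparison: PCIe 3.0 vs. NVLink.
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, so ~0.985 GB/s per lane
# per direction. NVLink totals are bidirectional, so halve for one direction.

def pcie3_gbps(lanes: int) -> float:
    """Theoretical one-direction PCIe 3.0 bandwidth in GB/s for a lane count."""
    per_lane = 8e9 * (128 / 130) / 8 / 1e9  # 8 GT/s, encoding overhead, bits->bytes
    return lanes * per_lane

x16 = pcie3_gbps(16)  # ~15.75 GB/s each way
x8 = pcie3_gbps(8)    # ~7.88 GB/s each way

nvlink_2080ti_per_dir = 100 / 2  # 50 GB/s each way (2080 Ti)
nvlink_2080_per_dir = 50 / 2     # 25 GB/s each way (2080)

print(f"PCIe x16: {x16:.2f} GB/s, x8: {x8:.2f} GB/s, "
      f"NVLink 2080 Ti: {nvlink_2080ti_per_dir:.0f} GB/s per direction")
```

Even the 2080's halved NVLink bus comfortably exceeds a full x16 slot in each direction, which is why any scaling differences in the x8/x8 vs. x16/x16 tests would point at PCIe, not NVLink, as the bottleneck.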
It’s more “RTX OFF” than “RTX ON,” at the moment. The number of games that include RTX-ready features at launch is zero. The number of tech demos is growing by the hour – the final hours – but tech demos don’t count. It’s impressive to see what nVidia is doing in its “Asteroids” mesh shading and LOD demonstration. It is also impressive to see the Star Wars demo in real-time (although we have no camera manipulation, oddly, which is suspect). Neither of these, unfortunately, is a playable game, and the users for whom the RTX cards are presumably made are gamers. You could then argue that nVidia’s Final Fantasy XV benchmark demo, which does feature RTX options, is a “real game” with the technology – except that the demo is utterly untrustworthy, even though some of its issues were previously resolved (but not all – culling is still dismal).
And so we’re left with RTX OFF at present, which leaves us with a focus primarily upon “normal” games, thermals, noise, overclocking on the RTX 2080 Founders Edition, and rasterization.
We don’t review products based on promises. It’s cool that nVidia wants to push for new features. It was also cool that AMD did with Vega, but we don’t cut slack for features that are unusable by the consumer.
The new nVidia RTX 2080 and RTX 2080 Ti reviews launch today, with cards launching tomorrow, and we have standalone benchmarks going live for both the RTX 2080 Founders Edition and RTX 2080 Ti Founders Edition. Additional reviews of EVGA’s XC Ultra and ASUS’ Strix will go live this week, with an overclocking livestream starting tonight (9/19) around 6-7PM EST. In the meantime, we’re here to start our review series with the RTX 2080 FE card.
NVidia’s Turing architecture has entered the public realm, alongside an 83-page whitepaper, and is now ready for technical detailing. We have spoken with several nVidia engineers over the past few weeks, attended the technical editor’s day presentations, and have read through the whitepaper – there’s a lot to get through, so we will be breaking this content into pieces with easily navigable headers.
Turing is a modified Volta at its core, which is a heavily modified Pascal. Core architecture isn’t wholly unrecognizable between Turing and Pascal – you’d be able to figure out that they’re from the same company – but there are substantive changes within the Turing core.