NVidia’s support of its multi-GPU technology has followed a tumultuous course over the years. Following a heavy push for adoption (that landed flat with developers), the company shunted its own SLI tech aside with Pascal, where multi-GPU support was cut down to two concurrent devices. Even in press briefings, the company acknowledged waning developer interest in and support for multi-GPU, and so the marketing efforts died entirely with Pascal. Come Turing, a renewed interest in encouraging multi-card purchases has spurred development effort to coincide with NVLink, a 100GB/s symmetrical interface on the 2080 Ti; the 2080 maintains a 50GB/s bus. It seems that nVidia may be pushing for multi-GPU again, and NVLink could enable actual performance scaling with two RTX 2080 Tis or RTX 2080s (conclusions notwithstanding). Today, we're benchmarking the RTX 2080 Ti with NVLink (two-way), including tests for PCIe 3.0 bandwidth limitations when using x16/x8 or x8/x8 vs. x16/x16. The GTX 1080 Ti in SLI is also featured.
Note that we most recently visited the topic of PCIe bandwidth limitations in this post, featuring two Titan Vs, and must now revisit it. We have to determine whether an 8086K and Z370 platform will be sufficient for multi-GPU benchmarking, i.e. in x8/x8, and answering that requires another platform – the 7980XE and X299 DARK that we previously used to take a top-three world record.
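For context on why x8/x8 might matter, PCIe 3.0's per-lane throughput can be worked out from its 8 GT/s signaling rate and 128b/130b encoding. A quick arithmetic sketch (theoretical figures, not measured results):

```python
# PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding,
# so usable bandwidth per direction is slightly under 1 GB/s per lane.
GT_PER_LANE = 8e9          # transfers per second, per lane
ENCODING_EFFICIENCY = 128 / 130

def pcie3_gbs(lanes):
    """Theoretical PCIe 3.0 bandwidth per direction, in GB/s."""
    return lanes * GT_PER_LANE * ENCODING_EFFICIENCY / 8 / 1e9

print(round(pcie3_gbs(16), 2))  # x16: ~15.75 GB/s per direction
print(round(pcie3_gbs(8), 2))   # x8:  ~7.88 GB/s per direction
```

Either figure is far below the 2080 Ti's 100GB/s NVLink interface, which is part of why we're testing whether dropping to x8/x8 costs any scaling performance.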
It’s more “RTX OFF” than “RTX ON,” at the moment. The number of games shipping with RTX-ready features at launch is zero. The number of tech demos is growing by the hour – the final hours – but tech demos don’t count. It’s impressive to see what nVidia is doing in its “Asteroids” mesh shading and LOD demonstration. It is also impressive to see the Star Wars demo running in real-time (although the lack of camera manipulation is odd, and somewhat suspect). Neither of these, unfortunately, is a playable game, and the users for whom the RTX cards are presumably made are gamers. You could argue that nVidia’s Final Fantasy XV benchmark demo, which does feature RTX options, is a “real game” with the technology – except that the demo is utterly untrustworthy, even though some of its issues have since been resolved (but not all – culling is still dismal).
And so we’re left with RTX OFF at present, which leaves us with a focus primarily upon “normal” games, thermals, noise, overclocking on the RTX 2080 Founders Edition, and rasterization.
We don’t review products based on promises. It’s cool that nVidia wants to push for new features. It was also cool when AMD did the same with Vega, but we don’t cut slack for features that are unusable by the consumer.
The new nVidia RTX 2080 and RTX 2080 Ti reviews launch today, with cards launching tomorrow, and we have standalone benchmarks going live for both the RTX 2080 Founders Edition and RTX 2080 Ti Founders Edition. Additional reviews of EVGA’s XC Ultra and ASUS’ Strix will go live this week, with an overclocking livestream starting tonight (9/19) at around 6-7PM EST. In the meantime, we’re here to start our review series with the RTX 2080 FE card.
NVidia’s Turing architecture has entered the public realm, alongside an 83-page whitepaper, and is now ready for technical detailing. We have spoken with several nVidia engineers over the past few weeks, attended the technical editor’s day presentations, and have read through the whitepaper – there’s a lot to get through, so we will be breaking this content into pieces with easily navigable headers.
Turing is, at its core, a modified Volta, which is itself a heavily modified Pascal. The core architecture isn’t wholly unrecognizable between Turing and Pascal – you’d be able to tell they’re from the same company – but there are substantive changes within the Turing core.
We're ramping into GPU testing hard this week, with many tests and plans in the pipe for the impending and now-obvious RTX launch. As we ramp those tests, and continue publishing our various liquid metal tests (corrosion and aging tests), we're still working on following hardware news in the industry.
This week's round-up includes a video-only inclusion of the EVGA iCX2 mislabeling discussion that popped up on reddit (links are still below), with written summaries of IP theft and breach of trust affecting the silicon manufacturing business, "GTX" 2060 theories, the RTX Hydro Copper and Hybrid cards, Intel's 14nm shortage, and more.
Alongside the question of how frequently liquid metal should be replaced, one of the most common liquid metal-related questions pertains to how safe it is to use with different metals. This includes whether liquid metal is safe to use with bare copper, like you’d find in a laptop, or aluminum, and also includes the staining effect of liquid metal on nickel-plated copper (like on an IHS). This content explores the electrochemical interactions of liquid metal with the three most common heatsink materials, and does so using Thermal Grizzly’s Conductonaut liquid metal. Conductonaut is among the most prevalent on the market, but other options are made of similar compounds, like Coollaboratory’s Liquid Ultra.
Conductonaut is a eutectic alloy – a mix of gallium, indium, and tin, also known as Galinstan. We don’t know the exact percentages in Conductonaut’s mixture, but most liquid metals use these same three elements in varying proportions, with gallium typically comprising the majority of the mixture.
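For reference, a commonly cited Galinstan composition by weight is sketched below. These are generic Galinstan figures, not Conductonaut's undisclosed mix:

```python
# Commonly cited Galinstan composition by weight fraction. Conductonaut's
# exact ratios are not public, so treat these as generic Galinstan figures.
galinstan_wt = {
    "gallium": 0.685,  # the majority element, as with most liquid metals
    "indium": 0.215,
    "tin": 0.100,
}

# Sanity check: weight fractions should sum to 1.0
assert abs(sum(galinstan_wt.values()) - 1.0) < 1e-9

majority = max(galinstan_wt, key=galinstan_wt.get)
print(majority)  # gallium
```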
We’re finally nearing completion of our office move-in – as complete as a never-ending project can be, anyway. Our set table is now done, although not shown in today’s video, and the test room is getting filled. That’s the first news item. Work is finally getting done in the office.
Aside from that, the week has been packed with hardware news. This is one of our densest episodes in recent months, and features a response to the Tom’s Hardware debacle (by Thomas Pabst himself), NVIDIA’s own performance expectations of the RTX 2080, AMD strategic shuffling, and more.
As always, show notes are after the video.
We’re at PAX West 2018 for just one day this year (primarily for a discussion panel), but stopped by the Gigabyte booth for a hands-on with the new RTX cards. As with most other manufacturers, these cards aren’t 100% finalized yet, although they do have some near-final cooling designs. The models shown today appear to use the reference PCB design with hand-made elements on the coolers, as partners had limited time to prepare. Gigabyte expects to have custom PCB solutions at a later date.
“How frequently should I replace liquid metal?” is one of the most common questions we get. Liquid metal is applied between the CPU die and IHS to improve thermal conductivity from the silicon, but there hasn’t been much long-term testing on liquid metal endurance versus age. Cracking and drying are some of the most common concerns, leading users to wonder whether liquid metal performance will fall off a cliff at some point. One of our test benches has been running thermal endurance cycling tests for the last year now, since September of 2017, just to see if it’s aged at all.
This is a case study. We are testing with a sample size of one, so consider it an experiment and case study rather than an all-encompassing test. It is difficult to conduct long-term endurance tests with multiple samples, and doing so would require dozens (or more) of identical systems to really build out a large database. From that angle, again, please keep in mind that this is a case study of one test bench, with one brand of liquid metal.
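The cycling methodology boils down to alternating load and idle periods while logging temperatures over time. A minimal sketch of that loop – with `apply_load` and `read_temp` as hypothetical stand-ins for a real stress tool and sensor read, not our actual harness:

```python
import time

def run_thermal_cycles(num_cycles, load_s, idle_s, apply_load, read_temp):
    """Alternate load/idle periods, recording the peak temperature per cycle."""
    peaks = []
    for _ in range(num_cycles):
        apply_load(True)                     # start the stress workload
        deadline = time.monotonic() + load_s
        peak = float("-inf")
        while time.monotonic() < deadline:
            peak = max(peak, read_temp())    # poll the sensor under load
            time.sleep(min(0.01, load_s))
        apply_load(False)                    # return to idle
        time.sleep(idle_s)
        peaks.append(peak)
    return peaks
```

Logged peaks over months of cycles are what allow checking whether performance degrades – a rising trend in peak temperatures would suggest the liquid metal is drying or cracking.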
Other than news that our move into an office is nearly complete -- and that is big news, at least, for us -- the industry has been largely focused on GPUs for the past few weeks. NVidia's remaining 10-series GPU inventory has been purged down-channel to board partners, who are now working to drop 10-series video card prices in fire sales that lead into the RTX 20-series launch. We've also heard of Spectre and Meltdown again (it's been a while), with Intel pushing more microcode updates to assist in mitigating attack vectors. Those updates came with a brief "no benchmarks" clause, but that seems to have been addressed in the time since.
Separately: We'll be at PAX West this weekend for one day (Friday), and will be joining Corsair and PC World on a PC gaming panel at 7:00PM in the Sandworm Theater. Learn more here.
The show notes and article are below the video embed, if you prefer reading.
Intel has provided GamersNexus with a statement that addresses concerns pertaining to a new “benchmarking clause.” This comes after Bruce Perens published a story entitled “Intel Publishes Microcode Security Patches, No Benchmarking Or Comparison Allowed!” As one might expect, the story has exploded online since its publication.
The crux of the issue comes down to Intel’s Spectre and Meltdown microcode updates, which aim to mitigate vulnerabilities and attack vectors that were exposed for nearly all CPUs (since the 90s) late last year. GamersNexus previously worked to interview some of the expert researchers who discovered Meltdown and Spectre, all published in this article. We’d recommend the read for anyone not up-to-date on the attack vectors.