We already reviewed an individual NVIDIA Titan RTX over here, used first for gaming, overclocking, thermal, power, and acoustic testing. We may look at production workloads later, but that’ll wait. We’re primarily waiting for our go-to applications to add RT and Tensor Core support for 3D art. After replacing our bugged Titan RTX (the one that was clock-locked), we were able to proceed with SLI (NVLink) testing for the dual Titan RTX cards. Keep in mind that NVLink is no different from SLI when using these gaming bridges, aside from increased bandwidth, so we still rely upon AFR and per-card (non-pooled) resources.

As a reminder, these cards really aren’t built for the way we’re testing them. You’d want a Titan RTX as a cheaper alternative to Quadro cards, but with the memory capacity to handle heavy ML/DL or rendering workloads. For games, that extra (expensive) memory goes unused, diminishing the value of the Titan RTX cards against a single 2080 Ti.

This is really just for fun, in all honesty. We’ll look at a theoretical “best” gaming GPU setup today, then talk about what you should buy instead.

NVidia’s support of its multi-GPU technology has followed a tumultuous course over the years. Following a heavy push for adoption (that landed flat with developers), the company shunted aside its own SLI tech with Pascal, cutting multi-GPU support down to two concurrent devices. Even in press briefings, the company acknowledged waning interest in and support for multi-GPU, and the marketing efforts died entirely with Pascal. Come Turing, a renewed interest in selling more than one card per buyer has spurred development effort coinciding with NVLink, a 100GB/s symmetrical interface on the 2080 Ti; on the 2080, the link runs at 50GB/s. It seems that nVidia may be pushing again for multi-GPU, and NVLink could further enable actual performance scaling with 2x RTX 2080 Tis or RTX 2080s (conclusions notwithstanding). Today, we're benchmarking the RTX 2080 Ti with NVLink (two-way), including tests for PCIe 3.0 bandwidth limitations when using x16/x8 or x8/x8 vs. x16/x16. The GTX 1080 Ti in SLI is also featured.

Note that we most recently visited the topic of PCIe bandwidth limitations in this post, featuring two Titan Vs, and must now revisit it. We have to determine whether an 8086K and Z370 platform will be sufficient for multi-GPU benchmarking, i.e. in x8/x8, and answering that requires a second platform with full x16/x16 lane allocation for comparison – the 7980XE and X299 DARK that we previously used to take a top-three world record.

We're ramping hard into GPU testing this week, with many tests and plans in the pipe for the impending and now-obvious RTX launch. As we run those tests, and continue publishing our various liquid metal tests (corrosion and aging), we're still keeping up with hardware news in the industry.

This week's round-up includes a video-only discussion of the EVGA iCX2 mislabeling issue that popped up on reddit (links are still below), alongside written summaries of IP theft and breach of trust affecting the silicon manufacturing business, "GTX" 2060 theories, the RTX Hydro Copper and Hybrid cards, Intel's 14nm shortage, and more.

Keeping marketing checked by reality is part of the reason technical media should exist: Part of the job is to filter out the adjectives and subjective language for consumers and get to the objective truth. Intel’s initial marketing deck contained a slide suggesting that its new X-series CPUs could run 3-way or 4-way GPUs for 12K Gaming. Those are their exact words: "12K Gaming," supported by orange demarcation for the X-series, whereas it is implicitly not supported (in the slide) on the K-SKU desktop CPUs. Setting aside how uncommon such a setup would be, "12K" isn’t even a real resolution. Regardless, we’re using this discussion of Intel’s "12K" claims as an opportunity to benchmark two x8 GPUs on a 7700K against two x16 GPUs on a 7900X, with some tests disabling cores and boosting the clock. We have also received a statement from Intel to GamersNexus regarding the marketing language.

First of all, we need to define a few things: Intel’s version of 12K is not what you’d normally expect – in fact, it works out to fewer pixels than 8K, so the naming is strongly misleading. Let’s break this down.
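To make the pixel math concrete – assuming Intel's "12K" refers to three 4K (3840x2160) displays driven together, which is the only reading that gets you to a roughly 11,520-pixel width – here's a quick sketch of the comparison against 8K (our own illustration, not from Intel's deck):

```python
# Pixel-count comparison: "12K" as a triple-4K surround array vs. a true 8K display.
# Assumption (ours, for illustration): "12K" = 3x 3840x2160 side by side.

def pixels(width, height):
    return width * height

surround_12k = 3 * pixels(3840, 2160)  # 11520 x 2160
true_8k = pixels(7680, 4320)           # 8K UHD

print(f'"12K" surround: {surround_12k:,} pixels')                      # 24,883,200
print(f'8K UHD:         {true_8k:,} pixels')                           # 33,177,600
print(f'8K is roughly {true_8k / surround_12k:.2f}x the pixel count')  # ~1.33x
```

Even granting the label, the workload is about 25 million pixels – meaningfully below 8K's 33 million.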

The RX 580, as we learned in the review process, isn’t all that different from its origins in the RX 480. The primary difference is in the voltage and frequency afforded to the GPU proper, with other changes coming from maturation of the manufacturing process over the past year. This means most optimizations are relegated to idle power (not load power) and frequency headroom. Gains on the new cards are not from anything fancy – just from driving more power through the GPU under load.

Still, we were curious as to whether AMD’s drivers would permit cross-series multi-GPU within the RX line. We decided to pair an MSI RX 580 Gaming X with an MSI RX 480 Gaming X – keeping the board designs as close as possible – and see what would happen.

The short of it is that this works. There is no explicit inhibitor built in to forbid users from running CrossFire with RX 400 and RX 500 series cards, as long as you’re pairing a 470 with a 570 or a 480 with a 580. The GPU is the same, and frequency will, for the most part, simply be matched to the slower card.

We think this will be a common use case, too. It makes sense: If you’re a current owner of an RX 480 and have been considering CrossFire (though we didn’t necessarily recommend it in previous content), the RX 580 will make the most sense for a secondary GPU. Well, primary, really – but you get the idea. The RX 400 series cards will see EOL and cease production in short order, if not already, which means that prices will stagnate and then skyrocket. That’s just what retailers do. Buying a 580, then, makes far more sense if you’re dying for a CrossFire configuration, and you could even move the 580 to the top slot for best performance in single-GPU scenarios.

MSI and system integrator CyberPower are selling the new GT83VR Titan SLI notebook, which ships with K-SKU Intel CPUs and dual GTX 1070 or GTX 1080 GPUs. The move away from M-suffixed cards means that these GPUs are effectively identical to their desktop counterparts, with the exception of the notebook GTX 1070's core increase and clock reduction.

That difference, just to quickly clear it away, results in 2048 CUDA cores on the notebook 1070 (vs. 1920 on the desktop) and a boost clock of 1645MHz on the notebook (1683MHz on the desktop). Despite plenty of talk about the 1060, 1070, and 1080 model notebooks, we haven't yet gotten into the SLI models for this generation.

AMD's fanfare surrounding CrossFire with the RX 480s demanded a test of the configuration, and we decided to run the architecturally similar RX 470s through the same wringer. We only have two RX 470s in the lab presently, and they're not the same card – but we'll talk about how that impacts testing in a moment. The cards used are the Sapphire RX 470 Platinum Edition ($200) and the MSI RX 470 Gaming X, tested mostly in DirectX 11 and OpenGL titles, with some DirectX 12 Explicit Multi-GPU testing toward the end.

The benchmark runs a performance analysis of two CrossFire RX 470s versus a single RX 470, single RX 480, CrossFire RX 480s, and the latest GTX cards (1070, 1060). We're looking at framerate and CrossFire power draw here, with no thermal testing. Read our RX 470 review for in-depth thermal and frequency stability analysis (and overclocking).

Just to be clear straight away: This test was largely conducted in the spirit of “because we can.” For the full, in-depth GTX 1060 review, check this article. Also note that this test does not make use of the Scalable Link Interface, which is why we're putting quotes around “SLI.” The GTX 1060s do not have SLI fingers and can only communicate via the PCIe bus, without a bridge, which demands that applications support MDA (Multi-Display Adapter) or LDA Explicit (Linked Display Adapter) modes to actually leverage both cards. NVidia does not officially support dual GTX 1060s. This was just something we wanted to do. We also do not recommend purchasing two GTX 1060s for use in a single gaming system.

All that stated, this test pairs an MSI GTX 1060 Gaming X with the GTX 1060 Founders Edition card, then pits them against a single GTX 1060, GTX 1070, GTX 1080, and RX 480s (single and CrossFire). This is mostly a curiosity and an experiment to learn from, not a comprehensive benchmark or product review. Again, that's here.

Ashes supports explicit multi-GPU and has been coded by the developers to take advantage of this DirectX 12 functionality, which also allows cross-brand video cards to be paired. We already tested that with the 970 and 390X. Testing was done at 1080p and 4K, mostly at high settings. The Multi-GPU toggle was checked for Dx12 testing. We've also listed the results as average frametimes in milliseconds, just as another means of conveying the information.
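For anyone unfamiliar with reading results that way: average frametime is just the inverse of average framerate, scaled to milliseconds, so lower is better. A minimal conversion sketch (our own, with placeholder numbers):

```python
# Average frametime (ms) is the reciprocal of average FPS, scaled to milliseconds.
# The FPS values below are placeholders, not benchmark results.

def fps_to_ms(fps):
    return 1000.0 / fps

for fps in (30, 60, 120):
    print(f'{fps} FPS -> {fps_to_ms(fps):.2f} ms per frame')
# 30 FPS -> 33.33 ms | 60 FPS -> 16.67 ms | 120 FPS -> 8.33 ms
```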

AMD's panoply of RX 480 news announcements teased superior performance to the then-new GTX 1080 when two cards were paired in CrossFire. We decided to buy a second RX 480 8GB card for $240, put it into CrossFire with the sample we reviewed, and validate those claims.

Multi-GPU configurations are tough to benchmark. We need to perform all the same thermal, noise, power, and FPS analysis as with other devices – but special attention must be paid to 1% and 0.1% low frame values, and more attention still to plotting metrics versus time. Frequency, temperature, and fan RPM exhibit fluctuations in multi-GPU configurations that are only truly visible when plotted versus time, rather than averaged across thousands of data points.
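As an illustration of why the slowest frames deserve their own metric, here is a rough sketch of one common way to derive 1% and 0.1% low figures from a logged frametime series (a simplified example, not a verbatim copy of our capture tooling):

```python
# Sketch: derive 1% and 0.1% "low" framerates from a list of frametimes (ms).
# This mirrors the general idea (the slowest frames dominate perceived smoothness),
# not the exact math of any specific capture tool.

def low_metric(frametimes_ms, percent):
    # Average the slowest `percent` of frames, then convert back to FPS.
    worst = sorted(frametimes_ms, reverse=True)
    count = max(1, int(len(worst) * percent / 100))
    avg_worst_ms = sum(worst[:count]) / count
    return 1000.0 / avg_worst_ms

# Synthetic data: mostly ~60FPS frames with a handful of long 40ms frames.
frametimes = [16.7] * 980 + [40.0] * 20
avg_fps = 1000.0 / (sum(frametimes) / len(frametimes))

print(f'AVG FPS:  {avg_fps:.1f}')                       # ~58.3
print(f'1% low:   {low_metric(frametimes, 1):.1f}')     # 25.0
print(f'0.1% low: {low_metric(frametimes, 0.1):.1f}')   # 25.0
```

In the synthetic example, the average sits near 60FPS while the lows sit at 25FPS – exactly the kind of stutter an average-only chart hides.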

In our performance review of CrossFire RX 480 8GB cards, we test FPS in Mirror's Edge, The Division, GTA V, and more, alongside temperature, noise, and power performance. We understand that thermals, noise, and power are sometimes less exciting to readers than raw FPS output, but we'd strongly recommend looking into those results for this benchmark – multi-GPU setups put greater emphasis on such testing. Some games show negative scaling, some positive, and some are nearly unchanged. All of that is below.

Our GTX 1070 SLI benchmarking endeavor began with an amusing challenge – one which we've captured well in our forthcoming video: The new SLI bridges are all rigid, which means cards of differing heights cannot easily be accommodated. After some failed attempts to hack something together, and after researching the use of two ribbon cables (don't do this – more below), we ultimately realized that a riser cable would work. It's not ideal, but the latency impact should be minimal and the performance is more-or-less representative of real-world framerates for dual GTX 1070s in SLI.

Definitely a fun challenge. Be sure to subscribe for our video later today.

The GTX 1070 SLI configuration teetered in our test rig, with no screws holding the second card, but it worked. We've been told that there aren't any plans for ribbon cable versions of the new High Bandwidth Bridge (“HB Bridge”), so this new generation of Pascal GPUs – if using the HB Bridge – will likely push users toward matched, same-height video card pairs. This coincides with other simplifications to the multi-GPU process with the 10-series, like the reduction from triple- and quad-SLI to a focus on two-way SLI only. We explain nVidia's decision to do this in our GTX 1080 review and mention it in the GTX 1070 review.

This GTX 1070 SLI benchmark tests the framerate of two GTX 1070s vs. a GTX 1080, 980 Ti, 980, 970, Fury X, R9 390X, and more. We briefly look at power requirements as well, helping to provide a guideline for power supply capacity. The joint cost of two GTX 1070s, if buying the lowest-cost models out there, would be roughly $760 ($380 x 2). The GTX 1070 scales up to $450 for the Founders Edition and likely for some aftermarket AIB partner cards as well.
