NVidia’s support of its multi-GPU technology has followed a tumultuous course over the years. Following a heavy adoption push that fell flat with developers, the company sidelined its own SLI tech with Pascal, cutting multi-GPU support down to two concurrent devices. Even in press briefings, the company acknowledged waning developer interest in and support for multi-GPU, and so the marketing efforts died entirely with Pascal. Come Turing, a renewed interest in selling multiple cards per customer has spurred development effort to coincide with NVLink, a 100GB/s symmetrical interface on the 2080 Ti; the 2080 maintains a 50GB/s bus. It seems that nVidia may be pushing for multi-GPU again, and NVLink could enable actual performance scaling with two RTX 2080 Tis or RTX 2080s (conclusions notwithstanding). Today, we're benchmarking the RTX 2080 Ti with NVLink (two-way), including tests for PCIe 3.0 bandwidth limitations when using x16/x8 or x8/x8 vs. x16/x16. The GTX 1080 Ti in SLI is also featured.
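For a sense of scale on the interfaces involved, here’s a quick back-of-envelope sketch (our arithmetic, using the standard PCIe 3.0 signaling rate and nVidia’s quoted NVLink totals):

```python
# PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding,
# so usable per-lane bandwidth is just under 1 GB/s in each direction.
PCIE3_LANE_GBPS = 8.0 * (128 / 130) / 8  # ~0.985 GB/s per lane, per direction

for lanes in (16, 8):
    print(f"PCIe 3.0 x{lanes}: ~{PCIE3_LANE_GBPS * lanes:.1f} GB/s per direction")

# NVLink totals as quoted for Turing (bidirectional):
print("NVLink, RTX 2080 Ti: 100 GB/s total")
print("NVLink, RTX 2080:     50 GB/s total")
```

That works out to roughly 15.8GB/s for x16 and 7.9GB/s for x8, per direction – the gap we’re probing with the x8/x8 configurations.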

Note that we most recently visited PCIe bandwidth limitations in this post, featuring two Titan Vs, and must now revisit the topic. We have to determine whether an 8086K and Z370 platform, which runs multi-GPU at x8/x8, is sufficient for benchmarking, and answering that requires a second platform for comparison – the 7980XE and X299 DARK that we previously used to take a top-three world record.

It’s more “RTX OFF” than “RTX ON” at the moment. The number of games that include RTX-ready features at launch is zero. The number of tech demos is growing by the hour – the final hours – but tech demos don’t count. It’s impressive to see what nVidia is doing in its “Asteroids” mesh shading and LOD demonstration. It is also impressive to see the Star Wars demo in real-time (although we have no camera manipulation, oddly, which is suspect). Neither of these, unfortunately, is a playable game, and the users for whom the RTX cards are presumably made are gamers. You could then argue that nVidia’s Final Fantasy XV benchmark demo, which does feature RTX options, is a “real game” with the technology – except that the demo is utterly, completely untrustworthy, even though it had some of its issues resolved previously (but not all – culling is still dismal).

And so we’re left with RTX OFF at present, which leaves us with a focus primarily upon “normal” games, thermals, noise, overclocking on the RTX 2080 Founders Edition, and rasterization.

We don’t review products based on promises. It’s cool that nVidia wants to push for new features. It was also cool that AMD did with Vega, but we don’t cut slack for features that are unusable by the consumer.

Our reviews of the new nVidia RTX 2080 and RTX 2080 Ti launch today, with the cards themselves launching tomorrow, and we have standalone benchmarks going live for both the RTX 2080 Founders Edition and RTX 2080 Ti Founders Edition. Additional reviews of EVGA’s XC Ultra and ASUS’ Strix will go live this week, with an overclocking livestream starting tonight (9/19) at around 6-7PM EST. In the meantime, we’re here to start our review series with the RTX 2080 FE card.

NVidia’s Turing architecture has entered the public realm, alongside an 83-page whitepaper, and is now ready for technical detailing. We have spoken with several nVidia engineers over the past few weeks, attended the technical editor’s day presentations, and have read through the whitepaper – there’s a lot to get through, so we will be breaking this content into pieces with easily navigable headers.

Turing is, at its core, a modified Volta, and Volta, in turn, is a heavily modified Pascal. The core architecture isn’t wholly unrecognizable between Turing and Pascal – you’d be able to tell that they’re from the same company – but there are substantive changes within the Turing core.

We're ramping into GPU testing hard this week, with many tests and plans in the pipe for the impending and now-obvious RTX launch. As we ramp those tests, and continue publishing our various liquid metal tests (corrosion and aging tests), we're still working on following hardware news in the industry.

This week's round-up includes video-only coverage of the EVGA iCX2 mislabeling discussion that popped up on reddit (links are still below), with written summaries of IP theft and breach of trust affecting the silicon manufacturing business, "GTX" 2060 theories, the RTX Hydro Copper and Hybrid cards, Intel's 14nm shortage, and more.

We’re finally nearing completion of our office move-in – as complete as a never-ending project can be, anyway. Our set table is now done, although not shown in today’s video, and the test room is getting filled. That’s the first news item. Work is finally getting produced in the office.

Aside from that, the week has been packed with hardware news. This is one of our densest episodes in recent months, and features a response to the Tom’s Hardware debacle (by Thomas Pabst himself), NVIDIA’s own performance expectations of the RTX 2080, AMD strategic shuffling, and more.

As always, show notes are after the video.

We’re at PAX West 2018 for just one day this year (primarily for a discussion panel), but stopped by the Gigabyte booth for a hands-on with the new RTX cards. As with most other manufacturers, these cards aren’t 100% finalized yet, although they do have some near-final cooling designs. The models shown today appear to use the reference PCB design with hand-made elements on the coolers, as partners had limited time to prepare. Gigabyte expects to have custom PCB solutions at a later date.

We had an opportunity to disassemble multiple EVGA RTX video cards, including the EVGA RTX 2080 and RTX 2080 Ti, the latter with assistance from Der8auer at Caseking’s booth. Our coverage is still going live as we edit, render, and upload, but the immediate news item pertains to die size.

Update: Added a correction for SM / CUDA Core numbers, now that full details have been leaked.

NVIDIA announced its new Turing video cards for gaming today, including the RTX 2080 Ti, RTX 2080, and RTX 2070. The cards move forward with an upgraded-but-familiar Volta architecture, with some changes to the SMs and memory. The new RTX 2080 and 2080 Ti ship with reference cards first, with most partner cards arriving at roughly the same time (and some more advanced models coming a month or more later, depending on the partner). The board partners did not receive pricing or even card naming until around the same time as media, so expect delays in custom solutions. Note that we were originally hearing a 1-3 month latency on partner cards, but that appears to apply only to advanced models that are just now entering production. Most tri-fan models should become available on the same date.

Another major point of consideration is NVIDIA's decision to use a dual-axial reference card, eliminating much of the value of partner cards at the low end. Moving away from blower reference cards and toward dual-fan cards will most immediately impact board partners, and could mark the beginning of a slow crawl toward NVIDIA expanding its direct-to-consumer sales and bypassing partners. The RTX 2080 Ti will be priced at $1200 and will launch on September 20, with the 2080 at $800 (also September 20) and the 2070 at $600 (TBD release date).

It’s hard to intentionally get scammed – to set out and really try to get ripped off, outside of maybe paying AT&T or Spectrum for internet. We still tried, though. We bought this GTX 1050 “1GB” card that was listed on eBay. At least, that’s what it was called. The card was $80 and was advertised as a new GTX 1050, and even came with this definitely-not-questionable CD and unbranded brown box. Open up GPU-Z, and it even thinks this is a GTX 1050, and knows it has 1GB of RAM. Today, we’ll benchmark the card and explain how this scam works.
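A side note on detection: a flashed vBIOS can spoof the device name that utilities read, but the silicon still reports its real compute capability through the CUDA driver. Below is a hypothetical sanity check – not how we tested, just a sketch assuming a CUDA-enabled PyTorch install – that cross-references the two:

```python
import torch  # assumes a CUDA-enabled PyTorch build

# A genuine GTX 1050 is Pascal silicon (GP107), which reports
# compute capability 6.1; Fermi-era cards, for example, report 2.x.
name = torch.cuda.get_device_name(0)
major, minor = torch.cuda.get_device_capability(0)
print(f"Reported name: {name}; compute capability: {major}.{minor}")

if "1050" in name and (major, minor) != (6, 1):
    print("Name/silicon mismatch -- this is not a real GTX 1050.")
```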

We’ll keep this one short; despite benchmarking a full suite of games, you really sort of get the point after 3-4 charts. The more important thing – the only important thing, really – is what’s under the cooler. We’ll take the card apart after a couple of charts and talk about what’s really in there, because it sure doesn’t behave like a GTX 1050 would (not even one with “1GB” of VRAM, which doesn’t exist).

A quick note: There is no officially sanctioned or created GTX 1050 “1GB” card, and so the usual board partners (and nVidia) have no part in this. This is sold as an unbranded, brown box video card on eBay.

This is something we haven’t seen before. NVidia has taken a relatively successful card, the GT 1030, and implanted DDR4 in place of GDDR5. The card is effectively running on system memory, which is a tremendous downgrade. The memory bandwidth reduction is roughly three-fold, dropping from 48GB/s to about 16GB/s with DDR4, but the part that’s truly wrong is that nVidia kept the same product name.
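The bandwidth math is simple enough to show in a few lines (a sketch using the commonly listed specs – a 64-bit bus on both variants, ~6Gbps effective for the GDDR5 and ~2.1Gbps for the DDR4):

```python
# Memory bandwidth = (bus width in bytes) * effective data rate.
# Both GT 1030 variants use a 64-bit bus; only the memory type changes.
bus_bytes = 64 / 8  # 8 bytes per transfer

gddr5_rate = 6.0  # GDDR5, ~6 Gbps effective
ddr4_rate = 2.1   # DDR4-2100-class, ~2.1 Gbps effective

print(f"GT 1030 GDDR5: {bus_bytes * gddr5_rate:.0f} GB/s")   # 48 GB/s
print(f"GT 1030 DDR4:  ~{bus_bytes * ddr4_rate:.1f} GB/s")   # ~16.8 GB/s
```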

The GT 1030 has become an interesting product, and that’s largely because of the mining boom and GPU scarcity issues from earlier this year. Typically, the GT 1030 – or similarly ultra-low-end cards – would not get our recommendation, as a GTX 1050 or RX 550 would make more sense at a similar price. Earlier this year, though, even GTX 1050s and RX 550s had evaporated, leaving only overpriced GT 1030 GDDR5 cards (which we were somewhat OK with recommending). Fortunately, performance was decent. Was. Before the DDR4 surgery.

It’s time to benchmark the GT 1030 against the GT 1030 Bad Edition, which ships with DDR4 instead of GDDR5 but carries the same name as the original product. In a previous rant, we railed against this choice because it misleads consumers – whether intentionally or unintentionally – into purchasing a product that doesn’t reflect the benchmarks. If someone looks up GT 1030 benchmarks, they’ll find our GDDR5 version tests, and those results are wildly different from the similarly priced GT 1030 DDR4 card’s performance. On average, particularly on Newegg, there is about a $10 difference between the two cards.

The GT 1030 with DDR4 is one of the most egregious missteps we’ve seen when it comes to product marketing. NVidia has made a lot of great products in the past year – and we’ve even recommended the GT 1030 GDDR5 card in some instances, which is rare for us – but the DDR4 version under the same name was a mistake.

