The RTX 2080 Ti failures aren’t as widespread as initial reddit threads might have suggested, but they are absolutely real. When discussing internally whether the issue of artifacting and dying RTX cards had been blown out of proportion by the internet, we were of two minds: on one side, the level of attention did seem disproportionate to the size of the issue, particularly as RMA rates are within the norm. Partners are still often under 1% and retailers are under 3.5%, which is standard. On the other side, nothing was blown out of proportion for people who spent $1,250 and received a brick in return. For those affected buyers, the artifacting is absolutely a real issue, and it deserves real attention.
This content marks the closing of a storyline for us. We published previous videos detailing a few of the failures on our viewers’ cards (on loan to GN), including an unrelated 1350MHz lock and BSOD issue. We also tested cards on our livestream to show what the artifacting looks like. Today, we’re mostly looking at thermals, firmware, the OS, and downclocking impact, and concluding what the problem isn’t (rather than what it definitively is).
With over a dozen cards mailed in to us, we had a lot to sort through over the past week. This issue certainly exists in a very real way for those who spent $1200+ on an unusable video card, but it isn’t affecting everyone. It’s far from “widespread,” fortunately, and our present understanding is that RMA rates remain within reason for most of the industry. That said, NVIDIA’s response times to some RMA requests have been slow, according to our viewers, and replacements can take upwards of a month given supply constraints in some regions. That’s a problem.
Respected manufacturers of silence-focused PC cases like be quiet! and Fractal Design use a number of tricks to keep noise levels down. These often include specially designed fans, thick pads of noise-damping foam, sealed front panels, and elaborately baffled vents. We tend to prefer high airflow to silence when given a choice, and it usually is presented that way: as a choice. The reality is that it doesn’t have to be a choice, and that an airflow-oriented case can, with minor work, achieve equivalent noise levels to a silence-focused case (while offering better thermals).
Our testing tends to reinforce that idea of a choice: our baseline results are measured with the case fans at maximum speed and therefore maximum noise, making cases like the SilverStone RL06 sound like jet engines. The baseline torture tests are good for consistency, for showcasing maximum performance, and for highlighting the performance differences between cases, but they don’t represent how most users run their PCs 24/7. Instead, most users would likely turn down the fans to an acceptable noise level, maybe even the same level as intentionally quiet cases like the Silent Base 601.
Our thesis for this benchmark paper is that fans can be turned down far enough to match the noise levels of a silence-focused case while still achieving superior thermal performance. The candidates chosen as a case study were the SilverStone Redline RL06 and the be quiet! Silent Base 601. The RL06 is one of the best-ventilated and noisiest cases we’ve tested in the past couple of years, while the SB601 is silence-focused with restricted airflow.
One variable that we aren’t equipped to measure is the type of noise. Volume is one thing, but frequency and subjective annoyance matter too. For the most part, noise-damping foam addresses high-frequency whines and shorter wavelengths, while thicker paneling addresses low-frequency hums and longer wavelengths. For today’s testing, we are focusing entirely on noise level at 20” and testing thermals at normalized volumes.
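For readers comparing our 20” figures against numbers measured at other distances, a rough conversion is possible. This is a minimal sketch assuming a free-field point source (real rooms reflect sound, so treat it as an estimate only); the function name and example values are our own for illustration:

```python
import math

def spl_at_distance(spl_ref_dba: float, d_ref: float, d_new: float) -> float:
    """Estimate sound pressure level at a new distance from a point source.

    Free-field inverse-square approximation: each doubling of distance
    drops the level by about 6 dB. Reflections in a real room will push
    measured values above this estimate.
    """
    return spl_ref_dba - 20.0 * math.log10(d_new / d_ref)

# e.g. 40 dBA measured at 20" works out to roughly 34 dBA at 40"
print(round(spl_at_distance(40.0, 20.0, 40.0), 1))
```

The 6 dB-per-doubling rule is why measurement distance must be held constant when comparing cases; our normalized-volume testing keeps the microphone fixed at 20” for exactly that reason.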
Intel’s TDP has long been questioned, but this generation put the 95W TDP under particular fire as users noticed media outlets measuring power consumption at well over 100W on most boards. It isn’t uncommon to see the 9900K at 150W or more in some AVX workloads, like Blender, far exceeding the 95W number. Aside from TDP being an imperfect specification for power, there’s also a lot about it that isn’t understood – including by motherboard manufacturers, apparently. All manufacturers are exceeding Intel’s guidance for turbo boost duration in some way, which causes the uncharacteristically high power consumption that produces unfairly advantaged performance results. The flip side is that the 9900K also looks much hotter in some tests than it would at spec.
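Intel’s turbo behavior works roughly as a moving-average power budget: the CPU may draw above PL1 (the TDP-aligned limit) for a while, as long as a running average of package power stays under PL1, with a time constant (tau) setting how long the excursion can last. The sketch below is a simplified model of that mechanism; the PL1=95W and tau=8s values are illustrative assumptions, not any specific board’s configuration:

```python
def turbo_allowed(power_samples, pl1=95.0, tau=8.0, dt=1.0):
    """Simplified model of a running-average power limit.

    The CPU may boost above PL1 while the exponentially weighted moving
    average (EWMA) of package power stays under PL1; tau is the averaging
    time constant. Values here are illustrative, not board-specific.
    Returns, per sample, whether boosting is still permitted.
    """
    alpha = dt / tau
    ewma = 0.0  # cold start: no recent power history
    allowed = []
    for p in power_samples:
        ewma = ewma + alpha * (p - ewma)
        allowed.append(ewma < pl1)
    return allowed

# A sustained 150 W draw eventually trips the 95 W average, ending turbo.
# Boards that "cheat" effectively set tau so long that it never trips.
print(turbo_allowed([150.0] * 10)[-1])  # prints False
```

Under this model, a board that raises tau toward infinity lets the 9900K sit at 150W+ indefinitely, which is exactly the behavior producing the inflated benchmark numbers and thermals described above.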
We previously deep-dived on MCE (Multi-Core Enhancement) practices with the 8700K, revealing the performance variance that can occur when motherboard makers “cheat” results by boosting CPUs out of spec. MCE has become less of a problem with Z390 – namely because it is now disabled by default on all boards we’ve tested – but boosted BCLKs are the new issue.
If you think Cinebench is a reliable benchmark, we’ve got a histogram of all of our test results for the Intel i9-9900K at presumably stock settings:
(Yes, the scale starts at a non-zero value; given a range of results from 1976 to 2300, we had to zoom in on the axis for a better histogram view.)
The scale is shrunken and starts at a non-zero value because the results are so tightly clustered, but you can still see that we’re ranging from roughly 1976 cb marks to 2300 cb marks, which is a massive range. That’s the difference between a heavily overclocked R7 2700 and an overclocked 7900X, except this is all on a single CPU. The only difference is that we used 5 different motherboards for these tests, along with a mix of auto, XMP, and MCE settings. The discussion today focuses on when it is considered “cheating” to modify CPU settings via BIOS without the user’s awareness. The most common change is to the base clock, where BIOS might report a value of 100.00 but actually produce 100.8 or 100.9 on the CPU. This functionally pre-overclocks the CPU, but in a way that is hard for most users to ever notice.
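The arithmetic behind the BCLK skew is simple: core clock is the multiplier times the base clock, so a small BCLK bump scales every ratio on the chip. A quick sketch, using a 47x all-core multiplier as an assumed example:

```python
def effective_clock_mhz(multiplier: int, bclk_mhz: float) -> float:
    """Core clock is multiplier x base clock (BCLK)."""
    return multiplier * bclk_mhz

reported = effective_clock_mhz(47, 100.0)  # what BIOS claims: 4700.0 MHz
actual = effective_clock_mhz(47, 100.8)    # what the board really runs: ~4737.6 MHz
uplift_pct = (actual - reported) / reported * 100
print(round(actual, 1), round(uplift_pct, 2))
```

A 0.8% clock uplift is small per-run, but it is free, silent performance that skews comparative benchmarks between boards that are supposedly at identical “stock” settings.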
We’re resurrecting our AMD RX Vega 56 powerplay tables mod to challenge the RTX 2070, a card that competes in an entirely different price class. It’s a lightweight versus heavyweight boxing match, except the lightweight has a gun.
For our Vega 56 card, priced between $370 and $400 depending on sales, we will be shoving an extra 200W+ of power into the core to attempt to match the RTX 2070’s stock performance. We strongly praised Vega 56 at launch for its easily modded nature, but the card has faced fierce competition from the 1070 Ti and 1070. It was also constantly out of stock or massively overpriced during the mining boom, which acted as a death knell for Vega in those months. With mining now dying down and Vega available to normal people again, pricing is competitive and compelling, and NVIDIA’s own recent fumbles have created an opening in the market.
We will be working with a PowerColor RX Vega 56 Red Dragon card at a 242% power target, pitting it against an EVGA RTX 2070 Black. The price difference is about $370-$400 vs. $500-$550, depending on where you buy your parts. We are using registry entries to trick the Vega 56 card into a power limit that exceeds the stock maximum of +50%, allowing us to go to +242%. This was done with the help of Buildzoid last year.
One final note: We must warn that we aren’t sure of the long-term impact of running Vega 56 with this much power going through it. If you want to do this yourself, be advised that long-term damage is a possibility for which we cannot account.
After our launch-day investigation into delidding the 9900K and finding its shortcomings, we’ve been working on a follow-up involving lapping the inside of the IHS and applying liquid metal to close the story on improvement potential with the delid process. We’re also returning to bring everyone back to reality on delidding the 9900K, because it’s not as easy as it may look online.
We already know that performance improvement is possible, based on our previous content and Roman’s own testing, but we’ve also said that Intel’s solder is an improvement over its previous Dow Corning paste. Considering that, in our testing, high-end Hydronaut paste performs close to the solder, that’s good news compared to the older thermal compound. Intel also needed the change for more thermal headroom, so everyone benefits – but it is possible to outperform the solder.
We always like to modify the reference cards – or “Founders Edition,” by nVidia’s new naming – to determine to what extent a cooler might be holding it back. In this instance, we suspected that the power limitations may be a harder limit than cooling, which is rather sad, as the power delivery on nVidia’s RTX 2080 Ti reference board is world-class.
We recently published a video showing the process, step-by-step, for disassembling the Founders Edition cards (in preparation for water blocks). Following this, we posted another piece wherein we built up a “Hybrid” cooling version of the card, using a mix of high-RPM fans and a be quiet! Silent Loop 280 CLC for cooling the GPU core on a 2080 Ti FE card. Today, we’re summarizing the results of the mod.
NVIDIA’s support of its multi-GPU technology has followed a tumultuous course over the years. Following a heavy push for adoption (that landed flat with developers), the company sidelined its own SLI tech with Pascal, where multi-GPU support was cut down to two concurrent devices. Even in press briefings, the company acknowledged waning interest in and support for multi-GPU, and marketing efforts died entirely with Pascal. Come Turing, a renewed interest in selling multiple cards per customer has spurred development to coincide with NVLink, a 100GB/s symmetrical interface on the 2080 Ti; the 2080 maintains a 50GB/s bus. It seems that NVIDIA may be pushing again for multi-GPU, and NVLink could enable actual performance scaling with two RTX 2080 Tis or RTX 2080s (conclusions notwithstanding). Today, we’re benchmarking the RTX 2080 Ti with NVLink (two-way), including tests for PCIe 3.0 bandwidth limitations when using x16/x8 or x8/x8 vs. x16/x16. The GTX 1080 Ti in SLI is also featured.
Note that we most recently visited the topic of PCIe bandwidth limitations in this post, featuring two Titan Vs. We have to determine whether an 8086K and Z370 platform – i.e. x8/x8 – is sufficient for multi-GPU benchmarking, and answering that requires a platform with full x16/x16 support: the 7980XE and X299 DARK that we previously used to take a top-three world record.
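The back-of-envelope numbers explain why the lane configuration matters. PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so per-direction bandwidth scales linearly with lane count, while NVLink on the 2080 Ti offers 50GB/s per direction (the 100GB/s figure is bidirectional). A quick sketch of the comparison:

```python
def pcie3_bw_gbs(lanes: int) -> float:
    """Per-direction PCIe 3.0 bandwidth: 8 GT/s per lane, 128b/130b encoding."""
    return lanes * 8.0 * (128.0 / 130.0) / 8.0  # GT/s -> GB/s after encoding

x16 = pcie3_bw_gbs(16)    # ~15.75 GB/s per direction
x8 = pcie3_bw_gbs(8)      # ~7.88 GB/s per direction
nvlink_2080ti = 50.0      # GB/s per direction (100 GB/s bidirectional)

print(round(x16, 2), round(x8, 2), round(nvlink_2080ti / x8, 1))
```

An x8 slot offers roughly half the host bandwidth of x16, and NVLink’s per-direction rate is several times either, which is why the x8/x8 vs. x16/x16 question needs direct testing rather than assumption.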
NVidia’s Turing architecture has entered the public realm, alongside an 83-page whitepaper, and is now ready for technical detailing. We have spoken with several nVidia engineers over the past few weeks, attended the technical editor’s day presentations, and have read through the whitepaper – there’s a lot to get through, so we will be breaking this content into pieces with easily navigable headers.
Turing is a modified Volta at its core, which is a heavily modified Pascal. Core architecture isn’t wholly unrecognizable between Turing and Pascal – you’d be able to figure out that they’re from the same company – but there are substantive changes within the Turing core.
Alongside the question of how frequently liquid metal should be replaced, one of the most common liquid metal-related questions pertains to how safe it is to use with different metals. This includes whether liquid metal is safe to use with bare copper, like you’d find in a laptop, or aluminum, and also includes the staining effect of liquid metal on nickel-plated copper (like on an IHS). This content explores the electrochemical interactions of liquid metal with the three most common heatsink materials, using Thermal Grizzly’s Conductonaut. Conductonaut is among the most prevalent liquid metals on the market, but other options, like Coollaboratory’s Liquid Ultra, are made of similar compounds.
Conductonaut is a eutectic alloy of gallium, indium, and tin – a mixture known as Galinstan. We don’t know the exact ratio in Conductonaut, but most liquid metals use these same three elements in varying percentages, with gallium typically comprising the majority of the mixture.