Hardware Guides

NVIDIA’s support of its multi-GPU technology has followed a tumultuous course over the years. Following a heavy push for adoption (which landed flat with developers), the company sidelined its own SLI tech with Pascal, cutting multi-GPU support down to two concurrent devices. Even in press briefings, the company acknowledged waning interest in and support for multi-GPU, and so the marketing efforts died entirely with Pascal. Come Turing, a renewed interest in encouraging multi-card purchases has spurred development effort to coincide with NVLink, a 100GB/s symmetrical interface on the RTX 2080 Ti; the RTX 2080 maintains a 50GB/s bus. It seems that NVIDIA may be pushing for multi-GPU again, and NVLink could enable actual performance scaling with two RTX 2080 Tis or RTX 2080s (conclusions notwithstanding). Today, we're benchmarking the RTX 2080 Ti with two-way NVLink, including tests for PCIe 3.0 bandwidth limitations when using x16/x8 or x8/x8 vs. x16/x16. The GTX 1080 Ti in SLI is also featured.

Note that we most recently visited the topic of PCIe bandwidth limitations in a previous post featuring two Titan Vs, and we must now revisit it. We have to determine whether an 8086K and Z370 platform, which runs multi-GPU at x8/x8, is sufficient for this benchmarking; answering that requires a second platform capable of x16/x16 – the 7980XE and X299 DARK that we previously used to take a top-three world record.
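For reference on why x8/x8 could matter, the theoretical numbers are straightforward to compute. Below is a quick sketch of the bandwidth gap being tested, computed from the PCIe 3.0 spec and NVIDIA's published NVLink figures -- these are theoretical maxima, not measured throughput:

```python
# Back-of-envelope comparison of theoretical PCIe 3.0 and NVLink bandwidth.
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding.
PCIE3_GTS_PER_LANE = 8.0
ENCODING_EFFICIENCY = 128.0 / 130.0  # 128b/130b line code overhead

def pcie3_bandwidth_gbps(lanes: int) -> float:
    """Theoretical one-directional PCIe 3.0 bandwidth in GB/s."""
    return PCIE3_GTS_PER_LANE * ENCODING_EFFICIENCY * lanes / 8.0  # bits -> bytes

for lanes in (8, 16):
    print(f"PCIe 3.0 x{lanes}: ~{pcie3_bandwidth_gbps(lanes):.2f} GB/s per direction")

# NVLink on Turing: 25 GB/s per direction per link.
# The RTX 2080 Ti exposes two links, the RTX 2080 one.
for card, links in (("RTX 2080 Ti", 2), ("RTX 2080", 1)):
    per_dir = 25 * links
    print(f"{card} NVLink: {per_dir} GB/s per direction ({per_dir * 2} GB/s bidirectional)")
```

Even a full x16 slot (~15.75GB/s per direction) is well below a single NVLink connection, which is part of what makes the x8/x8 question interesting.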

NVIDIA’s Turing architecture has entered the public realm, alongside an 83-page whitepaper, and is now ready for technical detailing. We have spoken with several NVIDIA engineers over the past few weeks, attended the technical editor’s day presentations, and read through the whitepaper – there’s a lot to get through, so we will be breaking this content into pieces with easily navigable headers.

Turing is a modified Volta at its core, and Volta is itself a heavily modified Pascal. The core architecture isn’t wholly unrecognizable between Turing and Pascal – you’d be able to figure out that they’re from the same company – but there are substantive changes within the Turing core.

Alongside the question of how frequently liquid metal should be replaced, one of the most common liquid metal questions pertains to how safe it is to use with different metals. This includes whether liquid metal is safe to use with bare copper, like you’d find in a laptop, or with aluminum, and also includes the staining effect of liquid metal on nickel-plated copper (like on an IHS). This content explores the electrochemical interactions of liquid metal with the three most common heatsink materials, using Thermal Grizzly’s Conductonaut. Conductonaut is among the most prevalent liquid metals on the market, but other options, like Coollaboratory’s Liquid Ultra, are made of similar compounds.

Conductonaut is a eutectic alloy – a mix of gallium, indium, and tin, commonly known as Galinstan. We don’t know the exact ratios in Conductonaut, but most liquid metals use this same set of elements in varying percentages, with gallium typically comprising the majority of the mixture.

We’re at PAX West 2018 for just one day this year (primarily for a discussion panel), but stopped by the Gigabyte booth for a hands-on with the new RTX cards. As with most other manufacturers, these cards aren’t 100% finalized yet, although they do have some near-final cooling designs. The models shown today appear to use the reference PCB design with some hand-made elements on the coolers, as partners had limited time to prepare. Gigabyte expects to have custom PCB solutions at a later date.

“How frequently should I replace liquid metal?” is one of the most common questions we get. Liquid metal is applied between the CPU die and IHS to improve thermal conductivity from the silicon, but there hasn’t been much long-term testing on liquid metal endurance versus age. Cracking and drying are some of the most common concerns, leading users to wonder whether liquid metal performance will fall off a cliff at some point. One of our test benches has been running thermal endurance cycling tests for the last year now, since September of 2017, just to see if the liquid metal has aged at all.

This is a case study. We are testing with a sample size of one, so consider it an experiment and case study rather than an all-encompassing test. It is difficult to conduct long-term endurance tests with multiple samples, as it would require dozens (or more) of identical systems to really build out a large database. From that angle, again, please keep in mind that this is a case study of one test bench, with one brand of liquid metal.
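For those curious about the mechanics of this kind of endurance cycling, a minimal sketch of a load/idle cycling loop is below. This is illustrative only, assuming a Linux bench with stress-ng and psutil available; the phase lengths are placeholders, not our actual test parameters:

```python
# Minimal sketch of a thermal endurance cycling loop (Linux).
# Assumes stress-ng is installed and psutil can read a CPU temperature sensor;
# cycle lengths below are illustrative placeholders, not actual test parameters.
import csv
import subprocess
import time

import psutil

LOAD_SECONDS = 600   # heat phase: full CPU load
IDLE_SECONDS = 600   # cool-down phase: system idle

def cpu_temp_c() -> float:
    """Return the first available CPU temperature reading, in Celsius."""
    temps = psutil.sensors_temperatures()
    for name in ("coretemp", "k10temp"):
        if name in temps:
            return temps[name][0].current
    raise RuntimeError("No known CPU temperature sensor found")

with open("thermal_cycle_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        # Load phase: stress all cores, then log the hot temperature.
        subprocess.run(["stress-ng", "--cpu", "0", "--timeout", f"{LOAD_SECONDS}s"], check=True)
        writer.writerow([time.time(), "load", cpu_temp_c()])
        # Idle phase: let the system cool, then log the idle temperature.
        time.sleep(IDLE_SECONDS)
        writer.writerow([time.time(), "idle", cpu_temp_c()])
        f.flush()
```

The repeated heat-up/cool-down swings are what stress the liquid metal interface, as thermal expansion and contraction are what would drive cracking or pump-out over time.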

The “correct” method for applying thermal paste is still the subject of arguments, despite plenty of articles with testing and hard numbers to back them up. As we mentioned in our Threadripper paste application comparison, the general consensus for smaller desktop CPUs is that, as long as enough paste is applied to cover the IHS, every method is basically the same. The “blob” method works just fine. We have formally tested this for Threadripper (which cares greatly about IHS coverage) and X99 CPUs, but not for smaller desktop SKUs. Today's content debuts our formal thermal paste quantity testing -- not just method of application, but amount -- and looks specifically at the more common desktop CPUs. We are finally addressing the YouTube-wide comment of “too much” or “too little” paste, likely so prevalent as a result of everyone’s personal exposure to this one specific aspect of PC building.

Again, this isn’t really about whether an “X” or “dot” or thin spread is better (none is superior, assuming all cover the IHS equally -- it’s just a matter of how easily they achieve that goal). This is about how much quantity matters.
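For a sense of scale on quantity, a back-of-the-envelope volume estimate follows. The IHS dimensions and bond line thickness are rough assumptions for a mainstream desktop CPU, not measured values:

```python
# Back-of-envelope estimate of thermal paste volume needed to cover an IHS.
# IHS dimensions and bond line thickness are rough assumptions for a
# mainstream desktop CPU, not measured values.
IHS_WIDTH_MM = 30.0    # approximate contact area of a mainstream desktop IHS
IHS_DEPTH_MM = 30.0
BOND_LINE_UM = 100.0   # a compressed paste layer is on the order of 25-100 microns

volume_mm3 = IHS_WIDTH_MM * IHS_DEPTH_MM * (BOND_LINE_UM / 1000.0)
print(f"~{volume_mm3:.0f} mm^3, i.e. ~{volume_mm3 / 1000.0:.2f} mL of paste")
# -> ~90 mm^3, or ~0.09 mL: a pea-sized blob (~0.1-0.2 mL) already exceeds
#    what actually remains between the IHS and cold plate after mounting.
```

Under these assumptions, the amount of paste that stays in the joint is tiny; anything beyond it just squeezes out the sides, which is why the "too much" question is worth testing empirically.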

For today, we’re talking about volt-frequency scalability on our 8086K one more time. This time, coverage includes manual binning of our core, as we already illustrated limitations of the IMC in the overclocking stream. We’ve also already tested the CPU for thermal and acoustic performance when considering liquid metal applications.

The Intel i7-8086K is a binned i7-8700K, so we thought we’d see what bin we got. This testing exhibits simple volt-frequency curves as plotted against Blender and Firestrike stability testing. Note that our stability tests were limited to 30 minutes in an intensive Blender workload. Realistically, this is the most we can achieve for publication purposes, and 99% of CPUs that pass this test will remain stable. If we were selling these CPUs, like Silicon Lottery does, it’d obviously be preferable to test for many hours.
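For illustration, the binning procedure reduces to a search for the lowest stable voltage at each frequency step. The sketch below is a record-keeping aid, not automation -- vcore changes and the 30-minute Blender pass are manual BIOS-and-bench steps, represented here by placeholder prompts:

```python
# A minimal record-keeping sketch of the binning search: for each frequency,
# step vcore up until a 30-minute Blender run passes. Voltage changes and the
# stability run itself are manual steps, represented here by prompts.
FREQUENCIES_MHZ = [4900, 5000, 5100, 5200]   # example bins, not our exact steps
VCORE_START, VCORE_STEP, VCORE_LIMIT = 1.20, 0.01, 1.45  # volts; illustrative

def run_blender_stability(freq_mhz: int, vcore: float, minutes: int = 30) -> bool:
    """Placeholder for the manual test: set vcore in BIOS, run Blender, report back."""
    answer = input(f"Did {freq_mhz} MHz @ {vcore:.2f} V survive {minutes} min of Blender? [y/n] ")
    return answer.strip().lower() == "y"

def bin_cpu() -> dict:
    curve = {}
    for freq in FREQUENCIES_MHZ:
        vcore = VCORE_START
        while vcore <= VCORE_LIMIT:
            if run_blender_stability(freq, vcore):
                curve[freq] = vcore  # lowest voltage that passed at this frequency
                break
            vcore = round(vcore + VCORE_STEP, 3)
        else:
            break  # frequency unreachable within the voltage limit; stop binning
    return curve  # the volt-frequency curve, as plotted in content like this

print(bin_cpu())
```

A "good bin" in this framing is simply a chip whose curve sits lower and extends further right than average -- less voltage for the same frequency, or higher frequency within the same voltage ceiling.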

Frequency is the most advertised spec of RAM. As anyone who’s dug a little deeper knows, memory performance depends on timings as well -- and not just the primary ones. We found this out the hard way while doing comparative testing for an article on extremely high-frequency memory that we could never get stable. We shelved that article indefinitely, but due to reader interest (thanks, John), we decided to explore memory subtimings in greater depth.

This content hopes to define memory timings and demystify the primary timings, including CAS (CL), tRCD, tRP, and tRAS. As we define primary memory timings, we’ll also demonstrate how some memory ratios work (and how they can sometimes operate out of ratio), and how much secondary and tertiary timings (like tRFC) can impact performance. Our goal is to later revisit this topic with a deep-dive on secondary and tertiary timings, similar to this one.
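As a concrete example of how frequency and CAS interact, absolute latency in nanoseconds can be computed directly from CL and the data rate. This is the standard conversion; the kits below are illustrative examples, not our test hardware:

```python
# Convert CAS latency (in clock cycles) to absolute latency (in nanoseconds).
# latency_ns = CL * 2000 / data_rate, since DDR transfers twice per clock.
def cas_latency_ns(cl: int, data_rate_mts: int) -> float:
    return cl * 2000.0 / data_rate_mts

# Example kits (illustrative, not our test hardware):
for cl, rate in ((15, 3000), (16, 3200), (18, 3600)):
    print(f"DDR4-{rate} CL{cl}: {cas_latency_ns(cl, rate):.2f} ns")
# All three work out to 10 ns even: a higher frequency with proportionally
# looser primary timings is a wash at first order, which is why secondary
# and tertiary timings end up mattering so much in practice.
```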

We got information and advice from several memory and motherboard manufacturers in the course of our research, and we were warned multiple times about the difficulty of tackling this subject. On the one hand, it’s easy to get lost in minutiae, and on the other it’s easy to summarize things incorrectly. As ASUS told us, “you need to take your time on this one.” This is a general introduction, to be followed by another article with more detail on secondary and tertiary timings.

One of our Computex advertisers was CableMod, who are making a new vertical GPU mount that positions the video card farther back in the case, theoretically lowering thermals. We wanted to test this claim properly. It makes logical sense that a card positioned farther from the glass would operate cooler, but we wanted to test to what degree that’s true. Most vertical GPU mounts do fine for open loop cooling, but suffocate air-cooled cards by limiting the gap between the card and the glass to an inch or two at most. The CableMod mount should push cards closer to the motherboard, which has other interesting thermal characteristics that we’ll get into today.

We saw several cases at Computex that aim to move to rotating PCIe expansion slots, meaning that some future cases will accommodate GPUs positioned farther toward the motherboard. Not all cases are doing this, leaving room for CableMod to compete, but it looks like Thermaltake and Cooler Master are moving in this direction.

Designing and developing a single case can cost hundreds of thousands of dollars, but the machinery used to make those cases costs millions. In a recent tour of Lian Li’s case manufacturing facility in Taiwan, we got to see first-hand the advanced and largely autonomous hydraulic presses, laser cutters, automatic shaping machines, and other equipment used to make a case. Some of these tools apply hundreds of thousands of pounds of force to case paneling, upwards of 1 million Newtons, and others use high current to spot-weld pieces to aluminum paneling. Today, we’re walking through the start-to-finish process of how a case is made.

The first step of case manufacturing at the Lian Li facility is to design the product. Once this process is done, CAD files go to Lian Li’s factory across the street to be turned into a case. In a simplified, canonical view of the manufacturing process, the first step is design, followed by raw materials and their preparation, then either a laser cutter for basic shapes or a press for tooled punch-outs, then washing, grinding, flattening, welding, and anodizing.
