Hardware Guides

Since our delid collaboration with Bitwit, we’ve been considering expanding VRM temperature testing on the ASUS Rampage VI Extreme to determine at what point the VRM needs direct cooling. This expanded into determining whether it’s even reasonable to expect the stock heatsink to handle the 7980XE’s overclocked heat load: We are looking for the point at which power draw becomes too high to reasonably operate without a fan directly over the heatsink.

This VRM thermal benchmark specifically looks at the ASUS Rampage VI Extreme motherboard, which uses one of the better X299 heatsinks for its IR3555 60A power stages. The IR3555 has an internal temperature sensor, which ASUS taps into for a safety throttle in EFI. As we understand it, the stock configuration sets a VRM throttle temperature of 120C – we believe this is internal temperature, though the diode could also be placed between the FETs, in which case the internal temperatures would be higher.

Tripping VRM overtemperature isn’t something we do too often, but it happened when working on Bitwit Kyle’s 7980XE. We’re working on a “collab” with Kyle, as the cool kids call it, and delidded an i9-7980XE for Kyle’s upcoming $10,000 PC build. The delidded CPU underwent myriad thermal and power tests, including similar testing to our previous i9-7980XE delid & 7900X “thermal issues” content pieces. We also benchmarked sealant vs. no sealant (silicone adhesive vs. nothing), as all of our previous tests have been conducted without resealing the delidded CPUs – we just rest the IHS atop the CPU, then clamp it under the socket. For Kyle’s CPU, we’re going to be shipping it across the States, so that means it needs to not leak liquid metal everywhere. Part of this is resolved with nail polish on the SMDs, but the sealant – supposing no major thermal detriment – should also help.

Tripping overtemperature is probably the most unexpected part of this project. We figured we’d publish some data to demonstrate an overtemperature trip, and what happens when the VRMs exceed safe thermals while the CPU is technically still under TjMax.

Let’s start with the VRM side first: This is a complete offshoot discussion. We might expand it into a separate content piece with more testing, but we wanted to talk through some of the basics first. The data is primarily observational at this point, though it was logged.
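As a rough illustration of how a logged run can be checked for an overtemperature event, here’s a minimal Python sketch that scans a temperature log for readings at or above the ~120C throttle point described above. The log path, column names, and CSV format are hypothetical placeholders, not our actual logging pipeline:

```python
# Minimal sketch: scan a logged VRM temperature CSV for readings at or above
# the assumed ~120C EFI throttle point. Log path and column names are
# hypothetical placeholders, not an actual ASUS/IR3555 logging format.
import csv

THROTTLE_C = 120.0  # assumed stock VRM throttle temperature (internal sensor)

def find_overtemp_events(log_path: str, temp_column: str = "vrm_temp_c"):
    """Return (timestamp, temperature) rows that meet or exceed the throttle point."""
    events = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            temp = float(row[temp_column])
            if temp >= THROTTLE_C:
                events.append((row.get("timestamp", "n/a"), temp))
    return events

if __name__ == "__main__":
    for ts, temp in find_overtemp_events("vrm_log.csv"):
        print(f"{ts}: {temp:.1f}C >= {THROTTLE_C}C (possible VRM throttle)")
```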

AMD’s High-Bandwidth Cache Controller protocol is one of the keystones to the Vega architecture, cited by RTG lead Raja Koduri as a personal favorite feature of Vega, and highlighted in previous marketing materials as offering a potential 50% uplift in average FPS in VRAM-constrained scenarios. With a few driver revisions now behind us, we’re revisiting our Vega 56 hybrid card to benchmark HBCC in A/B fashion, testing in memory-constrained scenarios to determine efficacy in real gaming workloads.

The Windows 10 Fall Creators Update (FCU) has reportedly provided performance uplift under specific usage scenarios, most of which center around GPU-bound scenarios with Vega 56 or similar GPUs. We know with relative certainty that FCU has improved performance stability and frametime consistency with adaptive synchronization technologies – G-Sync and FreeSync, mostly – and that there may be general GPU-bound performance uplift. Some of this could come down to driver hooks and implementation in Windows; some of it could be GPU- or architecture-specific. What we haven’t seen much of is CPU-bound testing that attempts to isolate the CPU as the DUT for benchmarking.

These tests look at AMD Ryzen R7 1700 (stock) performance in Windows 10 Creators Update (version 1703, build ending in 608) versus Windows 10 Fall Creators Update. Our testing can only speak for our testing, as always, and we cannot reasonably draw conclusions across the hardware stack with these benchmarks. The tests are representative of the R7 1700 in CPU-bound scenarios, created by pairing it with a GTX 1080 Ti FTW3. Because this is a 1080 Ti FTW3, we have two additional considerations for possible performance uplift (neither of which will be represented herein):

- As an nVidia GPU, it is possible that driver/OS behavior will be different than with an AMD GPU
- As a 1080 Ti FTW3, it is possible and likely that GPU-bound performance – which we aren’t testing – would exhibit uplift where this testing does not

Our results are not conclusive of the entirety of FCU, and cannot be used to draw wide-reaching conclusions about multiple hardware configurations. Our objective is to start pinpointing where performance uplift exists and from what combination of components it can be derived. Most reports we have seen spotted uplift with 1070 or Vega 56 GPUs, which would indicate GPU-bound performance increases (particularly because said reports show bigger gains at higher resolutions). We also cannot yet speak to performance change on Intel CPUs.
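For readers curious how frametime consistency gets distilled into comparable numbers, the sketch below shows one common way of computing average FPS with 1% and 0.1% lows from a frametime capture. The exact method varies between outlets, and the sample values are invented for illustration; this is not our internal analysis tooling:

```python
# Minimal sketch: turn a list of frametimes (milliseconds) into average FPS and
# 1% / 0.1% low FPS. Methods vary; this averages the slowest slice of frames
# and converts to FPS. Sample data below is made up for illustration.

def low_fps(frametimes_ms, fraction):
    """Average FPS of the slowest `fraction` of frames (e.g. 0.01 for 1% lows)."""
    worst = sorted(frametimes_ms, reverse=True)
    count = max(1, int(len(worst) * fraction))
    slice_avg_ms = sum(worst[:count]) / count
    return 1000.0 / slice_avg_ms

def summarize(frametimes_ms):
    avg_fps = 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))
    return avg_fps, low_fps(frametimes_ms, 0.01), low_fps(frametimes_ms, 0.001)

if __name__ == "__main__":
    # Hypothetical capture: mostly ~8.3ms frames (~120FPS) with a few spikes.
    sample = [8.3] * 2000 + [20.0, 25.0, 33.0]
    avg, low1, low01 = summarize(sample)
    print(f"AVG: {avg:.1f} FPS | 1% LOW: {low1:.1f} FPS | 0.1% LOW: {low01:.1f} FPS")
```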

Our newest video leverages years of data to make a point about the case industry: Thermal testing isn't just an item for nitpicking or discussion -- it has real ramifications for frequency response, power consumption/leakage, and even gaming performance. Case design has spiraled into trends that actively worsen system performance. This is a regular cycle, to some extent, where the industry experiments with new design elements and trends -- like tempered glass and RGB lights -- and then culls the worst of the implementations. It's time for the industry to make its scheduled, pendulous swing back toward performance, though, and better accommodate thermals to prevent frequency decay on modern GPUs (which are sensitive to temperature swings).

This is a video-only format for today. Although the content starts with a joke, the video makes use of charts from the past year or two of case testing that we've done, highlighting the most egregious instances of a case impacting performance of the entire system. We hope that case manufacturers give thermals greater priority moving forward. The video makes the point, but also highlights that resolving poor case design with faster fans will negate any "silent" advantage that a case claims to offer. Find all of that below:

This testing kicked off because we questioned the validity of some cooler testing results that we saw online. We previously tested two mostly identical Noctua air coolers against one another on Threadripper – one cooler had a TR4-sized coldplate, the other had an AM4-sized coldplate – and saw differences upwards of 10 degrees Celsius. That said, until now, we hadn’t tested those Threadripper-specific CPU coolers against liquid coolers, specifically including CLCs/AIOs with large coldplates.

The Enermax Liqtech 240 TR4 closed-loop liquid cooler arrived recently, marking our first large-coldplate liquid cooler for Threadripper. The Liqtech 240 TR4 makes for a more suitable air vs. liquid comparison against the Noctua NH-U14S TR4 unit and, although liquid is objectively better at moving heat around, there’s still a major argument to be had on the front of fans and noise. Our testing includes the usual flat-out performance test and 40dBA noise-normalized benchmarking, which matches the NH-U14S, NH-U12S, NZXT Kraken X62 (small coldplate), and Enermax Liqtech 240 at 40dBA each.

This test will benchmark the Noctua NH-U14S TR4-SP3 and NH-U12S TR4-SP3 air coolers versus the Enermax Liqtech 240 TR4 & NZXT Kraken X62.

The units tested for today include:

- Noctua NH-U14S TR4-SP3
- Noctua NH-U12S TR4-SP3
- Enermax Liqtech 240 TR4
- NZXT Kraken X62

Our review of Cooler Master’s H500P primarily highlighted the distinct cooling limitation of a case which has been both implicitly and explicitly marketed as “High Airflow.” The case offered decidedly low airflow, a byproduct of covering the vast majority of the fans – the selling point of the case – with an easily removed piece of clear plastic. In initial testing, we removed the case’s front panel for a closer look at thermals without obstructions, finding a reduction in CPU temperature of ~12-13 degrees Celsius. That gave a better idea for where the H500P could have performed, had the case not been suffocated by design, and started giving us ideas for mesh mods.

The mod is shown start-to-finish in the below video, but it’s all fairly trivial: Time to build was less than 30 minutes, with the next few hours spent on testing. The acrylic top and front panels are held in by double-sided tape, but that tape’s not strong enough to resist a light shear force. The panel separates almost instantly when pressed on, with the rest of the tape removed by opposing presses down the paneling.

Cooler Master H500P Radiator Placement Guide


Radiator placement testing should be done on a per-case basis, not applied globally as a universal “X position is always better.” There are general trends that emerge, like front-mounted radiators generally resulting in lower CPU thermals in mesh-covered cases, but those do not persist to every case (see: In Win 303). The H500P is the first case for which we’ve gone out of our way to specifically explore radiator placement “optimization,” and we’ve also included some of the best fan placement positions for the case. Radiator placement benchmarking covers top versus front orientations, with push vs. pull setups tested in conjunction with Cooler Master’s 200mm fans.

Given that the 200mm fans are one of the case’s major selling points, most of our configurations for both air and liquid attempt to utilize them. Some configurations remove the fans for academic reasons, but most keep the units mounted.

Our standard test bench will be listed below, but note that we are using the EVGA CLC 240 liquid cooler for radiator placement tests, rather than the MSI air cooler. The tests maximize the pump and fan RPMs, as we care only about the peak-to-peak delta in performance, not the noise levels. Noise levels are about 50-55dBA, roughly speaking, with this setup – not really tenable.
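To show what we mean by peak-to-peak deltas, here’s a minimal Python sketch that reduces logged temperatures for each mounting configuration to its peak value and reports each configuration as a delta against the best performer. The configuration names and temperature values are placeholders for illustration, not results from this testing:

```python
# Minimal sketch: compare peak CPU temperatures across radiator/fan
# configurations and report each as a delta versus the best performer.
# Configuration names and temperatures below are placeholders, not our results.

def peak_deltas(results):
    """results: {config_name: [temperature samples in C]} -> sorted (name, peak, delta)."""
    peaks = {name: max(samples) for name, samples in results.items()}
    best = min(peaks.values())
    return sorted(((name, peak, peak - best) for name, peak in peaks.items()),
                  key=lambda item: item[1])

if __name__ == "__main__":
    logged = {
        "front_push": [61.2, 62.0, 61.8],
        "front_pull": [63.1, 63.4, 62.9],
        "top_push": [65.0, 65.7, 65.2],
    }
    for name, peak, delta in peak_deltas(logged):
        print(f"{name}: peak {peak:.1f}C (+{delta:.1f}C vs best)")
```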

For a recap of our previous Cooler Master H500P results, check our review article and thermal testing section.

Following-up our tear-down of the ASUS ROG Strix Vega 64 graphics card, Buildzoid of Actually Hardcore Overclocking now visits the PCB for an in-depth VRM & PCB analysis. The big question was whether ASUS could reasonably outdo AMD's reference design, which is shockingly good for a card with such a bad cooler. "Reasonably," in this sentence, means "within reasonable cost" -- there's not much price-to-performance headroom with Vega, so any custom cards will have to keep MSRP as low as possible while still iterating on the cooler.

The PCB & VRM analysis is below, but we're still on hold for performance testing. As of right now, we are waiting on ASUS to finalize its VBIOS for best compatibility with AMD's drivers. It seems that there is more discussion between AIB partners and AMD for this generation, which is introducing a bit of latency on launches. For now, here's the PCB analysis -- timestamps are on the left side of the video:

We’re testing gaming while streaming on the R5 1500X & i5-8400 today, two CPUs that cost about the same (MSRP is roughly $190) and appeal to similar markets. The difficulties inherent to stream benchmarking make it functionally impossible to standardize settings across CPUs. CPU changes drastically impact performance during our streaming + gaming benchmarks, which means that each CPU test falls closer to a head-to-head than an overall benchmark. Moving between R5s and R7s, for instance, completely changes the settings required to produce a playable game + stream experience – and that’s good. That’s what we want. The fact that settings have to be tuned nearly on a per-tier basis means that we’re min-maxing what the CPUs can give us, and that’s what a user would do. Creating what is effectively a synthetic test is useful for outright component comparison, but loses resolution as a viable test candidate.

The trouble comes with lowering the bar: As lower-end CPUs are accommodated and tested for, higher-end components run below their maximum throughput and appear capped in benchmark measurements. It is impossible, for example, to encode greater than 100% of frames to stream. That will always be a limitation. At this point, you either declare the CPU as functional for that type of encoding, or you constrict performance with heavier-duty encoding workloads.
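To make that 100% ceiling concrete, here’s a trivial Python sketch of the frames-encoded metric; the frame counts are hypothetical:

```python
# Trivial sketch: the encoder can never deliver more than 100% of rendered
# frames, so the metric saturates once a CPU keeps up. Counts are hypothetical.

def encoded_percent(frames_rendered: int, frames_dropped: int) -> float:
    """Percentage of rendered frames that were actually encoded for the stream."""
    return 100.0 * (frames_rendered - frames_dropped) / frames_rendered

if __name__ == "__main__":
    print(f"{encoded_percent(18000, 540):.1f}% of frames encoded")  # struggling CPU
    print(f"{encoded_percent(18000, 0):.1f}% of frames encoded")    # capped at 100%
```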

H.264 ranges from Ultrafast to Slowest settings, with steps in between identified as Superfast, Veryfast, Faster, Fast, and Medium. As encoding speed approaches the Slower settings, quality gains enter “placebo” territory: at some point, the output becomes indistinguishable from faster encoding settings, despite significantly more strain placed on the processor. The goal of the streamer is to achieve a constant framerate output – whether that’s 30FPS or 60FPS – while also maintaining a playable player-side framerate. We test both halves of the equation in our streaming benchmarks, looking at encode output and player output with equal discernment.
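For anyone wanting to see the preset trade-off firsthand, the sketch below re-encodes the same clip at a few libx264 presets through ffmpeg and times each pass. The file names and bitrate are placeholders, ffmpeg must be on PATH, and this is not our benchmarking pipeline (we measure live game + stream output, not offline encodes):

```python
# Minimal sketch: encode the same clip with several libx264 presets via ffmpeg
# and time each pass, illustrating the speed/quality trade-off of the preset
# ladder. File names and bitrate are placeholders; requires ffmpeg on PATH.
import subprocess
import time

def encode(preset: str, src: str = "capture.mp4") -> float:
    """Run a single libx264 encode at the given preset; return elapsed seconds."""
    start = time.time()
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-c:v", "libx264", "-preset", preset, "-b:v", "6000k",
         "-c:a", "copy", f"out_{preset}.mp4"],
        check=True,
    )
    return time.time() - start

if __name__ == "__main__":
    for preset in ("ultrafast", "veryfast", "medium"):
        print(f"{preset}: {encode(preset):.1f}s")
```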
