Hardware Guides

This testing kicked off because we questioned the validity of some cooler testing results we saw online. We previously tested two mostly identical Noctua air coolers against one another on Threadripper – one cooler had a TR4-sized coldplate, the other had an AM4-sized plate – and saw differences upwards of 10 degrees Celsius. That said, until now, we hadn’t tested those Threadripper-specific CPU coolers against liquid coolers, specifically CLCs/AIOs with large coldplates.

The Enermax Liqtech 240 TR4 closed-loop liquid cooler arrived recently, our first large-coldplate liquid cooler for Threadripper. The Liqtech 240 TR4 makes for a more suitable air vs. liquid comparison against the Noctua NH-U14S TR4 and, although liquid is objectively better at moving heat around, there’s still a major argument on the front of fans and noise. Our testing includes the usual flat-out performance test and noise-normalized benchmarking, which matches the NH-U14S, NH-U12S, NZXT Kraken X62 (small coldplate), and Enermax Liqtech 240 at 40dBA each.

This test will benchmark the Noctua NH-U14S TR4-SP3 and NH-U12S TR4-SP3 air coolers versus the Enermax Liqtech 240 TR4 & NZXT Kraken X62.

The units tested for today include:

- Noctua NH-U14S TR4-SP3
- Noctua NH-U12S TR4-SP3
- Enermax Liqtech 240 TR4
- NZXT Kraken X62

Our review of Cooler Master’s H500P primarily highlighted the distinct cooling limitation of a case that has been both implicitly and explicitly marketed as “High Airflow.” The case offered decidedly low airflow, a byproduct of covering the vast majority of the fans – the selling point of the case – with an easily removed piece of clear plastic. In initial testing, we removed the case’s front panel for a closer look at thermals without obstructions, finding a reduction in CPU temperature of ~12-13 degrees Celsius. That gave a better idea of where the H500P could have performed, had the case not been suffocated by design, and started giving us ideas for mesh mods.

The mod is shown start-to-finish in the below video, but it’s all fairly trivial: Time to build was less than 30 minutes, with the next few hours spent on testing. The acrylic top and front panels are held in by double-sided tape, but that tape’s not strong enough to resist a light shear force. The panel separates almost instantly when pressed on, with the rest of the tape released by opposing presses down the paneling.

Cooler Master H500P Radiator Placement Guide

Published October 13, 2017 at 3:00 pm

Radiator placement testing should be done on a per-case basis, not applied globally as a universal “X position is always better.” There are general trends that emerge, like front-mounted radiators generally resulting in lower CPU thermals for mesh-covered cases, but those do not persist to every case (see: In Win 303). The H500P is the first case for which we’ve gone out of our way to specifically explore radiator placement “optimization,” and we’ve also added some of the best fan placements for the case. Radiator placement benchmarking covers top versus front orientations, with push vs. pull setups tested in conjunction with Cooler Master’s 200mm fans.

Given that the 200mm fans are the selling point of the case – or one of the major ones, anyway – most of our configurations for both air and liquid cooling attempt to utilize those fans. Some configurations remove them, for academic reasons, but most keep the units mounted.

Our standard test bench is listed below, but note that we are using the EVGA CLC 240 liquid cooler for radiator placement tests, rather than the MSI air cooler. The tests maximize pump and fan RPMs, as we care only about the peak-to-peak performance delta, not the noise levels. Noise levels are roughly 50-55dBA with this setup – not really tenable for daily use.

For a recap of our previous Cooler Master H500P results, check our review article and thermal testing section.

Following-up our tear-down of the ASUS ROG Strix Vega 64 graphics card, Buildzoid of Actually Hardcore Overclocking now visits the PCB for an in-depth VRM & PCB analysis. The big question was whether ASUS could reasonably outdo AMD's reference design, which is shockingly good for a card with such a bad cooler. "Reasonably," in this sentence, means "within reasonable cost" -- there's not much price-to-performance headroom with Vega, so any custom cards will have to keep MSRP as low as possible while still iterating on the cooler.

The PCB & VRM analysis is below, but we're still on hold for performance testing. As of right now, we are waiting on ASUS to finalize its VBIOS for best compatibility with AMD's drivers. It seems that there is some more discussion between AIB partners and AMD for this generation, which is introducing a bit of latency on launches. For now, here's the PCB analysis -- timestamps are on the left-side of the video:

We’re testing gaming while streaming on the R5 1500X & i5-8400 today, two CPUs that cost about the same (MSRP is roughly $190) and appeal to similar markets. Difficulties inherent to stream benchmarking make it functionally impossible to standardize: CPU changes drastically impact performance during our streaming + gaming benchmarks, which means each CPU test falls closer to a head-to-head than an overall benchmark. Moving between R5s and R7s, for instance, completely changes the settings required to produce a playable game + stream experience – and that’s good. That’s what we want. The fact that settings have to be tuned nearly on a per-tier basis means that we’re min-maxing what the CPUs can give us, which is exactly what a user would do. Creating what is effectively a synthetic test is useful for outright component comparison, but loses resolution as a viable test candidate.

The trouble comes with lowering the bar: As lower-end CPUs are accommodated and tested for, higher-end components perform at lower-than-maximum throughput, but are capped in benchmark measurements. It is impossible, for example, to encode greater than 100% of frames to stream. That will always be a limitation. At that point, you either declare the CPU functional for that type of encoding, or you constrict performance with heavier encoding workloads.
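The ceiling described above can be sketched in a few lines. This is an illustrative example, not our benchmarking tooling: the encoded-frame metric saturates at 100%, so once a CPU keeps up with the stream, only a heavier encoding workload can differentiate it further.

```python
def encoded_frame_pct(frames_encoded: int, frames_submitted: int) -> float:
    """Percentage of submitted frames the encoder kept pace with, capped at 100."""
    if frames_submitted == 0:
        return 0.0
    return min(100.0, 100.0 * frames_encoded / frames_submitted)

# A CPU that keeps up and a faster CPU both report 100% - the metric can't
# separate them, which is why the workload has to be made heavier instead.
print(encoded_frame_pct(3600, 3600))  # 100.0 - keeps up, capped
print(encoded_frame_pct(2880, 3600))  # 80.0  - dropping 20% of frames
```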

H264 presets range from Ultrafast to Slowest, with intermediate steps named Superfast, Veryfast, Faster, Fast, and Medium. As encoding speed approaches the Slow presets, quality enters “placebo” territory: at some point, output becomes indistinguishable from faster presets despite significantly more strain placed on the processor. The goal of the streamer is to achieve a constant framerate output – whether that’s 30FPS or 60FPS – while also maintaining a playable player-side framerate. We test both halves of the equation in our streaming benchmarks, looking at encode output and player output with equal discernment.

This content piece aims to explain how Turbo Boost works on Intel’s i7-8700K, 8600K, and other Coffee Lake CPUs. This primarily sets forth to highlight what “Multi-Core Enhancement” is, and why you may want to leave it off when using a CPU without overclocking.

Multi-core “enhancement” options are either enabled, disabled, or “auto” in the motherboard BIOS, where “auto” has somewhat nebulous behavior, depending on the board maker. Enabling multi-core enhancement means that the CPU ignores the Intel spec, instead locking all-core Turbo to the single-core Turbo speed, which means a few things: (1) Higher voltage is now necessary, and therefore higher power draw and heat; (2) instability can be introduced to the system, as we observed in Blender on the ASUS Maximus X Hero with multi-core enhancement on the 8700K; (3) performance is bolstered in step with the higher all-core Turbo.
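The behavior can be modeled as a lookup against Intel’s per-active-core Turbo table. The bin values below are the commonly reported i7-8700K Turbo bins and are included as illustrative assumptions, not an Intel spec sheet; the point is the shape of the behavior, not the exact numbers.

```python
# Commonly reported i7-8700K Turbo bins (GHz) by number of active cores.
# These values are assumptions for illustration.
INTEL_SPEC_TURBO_GHZ = {1: 4.7, 2: 4.6, 3: 4.5, 4: 4.4, 5: 4.4, 6: 4.3}

def turbo_clock(active_cores: int, mce_enabled: bool) -> float:
    """Clock target for a given number of active cores."""
    if mce_enabled:
        # MCE ignores the table and locks every core to the 1-core bin,
        # which is where the extra voltage, heat, and instability come from.
        return INTEL_SPEC_TURBO_GHZ[1]
    return INTEL_SPEC_TURBO_GHZ[active_cores]

print(turbo_clock(6, mce_enabled=False))  # 4.3 - Intel spec all-core Turbo
print(turbo_clock(6, mce_enabled=True))   # 4.7 - MCE all-core lock
```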

We’re winding down coverage of Vega, at this point, but we’ve got a couple more curiosities to explore. This content piece looks at clock scalability for Vega across a few key clocks (for core and HBM2), and hopes to control for the CU difference, to some extent. We obviously can’t fully control down to the shader level (as CUs carry more than just shaders), but we can get close to it. Note that the video content does generally refer to the V56 & V64 difference as one of shaders, but more than shaders are contained in the CUs.

In our initial AMD shader comparison between Vega 56 and Vega 64, we saw nearly identical performance between the cards when clock-matched to roughly 1580-1590MHz core and 945MHz HBM2. We’re now exploring performance across a range of frequency settings, from ~1400MHz core to ~1660MHz core, and from 800MHz HBM2 to ~1050MHz HBM2.
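For context on what those HBM2 clocks mean in raw throughput, a back-of-envelope bandwidth calculation follows. This is a sketch assuming Vega’s 2048-bit HBM2 bus and double data rate; it is illustrative arithmetic, not a measured result.

```python
def hbm2_bandwidth_gbs(mem_clock_mhz: float, bus_width_bits: int = 2048) -> float:
    """Theoretical bandwidth in GB/s: clock (MHz) x 2 (DDR) x width (bits) / 8 / 1000."""
    return mem_clock_mhz * 2 * bus_width_bits / 8 / 1000

print(hbm2_bandwidth_gbs(945))   # 483.84 GB/s - the 945MHz clock-match point
print(hbm2_bandwidth_gbs(800))   # 409.6 GB/s  - bottom of the tested range
print(hbm2_bandwidth_gbs(1050))  # 537.6 GB/s  - top of the tested range
```

The ~800-1050MHz HBM2 sweep therefore spans roughly a 31% difference in theoretical memory bandwidth.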

This content piece was originally written and filmed about ten days ago, ready to go live, but we then decided to put a hold on the content and update it. Against initial plans, we ended up flashing V64 VBIOS onto the V56 to give us more voltage headroom for HBM2 clocks, allowing us to get up to 1020MHz easily on V56. There might be room in there for a bit more of an OC, but 1020MHz proved stable on both our V64 and V56 cards, making it easy to test the two comparatively.

We’ve talked about this in the past, but it’s worth reviving: The reason for keeping motherboard consistency during CPU testing is the boards’ inherent variance, particularly when running auto settings. Auto voltage depends on a lookup table that’s built on a per-EFI basis, which means auto VIDs vary not only between motherboard vendors, but between EFI revisions. As voltage changes, power consumption changes – the two are directly related. As a function of volts and amps, watts consumed by the CPU will increase on motherboards that push more volts to the CPU, regardless of whether the CPU needs that voltage to be stable.
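The relationship is just P = V × I. A minimal sketch, using illustrative numbers rather than measured VIDs, shows why a board with a more aggressive auto-voltage table reports higher package power for the same workload:

```python
def package_power_w(vcore_v: float, current_a: float) -> float:
    """Package power in watts as the product of core voltage and current."""
    return vcore_v * current_a

# Illustrative: same workload current, two different auto VIDs.
print(package_power_w(1.20, 100))  # ~120 W on a board with a conservative auto VID
print(package_power_w(1.35, 100))  # ~135 W on a board that overvolts on auto
```

In practice the gap is even wider than this linear model suggests, since dynamic power scales with the square of voltage, but the direction of the effect is the same.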

We previously found that Gigabyte’s Gaming 7 Z270 motherboard supplied way too much voltage to the 7700K when in auto settings, something that the company later resolved. The resolution was good enough that we now use the Gaming 7 Z270 for all of our GPU tests, following the fix of auto voltages that were too high.

Today, we’re looking at the impact of motherboards on Intel i9-7960X thermals primarily, though the 7980XE makes some appearances in our liquid metal testing. Unless otherwise noted, a Kraken X62 was used at max fan + pump RPMs.

Running through the entire Skylake X lineup with TIM vs. liquid metal benchmarking means we’ve picked up some very product-specific experience. Skylake X has a unique substrate composition wherein the upper substrate houses the silicon and some SMDs, with the lower substrate hosting the pads and some traces. This makes delidding unique as well, made easier with der8auer’s Delid Die Mate X (available in the US soon). This tutorial shows how to delid Intel Skylake X CPUs using the Die Mate X, then how to apply liquid metal. We won’t be covering re-sealing today.

Still, given the $1000-$2000 cost with these CPUs, an error is an expensive one. We’ve put together a tutorial on the delid and liquid metal application process.

Disclaimer: This is done entirely at your own risk. You assume all responsibility for any damage done to CPUs. We will do our best to detail this process so that you can safely follow our steps, and following carefully will minimize risk. Ultimately, the risk lies primarily in (1) applying too much force or failing to level the CPU, both easily avoided, or (2) applying liquid metal in a way that shorts components.

There are many reasons that Intel may have opted for TIM with their CPUs, and given that the company hasn’t offered a statement of substance, we really have no exact idea of why different materials are selected. Using TIM could be a matter of cost – as seems to be the default assumption – it could be an undisclosed engineering challenge to do with yields (with solder), it could be for government or legal grants pertaining to environmental conscientiousness, or related to conflict-free advertisements, or any number of other things. We don’t know. What we do know, and what we can test, is the efficacy of the TIM as opposed to alternatives. Intel’s statement pertaining to usage of TIM on HEDT (or any) CPUs effectively paraphrases as “as this relates to manufacturing process, we do not discuss it.” Intel sees this as a proprietary process, and so treats the subject matter as too sensitive to share.

With an i7-7700K, TIM is perhaps more defensible – it’s certainly cheaper, and that’s a cheaper part. Once we start looking at the 7900X and other CPUs of a similar class, the ability to argue in favor of Dow Corning’s TIM weakens. To the credit of both Intel and Dow Corning, the TIM selected is highly durable to thermal cycling – it’ll last a long time, won’t need replacement, and shouldn’t exhibit any serious cracking or aging issues in any meaningful amount of time. The usable life of the platform will expire prior to the CPU’s operability, in essence.

But that doesn’t mean there aren’t better solutions. Intel has used solder before – there’s precedent for it – and certainly there exist thermal solutions with greater transfer capabilities than what’s used on most of Intel’s CPUs.


