There are many reasons Intel may have opted for TIM with its CPUs, and given that the company hasn’t offered a statement of substance, we have no exact idea of why different materials are selected. Using TIM could be a matter of cost – as seems to be the default assumption – or it could be an undisclosed engineering challenge to do with yields (with solder), a matter of government or legal grants pertaining to environmental conscientiousness, something related to conflict-free advertising, or any number of other things. We don’t know. What we do know, and what we can test, is the efficacy of the TIM as opposed to alternatives. Intel’s statement pertaining to the use of TIM on HEDT (or any) CPUs effectively paraphrases as “as this relates to manufacturing process, we do not discuss it.” Intel sees this as a proprietary process, and so considers the subject too sensitive to share.

With an i7-7700K, TIM is perhaps more defensible – it’s certainly cheaper, and that’s a cheaper part. Once we start looking at the 7900X and other CPUs of a similar class, the argument in favor of Dow Corning’s TIM weakens. To the credit of both Intel and Dow Corning, the TIM selected is highly resistant to thermal cycling – it’ll last a long time, won’t need replacement, and shouldn’t exhibit any serious cracking or aging issues within any meaningful span of time. In essence, the platform’s usable life will expire before the CPU stops working.

But that doesn’t mean there aren’t better solutions. Intel has used solder before – there’s precedent for it – and certainly there exist thermal solutions with greater transfer capabilities than what’s used on most of Intel’s CPUs.

Our hardware news round-up for the past week is live, detailing some behind-the-scenes / early information on our thermal and power testing for the i9-7900X, the Xbox One X hardware specs, Threadripper's release date, and plenty of other news. Additional coverage includes final word on Acer's Predator 21 X, Samsung's 64-layer NAND finalization, GlobalFoundries' 7nm FinFET for 2018, and some extras.

We anticipate a slower news week for non-Intel/non-AMD entities this week, as Intel launched X299/Skylake-X and AMD is making waves with Epyc. Given the command both of these companies have over consumer news, it's likely that other vendors will hold further press releases until next week.

Find the show notes below, written by Eric Hamilton, along with the embedded video.

This 47th episode of Ask GN features questions pertaining to test execution and planning for multitasking benchmarks, GPU binning, Ryzen CPU binning, X300 mITX board availability, and more. We provide some insights as to plans for near-future content, like our impending i7-2600K revisit, and quote a few industry sources who answered questions in this week's episode.

Of note, we worked with VSG of Thermal Bench to talk heatpipe size vs. heatpipe count, then spoke with EVGA and ASUS about their GPU allocation and pretesting processes (popularly, "binning," though not quite the same). Find the new episode below, with timestamps to follow the embed:

The playful Nintendo noises emitted from our Switch came as something of a surprise following an extensive tear-down and re-assembly process. Alas, the console does still work, and we left behind breadcrumbs of our dissection within the body of the Switch: a pair of thermocouples mounted to the top-center of the SOC package and one memory package. We can’t get software-level diode readings of the SOC’s internal sensors, particularly given the locked-down nature of a console like Nintendo’s, so thermal probes give us the best insight into the console’s temperature performance. As a general rule, thermal performance is hard to keep in perspective without a comparative metric, so we need something else. That’ll be noise, for this one; we’re testing dBA output of the fan versus an effective tCase on the SOC to determine how the fan ramps.

There’s no good way to measure the Switch’s GPU frequency without hooking up equipment we don’t have, so we won’t be able to plot a frequency versus temperature/time chart. Instead, we’re looking at temperature versus noise, then using ad-hoc testing to observationally determine framerate response to various logged temperatures. Until we’ve developed tools for monitoring console FPS externally, this is the best combination of procedures we can muster.
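For illustration, here’s a minimal sketch (in Python) of how paired tCase and dBA logs could be processed to find fan ramp points. The file name, column names, and the 1 dBA step threshold are assumptions for the example, not our actual tooling:

```python
# Hypothetical log-processing sketch: pair thermocouple (tCase) and dBA
# samples, then flag the SOC temperature at which fan noise steps up.
import csv

ramp_points = []
prev_dba = None

with open("switch_thermal_log.csv", newline="") as f:
    for row in csv.DictReader(f):  # assumed columns: time_s, soc_tcase_c, dba
        dba = float(row["dba"])
        tcase = float(row["soc_tcase_c"])
        # Treat a >1 dBA jump between samples as a fan ramp event and
        # record the SOC tCase that coincided with it.
        if prev_dba is not None and dba - prev_dba > 1.0:
            ramp_points.append((tcase, dba))
        prev_dba = dba

for tcase, dba in ramp_points:
    print(f"Fan ramped near {tcase:.1f} C tCase ({dba:.1f} dBA)")
```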

GPU diode is a poor means of controlling fan RPM at this point; it’s not an indicator of total board thermal performance by any stretch. GPUs have become efficient enough that GPU-governed PWM for fans means lower RPMs, which means less noise – a good thing – but also worsened performance on the still-hot VRMs. We have been talking about this for a while now, most recently in our in-depth EVGA VRM analysis during the Great Thermal Pad Fracas of 2016. That analysis showed that the thermals were largely a non-issue, though not entirely beyond criticism. EVGA’s subsequent VBIOS update and thermal pad mods were sufficient to resolve any lingering concern; if you’re curious to learn more, it’s worth checking out the original post.
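To make the single-sensor problem concrete, here’s an illustrative sketch – not any vendor’s actual firmware – of a fan curve driven by the GPU diode alone versus one that also respects a VRM sensor. All breakpoints and temperatures are made up for the example:

```python
# Illustrative fan-curve sketch: diode-only control keeps the fan slow
# whenever the GPU core is cool, even while the VRM runs hot. Curves
# and temperatures are hypothetical.
def duty_from_temp(temp_c, curve):
    """Linearly interpolate fan duty (%) from a list of (temp, duty) points."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]

GPU_CURVE = [(30, 20), (60, 40), (85, 100)]   # diode temp -> duty %
VRM_CURVE = [(40, 20), (80, 60), (105, 100)]  # VRM temp -> duty %

gpu_c, vrm_c = 55.0, 95.0  # efficient GPU core, hard-working VRM

diode_only = duty_from_temp(gpu_c, GPU_CURVE)
multi_sensor = max(duty_from_temp(gpu_c, GPU_CURVE),
                   duty_from_temp(vrm_c, VRM_CURVE))

print(f"Diode-only duty:   {diode_only:.0f}%")    # quiet, but the VRM cooks
print(f"Multi-sensor duty: {multi_sensor:.0f}%")  # ramps up for the VRM
```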

VBIOS updates and thermal pad mods were not EVGA’s only response to this. Internally, the company set out to design a new PCB+cooler combination that would better detect high-heat operation on non-GPU components, and would further protect those components with a 10A fuse.

In our testing today, we’ll fully analyze the efficacy of EVGA’s new “ICX” cooler design, which will coexist with the long-standing ACX cooler. In our thermal analysis and review of the EVGA GTX 1080 FTW2 (~$630) & SC2 (~$590) ICX cards, we’ll compare ACX vs. ICX coolers on the same card, MOSFET & VRAM temperatures with thermocouples and NTC thermistors, and individual cooler component performance. This includes analysis down to the impact of the new backplate, among other tests.

Of note: There will be no FPS benchmarks for this review. All ICX cards with SC2 and FTW2 suffixes ship at the exact same base/boost clock-rates as their preceding SC & FTW counterparts. This means that FPS will be governed only by GPU Boost 3.0; that is to say, any FPS difference seen between an EVGA GTX 1080 FTW & EVGA GTX 1080 FTW2 will result entirely from manufacturing differences at the GPU level, which are uncontrollable in testing. Such differences will be within a percentage point or two and are, again, not a result of the ICX cooler. Our efforts are therefore better spent on the only thing that matters with this redesign: cooling performance and noise. Gaming performance remains the same, barring any thermal throttle scenarios – and those aren’t a concern here, as you’ll see.

Every now and then, a new marketing gimmick comes along that feels a little untested. MSI’s latest M.2 heat shield has always struck us as high on the list of potentially untested marketing claims. The idea that the “shield” can perform two opposing functions – shielding an SSD from external heat while somehow simultaneously sinking heat from within – seems like something written by marketing, not by engineering.

From a “shielding” standpoint, it might make sense: if you’ve got a second video card socketed above the M.2 SSD and dumping heat onto it, a shield could in fact keep heat off of SMT components, including flash modules and controllers that might otherwise sit in a direct heat path. From a heatsinking standpoint, a separate M.2 heatsink would also make sense. M.2 SSDs run notoriously hot as a result of their low surface area and general lack of housing (ignoring the M8Pe and similar devices), and running high temperatures in a case with unfavorable ambient will result in throttled performance. MSI thought that adding this “shield” to the M.2 slot would solve the issue of hot M.2 SSDs, but it’s got a few problems that don’t even require testing to understand: (1) the “shield” (or sink, whatever) doesn’t enshroud the underside of the M.2 device, where SMDs will likely be present; (2) the cover is designed more like a shield than a sink (despite MSI’s marketing language – see below), which means limited surface area and essentially zero dissipation potential.
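A back-of-envelope calculation shows why surface area is the limiting factor. Under steady-state convection, dissipation scales as Q = h × A × ΔT; the convection coefficient, dimensions, and fin multiplier below are rough assumptions for illustration, not measured figures:

```python
# Rough convection sketch: a flat plate the size of an M.2 2280 sheds
# very little heat, while added fin area multiplies dissipation. All
# numbers are assumptions for illustration.
H_CONV = 10.0  # W/(m^2*K), ballpark free-convection coefficient in a case

def dissipation_w(area_m2, delta_t_c):
    """Steady-state heat shed to ambient: Q = h * A * dT."""
    return H_CONV * area_m2 * delta_t_c

delta_t = 30.0                 # surface runs 30 C over ambient air
flat_shield = 0.080 * 0.022    # ~ one face of an M.2 2280 footprint, m^2
finned_sink = flat_shield * 6  # a modest fin stack multiplies wetted area

print(f"Flat shield: {dissipation_w(flat_shield, delta_t):.2f} W")  # ~0.5 W
print(f"Finned sink: {dissipation_w(finned_sink, delta_t):.2f} W")  # ~3.2 W
```

Against the several watts an NVMe controller can dissipate under load, a flat cover’s roughly half-watt of convective headroom is not much of a “sink.”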

Thermal cameras have proliferated to the point that people are buying them as tech toys, made possible by new prices nearer $200 than the multi-thousand-dollar thermal imaging cameras that have long been the norm. Using a thermal camera that connects to a mobile phone eliminates a lot of the cost of such a device by relying on the phone’s hardware for the post-processing and image cleanup that make these cameras semi-useful. They’re not the most accurate and should never be trusted over a dedicated, proper thermal imaging device, but they’re accurate enough for spot-checking and rapid prototyping of testing procedures.

Unfortunately, we’ve lately seen them used as sources of hard data on the thermal performance of PC hardware. For all kinds of reasons, this needs to be done with caution. We urged in our EVGA VRM coverage that thermal imaging was not perfect for the task, and later stuck thermal probes directly to the card for more accurate measurements. Even ignoring the factors of emission, transmission, and reflection (today’s topics), using thermal imaging to measure core component temperatures is methodologically flawed. Measuring the case temperature of a laptop or chassis tells us nothing more than that – the temperature of the surface materials, assuming an ideal black body with an emissivity close to 1.0. We’ll talk about that contingency momentarily.
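As a preview of that contingency, the sketch below applies the Stefan-Boltzmann relation to show how badly a camera that assumes an emissivity of 1.0 misreads a low-emissivity surface. The emissivity values and temperatures are illustrative assumptions:

```python
# Emissivity sketch: a camera assuming a black body (e = 1.0) reports an
# "apparent" temperature blending emitted and reflected radiation. Shiny
# metal (low e) mostly reflects its surroundings, so a hot heatsink can
# read near room temperature. Values here are illustrative.
def apparent_temp_c(true_c, emissivity, reflected_c=25.0):
    """Temperature an e=1.0 camera reports for a gray, opaque surface."""
    t_s = true_c + 273.15       # true surface temperature, K
    t_r = reflected_c + 273.15  # reflected background temperature, K
    # Radiance seen = emitted + reflected components (transmission ignored).
    t_app4 = emissivity * t_s**4 + (1 - emissivity) * t_r**4
    return t_app4 ** 0.25 - 273.15

print(f"Bare aluminum sink at 90 C reads ~{apparent_temp_c(90, 0.1):.0f} C")
print(f"Matte PCB at 90 C reads ~{apparent_temp_c(90, 0.9):.0f} C")
```

The same 90C part can read anywhere from roughly 34C to 85C depending purely on surface finish, which is exactly why polished heatsinks and heatpipes are treacherous targets for these cameras.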

But even so: pointing a thermal imager at a perfectly black surface and measuring its temperature tells us the temperature of that surface – nothing more. Sure, that’s useful for a few things: in laptops, it could determine whether case temperature exceeds a particular manufacturer’s skin temp specification. This is good for validating whether a device is safe to touch, or for proving that a device is too hot for actual on-lap use. We could also use this information for troubleshooting, helping to determine where hotspots sit under the hood – potentially useful in very specific cases.

That doesn’t, however, tell us the efficacy of the cooling solution within the computer. For that, we need software to measure the CPU core temperatures, the GPU diode, and potentially other components (PCH and HDD/SSD are less commonly monitored, but occasionally important). Further analysis requires direct thermocouple probes mounted to the SMDs of interest, like VRM components or VRAM; neither of those two examples is equipped with internal sensors that software – or even the host GPU – can read.
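On the software side, here’s a minimal sketch of what those diode reads look like using the psutil library on Linux (sensors_temperatures() is Linux-only and depends on the kernel exposing the sensor chips). This is an example of the class of tooling, not our test suite:

```python
# Read whatever temperature diodes the OS exposes (CPU package/cores,
# NVMe, etc.). Note what's absent: VRM and VRAM temps, which have no
# software-readable sensors on the cards discussed above.
import psutil

for chip, entries in psutil.sensors_temperatures().items():
    for entry in entries:
        label = entry.label or chip
        print(f"{chip}/{label}: {entry.current:.1f} C")
```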
