Our review of the NVIDIA GTX 1080 Ti Founders Edition card went live earlier this morning, largely praising the card's gains in performance while criticizing it from a thermal standpoint. As we've often done, we decided to fix it. Modding the GTX 1080 Ti will bring our card up to higher sustained clock-rates by eliminating the thermal limitation, and can be done with the help of an EVGA Hybrid kit and a reference design. We've got both, and started the project prior to departing for PAX East this weekend.

This is part 1, the tear-down. As this content is published, we are already on-site in Boston for the event, so part 2 will not see light until early next week. We hope to finalize our data on VRM/FET and GPU temperatures (as they relate to clock speed) immediately following PAX East. These projects are always exciting, as they help us learn more about how a GPU behaves. We did similar projects for the RX 480 and GTX 1080 at launch last year.

Here's part 1:

The GPU diode is a bad means of controlling fan RPM at this point; it's not an indicator of total board thermals by any stretch. GPUs have become efficient enough that GPU-governed PWM for fans means lower RPMs, which means less noise – a good thing – but also worsened cooling of the still-hot VRMs. We have been talking about this for a while now, most recently in our in-depth EVGA VRM analysis during the Great Thermal Pad Fracas of 2016. That analysis showed that the thermals were largely a non-issue, though not totally excusable. EVGA's subsequent VBIOS update and thermal pad mods were sufficient to resolve any lingering concern; if you're curious to learn more about that, it's worth checking out the original post.
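To make the failure mode concrete, here's a minimal sketch of the difference between GPU-governed PWM and a multi-sensor approach. This is illustrative Python only, not EVGA's firmware; the sensor values and curve points are hypothetical:

```python
# Minimal sketch: why a GPU-diode-only fan curve underserves the VRM.
# Sensor values and curve points are hypothetical, not EVGA's firmware logic.

def duty_from_temp(temp_c, curve):
    """Linearly interpolate a fan duty (%) from a list of (temp_c, duty%) points."""
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1] if temp_c > curve[-1][0] else curve[0][1]

CURVE = [(30, 20), (60, 40), (72, 60), (85, 100)]

gpu_c, vrm_c = 72, 105  # efficient GPU, hot VRM -- the scenario described above

# GPU-governed PWM: the fan only ever sees the (cool) GPU diode.
gpu_only_duty = duty_from_temp(gpu_c, CURVE)  # -> 60%

# Multi-sensor approach: drive the fan off the hottest monitored component.
multi_sensor_duty = max(duty_from_temp(t, CURVE) for t in (gpu_c, vrm_c))  # -> 100%
```

With an efficient GPU holding 72C, the GPU-only curve settles at ~60% duty even while the hypothetical VRM sits at 105C; keying the fan off the hottest sensor catches exactly that case.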

VBIOS updates and thermal pad mods were not EVGA's only response to this. Internally, the company set out to design a new PCB + cooler combination that would better detect high-heat operation on non-GPU components, and would further protect those components with a 10A fuse.

In our testing today, we'll be fully analyzing the efficacy of EVGA's new "ICX" cooler design, which will coexist with the long-standing ACX cooler. In our thermal analysis and review of the EVGA GTX 1080 FTW2 (~$630) & SC2 ICX cards (~$590), we'll compare ACX vs. ICX coolers on the same card, measure MOSFET & VRAM temperatures with thermocouples and NTC thermistors, and test individual cooler component performance. This includes analysis down to the impact of the new backplate, among other tests.

Of note: There will be no FPS benchmarks in this review. All ICX cards with SC2 and FTW2 suffixes ship at the exact same base/boost clock-rates as their preceding SC & FTW counterparts. This means that FPS will be governed only by GPU Boost 3.0; that is to say, any FPS difference seen between an EVGA GTX 1080 FTW & EVGA GTX 1080 FTW2 will be entirely the result of manufacturing differences at the GPU level, which are uncontrollable in testing. Such differences fall within a percentage point or two and are, again, not a result of the ICX cooler. Our efforts are therefore better spent on the only things that matter with this redesign: cooling performance and noise. Gaming performance remains the same, barring any thermal throttle scenarios – and those aren't a concern here, as you'll see.

EVGA's CLC 120 cooler landed on our bench shortly after the EVGA CLC 280 ($130), which we reviewed last week against the NZXT X62 & Corsair H115i. The EVGA CLC 120 is priced at $90, making it competitive with other RGB-illuminated coolers, but perhaps a bit steep next to the cheaper 120mm AIOs on the market. Regardless, 120mm territory is where air coolers start to claw back their value in performance-per-dollar; EVGA has chosen a tough market in which to debut a low-end cooler, despite the exceptionally strong positioning of its CLC 280 (as stated in our review).

Before diving into this review, you may want to read the EVGA CLC 280 review, the NZXT Kraken X42/X52/X62 review, or the subsequent Kraken tear-down.

EVGA's closed-loop liquid cooler, named "Closed-Loop Liquid Cooler," will begin shipping this month in 280mm and 120mm variants. We've fully benchmarked the new EVGA CLC 280 against NZXT's Kraken X62 & Corsair's H115i V2 280mm coolers, including temperature and noise testing. The EVGA CLC 280, like both of these primary competitors, is built atop Asetek's Gen5 pump technology and differentiates itself in the usual ways: fan design and pump plate/LED design. We first discussed the new EVGA CLCs at CES last month (where we also detailed the new ICX coolers), including some early criticism of the software's functionality, but EVGA made several improvements prior to our receipt of the review product.

The EVGA CLC 280 enters the market at $130 MSRP, partnered with the EVGA CLC 120 at $90 MSRP. For frame of reference, the competing-sized NZXT Kraken X62 is priced at ~$160, with the Corsair H115i priced at ~$120. Note that we also have A/B cowling tests toward the bottom for performance analysis of the unique fan design.

Relatedly, we would strongly recommend reading our Kraken X42, X52, & X62 review for further background on the competition. 

EVGA's CES 2017 suite hosted a new set of 10-series GPUs with "ICX" coolers, an effort to shore up the cooling capabilities of EVGA's troubled ACX series. The ACX and ICX coolers will coexist (for now, at least), with each SKU taking slightly different price positioning in the market. Although EVGA wouldn't give us any useful details about the specifications of the ICX cooler, we were able to figure most of it out through observation of the physical product.

For the most part, the ICX cooler has the same ID (industrial design) – the front of the card is nigh-identical to the front of the ACX cards, the LED placement and functionality are the same, and the form factor is effectively the same. That's not what's special. What's changed is the cooling mechanism. Major changes include EVGA's fundamentally revamped focus on which devices are being cooled on the board. As we've demonstrated time and again, the GPU should no longer be the sole focal point of cooling solutions. Today, with Pascal's reduced operating voltage (and therefore, temperature), VRMs and VRAM run at comparatively more significant temperatures. Most last-generation GPU cooling solutions don't put much focus on non-GPU device cooling, and GPU cores are now efficient enough that cooling effort is better diverted to FETs, capacitor banks, and potentially VRAM (though that is less important).

Two EVGA GTX 1080 FTW cards have now been run through a few dozen hours of testing, each passing through real-world, synthetic, and torture tests. We've been following this story since its onset, initially validating preliminary thermal results with thermal imaging, but later stating that we wanted to follow up with thermocouple probes mounted directly to the MOSFETs and PCB. We set out to create the end-all, be-all set of test data for VRM thermals. We have tested every reasonable scenario for these cards, including SLI, and have even intentionally attempted to incinerate the cards by running ridiculous use scenarios.

Thermocouples were attached directly to the back side of the PCB (at the hotspot previously discovered), the opposing MOSFET (#2, counting bottom-up), and MOSFET #7. The seventh and second MOSFETs are the ones most commonly singed or scorched in user photos of allegedly failed EVGA 10-series ACX 3.0 cards, including the GTX 1060 and GTX 1070. Direct probe contact with these MOSFETs provides more finality to the results, with significantly greater accuracy and understanding than can be achieved with a thermal imager pointed at the rear side of the PCB. Even testing with a backplate isn't really ideal with thermal cameras, as the emissivity of the metal makes for questionable results -- not to mention that the plate visually obstructs the actual components. And although we did mirror EVGA & Tom's DE's testing methodology when checking the impact of thermal pads on the cards, even that approach is not perfect (it turns out we were pretty damn accurate, but still not perfect -- more on that later). The pads act as an insulator, again hiding the components and assisting in the spread of heat across a larger surface area. That's what they're designed to do, of course, but for a true reading, we needed today's tests.
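For those curious why emissivity is the sticking point: an IR camera infers temperature from radiance, and the usual graybody correction looks roughly like this (a textbook approximation, not our camera's exact math):

$$ T_{\text{true}} \approx \left( \frac{T_{\text{apparent}}^{4} - (1 - \varepsilon)\, T_{\text{refl}}^{4}}{\varepsilon} \right)^{1/4} $$

With temperatures in Kelvin: matte PCB solder mask sits around ε ≈ 0.9 and behaves well, but bare or polished metal can fall below ε ≈ 0.2, at which point small errors in the assumed emissivity or reflected temperature swamp the reading.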

We're working on finalizing our validation of the EVGA VRM concerns that arose recently, addressed by the company with the introduction of a new VBIOS and optional thermal pad solution. We tested each of these updates in our previous content piece, showing a marked improvement from the more aggressive fan speed curve.

Now, that stated, we still wanted to dig deeper. Our initial testing did apply one thermocouple to the VRM area of the video card, but we weren't satisfied with the application of that probe. It was enough to validate our imaging results, which were themselves built to check Tom's Hardware DE's findings, but we needed to isolate a few variables to learn more about EVGA's VRM.

This tutorial walks through the process of installing EVGA's thermal pad mod kit on GTX 1080 FTW, 1070 FTW, and non-FTW cards of similar PCB design. Our first article on EVGA's MOSFET and VRM temperatures can be found here, but we more recently posted thermographic imaging and testing data pertaining to EVGA's solution to its VRM problems. If you're out of the loop, start with that content, then come back here for a tutorial on applying EVGA's fix.

The thermal mod kit from EVGA includes two thermal pads, for which we have specified the dimensions (width/height) below, a tube of thermal compound, and instructions. The kit is provided free to affected EVGA customers, but you could also buy your own thermal pads (~$7) of comparable size if EVGA cannot fulfill a request.

We received a shipment of EVGA GTX 1080 FTW cards today and immediately deployed them in our test bench. One 1080 FTW has so far undergone about 8 hours of burn-in without thermal pads, and we've also got a 1080 FTW with thermal pads for additional testing. In the process of testing this hardware, GamersNexus received a call from EVGA with pertinent updates to the company's VRM temperature solution: the company will now address its VRM heat issues with a BIOS update in addition to the optional thermal pad replacement. We have briefly tested each solution. Our finalized testing will be online within a few days, once we've had more time to burn in the cards, but we've got initial thermographic imaging and decibel level tests for now.

EVGA's BIOS update will, as we understand it, only make the fan speed curve more aggressive. There should not be additional changes to the BIOS beyond this. Presently, the GTX 1080 FTW tends to run its fans at around ~1600RPM when under load (peaking at around ~1700RPM). This results in a completely acceptable GPU diode reading of roughly 72C (or ~50C delta T over ambient), but doesn't allow for VRM cooling, given the lack of a thermal interface between the PCB back side and the backplate. The new fan curve will hit 2200RPM, a jump to ~80% in Afterburner/Precision from the original ~60% (max ~65%). We've performed initial dB testing to look at the change in noise output versus fan RPM. Our thermal images also look at the EVGA GTX 1080 FTW with its backplate removed (a stock model) at both the original fan RPM and our manually imposed 2200RPM fan speed.
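As a quick sanity check on those figures (assuming, loosely, that duty cycle maps linearly to RPM, which real fans only approximate):

```python
# Back-of-envelope check on the fan figures above.
# Assumes duty% maps linearly to RPM, which real fans only approximate.

rpm_new, duty_new = 2200, 0.80
implied_max_rpm = rpm_new / duty_new          # ~2750 RPM at 100% duty

old_duty = 0.60
implied_old_rpm = implied_max_rpm * old_duty  # ~1650 RPM -- consistent with
                                              # the ~1600RPM observed under load
print(round(implied_max_rpm), round(implied_old_rpm))  # 2750 1650
```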

EVGA has been facing thermal issues with its ACX series coolers, as pointed out by Tom's Hardware - Germany earlier this week. We originally thought these issues borderline acceptable, since Tom's was reporting maximum VRM temperatures of ~107-114C. Those temperatures would still allow EVGA's over-spec VRM to function: granted its 350A capability, derating at that heat would still leave output capacity of around 200A to the GPU. A GTX 1080 will pull somewhere around 180A without an extreme overclock, so that was borderline, but not catastrophic.
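The ~180A figure makes sense once you remember that VRM output current is delivered at the GPU's core voltage, not at 12V. As a rough worked example (assuming ~180W of core power at ~1.0V under load, both ballpark values):

$$ I_{\text{GPU}} = \frac{P_{\text{core}}}{V_{\text{core}}} \approx \frac{180\ \text{W}}{1.0\ \text{V}} = 180\ \text{A} $$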

Unfortunately for EVGA, temperature increases at the VRM produce nearly exponential increases in damage. Exceeding 125C on the VRM with EVGA's design could result in MOSFET failure, effectively a runaway thermal scenario in which the FET casing blows, and OCP/OTP might not be enough to prevent the destruction of a FET or two. The VRM also derates and loses efficiency at these temperatures, becoming incapable of sustaining the amperage demanded by higher-power-draw Pascal chips.
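The "nearly exponential" claim follows the usual Arrhenius behavior of semiconductor wear-out. As an illustrative model (not a FET-specific datasheet figure), the acceleration factor between two junction temperatures is:

$$ AF = \exp\!\left[ \frac{E_a}{k} \left( \frac{1}{T_1} - \frac{1}{T_2} \right) \right] $$

where $E_a$ is the activation energy, $k$ is Boltzmann's constant, and $T_1 < T_2$ are absolute temperatures. The familiar rule of thumb that failure rate roughly doubles for every 10C rise falls out of this for typical activation energies.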
