EVGA’s CES 2017 suite hosted a new set of 10-series GPUs with “ICX” coolers, an effort to move past the cooling shortcomings of EVGA’s troubled ACX series. The ACX and ICX coolers will coexist (for now, at least), with each SKU taking slightly different price positioning in the market. Although EVGA wouldn’t give us any useful details about the specifications of the ICX cooler, we were able to figure most of it out through observation of the physical product.

For the most part, the ICX cooler has the same ID (industrial design) – the front of the card is nigh-identical to the front of the ACX cards, the LED placement and functionality are the same, and the form factor is effectively the same. That’s not special. What’s changed is the cooling mechanism. Chief among the changes is EVGA’s fundamentally revamped focus on which devices are being cooled on the board. As we’ve demonstrated time and again, the GPU should no longer be the focal point of cooling solutions. Today, with Pascal’s reduced operating voltage (and therefore, temperature), the VRM and VRAM run at comparatively more significant temperatures. Most last-generation GPU cooling solutions don’t put much focus on non-GPU device cooling, and GPU cores are now efficient enough that cooling efforts should be diverted to FETs, capacitor banks, and potentially VRAM (though that is less important).

Two EVGA GTX 1080 FTW cards have now been run through a few dozen hours of testing, each passing through real-world, synthetic, and torture testing. We've been following this story since its onset, initially validating preliminary thermal results with thermal imaging, but later stating that we wanted to follow up with direct thermocouple probes to the MOSFETs and PCB. We set forth with the goal of creating the end-all, be-all set of test data for VRM thermals. We have tested every reasonable scenario for these cards, including SLI, and have even intentionally attempted to incinerate the cards by running ridiculous use scenarios.

Thermocouples were attached directly to the back-side of the PCB (at the hotspot previously discovered), the opposing MOSFET (#2, counting from the bottom up), and MOSFET #7. The seventh and second MOSFETs are those which seem to be most commonly singed or scorched in user photos of allegedly failed EVGA 10-series ACX 3.0 cards, including the GTX 1060 and GTX 1070. Direct probe contact with these MOSFETs will provide more finality to our testing results, with significantly greater accuracy and understanding than can be achieved with a thermal imager pointed at the rear side of the PCB. Even testing with a backplate isn't ideal for thermal cameras, as the emissivity of the metal makes for questionable results -- not to mention that the plate visually obstructs the actual components. And although we did mirror EVGA & Tom's DE's testing methodology when checking the impact of thermal pads on the cards, even that approach is imperfect (it turns out we were pretty damn accurate, but not perfect -- more on that later). The pads act as an insulator, again hiding the components and assisting in the spread of heat across a larger surface area. That's what they're designed to do, of course, but for a true reading, we needed today's tests.

We're working on finalizing our validation of the EVGA VRM concerns that arose recently, addressed by the company with the introduction of a new VBIOS and optional thermal pad solution. We tested each of these updates in our previous content piece, showing a marked improvement from the more aggressive fan speed curve.

Now, that stated, we still wanted to dig deeper. Our initial testing did apply one thermocouple to the VRM area of the video card, but we weren't satisfied with the application of that probe. It was enough to validate our imaging results, which were built around validating Tom's Hardware DE's results, but we needed to isolate a few variables to learn more about EVGA's VRM.

This tutorial walks through the process of installing EVGA's thermal pad mod kit on the GTX 1080 FTW, 1070 FTW, and non-FTW cards of similar PCB design. Our first article on EVGA's MOSFET and VRM temperatures can be found here, and we more recently posted thermographic imaging and testing data pertaining to EVGA's solution to its VRM problems. If you're out of the loop, start with that content, then come back here for a tutorial on applying EVGA's fix.

The thermal mod kit from EVGA includes two thermal pads, for which we have specified the dimensions below (width/height), a tube of thermal compound, and some instructions. That kit is provided free to affected EVGA customers, but you could also buy your own thermal pads (~$7) of comparable size if EVGA cannot fulfill a request.

We received a shipment of EVGA GTX 1080 FTW cards today and immediately deployed them in our test bench. We've logged about eight hours of burn-in on the 1080 FTW without thermal pads so far, and we've also got a 1080 FTW with thermal pads for additional testing. In the process of testing this hardware, GamersNexus received a call from EVGA with pertinent updates to the company's VRM temperature solution: the company will now be addressing its VRM heat issues with a BIOS update in addition to the optional thermal pad replacement. We have briefly tested each solution. Our finalized testing will be online within a few days, once we've had more time to burn in the cards, but we've got initial thermographic imaging and decibel level tests for now.

EVGA's BIOS update will, as we understand it, only modify the fan speed curve so that it is more aggressive; there should be no additional changes to the BIOS beyond this. Presently, the GTX 1080 FTW tends to run its fans at around ~1600RPM when under load (maxing out at around ~1700RPM). This results in a completely acceptable GPU diode reading of roughly 72C (or ~50C delta T over ambient), but doesn't allow for VRM cooling, given the lack of a thermal interface between the PCB back-side and the backplate. The new fan curve will hit 2200RPM, a jump to ~80% in Afterburner/Precision from the original ~60% (max ~65%). We've performed initial dB testing to look at the change in noise output versus fan RPM. Our thermal images also look at the EVGA GTX 1080 FTW with its backplate removed (a stock model) at the original fan RPM and at our manually imposed 2200RPM fan speed.
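As a rough sanity check on those figures, here's a minimal sketch of the percent-to-RPM conversion, assuming a linear fan controller and a hypothetical ~2750RPM maximum (inferred from 2200RPM landing at ~80% -- our assumption, not an EVGA spec):

```python
# Sanity-check the fan curve figures above. The ~2750RPM maximum is a
# hypothetical value inferred from 2200RPM reading as ~80% -- not an EVGA spec.
ASSUMED_MAX_RPM = 2750.0

def percent_to_rpm(percent: float) -> float:
    """Convert an Afterburner/Precision fan percentage to approximate RPM."""
    return ASSUMED_MAX_RPM * percent / 100.0

def rpm_to_percent(rpm: float) -> float:
    """Convert a reported fan RPM back to an approximate percentage."""
    return 100.0 * rpm / ASSUMED_MAX_RPM

print(rpm_to_percent(1600))  # ~58% -- in line with the original ~60% curve
print(percent_to_rpm(80))    # 2200RPM -- the new VBIOS target
```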

EVGA has been facing thermal issues with its ACX series coolers, as pointed out by Tom's Hardware - Germany earlier this week. We originally thought these issues to be borderline acceptable, since Tom's was reporting maximum VRM temperatures of ~107-114C. Those temperatures would still allow EVGA's over-spec VRM to function: granted its 350A rating, derating at that heat would still land output capacity around 200A to the GPU. A GTX 1080 will pull somewhere around 180A without an extreme overclock, so that was borderline, but not catastrophic.

Unfortunately for EVGA, damage to the VRM scales nearly exponentially with temperature. Exceeding 125C on the VRM with EVGA's design could result in MOSFET failure, effectively triggered by a runaway thermal scenario where the casing is blown, and OCP/OTP might not be enough to prevent the destruction of a FET or two. The VRM derates and loses efficiency at this point, and would be incapable of sustaining the amperage demanded by higher power draw Pascal chips.
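To make the headroom math concrete, here's a toy derating model in Python. The derating threshold and slope are invented for illustration (real derating follows the FET datasheet), but the 350A rating and ~180A draw come from the figures above:

```python
# Toy VRM derating model. The threshold and slope are illustrative
# assumptions, not datasheet values; 350A and 180A come from the article.
VRM_RATED_AMPS = 350.0   # rated VRM output
GPU_DRAW_AMPS = 180.0    # approximate GTX 1080 draw without an extreme OC

def derated_capacity(temp_c: float, start_c: float = 90.0, pct_per_c: float = 0.02) -> float:
    """Linearly reduce output capacity above a threshold temperature."""
    over = max(0.0, temp_c - start_c)
    return VRM_RATED_AMPS * max(0.0, 1.0 - pct_per_c * over)

for temp in (90, 107, 114, 125):
    cap = derated_capacity(temp)
    print(f"{temp}C: ~{cap:.0f}A capacity, {cap - GPU_DRAW_AMPS:+.0f}A headroom")
```

In this toy model, capacity hovers near the ~200A figure through the 107-114C range and falls below the GPU's ~180A demand past 125C -- exactly the runaway territory described above.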

We've still got a few content pieces left over from our recent tour of LA-based hardware manufacturers. One of those pieces, filmed with no notice and sort of on a whim, is our tear-down of an EVGA GTX 1080 Classified video card. EVGA's Jacob Freeman had one available and was game to watch a live, no-preparation tear-down of the card on camera.

This is the most meticulously built GTX 1080 we have yet torn to the bones. The card has an intensely over-built VRM with high-quality inductors and power stages, using doublers to achieve its 14-phase power design (7x2). An additional three phases are set aside for memory, cooled in tandem with the core VRM, GPU, and VRAM by an ACX 3.0 cooler. The PCB and cooler meet through a set of screws, each anchored to an adhesive that prevents direct contact between screw and PCB (although unnecessary, a nice touch), with the faceplate and accessories mounted via Allen-keyed screws.

It's an exceptionally easy card to disassemble. The unit is rated to draw 245W through the board (30W more than the 215W draw of the GTX 1080 Hybrid), theoretically targeted at high sustained overclocks with its master/slave power target boost. It's not news that Pascal devices seem to cap their maximum frequency at around the 2050-2100MHz range, but there are still merits to an over-built VRM. Among them: heat spread over a greater area of the cooler, and lower efficiency losses from heat or low-quality phases. With the Classified, it's also a prime target for modification using something like the EK Predator 280 or open loop cooling. Easy disassembly and high performance match well with liquid.
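The heat-spread point above is easy to show with the conduction-loss equation: per-FET loss is I²R, so splitting the same current across more phases cuts each FET's heat quadratically. A quick sketch, using a hypothetical on-resistance (the Rds(on) value is ours, not from the Classified's datasheet):

```python
# Per-phase conduction loss: I^2 * R per FET, assuming an even current split.
# The 0.004-ohm Rds(on) is a hypothetical value for illustration only.
TOTAL_AMPS = 180.0     # ballpark GTX 1080 core current from earlier coverage
RDS_ON_OHMS = 0.004

def loss_per_phase(phases: int) -> float:
    """Conduction loss in watts for a single phase's FET."""
    i = TOTAL_AMPS / phases
    return i * i * RDS_ON_OHMS

for n in (6, 10, 14):
    print(f"{n} phases: ~{loss_per_phase(n):.1f}W per FET")
# 6 phases: ~3.6W per FET; 14 phases: ~0.7W -- less heat per device,
# spread across more of the cooler's surface area.
```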

Implementation of liquid coolers on GPUs makes far more sense than on the standard CPU. We've shown in testing that actual performance can improve as a result of a better cooling solution on a GPU, particularly when replacing weak blower fan or reference cooler configurations. With nVidia cards, Boost 3.0 dictates clock-rate based upon a few parameters, one of which (temperature) is remedied with more efficient GPU cooling solutions. On the AMD side of things, our RX 480 Hybrid mod garnered some additional overclocking headroom (~50MHz), but primarily reduced noise output.
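As a simplified illustration of the temperature parameter's effect: Pascal's boost clock steps down in ~13MHz bins as the GPU diode crosses successive temperature thresholds. The bin size matches behavior we've observed; the threshold spacing below is a made-up stand-in for nVidia's actual (undocumented) table:

```python
# Toy model of Boost 3.0's temperature response. The ~13MHz bin size matches
# observed Pascal behavior; threshold placement here is illustrative only.
BIN_MHZ = 13
MAX_BOOST_MHZ = 2088

def boosted_clock(temp_c: int, first_step_c: int = 38, step_every_c: int = 5) -> int:
    """Estimate the sustained boost clock at a given diode temperature."""
    if temp_c <= first_step_c:
        return MAX_BOOST_MHZ
    bins_dropped = (temp_c - first_step_c) // step_every_c + 1
    return MAX_BOOST_MHZ - BIN_MHZ * bins_dropped

for t in (35, 50, 65, 80):
    print(f"{t}C -> ~{boosted_clock(t)}MHz")  # cooler GPU, higher sustained clock
```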

Clock-rate also stabilizes with better cooling solutions (and that includes well-designed air cooling), which helps sustain more consistent frametimes and tightens frame latency. We call these metrics 1% and 0.1% lows, though that presentation of the data is still looking at frametimes at the 99th and 99.9th percentiles.
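For reference, here's one way to compute those metrics from a frametime log, with hypothetical numbers (this sketch uses numpy's percentile function; our actual tooling differs):

```python
import numpy as np

def low_metrics(frametimes_ms):
    """Average FPS plus 1% and 0.1% lows: the FPS equivalents of the
    99th and 99.9th percentile frametimes (the slowest 1% / 0.1% of frames)."""
    ft = np.asarray(frametimes_ms, dtype=float)
    return (1000.0 / ft.mean(),
            1000.0 / np.percentile(ft, 99),
            1000.0 / np.percentile(ft, 99.9))

# Hypothetical log: mostly 16.7ms frames (~60FPS) with 15 stutters at 40ms.
log = [16.7] * 985 + [40.0] * 15
avg, low1, low01 = low_metrics(log)
print(f"AVG {avg:.0f} FPS, 1% low {low1:.0f} FPS, 0.1% low {low01:.0f} FPS")
# -> AVG ~59 FPS; both lows collapse to 25 FPS, exposing the stutter.
```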

The EVGA GTX 1080 Hybrid has thus far had the most interesting cooling solution of the AIO-cooled GPUs we've torn down this generation, but Gigabyte's Xtreme Waterforce card threatens to take that title. In this review, we'll benchmark the Gigabyte GTX 1080 Xtreme Waterforce vs. the EVGA 1080 FTW Hybrid and MSI/Corsair 1080 Sea Hawk. Testing focuses primarily on thermals and noise, with FPS and overclocking thrown into the mix.

A quick thanks to viewer and reader Sean for loaning us this card, since Gigabyte doesn't respond to our sample requests.

The GTX 1060 3GB ($200) card's existence is curious. The card was rumored to exist prior to the 1060 6GB's official announcement, and those rumors were quickly debunked as mythological. Exactly one month later, nVidia did announce a 3GB GTX 1060 variant – but with one fewer SM, reducing the core count by 10%. That drops the GTX 1060 from 1280 CUDA cores to 1152 CUDA cores (128 cores per SM), alongside 8 fewer TMUs. Of course, there's also the memory reduction from 6GB to 3GB.

The rest of the specs, however, remain the same. The clock-rate has the same baseline 1708MHz boost target, the memory speed remains 8Gbps effective, and the GPU itself is still a declared GP106-400 chip (rev A1, for our sample). That makes this most of the way toward a GTX 1060 as initially announced, aside from the disabled SM and halved VRAM. Still, nVidia's marketing language declared a 5% performance loss from the 6GB card (despite a 10% reduction in cores), and so we decided to put those claims to the test.
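The core math is simple enough to lay out (nothing here is new data; it just restates the SM arithmetic above):

```python
# SM arithmetic for the GTX 1060 3GB: one disabled SM at 128 cores per SM.
CORES_PER_SM = 128
CORES_6GB = 1280  # 10 SMs

cores_3gb = CORES_6GB - CORES_PER_SM          # 9 SMs -> 1152 cores
core_cut = 1 - cores_3gb / CORES_6GB          # 0.10

print(cores_3gb)            # 1152
print(f"{core_cut:.0%}")    # 10% fewer cores vs. nVidia's claimed ~5% FPS loss
```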

In this benchmark, we'll be reviewing EVGA GTX 1060 3GB vs. GTX 1060 6GB performance in a clock-for-clock test, with 100% of the focus on FPS. The goal here is not to look at the potential for marginally changed thermals (which hinges more on the AIB cooler than anything) or potentially decreased power, but to instead look strictly at the impact on FPS from the GTX 1060 3GB card's changes. In this regard, we're very much answering the “is a 1060 6GB worth it?” question, just in a less search-engine-friendly fashion. The GTX 1060s will be clocked the same, within normal GPU Boost 3.0 variance, and will only be differentiated in SM & VRAM count.

For those curious, we previously took this same magnifying glass to the RX 480 8GB & 4GB cards, pitting the two against one another in a head-to-head. In that scenario, AMD also reduced the memory clock of the 4GB model, but the rest remained the same.

EVGA's Power Link was shown briefly in our Computex coverage, but the unit has received a few updates since then and is closer to finalization. The idea of the Power Link is straightforward: it's an L-shaped enclosure with power rails that wraps around the right side of the card, and exists solely to route cables away from the top power inputs. The cables instead connect to the Power Link, on the right side of the card, with the Link tapping into the video card's power more discreetly (under the guise of an EVGA-branded “L”).

The new Power Link, shown for the first time at PAX West, has made it possible to shift the power headers connecting to the card so that more device layouts are accommodated. The Link still won't work for power headers whose clip faces opposite the reference layout (clip toward the back of the card), but it will now work better for cards whose PCI-e connections sit slightly left/right of where EVGA's are located. We're told that this Link will fit most cards on the market (reverse clip orientation notwithstanding), including non-EVGA cards.
