No reference card has impressed us this generation, at least as far as the enthusiast market is concerned. Primary complaints have centered on thermal limitations and excessive heat generation, despite reasonable use cases for SIs and mITX form factor deployments. For our core audience, though, it's made more sense to recommend AIB partner models for superior cooling, pre-overclocks, and (normally) lower prices.
But that's not always the case – sometimes, as with today's review unit, the price climbs. This new child of Corsair and MSI carries on the Hydro GFX and Seahawk branding, respectively, and is posted at ~$750. The card is the product of a partnership between the two companies, with MSI providing access to the GP104-400 chip and a reference board (FE PCB), and Corsair providing an H55 CLC and SP120L radiator fan. The companies sell their cards separately, but are selling the same product; MSI calls this the “Seahawk GTX 1080 ($750),” while Corsair sells its version only on its webstore as the “Hydro GFX GTX 1080.” The combination is one we first looked at with the Seahawk 980 Ti vs. the EVGA 980 Ti Hybrid, and we'll be making the EVGA FTW Hybrid vs. Hydro GFX 1080 comparison in the next few days.
For now, we're reviewing the Corsair Hydro GFX GTX 1080 liquid-cooled GPU for thermal performance, endurance throttles, noise, power, FPS, and overclocking potential. We will primarily refer to the card as the Hydro GFX, as Corsair is the company responsible for providing the loaner review sample. Know that it is the same as the Seahawk.
NVIDIA GeForce GTX 1080 Specs vs. GTX 1070, GTX 1060, GTX 980 Ti, GTX 980, GTX 960
NVIDIA Pascal vs. Maxwell Specs Comparison

| | GTX 1080 | GTX 1070 | GTX 1060 | GTX 980 Ti | GTX 980 | GTX 960 |
|---|---|---|---|---|---|---|
| GPU | GP104-400 Pascal | GP104-200 Pascal | GP106 Pascal | GM200 Maxwell | GM204 Maxwell | GM204 |
| Fab Process | 16nm FinFET | 16nm FinFET | 16nm FinFET | 28nm | 28nm | 28nm |
| Memory Capacity | 8GB | 8GB | 6GB | 6GB | 4GB | 2GB, 4GB |
| Memory Clock | 10Gbps GDDR5X | 4006MHz | 8Gbps | 7Gbps GDDR5 | 7Gbps GDDR5 | 7Gbps |
| Power Connectors | 1x 8-pin | 1x 8-pin | 1x 6-pin | 1x 8-pin | 2x 6-pin | 1x 6-pin |
| Release Price | Reference: $700 | | | | | |
The specs for all GTX 1080s are the same, as one might expect, with variance only emerging via the cooling solution and pre-overclock values. AIB partners further differentiate themselves with software solutions (MSI and Corsair stick with the former's Afterburner) and with warranty plans.
The Hydro GFX GTX 1080, aka Seahawk, pre-overclocks to a boosted 1847MHz in the “OC mode” that's pre-applied to our review sample. In “gaming mode,” the card boosts to 1822MHz, and “silent mode” boosts to 1733MHz – the same as the stock GTX 1080 with nVidia specs.
Corsair and MSI mostly stick to the stock memory clock for the Hydro GFX, about 10Gbps (or 10GHz) for the reference design. OC Mode pushes us to 10.1GHz (10108MHz) effective memory clock, which is distilled down to the actual clock by dividing by 2 for DDR and again by 2 for GDDR5.
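That conversion is easy to sketch out (a quick Python illustration; the 10108MHz figure is OC Mode's effective clock from above):

```python
def actual_memory_clock_mhz(effective_mhz):
    # Divide by 2 for the double data rate, and by 2 again for
    # GDDR5-style clocking, as described above.
    return effective_mhz / 2 / 2

print(actual_memory_clock_mhz(10108))  # 2527.0 -- the clock most tools report
```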
Corsair and MSI use a single 8-pin header, more than sufficient for the GTX 1080's GP104-400 GPU. MSI previously told us that it uses a custom VBIOS with a marginally increased VDDC limit over the FE VBIOS, but we've yet to see any reasonable OC gain from such changes. VBIOS is thoroughly locked down this generation, leaving little room for AIB partners to negotiate advantages.
Tearing the Hydro GFX will teach us more about what to expect:
Corsair GTX 1080 Hydro GFX Tear-Down
Taking apart the Corsair Hydro GFX GTX 1080 (or “MSI Seahawk 1080,” same thing) reveals that the PCB is the very same used for nVidia's reference cards. We first showed the Founders Edition PCB in our GTX 1080 Hybrid build log, lightly detailing its 5+1 phase power design for the VRM, 8Gb Micron modules (for 8GB GDDR5X), and down-costed absentees on the board. The VRM, for instance, could add one full set of caps, FETs, and an inductor for an additional phase, but the PCB shared by the FE and, by extension, the Hydro GFX leaves this blank. That leaves us with the 5+1 setup, for which the initial overclocking is shown in our GTX 1080 review. We'll get to the Hydro GFX overclock momentarily.
The above video details our tear-down of the Corsair Hydro GFX / MSI Seahawk GTX 1080. Thankfully, unlike the hellish-to-disassemble Founders Edition, the Corsair and MSI amalgam sticks entirely to Phillips head screws, mostly of the #1 size. There are 8 Phillips screws securing the somewhat flimsy backplate (it's clear that this one is mostly for looks, though some structural support is provided), then 4 screws securing the pump block to the PCB. 6 Phillips screws and 2 hex screws (for DVI) secure the expansion plate to the card, with another set of 6 Phillips screws securing the shroud to the baseplate.
The baseplate is used to sink heat from the VRAM and VRM, which then sees dissipation from the blower fan. The baseplate isn't finned, but GDDR5X doesn't generate a ton of heat and the card's OC potential is fairly limited, so the baseplate + blower solution is sufficient. We'll show this later.
With the baseplate removed, the GP104-400 (rev A1) GPU is revealed, along with the expected 8x 8Gb VRAM modules, 5+1 VRM, and the rest of the PCB. A splitter cable merges the pump power and fan power into the PCB's PWM fan header. The blower fan is modulated by thermal demand, as is normal.
The Corsair H55 CLC sees deployment in the Seahawk/Hydro GFX. From quick measurements and visual inspection, it appears as if this variant of the H55 uses a flat coldplate for its cooling solution -- an improvement over the curvature found in most CPU liquid coolers. A slight convex bow is useful for dissipating heat across an IHS -- a curved surface -- and for dealing with unique CPU hotspots. These hotspots don't exist on a GPU, though; at least, not the same hotspots. A GPU is a flat piece of silicon atop a substrate, with no IHS between the GPU and its cooling solution. Making direct, perfectly flat contact will immediately improve performance over a run-of-the-mill CPU CLC, which may bow in a way that prevents full, flat contact to the surface.
Mounting tension also matters, and imperfect mounting pressure can skew thermals negatively. The Corsair unit appears to just barely make acceptable contact with the GPU silicon, a gap that seems to be corrected for with additional torque on the screws. The company has slotted some o-rings between the CLC standoffs and the PCB, it appears to prevent cosmetic damage, so steps have been taken to account for this. The marginal contact is a result of the tall baseplate, which exceeds the z-height of the silicon and so would require a copper protrusion for full contact, or another solution -- like additional torque on the screws. We ran into this same issue when building our GTX 1060 Hybrid card, and solved it in a less-than-elegant way: by filing down the baseplate.
Compared to the GTX 1080 FE, the Corsair Hydro GFX is trivial to disassemble – and that's a good thing. It'd be easy to get in there and make changes or fixes, if necessary.
Reminder on Pascal Architecture & GTX 1080 Thermal/Power Design
To learn about clock gating, power savings, Boost 3.0, and Pascal architecture (including discussion on the block diagram, datapath organization, etc.), read these posts:
- Pascal GP100 Architecture Deep-Dive
- GTX 1080 Review & Architecture
- GTX 1070 Review & Architecture
- GTX 1060 Review & Architecture
Our latest AMD RX 460 review explains Polaris 10 & 11 in great detail, if that also interests you. We'd summarily recommend visiting our Titan X GN Hybrid Results content & GTX 1080 GN Hybrid results content for further discussion on Pascal functionality under varied thermal scenarios.
Here's a quick quote of some relevant pieces, pasted from our GTX 1080 review:
“GP104 is a gaming-grade GPU with no real focus on some of the more scientific applications of GP100. GP104 (and its host GTX 1080) is outfitted with 7.2B transistors, a marked growth over the GTX 980's 5.2B transistors (though fewer than the GTX 980 Ti's 8B – but transistor count doesn't mean much as a standalone metric; architecture matters).
GP104 hosts 20SMs. SM architecture is familiar in some ways to GM204: An instruction cache is shared between two effective “partitions,” with each of those owning dedicated instruction buffers (one each), warp schedulers (one each), and dispatch units (two each). The register file is sized at 16,384 x 32-bit, one per “partition” of the SM. The new PolyMorph Engine 4.0 sits on top of this, but we'll talk about that more within the simultaneous multi-projection section.
Each GP104 SM hosts 128 CUDA cores – a stark contrast from the layout of the GP100, which accommodates for FP64 and FP16 where GP104 does not (because neither is particularly useful for gamers). In total, the 20 SMs and 128 core-per-SM count spits out a total of 2560 CUDA cores. GP104 contains 20 geometry units, 64 ROPs (depicted as horizontally flanking the L2 Cache), and 160 TMUs (20 SMs * 8 TMUs = 160 TMUs).
Further, SMs each possess 256KB of register file capacity, 1x 96KB shared memory unit, 48KB of L1 Cache, and the 8 TMUs we already discussed. There are four total dedicated raster engines on Pascal GP104 (one per GPC). There are 8 Special Function Units (SFUs) per SM partition – 16 total per SM – and 8 Load/Store (LD/ST) units per SM partition. SFUs are utilized for low-level execution of mathematical instruction, e.g. trigonometric sin/cos math. LD/ST units transact data between cache and DRAM.
The new PolyMorph Engine 4.0 (PME - originating on Fermi) has been updated to support a new Simultaneous Multi-Projection (SMP) function, which we'll explain in greater depth below. On an architecture level, each TPC contains one SM and one PolyMorph Engine (10 total PolyMorph engines); drilling down further, each PME contains a unit specifically dedicated to SMP tasks.”
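The core-count arithmetic in the quoted passage is easy to sanity-check (a Python sketch, with values taken straight from our GTX 1080 review above):

```python
sms = 20            # GP104 streaming multiprocessors
cores_per_sm = 128  # CUDA cores per SM
tmus_per_sm = 8     # texture mapping units per SM

cuda_cores = sms * cores_per_sm  # 20 * 128 = 2560 CUDA cores total
tmus = sms * tmus_per_sm         # 20 * 8 = 160 TMUs total
print(cuda_cores, tmus)          # 2560 160
```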
Learn more about Asynchronous Compute, Preemptive Compute, and GP104 architecture here.
Reminder on GN's Previous GTX 1080 DIY “Hybrid” Build
As a reminder, we previously converted the 1080 FE into a liquid-cooled card – months ahead of availability of liquid-cooled GTX 1080s – and that content is available here:
Continue to page 2 for GPU testing methodology.