Liquid-cooling the AMD Vega: Frontier Edition card has proven an educational experience for us, yielding new information about power leakage and solidifying our belief in a power wall. We also learned that overclocking without thermal barriers (or thermal-induced power barriers) grants significant performance uplift in some scenarios, including gaming and production, though it comes at the cost of roughly 33A drawn from the PSU over the 12V rails.
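For rough context – and assuming all of that ~33A is delivered over the 12V rails – that works out to roughly P = V × I ≈ 12V × 33A ≈ 396W of draw attributable to the card.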

Our results for the AMD Vega: Frontier Edition liquid-cooling hybrid mod are in, and this review covers the overclocking scalability, power limits, thermal change, and more.

The Hybrid mod was detailed in build log form over in part 1 of the endeavor. This mod wasn't as straightforward as most, seeing as we didn't have any 64x64mm brackets for securing the liquid cooler to the card. By drilling through an Intel mounting plate for an Asetek cooler, we were ultimately able to get an Asetek 570LC onto the card, which we later equipped with a Gentle Typhoon 120mm fan. VRM FET cooling was handled by aluminum finstacks secured with thermal adhesive, cooled by one or two Corsair ML120 fans. That said, this VRM cooling solution also wasn't necessary – we could have operated with just the fans, and did at one point operate with just the heatsinks (and indirect airflow).

Specs and prices for AMD’s upcoming Ryzen Threadripper CPUs have been announced, as well as a general release date. The 12C/24T 1920X and 16C/32T 1950X will be available worldwide starting in “Early August,” with prebuilt Alienware systems available for preorder starting July 27th. According to AMD:

“Both are unlocked, use the new Socket TR4, have quad-channel DDR4, and feature 64 lanes of PCI Express. Base clock on the Ryzen Threadripper 1950X 16-core product is 3.4 GHz with precision boost to 4.0 GHz. On the Ryzen Threadripper 1920X 12-core product, the base clock is 3.5 GHz with precision boost to 4.0 GHz.”

As an aside, manufacturers informed GamersNexus at Computex that board release dates are targeted for August 10. It’s possible that this date has changed in the time since the show, but that seems to be the known target for Threadripper.

MSI GTX 1080 Ti Lightning Tear-Down

Published July 12, 2017 at 10:15 pm

Before getting started: Our Vega FE Hybrid mod has just gone through its final data pass, and is now in video editing and writing. The content will arrive tomorrow!

With that cleared away – we know a lot of folks are excited for the mod's results – we're focusing momentarily on the MSI GTX 1080 Ti Lightning card. This is a video card that we first covered at Computex 2017, where we detailed initial specifications, MOSFETs and power components, and the target use case of XOC, or heavy overclocking. We didn't yet have information on the card's internals, but our latest tear-down (embedded below) gives some insight into the card's design. There are some unique features on this card that should pose an interesting A/B test during thermal benchmarking.

This episode of Ask GN returns with our new format, frontloading the episode with some discussion topics before feeding into the user-submitted questions. As always, for consideration in next episode, please leave your comment on the YouTube playback page or in our Ask GN Discord channel for Patreon backers.

The video opens with another “gift” from NZXT, some new power draw testing, AMD Vega naming thoughts (and rushed launches with Intel & AMD), and then addresses user questions. We hop around from liquid metal to CPU and airflow topics, giving a good spread to this episode.

Watch the video below – timestamps are listed beneath the embed:

Hardware discounts are hard to find right now, with continually rising DDR4 prices and GPU shortages driven by cryptocurrency mining – something we talked about in our latest Ask GN video and our How Manufacturers Feel About Mining video. As a result, sales on flash-based storage, memory, and GPUs have been difficult to come by.

This week, we found some good discounts on two gaming mice -- one from Corsair and the other from Logitech -- alongside the Western Digital Blue 1TB drives, a 620W PSU from Seasonic on Newegg, and GTX 1080s (still an option for those looking to build or upgrade a system).

Zotac's GTX 1080 Ti AMP! Extreme is one of the largest GTX 1080 Ti cards on the market, rivaling the Gigabyte Aorus Xtreme in form factor. The card occupies nearly three expansion slots, runs a long PCB and cooler, and hosts a dense aluminum heatsink with a three-fan cooler. It runs $750 to $770, depending on whether the “Core” edition is purchased; the only difference is the out-of-box clock, and all of these 1080 Tis perform mostly the same in games once thermals are solved for.

For its VRM, Zotac takes a brute-force approach on the 1080 Ti, using a doubled-up 8-phase design (16 power phases total) with rebranded QN3107 and QN3103 MOSFETs, driven by a UP9511 controller in 8-phase mode. The VRM is the reason for the tall card, with two phases tucked off to the side under a small aluminum heatsink that's isolated from the denser primary fin array. This theoretically helps distribute the heat load across a larger surface area. Above the VRM's isolated heatsink rests a rubber damper, which doesn't fully make contact (and is presumably there to prevent scratching in the event of over-flex during installation, as it otherwise does nothing), and then the three fans.
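To put rough, hypothetical numbers on why that matters: if the core were pulling something like 250W at ~1.0V, that's on the order of 250A total; split across 16 power stages, each stage carries roughly 16A, versus about 31A in a true 8-stage layout. Since conduction losses scale with the square of current, halving per-stage current cuts each stage's conduction heat to roughly a quarter. These figures are illustrative only – we haven't measured per-stage current on this card – but they show why doubling the stage count spreads the heat load so effectively.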

Above: Contactless rubber bumper above the MOSFET heatsink.

The card is one of the heaviest, largest cards we've looked at this generation. For some perspective, Zotac's AMP Extreme is about 1” thicker than a two-slot card (like the reference card), is longer than the Aorus Xtreme, and is heavy from the mass of aluminum resting atop the GPU. Learn more about the inner workings of this card in our tear-down.

For today, we're focusing on thermals, power, and noise, as those are the biggest differences between these 1080 Ti cards. Gaming and overclocking performance, Kingpin/Lightning cards aside, are not notably different.

Our newest revisit could also be considered our oldest: the Nehalem microarchitecture is nearly ten years old now, having launched in November 2008 after an initial showing at Intel’s 2007 Developer Forum, and we’re back to revive our i7-930 in 2017.

The sample chosen for these tests is another from the GN personal stash: a well-traveled i7-930, originally from Steve's own computer, that saw service in some of our very first case reviews but has been mostly relegated to the shelf o' chips since 2013. The 930 was one of the later Nehalem CPUs, released in Q1 2010 for $294, about one year ahead of the advent of the still-popular Sandy Bridge architecture – and with it, the i7-2600K, which we've already revisited in detail.

Sandy Bridge was a huge step for Intel, but Nehalem processors were actually the first generation branded with the now-familiar i5 and i7 naming convention (no i3s yet, though). A couple of features make these CPUs worth a look today: Hyperthreading was (re)introduced with the i7 chips, meaning even the oldest generation of i7s has 4C/8T, and overclocking could offer huge leaps in performance, limited more often by heat and safe voltages than by software stability or artificial caps.

We've already endured one launch of questionable competence this quarter with X299 and Intel's KBL-X series, and we nearly escaped Q2 without another. Vega: Frontier Edition has its ups and downs – many of which we'll discuss in a feature piece next week – but we're still learning about its quirks. “Gaming Mode” and “Pro Mode” toggling is one of those quirks; leading into this article, it was our understanding – from both AMD representatives and AMD marketing – that the switch would have a relevant impact on performance. For this reason, we benchmarked our review in the “appropriate” mode for each test: professional applications, like SPECviewperf and Blender, used Pro Mode; gaming applications used, well, Gaming Mode. Easy enough, and we figured that was a necessary methodological step to ensure the data accurately represented the card's best abilities.

Turns out, there wasn’t much point.

A quick note, here: The immediate difference when switching to “Gaming Mode” is that WattMan, with all its bugginess, becomes available. Pro Mode does not support WattMan, though you can still overclock through third-party tools – and probably should, anyway, seeing as WattMan presently downclocks memory to Fury X speeds, as it seems to have some leftover code from the Fury X drivers.

That's the big difference. Aside from WattMan, Gaming Mode technically also offers AMD Chill, which Pro Mode doesn't expose a button for. Other than these interface changes, the implicit, hidden change would be any impact on gaming or production performance.

Let’s briefly get into that.

This week's hardware news recap primarily focuses on industry topics, like new NAND from Toshiba and Western Digital, and a new SSD from Intel (the first with 64-layer 3D NAND). A few other topics sneak in, like AMD's Ryzen Pro CPU line, a Vega reminder (in the video), the death of Lexar, and a few gaming peripherals.

Through the weekend, we'll be posting our Zotac 1080 Ti Amp Extreme review, the first part of our AMD Vega: Frontier Edition Hybrid mod, and a special benchmark feature in our highly acclaimed "Revisit" series.

In the meantime, here's the last week of HW news recapped:

Reader and viewer requests piled high after our Vega: Frontier Edition review, so we pulled the most popular one from the stack to benchmark. In today's feature benchmark, we're testing Vega: FE vs. the R9 Fury X at equal core clocks, resulting in clock-for-clock testing that could be loosely referred to as an “IPC” test – not exactly the most correct phrasing, but it most quickly conveys the intent of the endeavor. We'll use the phrase “academic exercise” a few times in this piece, as it's difficult to draw strong conclusions about other Vega products from this test; ultimately, GPUs simply have too many moving parts for the simpler IPC-style benchmarking you'd find on a CPU. As one limitation is resolved, another emerges – and they're likely different on each architecture.
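One loose way to frame it (our shorthand, not a formal metric): performance-per-clock is just FPS divided by core clock, so with both cards locked to the same frequency, the clock terms cancel and the FPS ratio between Vega: FE and the Fury X directly reflects relative per-clock throughput – subject to the caveat above that memory behavior, power limits, and drivers don't bottleneck identically on each architecture.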

Regardless, we’re testing the two GPUs clock for clock to see how Vega: FE responds with the Fury X in the ring.
