AMD today followed up its Radeon RX 480 Polaris announcement with the unveiling of its RX 470 and RX 460 graphics cards. Quickly recapping, the RX 480 will ship with >5 TFLOPS of compute performance (depending on pre-OC or other specs) and sells for ~$200 MSRP at 4GB, or more than that for 8GB – we're guessing $230 to $250 for most AIB cards. Now, with the announcement of the RX 470 and RX 460, AMD has opened up the low end of the market with a new focus on “cool and efficient” graphics solutions. Coming from a company that used to associate itself with volcanic islands, high-heat reference coolers (remedied with the Fiji series), and high power draw, the Polaris architecture promises a more power- and thermal-conscious GPU.

After a one- to two-week break for our Asia trip, which included factory tours in China and Taiwan and then Computex proper, we're back with another episode of Ask GN. This time, we address questions on rumors of the 1080 Ti's release window, the impact of overclocking on component lifespan, and the importance of CPUs in an era of ever-growing GPU load.

The week's questions are listed with timestamps below the video embed. Be sure to check out last week's episode for more of this content style. Leave questions on the Ask GN YouTube page for inclusion in next week's episode!

AMD's 14nm FinFET Radeon RX 480 was just announced at Computex, built on the new Polaris 10 architecture to deliver >5 TFLOPS of compute for $200 at a 150W TDP, shipping in 4GB and 8GB GDDR5 SKUs. We have not confirmed whether the 8GB model costs more; the exact language was “RX 480 set to drive premium VR experiences into the hands of millions of consumers, priced from just $199.”

“From,” of course, means “starting at” – so it could be that the 8GB model costs more. Regardless, AMD has firmly entered the mid-range market with its 8GB RX 480, landing where the R9 380X and GTX 960 4GB presently rest. (Update: We emailed and confirmed that the 4GB model is $200. Pricing for the 8GB model is not yet finalized – probably $250+.)
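As a sanity check on where a “>5 TFLOPS” figure comes from: single-precision throughput is conventionally calculated as two operations (one fused multiply-add) per shader per clock. Here's a minimal sketch, using assumed shader-count and clock values – AMD had not published final Polaris 10 specs at announcement time:

```python
# Single-precision FLOPS = 2 ops (one FMA) per stream processor per clock.
# Shader count and clock below are assumptions for illustration -- AMD had
# not published final Polaris 10 specs at announcement time.
stream_processors = 2304   # assumed
clock_ghz = 1.12           # assumed base clock

tflops = 2 * stream_processors * clock_ghz / 1000
print(f"~{tflops:.2f} TFLOPS")  # ~5.16 -- consistent with ">5 TFLOPS"
```

Raise the clock and the figure scales linearly – which would explain the “>” in AMD's claim, given pre-OC AIB cards.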


AMD is rumored to be skipping the high-end market with the Polaris 10 & 11 architectures, likely aiming to fill that demand with Vega instead. Vega is on the roadmap for public delivery later in 2016.

The GTX 1080's epochal launch all but overshadowed its cut-down counterpart – that is, until the price was unveiled. NVidia's GTX 1070 is promised at an initial $450 price point for the Founders Edition (explained here), or an MSRP of $380 for board partner models. The GTX 1070 replaces nVidia's GTX 970 in the vertical, but promises superior performance to previous high-end models like the 980 and 980 Ti; we'll validate those claims in our testing below, following an initial architecture overview.

The GeForce GTX 1070 ($450) uses a Pascal GP104-200 chip. The architecture is identical to the GTX 1080 and its GP104-400 GPU, but cuts down on SM presence (and core count) to create a mid-range version of the new 16nm FinFET architecture. The new node from TSMC is nearly half the feature size of Maxwell's 28nm planar process, and switches the company over to FinFET transistor architecture for reduced power leakage and improved overall performance-per-watt. The move is symptomatic of an industry trending toward ever-smaller devices with greater concern for the power envelope, and has been reflected in nVidia's architectures since Fermi (the GTX 400 series ran notoriously hot) and AMD's since Fiji (sort of – Polaris claims to make a bigger push in this direction). On the CPU side, Intel has been driving this trend for several generations now, with its 10nm process promising to further extend mobile device endurance and transistor density.
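The performance-per-watt logic is easiest to see through the standard CMOS dynamic power relation, P ≈ C·V²·f: a smaller node lowers switched capacitance, and FinFETs tolerate lower voltage at a given frequency, so power falls with the square of that voltage drop. A toy comparison with made-up values (not measured 28nm/16nm figures):

```python
# CMOS dynamic power scales roughly as capacitance * voltage^2 * frequency.
# All values here are made up for illustration, not measured 28nm/16nm data.
def dynamic_power(c_relative, voltage, freq_ghz):
    return c_relative * voltage ** 2 * freq_ghz

p_28nm_planar = dynamic_power(1.00, 1.20, 1.2)  # hypothetical planar part
p_16nm_finfet = dynamic_power(0.70, 1.05, 1.2)  # less C, lower V, same clock
print(f"relative power: {p_16nm_finfet / p_28nm_planar:.2f}x")  # ~0.54x
```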

Had investigators walked into our Thermal-Lab-And-Video-Set Conglomerate, they'd have been greeted with a horror show worthy of a police report: Two video cards fully dissected – one methodically, the other brutally – with parts blazoned in escalating dismemberment across the anti-static mat.

Judging by some of the comments, you'd think we'd committed a crime by taking apart a new GTX 1080 – but that's the job. Frankly, it didn't really matter if the thing died in the process. We're here to make content and test products for points of failure and success, not to preserve them.

The test results are in from our post-review DIY project, which started here. Our goal was a simple one: As a bit of a decompression project after our 9000-word analysis of nVidia's GeForce GTX 1080 Founders Edition, we decided to tear down the GTX 1080, look underneath, and throw a liquid block onto the exposed die. The “Founders Edition” of the GTX 1080 is effectively a reference model, and as such, it'll quickly be outranked by AIB partner cards with regard to cooling and OC potential. The GTX 1080 overclocks reasonably well – we were hitting ~2025-2050MHz with the FE model – but it still feels limited. That limitation is a mix of power limit and thermal throttling.

Our testing discovered that thermal throttling occurs at precisely 82C. Each time the card hits 82C absolute, the clock-rate dips and produces a marginal impact on frametimes and framerate. We also encountered clock-rate stability issues over long burn-in periods, and would have had to further step down the OC to accommodate the 82C threshold. Even with the VRM blower fan configured to 100% speed, we encountered limitations – the card did perform better, but with the noise levels of a server fan (~60dB, in our tests). That's not really acceptable for a real-world use case. Liquid will bring down noise levels, help sustain higher clock-rates at those noise levels, and keep thermals well under control.
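For readers who want to watch this behavior on their own cards, nvidia-smi can report temperature and graphics clock on a polling loop. A minimal monitoring sketch – the query fields are standard nvidia-smi options, and the 82C flag simply mirrors the threshold we observed:

```python
import subprocess, time

# Poll GPU temperature and graphics clock once per second via nvidia-smi.
# temperature.gpu and clocks.gr are standard nvidia-smi query fields; the
# 82C flag mirrors the throttle threshold observed in our testing.
CMD = ["nvidia-smi", "--query-gpu=temperature.gpu,clocks.gr",
       "--format=csv,noheader,nounits"]

while True:
    temp_c, clock_mhz = (int(v) for v in
                         subprocess.check_output(CMD).decode().split(","))
    note = "  <- at throttle threshold" if temp_c >= 82 else ""
    print(f"{temp_c}C  {clock_mhz}MHz{note}")
    time.sleep(1)
```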

The video (Part 3) is below. This article will cover the results of our DIY liquid-cooled GTX 1080 'Hybrid' vs. the Founders Edition card, including temperatures, VRM fan RPM, overclocking, stability, and FPS. Our clocks vs. time charts are the most interesting.

In the process of tearing apart the new nVidia GTX 1080 video card, we discovered solder points for an additional 8-pin power header, positioned at 90 degrees to the original 8-pin header. This is shown in our tear-down video (embedded at the bottom of this post), but we've got a photo above, too.

We're building our own GTX 1080 Hybrid. We're impatient, and the potential for further improved clock-rate stability – not that the 1080 isn't already impressively stable – has drawn us toward a DIY solution. For this GTX 1080 liquid cooling mod, we're tearing apart $1300 worth of video cards: (1) the EVGA GTX 980 Ti Hybrid, which long held our Best of Bench award, is being sacrificed to the Pascal gods, and (2) the GTX 1080 Founders Edition shall be torn asunder, subjected to the whims of screwdrivers and liquid cooling.

Here's the deal: We ran a thermal throttle analysis in our 9000-word review of the GTX 1080 (read it!). We discovered that, like Maxwell before it, consumer Pascal seems to throttle its frequency as temperatures reach and exceed ~82C. Each hit at 82C triggered a frequency fluctuation of ~30-70MHz, enough to create a marginal hit to frametimes. This only happened a few times through our first endurance test, but we've conducted more – this time with overclocks applied – to see if there's ever a point at which the throttling goes from “welcome safety check” to something less desirable.
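To pick those events out of a clocks-versus-time log programmatically, a scan like the following would work – the log format here is hypothetical (one temperature/clock pair per second, as a polling script like the one shown earlier would produce):

```python
# Scan a (temperature_C, clock_MHz) log for throttle events: clock dips of
# >=30MHz coinciding with the ~82C threshold. The log format is assumed --
# one sample per second, e.g. from a polling script like the one above.
def find_throttle_events(samples, threshold_c=82, min_dip_mhz=30):
    events = []
    for i in range(1, len(samples)):
        temp_c, clock = samples[i]
        dip = samples[i - 1][1] - clock
        if temp_c >= threshold_c and dip >= min_dip_mhz:
            events.append((i, temp_c, dip))
    return events

log = [(80, 1886), (81, 1886), (82, 1835), (82, 1886), (83, 1823)]
for second, temp_c, dip in find_throttle_events(log):
    print(f"t={second}s: {temp_c}C, clock dipped {dip}MHz")
```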

As it turns out, the thermal throttling impacts our overclocks and limits the potential of a GPU that's otherwise a strong overclocker. And so begins Part 1 of our DIY GTX 1080 build log – disassembly; we're taking apart the GTX 1080, tearing it down to the bones for a closer look inside.

All the pyrotechnics in the world couldn't match the gasconade with which GPU & CPU vendors announce their new architectures. You'd halfway expect this promulgation of multipliers and gains and reductions (but only where smaller is better) to mark the end-times for humankind; surely, if some device were crafted to the standards by which it was announced, The Aliens would descend upon us.

But, every now and then, those bombastic announcements have something behind them – there's substance there, and potential for an adequately exciting piece of technology. NVidia's debut of consumer-grade Pascal architecture initializes with GP104, the first of its non-Accelerator cards to host the new 16nm FinFET process node from TSMC. That GPU lands on the GTX 1080 Founders Edition video card first, later to be disseminated through AIB partners with custom cooling or PCB solutions. If the Founders Edition nomenclature confuses you, don't let it – it's a replacement for nVidia's old “Reference” card naming, as we described here.

Anticipation is high for GP104's improvements over Maxwell, particularly in the areas of asynchronous compute and command queuing. As the industry pushes further into DirectX 12 and Vulkan, compute preemption and dynamic task management become the gatekeepers to performance advancements in these new APIs. It also means that LDA (linked display adapter) & AFR (alternate frame rendering) start getting pushed out as frames become more interdependent through post-FX, with implications for multi-card configurations: optimization support for multi-GPU looks likely to dwindle going forward.

Our nVidia GeForce GTX 1080 Founders Edition review benchmarks the card's FPS performance, thermals, noise levels, and overclocking vs. the 980 Ti, 980, Fury X, and 390X. This nearly-10,000-word review lays out the architecture from an SM level, talks asynchronous compute changes in Pascal / GTX 1080, provides a quick “how to” primer for overclocking the GTX 1080, and talks simultaneous multi-projection. We've also got new thermal throttle analysis, and we're excited to show it.

The Founders Edition version of the GTX 1080 costs $700, though MSRP for AIB cards starts at $600. We expect to see that market fill in over the next few months. Public availability begins on May 27.

First, the embedded video review and specs table:

In this seventeenth episode of Ask GN, we discuss component selection and coil whine avoidance, GPU utilization and its seeming lock to 99% load, fan speeds, wireless mice, and more. Timestamps for all posted questions are below. As always, leave comments on the video page for potential inclusion in the next episode of Ask GN.

A quick thanks to Joe Vivoli of NVIDIA for helping us determine the answer to the first question. One viewer asked, paraphrased, “why does it seem like my GPU only hits 99% load, while the CPU will hit 100% load?” This was an excellent question for which we did not have an answer, but it seems that, after consultation with engineers, it effectively boils down to a rounding issue in the software.
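A toy illustration of how that could happen – this is emphatically not NVIDIA's actual driver code, just a demonstration that truncating instead of rounding pins a saturated counter ratio at 99%:

```python
import math

# Toy example only -- not NVIDIA's actual implementation. If utilization is
# derived from busy/total cycle counters and truncated rather than rounded,
# a fully loaded GPU reads 99% whenever the counters aren't exactly equal.
busy_cycles, total_cycles = 999_987, 1_000_000   # hypothetical counters

utilization = math.floor(busy_cycles / total_cycles * 100)
print(f"{utilization}%")  # 99, despite the GPU being effectively saturated
```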

