Buildzoid's latest contribution to our site is his analysis of the GTX 1080 Ti Founders Edition PCB and VRM, including some additional thoughts on shunt modding the card for additional OC headroom. We already reviewed the GTX 1080 Ti here, modded it for increased performance with liquid cooling, and we're now back to see if nVidia's reference board is any good.
This time, it turns out, the board is seriously overbuilt and a good option for waterblock users (or users who'd like to do a Hybrid mod like we did, considering the thermal limitations of the FE cooler). NVidia's main shortcoming with the 1080 Ti FE is its FE cooler, which limits clock boosting headroom even when operating stock. Here's Buildzoid's analysis:
We’ve fixed the GTX 1080 Ti Founders Edition ($700) card. As stated in the initial review, the card performed reasonably close to nVidia’s “35% > 1080” metric at 4K, but generally fell closer to 25-30% faster at that resolution. That’s really not bad – but it could be better, even with the reference PCB. It’s the cooler that’s holding nVidia’s card back, as seems to be the trend given GPU Boost 3.0 + FE cooler designs. A reference card is more versatile for deployment to SIs and the wider channel, but for our audience, we can rebuild it. We have the technology.
“Technology,” here, mostly meaning “propylene glycol.”
The new drivers mostly prime the RX 400 series cards for the upcoming Mass Effect launch—most demonstrably the RX 480 8GB, for which AMD notes a 12% performance increase over driver version 17.3.1. Additionally, the drivers add an “AMD optimized” tessellation profile.
Our review of the nVidia GTX 1080 Ti Founders Edition card went live earlier this morning, largely receiving praise for its gains in performance while remaining the subject of criticism on the thermal front. As we've often done, we decided to fix it. Modding the GTX 1080 Ti will bring our card up to higher native clock-rates by eliminating the thermal limitation, and can be done with the help of an EVGA Hybrid kit and a reference design. We've got both, and started the project prior to departing for PAX East this weekend.
This is part 1, the tear-down. As this content is being published, we are already on-site in Boston for the event, so part 2 will not see the light of day until early next week. We hope to finalize our data on VRM/FET and GPU temperatures (as related to clock speed) immediately following PAX East. These projects are always exciting, as they help us learn more about how a GPU behaves. We did similar projects for the RX 480 and GTX 1080 at launch last year.
Here's part 1:
The GTX 1080 Ti posed a fun opportunity to roll out our new GPU test bench, something we’ve been working on since the end of last year. The updated bench puts a new emphasis on thermal testing, borrowing methodology from our EVGA ICX review, and now analyzes cooler efficacy as it pertains to non-GPU components (read: MOSFETs, backplate, VRAM).
In addition to this, of course, we’ll be conducting a new suite of game FPS benchmarks, running synthetics, and preparing overclocking and noise testing. The last two items won’t make it into today’s content, given that PAX is hours away, but they’re coming. We will also be starting our Hybrid series today, for fans of that project; check back shortly.
If it’s not obvious, we’re reviewing nVidia’s GTX 1080 Ti Founders Edition card today, a follow-up to the GTX 1080 and gen-old 980 Ti. Included on the benches are the 1080, 1080 Ti, 1070, 980 Ti, and, in some tests, an RX 480 to represent the $250 market. We’re still adding cards to this brand-new bench, but that’s where we’re starting. Please exercise patience as we continue to iterate on this platform and build a new dataset; last year’s was built up over an entire launch cycle.
With nVidia’s recent GTX 1080 Ti announcement and GTX 1080 price cut, graphics cards have seen reductions in cost this week. As stated in our last sales post, hardware sales are hard to come by right now, but we have still found some deals worth noting: an RX 480 8GB for $200 and a GTX 1080 for $500. DDR4 prices are still high, but some savings can be had on a couple of DDR4 kits from G.SKILL.
The finer distinctions between DDR and GDDR can easily be masked by the impressive on-paper specs of the newer GDDR5 standards, often inviting an obvious question with a not-so-obvious answer: Why can’t GDDR5 serve as system memory?
In a simple response, it’s analogous to why a GPU cannot suffice as a CPU. More precisely, CPUs are built from a handful of complex cores running complex instruction sets, alongside on-die cache and, often, integrated graphics. This makes the CPU suitable for the multitude of latency-sensitive tasks thrust upon it; that aptness, however, comes at a cost: a cost paid in silicon. Conversely, GPUs can apportion more chip space to compute by using simpler cores with reduced instruction sets. As such, GPUs can feature hundreds, if not thousands, of cores designed to process huge amounts of data in parallel. Whereas CPUs are optimized to process tasks serially with as little latency as possible, GPUs have a parallel architecture optimized for raw throughput.
While the above doesn’t exactly explicate any differences between DDR and GDDR, the analogy is fitting. CPUs and GPUs both have access to temporary pools of memory, and just like both processors are highly specialized in how they handle data and workloads, so too is their associated memory.
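As a loose illustration of that split (a hypothetical sketch, not how either processor actually executes work), compare a worker that hands back each result immediately with one that batches work and returns it in bulk:

```python
# A loose analogy for the latency/throughput split described above,
# written in pure Python (hypothetical sketch only).

def cpu_style(tasks):
    # Latency-oriented: handle one task at a time, returning each result
    # the moment it is done.
    for t in tasks:
        yield t * 2

def gpu_style(tasks, batch_size=4):
    # Throughput-oriented: accumulate a batch, then process the whole
    # batch in one pass and return it together.
    for i in range(0, len(tasks), batch_size):
        batch = tasks[i:i + batch_size]
        yield [t * 2 for t in batch]

tasks = list(range(8))
print(list(cpu_style(tasks)))  # one result per task
print(list(gpu_style(tasks)))  # results arrive in batches of four
```

The serial path minimizes time-to-first-result; the batched path maximizes results per pass, which is the same trade the two memory types are built around.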
NVidia just opened the floodgates on its GTX 1080 Ti video card, the Pascal-based mid-step between the GTX 1080 and GTX Titan X. The 1080 Ti adds SMs over the GTX 1080, now totaling 28 SMs versus the 1080’s 20, resulting in 3584 total FP32 CUDA cores on the GTX 1080 Ti. Streaming multiprocessor architecture remains the same – Pascal hasn’t changed here – leaving the primary changes in the memory subsystem.
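The core-count arithmetic is straightforward if you assume Pascal’s 128 FP32 CUDA cores per SM (a published architectural figure, not stated above):

```python
# Pascal packs 128 FP32 CUDA cores per SM (published spec; assumed here).
CORES_PER_SM = 128

gtx_1080_ti_sms = 28
gtx_1080_sms = 20

print(gtx_1080_ti_sms * CORES_PER_SM)  # 3584 cores on the GTX 1080 Ti
print(gtx_1080_sms * CORES_PER_SM)     # 2560 cores on the GTX 1080
```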
The GTX 1080 Ti will host 11GB of GDDR5X memory – not HBM2 – with a speed of 11Gbps. This is boosted over the GTX 1080’s 10Gbps GDDR5X memory speeds, a result of work done by memory supplier Micron to clean up the signal. The heavy transition clutter of early G5X iterations has been reduced, allowing a cleaner signal in the GDDR5X cells without data corruption concerns. We’ll have some news below on how this also relates to existing Pascal cards.
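Per-pin speed turns into total bandwidth through bus width. Assuming the 1080 Ti’s published 352-bit memory bus (a spec not stated above), the peak figure works out as:

```python
# Peak bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8 bits/byte.
data_rate_gbps = 11    # GDDR5X effective rate per pin, as stated above
bus_width_bits = 352   # published GTX 1080 Ti spec (assumption here)

bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(bandwidth_gb_s)  # 484.0 GB/s peak
```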
AMD was clear from the beginning of today’s Capsaicin and Cream event that it was not a Vega product launch (the only 100% new Vega news was that the GPU would be officially branded “Vega”), but demos of the previously mentioned technologies like high-bandwidth cache controller and rapid-packed math were shown.
After some brief discussion about exactly how much alcohol was consumed at last year’s afterparty, the Vega portion of the presentation covered three major points: HB Cache Controller, Rapid Packed Math, and Virtualization.
“Virtualization” in this context means the continued effort (by both AMD and NVIDIA) to make server-side gaming viable. AMD has partnered with LiquidSky and will be using Vega’s “Radeon Virtualized Encode” feature to make streaming games (hopefully) as latency-free as possible, though limitations on internet service still abound.
We're traveling for an event today, which means the bigger review and feature content is on hold until we're back in the lab.
The last few days have yielded enough intrigue and hardware news to warrant a separate content piece, anyway. AMD and nVidia, as usual, have largely stolen the show with head-to-head events on February 28, working to snipe coverage from one another. Also on the video card front, JPR reports that add-in board sales have increased for 4Q16, and that attach rate of AIB cards to systems has increased year-over-year. Somewhat related, new RX 460 cards from MSI offer a half-height form factor option (pricing TBD) with the 896 core version of the Polaris 11 chip.