All the pyrotechnics in the world couldn't match the gasconade with which GPU & CPU vendors announce their new architectures. You'd halfway expect this promulgation of multipliers and gains and reductions (but only where smaller is better) to mark the end-times for humankind; surely, if some device were crafted to the standards by which it was announced, The Aliens would descend upon us.
But, every now and then, those bombastic announcements have something behind them – there's substance there, and potential for an adequately exciting piece of technology. NVidia's consumer-grade Pascal architecture debuts with GP104, the first of its non-Accelerator GPUs built on TSMC's new 16nm FinFET process node. That GPU lands first on the GTX 1080 Founders Edition video card, later to be disseminated through AIB partners with custom cooling or PCB solutions. If the Founders Edition nomenclature confuses you, don't let it – it's a replacement for nVidia's old “Reference” card naming, as we described here.
Anticipation is high for GP104's improvements over Maxwell, particularly in the areas of asynchronous compute and command queuing. As the industry pushes deeper into DirectX 12 and Vulkan, compute preemption and dynamic task management become the gatekeepers to performance advancements in these new APIs. It also means that LDA & AFR start getting pushed out as frames become more interdependent through post-FX, with implications for multi-card configurations that point toward diminishing optimization support going forward.
Our nVidia GeForce GTX 1080 Founders Edition review benchmarks the card's FPS performance, thermals, noise levels, and overclocking vs. the 980 Ti, 980, Fury X, and 390X. This nearing-10,000-word review lays out the architecture from an SM level, discusses asynchronous compute changes in Pascal / GTX 1080, provides a quick “how to” primer for overclocking the GTX 1080, and covers simultaneous multi-projection. We've got new thermal throttle analysis, too, and we're excited to show it.
The Founders Edition version of the GTX 1080 costs $700, though MSRP for AIBs starts at $600. We expect to see that market fill in over the next few months. Public availability begins on May 27.
First, the embedded video review and specs table:
A mysterious briefcase showed up at GN labs today, bearing the above blackened metal triangle. On the triangle is emblazoned a code, which we entered into the orderof10.com redemption page. The box is branded with a “10” enclosed by a triangle, the same as seen above. Entering the triangle's code into the webpage unlocked our “COMPUTE” piece (Leibniz); the rest of the pieces can be found here. We know that we've got COMPUTE, SlashGear's Chris Barr has Vision, Jack Pattillo of Rooster Teeth has a piece, and Devindra Hardawar of Engadget has a piece.
I tasked GN's Patrick Lathan with assisting in decoding the cryptic message. He's our “puzzle guy,” known recently for reviewing Jonathan Blow's The Witness, and has already made major progress that isn't contained in our below video.
We have updated this article with advancements below.
It's been a while since our last card-specific GTX 980 review – and that last one wasn't exactly glowing. Despite the depressing reality that 6 months is “old” in the world of computer hardware, the GTX 980 and its GM204 GPU have both remained near the top of single-GPU benchmarks. The only single-GPU – meaning one GPU on the card – AIC that's managed to outpace the GTX 980 is the Titan X, and that's $1000.
This review looks at PNY's GTX 980 XLR8 Pro ($570) video card, an ironclad-like AIC with pre-overclocked specs. Alongside the XLR8 Pro graphics card, we threw in the reference GTX 980 (from nVidia) and MSI's Gaming 4G GTX 980 (from CyberPower) when benchmarking.
NVIDIA Shows 2018 GPU Roadmap, Revisits PASCAL & 3D Memory at GTC 2015
Last year's GTC event in San Jose, California saw the unveiling of nVidia's architecture following Maxwell: Pascal. We wrote about Pascal at the time, but very little was revealed about the new architecture. This year's GTC keynote presentation by nVidia CEO Jen-Hsun Huang revisited Pascal architecture and the nVidia GPU roadmap through 2018.
NVIDIA GeForce GTX 960 GPU Benchmark vs. 760, 970, R9 285 – A $200 Juggernaut
It's official: The price gap between the GTX 960 and GTX 970 is large enough to drive a Ti through. NVidia's new GeForce GTX 960 2GB graphics card ships at $200, pricing it a full $50 cheaper than the GTX 760's launch price. The immediate competition would be AMD's R9 285, priced almost equivalently.
NVidia's GTX 960 is intended to target the market seeking the best video card for the money – a segment that both AMD and nVidia call the “sweet spot” – and is advertised as capable of playing most modern games on high settings or better. The GTX 960 uses a new Maxwell GPU, called the GM206, for which the groundwork was laid by the GTX 980's GM204 GPU. In our GTX 980 review, we mentioned that per-core performance and per-watt performance had increased substantially, resulting in a specs listing that exhibits a lower core count and smaller memory interface. AMD has leveraged these spec changes in recent marketing outreach, something we'll discuss in the conclusion.
This GeForce GTX 960 review tests the new ASUS Strix 960 video card against the 970, 760, R9 285, & others. The benchmark analyzes GTX 960 FPS performance in titles like Far Cry, Assassin's Creed, EVOLVE, and other modern titles. The GTX 960 is firmly designed for 1080p gaming, which is where the vast majority of monitors currently reside.
Hands-On: EVGA Kingpin 980 Extreme OC Card & Liquid Cooled GTX 980
Maxwell architecture has effectively been solidified in the market at this point, a statement firmly reinforced by the onslaught of aftermarket high-end overclocking cards beginning to ship from various board partners. EVGA's CES 2015 suite spotlighted its new KINGPIN version of the GTX 980 alongside a CLC Hydro-Copper version of the GTX 980, both allowing additional OC headroom and other features.
NVidia Boost 2.0 & Boost Clock Throttling when Overclocking
Monday, 22 December 2014

This article topic stems from a recent reader email. Our inquisitive reader was curious about the nature of variable clock speeds – primarily, why GPUs (specifically nVidia's) sometimes log slower clock speeds than the overclock settings, and why reported speeds occasionally exceed even what a user OC reflects.
Variable clock speeds stem from boost settings available on both AMD and nVidia architecture, but each company's version differs in execution. This brief post will focus on nVidia Boost 2.0 and why it throttles clock speeds in some environments. None of this is news at this point, but it's worth demystifying.
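To make the throttling behavior concrete, here's a minimal, illustrative model of how a Boost 2.0-style algorithm behaves. All numbers, the function name, and the per-degree/per-percent step size are hypothetical simplifications – the real algorithm is proprietary and steps through discrete voltage/frequency bins – but the sketch captures why logged clocks can sit below (throttling) or above (headroom) a user's OC setting:

```python
# Simplified, hypothetical model of Boost 2.0-style clock behavior.
# Real hardware moves through discrete voltage/frequency bins; the
# thresholds and 13MHz step here are illustrative, not nVidia's values.

def effective_clock(base_mhz, max_boost_mhz, temp_c, temp_target_c,
                    power_pct, power_target_pct, bin_mhz=13):
    """Return an approximate sustained clock in MHz.

    The GPU opportunistically boosts toward max_boost_mhz, then sheds
    bin_mhz steps while temperature or board power exceeds its target.
    It never drops below the rated base clock under normal operation.
    """
    clock = max_boost_mhz
    # Shed one bin per degree over the temperature target (illustrative).
    if temp_c > temp_target_c:
        clock -= (temp_c - temp_target_c) * bin_mhz
    # Shed one bin per percent over the power target (illustrative).
    if power_pct > power_target_pct:
        clock -= (power_pct - power_target_pct) * bin_mhz
    return max(clock, base_mhz)

# Cool and under power budget: holds full boost.
print(effective_clock(1127, 1317, temp_c=70, temp_target_c=80,
                      power_pct=90, power_target_pct=100))   # 1317
# 3C over the 80C target: sheds three bins below rated boost.
print(effective_clock(1127, 1317, temp_c=83, temp_target_c=80,
                      power_pct=90, power_target_pct=100))   # 1278
```

This is why a card can report a clock above its advertised boost (thermal/power headroom available) one minute and below the user's OC offset the next (targets exceeded).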
GPU overclocking changed with the release of Maxwell's updated architecture. The key aspects remain the same: increase the clock-rate, play with voltage, increase the memory clock, and observe thermals; new advancements include power target percent and its tie to TDP. We recently showed the gains yielded from high overclocks on the GTX 980 in relation to Zotac's GTX 980 Extreme and the reference card; in some instances, the OC produced better performance than a stock SLI pairing.
This GTX 980 overclocking tutorial will walk through how to overclock nVidia's Maxwell architecture, explain power target %, voltage, memory clock, and more.
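Since "power target %" is the newest concept in that list, here's a quick hedged sketch of what the slider actually does: it scales the board-power ceiling relative to rated TDP. The function name is ours, the 165W figure is the reference GTX 980's rated TDP, and the maximum slider value varies per card/BIOS:

```python
# Illustrative math behind the "power target %" slider: it raises the
# board-power limit as a percentage of rated TDP. 165W is the reference
# GTX 980's rated TDP; max slider values are card/BIOS-specific.

def power_ceiling_watts(power_target_pct, tdp_watts=165):
    """Board power (W) the card may draw before Boost begins throttling."""
    return tdp_watts * power_target_pct / 100

print(power_ceiling_watts(100))  # 165.0 -- stock limit
print(power_ceiling_watts(125))  # 206.25 -- slider raised 25%
```

Raising the power target doesn't overclock anything by itself – it simply gives the Boost algorithm more power headroom before it starts shedding clock bins.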
Do Not Buy Zotac's GTX 980 Extreme - We Get Higher Overclocks on Reference | Benchmark
The Zotac GTX 980 Extreme ($610) is the most disappointing, saddening attempt at a high-end overclocking device I've ever seen. I've never been so thoroughly disheartened by a review product. I've also never seen an aftermarket product perform worse than the reference model while being priced more than 10% higher. The added cost is justified – on paper – by several factors, including a better cooler and a higher bin (a better GM204).
Testing Zotac's GTX 980 Extreme overclocking card began with excitement and anticipation, rapidly decaying as despair and uncertainty took hold. When the card failed to overclock higher than my reference GTX 980 ($550), I first suspected error on my end – and proved that suspicion wrong – and then went to Zotac with strong emphasis that the BIOS needed a serious overhaul. A BIOS update should have been quick and easy if no hidden problems existed in the hardware, as other video card manufacturers have proven in the past. We published all of this about a week ago, firmly advising that no one buy the GTX 980 Extreme until we could revisit the topic.
We're revisiting it.
A Warning on Zotac's GTX 980 Extreme – Severe Voltage Limitations Prevent OC
We've been playing around with Zotac's GTX 980 Extreme for about a week now. The story of Zotac in this launch cycle is sort of an interesting one. The company has been making mini-PCs (“ZBOX”) and nVidia video cards for many years now, but they've managed to remain in an unremarkable B-list / C-list of vendors in the GPU market. I don't think many would really disagree with the statement that Zotac has historically not been the first company that pops into mind when looking for a new GeForce card. But all of that changed with the GTX 980 and Game24, where we caught our first glimpses of a revitalized effort to capture the limelight.
From a design standpoint, the GTX 980 Amp! Extreme is positioned to be the best overclocking GM204 device on the market, short of adding liquid. It will compete with K|NGP|N on air. The triple-fan setup uses dual flanking exhaust fans and a single, central intake fan, with a massive copper coldplate mounted to the GPU die; stemming from the coldplate are four heatpipes that feed into an aluminum sink. This will help cool the ~171W TDP device, which can theoretically (2x8-pin) consume upwards of 300W when overclocked correctly. Additional aluminum is available near the somewhat over-engineered VRM, making for what should be cooler phases when placed under load. The problem is just that, though – we can't place the card under load. Yet. We've been trying for an entire week now, and I think we've deduced the heart of the issue.