The Fury X has been a challenging video card to review. It is AMD's best attempt at high-end competition in years and, as it happens, the card includes two items of critical importance: a new GPU architecture and the world's first implementation of high-bandwidth memory (HBM).

Some system builders may recall AMD's HD 4870, a video card that was once a go-to recommendation for mid-to-high range builds. The 4870 was the world's first graphics card to incorporate high-speed GDDR5 memory, reinforcing AMD's record of technological firsts in the memory field. Prior to its acquisition by AMD, graphics manufacturer ATI designed the GDDR3 memory that remained in use all the way up to GDDR5 (GDDR4 had a lifecycle of less than a year, more or less, but was also first instituted on ATI devices).

Our R9 Fury X analysis is still forthcoming, but we interrupted other tests to quickly analyze driver performance between the pre-release press drivers and the launch-day consumer drivers.

All testing was conducted using a retail Fury X, as we were unable to obtain press sampling. This benchmark specifically tests performance of the R9 Fury X using the B8, B9, and release (15.15.1004) drivers against one another.

The purpose of this test is to address rumors that the Fury X would exhibit improved performance with the launch-day drivers (15.15.1004), with some speculation indicating that the press drivers were less performant.
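For readers curious how a driver-to-driver comparison like this reduces to a number, below is a minimal sketch in Python – our own illustration, not AMD's tooling or our internal test harness – that averages repeated FPS passes per driver and reports the percent delta. The function name and sample values are hypothetical placeholders, not measured results.

```python
# Minimal sketch: compare mean FPS between two driver builds.
# All sample values are hypothetical placeholders, not measured results.
from statistics import mean

def percent_delta(press_fps, launch_fps):
    """Mean-FPS change of the launch driver vs. the press driver, in percent."""
    press_avg = mean(press_fps)
    launch_avg = mean(launch_fps)
    return (launch_avg - press_avg) / press_avg * 100.0

if __name__ == "__main__":
    press_samples = [74.1, 73.8, 74.5]   # e.g., three passes on press driver B9 (placeholder)
    launch_samples = [74.3, 74.0, 74.6]  # e.g., three passes on 15.15.1004 (placeholder)
    print(f"Launch vs. press delta: {percent_delta(press_samples, launch_samples):+.2f}%")
```

A delta that falls within run-to-run variance (often a percent or two) would suggest the two driver builds perform equivalently.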


AMD's most recent video card launch was in September of 2014, introducing the R9 285 ($243) on the slightly updated Tonga GPU. Tonga was positioned to take the place of the Tahiti products, namely the HD 7970 and its refresh, the R9 280. The Radeon HD 7970 shipped in late 2011 on the Tahiti GPU, a die built on TSMC's still-fabbed 28nm process, and was refreshed as the R9 280, then updated, improved, and refreshed again as the Tonga-equipped R9 285. At its core, the 285 offered effectively identical on-paper specs (with some changes, like a 256-bit memory bus against the predecessor's 384-bit bus), but introduced a suite of optimizations that yielded marginally improved performance over the R9 280.

All of this is to say that it's been a number of years since AMD introduced a truly new architecture. Tahiti has been around for roughly four years now; Hawaii shipped in 2013 as a scaled-up refresh of Tahiti on the same node (more CUs, ROPs, and geometry / rasterizer processors); and Fiji – the anticipated new GPU – won't ship for a bit longer. Filling that space is another refresh line, the Radeon 300 series of video cards.

AMD's lull in hardware advancement has allowed competitor nVidia to entrench itself in some largely unchallenged market segments, like the high end with the GTX 980 Ti ($650) and the mid-range with the GTX 960 ($200). The long-awaited R9 300 series video cards have finally arrived, though, and while they aren't hosting new GPUs or deploying a smaller fab process, the cards do offer marginally increased clockrates and other small changes.

This review benchmarks the AMD R9 390 and AMD R9 380 graphics cards against the preceding R9 280, R9 290(X), GTX 960, and other devices. The R7 370 and R7 360 also launch today, but won't be reviewed here.

Working with the GTX 980 Ti ($650) proved that nVidia could supplant its own flagship at a lower cost, limiting the Titan X's use cases primarily to buyers with excessive memory requirements.

In our GTX 980 Ti overclocking endeavors, we quickly discovered that the card ran into thermal limits at higher clockrates. Driver failures and device instability surfaced at frequencies exceeding ~1444MHz, and although a ~40% boost in clockrate is admirable, it fell short of what we'd hoped for. The outcome of our modest overclocking effort was an approximate ~19% performance gain (measured in FPS) across a selection of our benchmark titles, enough to propel the 980 Ti beyond the Titan X in gaming performance. Most games cared more about the raw clock speed of the lower CUDA-count 980 Ti than about the Titan X's additional memory capacity.

Multi-GPU configurations have grown in reliability over the past few generations. Today's benchmark tests the new GeForce GTX 980 Ti in two-way SLI, pitting the card against the GTX 980 in SLI, the Titan X, and other options on the bench. At the time of writing, a 295X2 is not present for testing, though we hope to benchmark one once provided.

SLI and CrossFire have both seen a redoubled effort to improve compatibility and performance in modern games. There are still times when multi-GPU configurations won't scale properly – something we discovered when testing the Titan X against 2x GTX 980s in SLI – but behavior has improved tremendously with each driver update.

Following unrelenting rumors pertaining to its pricing and existence, nVidia's GTX 980 Ti is now an officially-announced product and will be available in the immediate future. The GTX 980 Ti was assigned an intensely competitive $650 price-point, planting the device firmly in position to assume the slot the 780 Ti once held in nVidia's stack.

The 980 Ti redeploys the GTX 980's “The World's Most Advanced GPU” marketing language, a careful nod to single-GPU performance against price-adjacent dual-GPU solutions. The card takes the market position that the original GTX 780 Ti held in the Kepler vertical, slotting in just below the Titan X in nVidia's bottom-up stack.

Until Pascal arrives, nVidia is sticking with its maturing Maxwell architecture. The GTX 980 Ti uses the same memory subsystem and compression technology as previous Maxwell devices.

This GTX 980 Ti review benchmarks the video card's performance against the GTX 980, Titan X, 780 Ti, 290X, and other devices, analyzing FPS output across our suite of test bench titles. Among others tested, the Witcher 3, GTA V, and Metro: Last Light all make an appearance.

Johan Andersson, a Frostbite developer at EA, today posted a photograph of AMD's new liquid-cooled video card. It is already known that the R9 300-series video cards are due for release in the summer – likely June – and that the flagship devices will be liquid-cooled, but little has been officially announced. The pictured Pirate Islands card is assumed to be a 390X.

Following the launch of 2GB cards, major board partners – MSI and EVGA included – have begun shipping 4GB models of the GTX 960. Most 4GB cards will restock in early April at around $240 MSRP, approximately $30 more expensive than their 2GB counterparts. We've already got a round-up pending publication with more in-depth reviews of each major GTX 960, but today, we're addressing a much more basic concern: Is 4GB of VRAM worth it for a GTX 960?

This article benchmarks an EVGA GTX 960 SuperSC 4GB card against our existing ASUS Strix GTX 960 2GB unit, testing each in 1080p, 1440p, and 4K gaming scenarios.
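For readers curious how one might check whether a game actually consumes more than 2GB of VRAM, below is a minimal sketch using NVIDIA's NVML bindings (the pynvml package) – our own illustration, not the methodology behind the benchmarks above. The device index 0 and one-second sampling interval are assumptions.

```python
# Minimal VRAM-usage sampler via NVML (pip install pynvml).
# Device index 0 and the 1-second interval are assumptions for illustration.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

peak_used = 0
try:
    while True:
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)  # byte counts: total/free/used
        peak_used = max(peak_used, info.used)
        print(f"VRAM used: {info.used / 2**20:.0f} MiB "
              f"(peak {peak_used / 2**20:.0f} of {info.total / 2**20:.0f} MiB)")
        time.sleep(1)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```

If usage sits pinned near 2048MiB on the 2GB card while climbing past that mark on the 4GB card, the extra memory is at least being allocated – though allocation alone doesn't prove a framerate benefit.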

Last year's GTC event in San Jose, California saw the unveiling of nVidia's post-Maxwell architecture: Pascal. We wrote about Pascal at the time, but very little was revealed about the new architecture. This year's GTC keynote by nVidia CEO Jen-Hsun Huang revisited the Pascal architecture and the nVidia GPU roadmap through 2018.

Our full GTX Titan X ($1000) FPS benchmarks are pending publication following nVidia's GPU Technology Conference (GTC), though CEO Jen-Hsun Huang's keynote today marks the official and immediate launch of the GTX Titan X video card. The keynote was live-streamed.

Moments ago, the new GTX Titan X's official specifications were unveiled at GTC 2015, including core count, the GPU, and price. We've compared the Titan X's specs to the GTX 980 and 780 Ti below. The Titan X will be available through Newegg at some point, and the card should be available via nVidia's official store momentarily.

