Our recent Fury X driver comparison put rumors of a performance disparity between press and launch drivers to the test, ultimately finding that no real difference existed. That testing procedure exposed us to the now widely discussed “coil whine” and “pump whine” of the new R9 Fury X. Today's test seeks to determine, with objectivity and confidence, whether the whine is detrimental in a real-world use case.
AMD's R9 Fury X video card emits a high-frequency whine when under load. We have observed this noise on both of our retail units – sold under Sapphire's banner, but effectively identical to all Fury X cards – and reviewers with press samples have cited the same noise. The existence of a sound does not inherently point toward an unusably loud product, though, and must be tested in a controlled environment to determine its impact on the user experience. The noise resembles coil whine, for those familiar with the irritating hum, but is actually emitted by the high-speed pump on the Fury X. That makes the noise a mechanical engineering flaw rather than an electrical one, as coil whine would suggest.
Our R9 Fury X analysis is still forthcoming, but we interrupted other tests to quickly analyze driver performance between the pre-release press drivers and launch day consumer drivers.
All testing was conducted using a retail Fury X, as we were unable to obtain press sampling. This benchmark specifically tests performance of the R9 Fury X using the B8, B9, and release (15.15.1004) drivers against one another.
The purpose of this test is to dispel or confirm rumors that the Fury X would exhibit improved performance with the launch-day drivers (15.15.1004), with some speculation indicating that the press drivers were less performant.
Following our initial review of AMD's new R9 390 ($330) and R9 380 ($220) video cards, we took the final opportunity prior to loaner returns to overclock the devices. Overclocking the AMD 300 series graphics cards is a slightly different experience from nVidia overclocking, but remains methodologically the same: we tune the clockrate, power, and memory speeds, then test for stability.
The R9 390 and R9 380 are already pushed close to their limits. The architectural refresh added about 50MHz to the operating frequency of each card, with some power and memory clock changes tacked on. The end result is that the GPU is nearly maxed-out as shipped, but there's still a small amount of room for overclocking. This overclocking guide and benchmark for the R9 390 & R9 380 looks at the maximum clockrate achievable through tweaking.
All these tests were performed with Sapphire's “Nitro” series of AMD 300 cards, specifically using the Sapphire Nitro R9 390 Tri-X and Sapphire Nitro R9 380 Dual-X cards. Results will be different for other hardware.
It's been a number of years since we posted an in-depth look at CPU coolers. Our 2012 CPU Cooler Anatomy post explained the basics of air cooler design, highlighting the use of capillary action within copper heatpipes to conduct heat from a copper coldplate. Liquid coolers function similarly to a car's radiator system and can yield significantly more efficient thermal performance, but they also call some common assumptions into question – like the efficacy of copper versus aluminum.
The basics indicate that copper does, by any scientific measure, thoroughly trounce aluminum in thermal conductivity: copper conducts about 400 Watts per meter-Kelvin at 25C (401W/mK), while aluminum sits at roughly half that, 205W/mK at 25C. That's nearly a 2x difference; for perspective, most stock thermal compound is in the 4-5.6W/mK range, with air (no thermal compound) at ~0.024W/mK.
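To put those conductivity figures in context, here's a quick back-of-envelope sketch using Fourier's law of conduction. The coldplate dimensions and temperature delta below are hypothetical illustration values, not measurements from any cooler we've tested:

```python
# Fourier's law of steady-state conduction through a flat plate:
#   q = k * A * dT / d
# Comparing copper (401 W/mK) against aluminum (205 W/mK).
# Plate size, thickness, and temperature delta are hypothetical.

def conduction_watts(k_w_per_mk, area_m2, delta_t_k, thickness_m):
    """Heat conducted through a flat plate, in watts."""
    return k_w_per_mk * area_m2 * delta_t_k / thickness_m

AREA = 0.04 * 0.04   # 40mm x 40mm coldplate (hypothetical)
THICKNESS = 0.003    # 3mm plate (hypothetical)
DELTA_T = 40         # 40K across the plate (hypothetical)

copper = conduction_watts(401, AREA, DELTA_T, THICKNESS)
aluminum = conduction_watts(205, AREA, DELTA_T, THICKNESS)

print(f"copper:   {copper:.0f} W")
print(f"aluminum: {aluminum:.0f} W")
print(f"ratio:    {copper / aluminum:.2f}x")  # ~1.96x in copper's favor
```

Whatever plate geometry you plug in, the ratio stays fixed at 401/205 – about 1.96x – since conductivity scales the result linearly. Real coolers complicate this with fin area, airflow, and interface materials, which is why we test empirically.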
Over the course of our recent GTX 980 Ti review, we encountered a curious issue with our primary PCI Express port. When connecting graphics cards to the first PCI-e slot, the card wouldn't be detected and resolution would be stunted to lower values. Using one of the other slots bypassed this issue, but was unacceptable for multi-GPU configurations – something we eventually tested.
Multi-GPU configurations have grown in reliability over the past few generations. Today's benchmark tests the new GeForce GTX 980 Ti in two-way SLI, pitting the card against the GTX 980 in SLI, Titan X, and other options on the bench. At the time of writing, a 295X2 is not present for testing, though it is something we hope to test once provided.
SLI and CrossFire have both seen a redoubled effort to improve compatibility and performance in modern games. There are still times when multi-GPU configurations won't execute properly, something we discovered when testing the Titan X against 2x GTX 980s in SLI, but it's improved tremendously with each driver update.
Our initial review of the $650 GTX 980 Ti, published just over twelve hours prior to this post, mentioned an additional posting focusing on the card's overclocking headroom. The GTX 980 Ti runs GM200, the same GPU found in nVidia's Titan X video card, and is driven by Maxwell's new overclocking ruleset.
Maxwell, as we've written in a how-to guide before, overclocks differently from other architectures. NVidia's newest design institutes a power percent target (“Power % Target”) that increments power provisioning to the die to grant OC headroom. Unfortunately, this target can't be raised beyond what the BIOS natively allows (without a hack, anyway), which means we're sharing watts between the core clock, memory clock, and voltage increase. Overclocking on Maxwell offers some granularity without making things too complicated, though it's not until we get hands-on with board partner video cards that we'll know the true OC ceiling of the 980 Ti.
This post showcases our GTX 980 Ti initial overclock on the reference cooler, yielding a considerable framerate gain in game benchmarks.
This short posting comes following a reader question pertaining to motherboard selection. Some recent Intel-based motherboards now offer support for USB3.1, which operates at an impressive 10Gbps (matching first-generation Thunderbolt) and uses a reversible, insertion-agnostic connector. The speed boost is easily utilized when driving external SSDs, which will throttle against the 5Gbps cap of USB3.0 – closer to 4Gbps after encoding overhead.
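The “after overhead” caveat comes from physical-layer encoding: USB3.0 uses 8b/10b encoding (10 bits on the wire per 8 bits of data), while 10Gbps USB3.1 uses the far leaner 128b/132b scheme. A quick sketch of the arithmetic:

```python
# Line rate vs. effective rate after physical-layer encoding overhead.
# USB3.0 (5Gbps) uses 8b/10b encoding; 10Gbps USB3.1 uses 128b/132b.

def effective_gbps(line_rate_gbps, data_bits, total_bits):
    """Usable bit rate after subtracting encoding overhead."""
    return line_rate_gbps * data_bits / total_bits

usb30 = effective_gbps(5.0, 8, 10)      # 20% overhead -> ~4.0 Gbps payload
usb31 = effective_gbps(10.0, 128, 132)  # ~3% overhead -> ~9.7 Gbps payload

print(f"USB3.0 effective: {usb30:.2f} Gbps")
print(f"USB3.1 effective: {usb31:.2f} Gbps")
```

So the real-world gap between the two standards is wider than the headline 2x: roughly 4Gbps of payload bandwidth against roughly 9.7Gbps, before protocol overhead above the link layer.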
MSI was the first to introduce USB3.1 on motherboards earlier this year, demoing the Krait white/black boards at CES 2015. Other manufacturers have moved to offer firmware updates on existing platforms for “unlocking” USB3.1. ASUS is among these, shipping its X99-S motherboards with a natively-supported USB3.1 add-on card.
“Tessellation” isn't an entirely new technology – Epic and Crytek have been talking about it for years, alongside nVidia's own pushes – but it's been getting more visibility in modern games. GTA V, for instance, has a special tessellation toggle that can be tweaked for performance. Like most settings found in a graphics menu, the general understanding of tessellation is nebulous at best; it's one of those settings that, perhaps like ambient occlusion or anti-aliasing, has a loose tool-tip of a definition, but doesn't get broken down with much depth.
As part of our efforts to expand our game graphics settings glossary, we sat down with Epic Games Senior Technical Artist Alan Willard, a 17-year veteran of the company. Willard provided a basic overview of tessellation, how it is used in game graphics, GPU load and performance, and implementation techniques.
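One way to appreciate why tessellation settings matter for performance is to look at how quickly subdivision multiplies geometry. As a rough approximation (actual counts depend on the tessellation mode and per-edge factors the engine uses), uniformly tessellating a triangle at integer level n yields about n² sub-triangles; the base mesh size below is a hypothetical illustration value:

```python
# Rough illustration of how tessellation multiplies geometry.
# Uniform tessellation of a triangle at integer level n splits it into
# roughly n^2 sub-triangles, so GPU triangle load grows quadratically
# with the tessellation level.

def tessellated_triangles(base_triangles, level):
    """Approximate triangle count after uniform tessellation at `level`."""
    return base_triangles * level ** 2

MESH = 10_000  # hypothetical base mesh triangle count
for level in (1, 4, 16, 64):
    count = tessellated_triangles(MESH, level)
    print(f"level {level:>2}: ~{count:,} triangles")
```

That quadratic growth is why a tessellation slider can swing framerates so dramatically: cranking the level from 4 to 16 multiplies the triangle workload by 16x, not 4x.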
NVidia's latest addition to the Titan family diverges from its predecessors' market objectives. Previous Titan cards were fully double-precision enabled, ensuring marketability as affordable production and simulation cards that, by nature, also served reasonably as gaming cards. Because double-precision is detrimental to gaming performance, the original Titan and current Titan Z can be set to “single-precision mode” to improve gaming performance, but they aren't targeted as the “best gaming video card” out there. The Titan X is; in fact, that's exactly what nVidia calls it – the best single-GPU card on the market. The selection of these words is intentional, ruling out dual-GPU single cards (like the 295X2 or 690) and multi-card configurations (like what we're testing today).
Because the Titan X is heavily marketed as a gaming solution – something reinforced by a double-precision rate of just 1/32 its single-precision rate – we decided to perform a value comparison between the Titan X and 2x GTX 980s in SLI. The SLI configuration offers indisputably powerful raw computational output, but has a smaller memory capacity than the Titan X's 12GB single-GPU pool.
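For context on what that 1/32 ratio means in practice, here's a back-of-envelope theoretical throughput calculation for the Titan X's GM200 GPU, using its 3072 CUDA cores and ~1000MHz base clock (boost clocks would raise both figures proportionally):

```python
# Back-of-envelope theoretical throughput for GM200 (Titan X).
# SP FLOPS = cores * clock * 2 (one fused multiply-add per core per cycle).
# Maxwell's GM200 runs double-precision at 1/32 the single-precision rate.

CUDA_CORES = 3072
BASE_CLOCK_GHZ = 1.0   # ~1000MHz base; boost clocks raise these figures
DP_RATIO = 1 / 32

sp_tflops = CUDA_CORES * BASE_CLOCK_GHZ * 2 / 1000
dp_tflops = sp_tflops * DP_RATIO

print(f"single-precision: ~{sp_tflops:.2f} TFLOPS")
print(f"double-precision: ~{dp_tflops:.2f} TFLOPS")
```

Roughly 6.1 TFLOPS of single-precision compute collapses to under 0.2 TFLOPS in double-precision – which is exactly why the Titan X is positioned as a gaming card rather than a budget simulation card like its predecessors.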