Hardware Guides

Thermal testing for cases, coolers, CPUs, and GPUs requires very careful attention to methodology and test execution. Without proper controls for ambient or other variables within a lab/room environment, it's exceedingly easy for tests to vary to a degree that effectively invalidates the results. Cases and coolers are often fighting over one degree (Celsius) or less of separation, so having strict tolerances for ambient and active measurements of diodes and air at intake/exhaust helps ensure accurate data.

We recently put our methodology to the test by borrowing time on a local thermal chamber – a controlled environment – and checking our delta measurements against it. GN's thermal testing is conducted in a lab on an open-loop HVAC system; we measure ambient constantly (second-to-second) with thermocouples, then subtract those readings from diode readings to create a delta value. For the thermal chamber, we performed identical methodology within a more tightly controlled environment. The goal was to determine if the delta value (within the chamber) paralleled the delta value achieved in our own (open air) labs, within reasonable margin of error; if so, we'd know our testing is fully accurate and properly accounts for ambient and other variables.
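The ambient-subtraction step described above can be sketched in a few lines of Python. The sensor readings here are hypothetical placeholders, not GN's measured values; real logging runs second-to-second over a full test pass.

```python
# Hypothetical paired samples from one logging window (degrees Celsius).
diode_readings = [52.1, 52.4, 52.3, 52.6]    # CPU diode temperatures
ambient_readings = [20.9, 21.0, 20.9, 21.1]  # thermocouple ambient temperatures

# Subtract each ambient sample from its paired diode sample, then average
# to produce a single delta-over-ambient value for the test pass.
deltas = [d - a for d, a in zip(diode_readings, ambient_readings)]
delta_over_ambient = sum(deltas) / len(deltas)
print(round(delta_over_ambient, 2))  # per-pass delta over ambient, in C
```

Because the delta is computed against constantly logged ambient rather than an assumed room temperature, a chamber run and an open-lab run can be compared directly even if their absolute temperatures differ.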

The chamber used has climate control functions that include temperature settings. We set the chamber to match our normal lab temps (20C), then checked carefully for where the intake and exhaust are set up within the chamber. This particular unit has slow, steady intake from the top that helps circulate air by pushing it down to an exhaust vent at the bottom. It'd just turn into an oven, otherwise, as the system's rising temps would increase ambient. This still happens to some degree, but a control module on the thermal chamber helps adjust and regulate the 20C target as the internal temperature demands. It's the control module which is the most expensive, too; our chaperone told us that the units cost upwards of $10,000 – and that's for a 'budget-friendly' approach.


nVidia and AMD had a bit of a back-and-forth with day-one Vulkan announcements, with nVidia taking a few shots at AMD's beta driver launch. “OpenGL Next” became Vulkan, which consumed parts of AMD's Mantle API in its move toward accommodating developers with lower-level access to hardware. The phrase “closer to the metal” applies to Mantle, Vulkan, and DirectX 12 in similar capacities; these APIs bypass overhead created by DirectX 11 and more directly tune for GPU hardware, off-loading parallelized tasks from the CPU to the GPU. In a previous interview with Star Citizen's Chris Roberts, we talked about some of the developer side of Vulkan & DirectX 12 programming, learning that it's not as easy as just throwing an 'API call' switch.

For this benchmark, we ran Vulkan vs. DirectX 11 (D3D11) benchmarks in the Talos Principle to determine which API is presently 'best.' There's a giant disclaimer here, though, and we've dedicated an entire section of the article to that (see: “Read This First!”). Testing used an R9 390X, GTX 980 Ti, and i7-5930K; we hope to add low-end CPUs to determine the true advantage of low-level APIs, but are waiting until the driver set and software further iterate on Vulkan integration.

Monitors have undergone a revolution over the past few years. 1080p lost its luster as 1440p emerged, and later 4K – which still hasn't quite caught on – and that's to say nothing of the frequency battles. 144Hz has subsumed 120Hz, both sort-of “premium” frequencies, and adaptive synchronization technologies G-Sync and FreeSync have further complicated the monitor buying argument.

But we think the most interesting recent trend has to do with aspect ratios. Modern displays don't really get called “widescreen” anymore; there's no point – they're all wide. Today, we've got “UltraWides,” much like we've got “Ultra HD” – and whatever else has the U-word thrown in front of it – and they're no gimmick. UltraWide displays run a 21:9 aspect ratio (21 units across for every 9 units down, think of it like run/rise), a noticeable difference from the 16:9 of normal widescreens. These UltraWide displays afford greater production capabilities by effectively serving the role of two side-by-side displays, just with no bezel; they also offer greater desk compatibility, more easily centered atop smaller real-estate.
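The 21:9 label is marketing shorthand; reducing common resolutions to lowest terms shows that most UltraWide panels are actually a bit wider. A minimal check, using only Python's standard library:

```python
from math import gcd

# Reduce a resolution to its simplest aspect ratio. The resolutions below
# are common UltraWide and widescreen modes, including the 3440x1440
# panel benchmarked in the article.
def aspect_ratio(width, height):
    g = gcd(width, height)
    return width // g, height // g

print(aspect_ratio(2560, 1080))  # (64, 27) -- i.e. ~21.3:9
print(aspect_ratio(3440, 1440))  # (43, 18) -- i.e. ~21.5:9
print(aspect_ratio(1920, 1080))  # (16, 9)  -- standard widescreen
```

So a 3440x1440 display is closer to 21.5:9 than a literal 21:9, but the rounded figure has stuck as the category name.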

For gaming, the UltraWide argument is two-fold: Greater immersion with a wider, more “full” space, and greater peripheral vision in games which may benefit from a wider field of view. Increased pixel throughput more heavily saturates the pipeline, of course, which means that standard 1080p and 1440p benchmarks won't reflect the true video card requirements of a 3440x1440 UltraWide display. Today, we're benchmarking graphics card FPS on a 3440x1440 Acer Predator 34” UltraWide monitor. The UltraWide GPU performance benchmark includes the GTX 980 Ti, 980, 970, and 960 from nVidia and the R9 390X, 380X, 290X, and 285 from AMD.

We've got to give it to marketing – “Wraith” is a good name; certainly better than “Banshee,” which is what the previous AMD cooler should have been named for its shrill wailing. The Wraith cooler substantially improves the noise-to-thermals ratio for AMD's stock units, and is a cooler we hope to see shipping with future Zen products.

At its max 2900 RPM, the Wraith produces thermals that are effectively identical to what the old cooler accomplishes at ~5500 RPM (see below chart). Running the old cooler at a comparable 2900 RPM results in temperatures ~14.3% warmer than the Wraith's. This is all noted in our thermal review of the Wraith. What we didn't note, however, was the dBA / noise output. In this video, we compare the noise levels of AMD's two stock coolers for the FX-8370 CPU – the Wraith and the 'old' unit.

The GTX 980's entry into laptops – without suffixed “M” demarcation – provided a look at the world of true desktop graphics as integrated on mobile devices. We reviewed MSI's GT72S Dominator Pro G ($2760) with its GTX 980, conducting additional overclocking tests to determine just how far the desktop part could be pushed when crammed into a laptop.

Turns out, it was pretty far. And we're revisiting the subject with Intel's new i7-6820HK and the GTX 970M. This benchmark looks at just how far a laptop CPU and GPU can be overclocked, then runs game FPS and Adobe tests to determine if OCing is worth it. We use The Witcher 3, DiRT, GTA V, Shadow of Mordor, and Metro for FPS tests, then run trace and automated testing for Photoshop and video editing software. A CyberPower Fangbook 4 SX7-300 was used for the benchmark, which is outfitted with the 6820HK unlocked CPU.

Beginner PC Building Tips & Common Mistakes

Published January 24, 2016 at 2:43 pm

It's been snowing here lately, which means that the entire state has shut down from its 1” of cumulative death-powder. While waiting for one of the thermal benches to warm-up, we figured a quick, informal discussion on basic PC building would be a worthy snow-day topic.

GN test technician Mike Gaglione handles most of our system assembly and case testing, making him an ideal candidate to speak to out-of-mind system install tips and common beginner oversights. We talk about motherboard standoffs, memory slotting, PCI-e slot assignment for multi-GPU setups, cable management tips, and more.

Our last head-to-head GPU comparison benchmarked the performance of a single GTX 980 Ti versus two GTX 970s in SLI. Following some astute reader suggestions, we've acquired a PowerColor Devil 13 dual-core R9 390 – two GPUs on one card – to test as a CrossFire stand-in against SLI GTX 970s. Performance analysis is accompanied by power draw and thermal tests, though a proper, full review on the Devil 13 card will follow this content in short order.

For today, the focus is on this head-to-head comparison. FPS benchmarks look at performance of 2x CrossFire R9 390s vs. 2x SLI GTX 970s, including supporting data from a GTX 980 Ti, 980, and R9 390X. We'll also work toward answering the question of whether CrossFire and SLI are worth it in this particular scenario, as opposed to investing in a single, more expensive GPU.

Why We Won't Be Day-One VR Adopters

Published January 18, 2016 at 1:30 pm

Ivan Sutherland's “Sword of Damocles” head-mounted display lurched above its user as a spider above its prey; the contraption, like most technology of its era, was room-sized. The Sword of Damocles wasn't meant to be a user-accessible VR solution. It produced primitive wireframes of a room's interior and was strictly observational, demonstrated in awkward photos with the wearer's hands neatly clasped behind his back. This was Ground Zero for VR.

Sutherland later joined David Evans to build the University of Utah's Computer Science and Computer Graphics divisions, responsible for students who'd later create the world's first computer-animated 3D graphics. Through Sutherland and Evans – and their students – the foundation for Adobe, Pixar, and Silicon Graphics (SGI) was set, later producing companies like the modern nVidia. All this history of VR is recapped more thoroughly in our “History of Virtual Reality” article.

Oculus VR and Valve are makers of the modern-day HMD incarnations. Billions of dollars are backing these new ventures and, for the first time in history, viable VR solutions don't cost tens to hundreds of thousands of dollars. They're also not military-owned, another common theme of previous virtual reality attempts.

Our team has spent a considerable amount of time in virtual reality demos. The technology is an impressive fusion of display advancements, frametime pacing optimization, input latency management, and IR scanning. Just the display tech alone is nearly unrivaled, the Rift packing 2160x1200 pixels into a space smaller than a phone screen. Screen Door Effect issues have been largely resolved or circumvented on each of the major two VR solutions, and timewarp has been navigated with clever GPU processing techniques by both AMD and nVidia. Everything's lining-up to be a serious push into virtual reality and, this time, there's enough money behind the concept that it's not another “3D glasses” fad. Probably, anyway.

But I don't think VR is ready for day-one adoption by the general gaming audience. Impressive – yes; here to stay – yes. But not ready for gamers. The Vive and Rift both experience similar versions of the same problems: Hardware requirements and prices that rival more affordable displays, logistical and use case limitations, and the industry's myopic understanding of game design.

HTC's Vive and Oculus VR's Rift are the two big players that we're focusing on today.

CES serves as a means to introduce some of the year's biggest product announcements. At last week's show, we saw new GPU architectures, virtual reality 'jetpacks,' Star Wars Destroyer case mods, and a dozen or more cases. Although by no means a definitive listing of all the year's cases, CES 2016 offers a look at what to expect from the annual computer hardware and technology trends and announcements. In the world of cases, the trend seems to be power supply shrouds.

This round-up lists the best gaming cases of 2016, including products from NZXT, Corsair, In-Win, Thermaltake, Phanteks, EVGA, and SilverStone. We look at the top PC cases from $50 to $400+, all shown at CES 2016, to best span all major budget ranges for PC builds.

Scalable multi-card configurations from both nVidia and AMD have improved in their performance over the years, with both companies investing additional resources in driver optimizations for multi-card users. The value of SLI or CrossFire has always been debatable, particularly for day-one system builders (rather than someone upgrading), but is worth investigating further. With all the year's newest titles – and some mainstays with well-tested performance – we did that investigation, specifically comparing a single 980 Ti against 2x 970s in SLI, a 980, a single 970, and an R9 390X as an AMD baseline.

Today's GTX 970 SLI vs. single 980 Ti test benchmarks average FPS and 1% / 0.1% low performance, presenting data in a few different chart types: Usual AVG, 1% low, & 0.1% low head-to-head performance; delta value (percent advantage) between the 970s in SLI and 980 Ti; delta value (percent gain) between the 2x 970s and a single GTX 970.
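The delta values in those charts are simple percent differences against a baseline card. A sketch of the calculation, with placeholder FPS numbers rather than GN's measured results:

```python
# Percent advantage ("delta value") of a test configuration over a baseline.
def percent_delta(test_fps, baseline_fps):
    return (test_fps - baseline_fps) / baseline_fps * 100

sli_970_avg = 88.0       # hypothetical 2x GTX 970 SLI average FPS
single_980ti_avg = 80.0  # hypothetical GTX 980 Ti average FPS
single_970_avg = 55.0    # hypothetical single GTX 970 average FPS

print(percent_delta(sli_970_avg, single_980ti_avg))  # ~10% -- SLI vs. 980 Ti
print(percent_delta(sli_970_avg, single_970_avg))    # ~60% -- SLI scaling gain
```

The same formula applies to the 1% and 0.1% low figures, which is where multi-card setups most often fall behind a single, faster GPU.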

