Hardware Guides

Part 1 of our interview with AMD RTG SVP & Chief Architect Raja Koduri went live earlier this week, covering shader intrinsic functions that eliminate abstraction layers between hardware and software. In this second and final part of our discussion, we continue with hardware advancements and the limitations of Moore's law, the burden on software to optimize performance to match hardware capabilities, and GPUOpen.

The conversation started with GPUOpen and new, low-level APIs – DirectX 12 and Vulkan, mainly – which were a key point of discussion during our recent Battlefield 1 benchmark. Koduri emphasized that these low-overhead APIs kick-started an internal effort to open the black box that is the GPU, and begin the process of removing “black magic” (read: abstraction layers) from the game-to-GPU pipeline. The effort was spearheaded by Mantle, now subsumed by Vulkan, and has continued through GPUOpen.

We've already looked extensively at the GTX 1060 3GB vs. GTX 1060 6GB buying options and covered the RX 480 4GB vs. 8GB options, but we haven't yet tested the 3GB & 4GB SKUs head-to-head. In this content, we're using the latest drivers to benchmark the GTX 1060 3GB against the RX 480 4GB and determine which offers the best framerate for the price.

Each of the lower-VRAM SKUs has a few other tweaks in addition to its memory capacity reduction. The GTX 1060 3GB, for instance, also eliminates one of its SMs; that kills 128 CUDA cores and 8 TMUs, dragging the card down from 1280 cores / 80 TMUs to 1152 cores / 72 TMUs. AMD's RX 480 4GB, meanwhile, has a lower minimum memory speed specification to assist in cost management: the RX 480 4GB has a minimum memory speed of ~1750MHz (~7Gbps effective), whereas the RX 480 8GB runs 2000MHz (8Gbps effective).
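For a sense of what that memory-speed gap means, here's a quick back-of-envelope bandwidth calculation. It's a minimal sketch in C, assuming the 256-bit bus that both RX 480 SKUs share and GDDR5's four transfers per clock:

```c
#include <stdio.h>

/* Back-of-envelope memory bandwidth for the RX 480 4GB vs. 8GB.
   Assumes the 256-bit bus shared by both SKUs; GDDR5 moves four
   bits per pin per clock, so effective rate = memory clock * 4. */
int main(void)
{
    const double bus_width_bits = 256.0;
    const double clk_4gb_mhz = 1750.0;   /* 4GB minimum spec */
    const double clk_8gb_mhz = 2000.0;   /* 8GB spec */

    double eff_4gb_gbps = clk_4gb_mhz * 4.0 / 1000.0;   /* ~7 Gbps */
    double eff_8gb_gbps = clk_8gb_mhz * 4.0 / 1000.0;   /*  8 Gbps */

    /* GB/s = per-pin Gbps * bus width / 8 bits per byte */
    printf("RX 480 4GB: %.0f Gbps -> %.0f GB/s\n",
           eff_4gb_gbps, eff_4gb_gbps * bus_width_bits / 8.0);
    printf("RX 480 8GB: %.0f Gbps -> %.0f GB/s\n",
           eff_8gb_gbps, eff_8gb_gbps * bus_width_bits / 8.0);
    return 0;
}
```

That works out to roughly 224GB/s for the 4GB card against 256GB/s for the 8GB card, a ~12.5% bandwidth cut before core clocks even enter the picture.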

NZXT's new Kraken X42, X52, and X62 liquid coolers were announced today, all using the new Asetek Gen5 pump with substantial custom modifications. The most direct Gen5 competition comes from Corsair, maker of the H115i and H100iV2, each priced to compete with the Kraken X42 ($130) and X52. The Corsair units, however, use an unmodified Asetek platform from top to bottom, aside from a couple of Corsair fans. NZXT's newest endeavor, by contrast, uses components dictated by NZXT, including a custom (and fairly complex) PCB for fan speed, pump speed, and RGB control, planted under a custom pump plate with an infinity-mirror finish. NZXT went so far as to demand a double-elbow barb for pose-able tubes, rather than the out-the-top setup of the stock Asetek platform – that's some fastidious design.

As for how we know all of this, it's because we've already disassembled a unit. We decided to dismantle one of our test-complete models to learn about its internals, since we're still waiting for the X52 and X62 models to be review-ready. We've got a few more tests to run.

Before getting to the tear-down, let's run through the specs, price, and availability of NZXT's new Kraken X42, X52, and X62 closed-loop liquid coolers. 

Abstraction layers that sit between game code and hardware create transactional overhead that worsens software performance on CPUs and GPUs. This has been a major discussion point as DirectX 12 and Vulkan have rolled out to the market, particularly with DOOM's successful implementation. Long-standing API incumbent DirectX 11 sits unmoving between the game engine and the hardware, preventing developers from leveraging specific system resources to efficiently execute game functions or rendering.

In contrast to this, it is possible, for example, to optimize tessellation performance by making explicit changes to how its execution is handled on Pascal, Polaris, Maxwell, or Hawaii architectures. A developer could accelerate performance by directly commanding the GPU to execute code on a reserved set of compute units, or could leverage asynchronous shaders to process render tasks without getting "stuck" behind other instructions in the pipeline. This can't be done with higher-level APIs like DirectX 11, but DirectX 12 and Vulkan both allow this lower-level hardware access; you may have seen it referred to as "direct to metal" or "programming to the metal." These phrases reference that explicit hardware access, and have historically described what Xbox and PlayStation consoles enable for developers. It wasn't until recently that this level of support came to PC.
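To make the asynchronous shader point concrete, below is a minimal C sketch (our illustration, not code from AMD or the interview) of how an engine targeting Vulkan might look for a dedicated compute queue family, which is what lets compute work run alongside graphics rather than queuing behind it:

```c
#include <vulkan/vulkan.h>
#include <stdint.h>

/* Find a compute-capable queue family that is NOT the graphics family,
   so compute work can run alongside render work ("async compute").
   Assumes `phys` is a VkPhysicalDevice picked during instance setup. */
uint32_t find_async_compute_family(VkPhysicalDevice phys)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(phys, &count, NULL);

    VkQueueFamilyProperties props[32];
    if (count > 32)
        count = 32;
    vkGetPhysicalDeviceQueueFamilyProperties(phys, &count, props);

    for (uint32_t i = 0; i < count; i++) {
        VkQueueFlags flags = props[i].queueFlags;
        if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT))
            return i;   /* dedicated compute family found */
    }
    return UINT32_MAX;  /* none: fall back to sharing the graphics queue */
}
```

Command buffers submitted to a queue from that family can execute while the graphics queue is busy, which is exactly the kind of scheduling DirectX 11 never exposed.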

In our recent return trip to California (see also: Corsair validation lab tour), we visited AMD's offices to discuss shader intrinsic functions and performance acceleration on GPUs by leveraging low-level APIs.

We toured Corsair's new offices about a year ago, where we briefly posted about some of the validation facilities and the then-new logo. Now, with the offices fully populated, we're revisiting to talk wind tunnels, thermal chambers, and test vehicles for CPU coolers and fans. Corsair Thermal Engineer Bobby Kinstle walks us through the test processes for determining on-box specs, explaining hundreds of thousands of dollars worth of validation equipment along the way.

This relates to some of our previous content, where we got time with a local thermal chamber to validate our own methodology. You might also be interested to learn about when and why we use delta values for cooler efficacy measurements, and why we sometimes go with straight diode temperatures (like thermal limits on GPUs).
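As a refresher on the delta method, the idea reduces to one subtraction; here's a trivial sketch with invented numbers:

```c
#include <stdio.h>

/* Delta values remove room-temperature drift between test runs,
   so coolers measured on different days remain comparable.
   The readings below are invented for illustration. */
int main(void)
{
    double t_diode   = 52.3;   /* CPU diode under load, deg C */
    double t_ambient = 21.1;   /* logged room/intake temp, deg C */

    double delta = t_diode - t_ambient;   /* efficacy, ambient-independent */
    printf("dT over ambient: %.1f C\n", delta);
    return 0;
}
```

Straight diode temperatures matter instead when an absolute threshold is the question; a GPU's thermal limit doesn't care what the room was doing.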

Video here (posting remotely -- can't embed): https://www.youtube.com/watch?v=Mf1uI2-I05o

We've still got a few content pieces left over from our recent tour of LA-based hardware manufacturers. One of those pieces, filmed with no notice and somewhat on a whim, is our tear-down of an EVGA GTX 1080 Classified video card. EVGA's Jacob Freeman had one available and was game for a live, no-preparation tear-down of the card on camera.

This is the most meticulously built GTX 1080 we have yet torn down to the bones. The card has an intensely over-built VRM with high-quality inductors and power stages, using doublers to achieve its 14-phase power design (7x2). An additional three phases are set aside for memory, cooled in tandem with the core VRM, GPU, and VRAM by an ACX 3.0 cooler. The PCB and cooler meet through a set of screws, each anchored with adhesive (preventing direct contact between screw and PCB – unnecessary, but a nice touch), with the faceplate and accessories mounted via Allen-keyed screws.

It's an exceptionally easy card to disassemble. The unit is rated to draw 245W through the board (30W more than the 215W draw of the GTX 1080 Hybrid), theoretically targeting high sustained overclocks with its master/slave power target boost. It's not news that Pascal devices seem to cap their maximum frequency at around the 2050-2100MHz range, but there are still merits to an over-built VRM. One is a greater spread of heat across the cooler's area; another is lower efficiency loss through heat or low-quality phases. With the Classified, it's also a prime target for modification with something like the EK Predator 280 or open loop cooling. Easy disassembly and high performance match well with liquid.
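As a rough illustration of why an over-built VRM spreads load and heat, consider a per-phase current estimate. The board rating and 7x2 doubling come from the card, but the core voltage and the core's share of board power below are our assumptions, not EVGA specs:

```c
#include <stdio.h>

/* Rough per-phase load estimate for the doubled 14-phase core VRM.
   board_power_w and the 7x2 phase count come from the card; core_share
   and vcore are assumed values for illustration only. */
int main(void)
{
    const double board_power_w = 245.0;   /* EVGA board rating */
    const double core_share    = 0.80;    /* assumed fraction feeding the core */
    const double vcore         = 1.05;    /* assumed load voltage, volts */
    const int    phases        = 7 * 2;   /* 7 controller phases, doubled */

    double core_current = board_power_w * core_share / vcore;  /* ~187 A */
    printf("~%.0f A core current, ~%.1f A per phase across %d phases\n",
           core_current, core_current / phases, phases);
    return 0;
}
```

Around 13A per phase leaves each power stage loafing well below its rating, which is where the heat-spread and efficiency benefits come from.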

Buildzoid returns this week to analyze the PCB and VRM of Gigabyte's GTX 1080 Xtreme Water Force, providing new insight into the card's overclocking capabilities. We showed a maximum overclock of 2151.5MHz on the card, but its stable OC landed at just 2100.5MHz. Compared to the FTW Hybrid (2151.5MHz sustained) and MSI Sea Hawk 1080 (2050MHz sustained), the Xtreme Water Force's overkill VRM & cooling land it between the two competitors.

But we talk about all of that in the review; today, we're focused on the PCB and VRM exclusively.

The card uses a 12-phase core voltage VRM with a 2-phase memory voltage VRM, relying on Fairchild Semiconductor and uPI Micro for most of the other components. Learn more here:

Best 1440p Gaming Monitors of 2016 Round-Up

Published October 02, 2016 at 8:30 am

For years, the de facto standard for PC gaming and consoles was 1920x1080 – even if consoles occasionally struggled to reach it. 1080p monitors have been the practical choice for gaming for years now, but the viability of 1440p-ready hardware in mid-range gaming PCs means the market for 1440p monitors has become more competitive. The 4K monitor market is also getting fairly competitive, but mid-range (and even higher-end) GPUs still struggle to run many modern games at 4K.

While 4K becomes more attainable for the average consumer, 2560x1440 monitors fit the needs of gamers who want higher resolution than 1080p while still rendering – and showing – 120+ FPS. With this in mind, we've created this buyer's guide for the best 1440p gaming monitors presently on the market, accounting for price, refresh rate, and panel type. Since the primary use case for these monitors is gaming, we have primarily included G-Sync (covered here) and FreeSync (covered here and here) compatible monitors for users with nVidia and AMD GPUs, respectively.
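The arithmetic behind that middle ground is straightforward; here's a quick sketch:

```c
#include <stdio.h>

/* Pixel-count math behind the 1440p argument: the GPU shades ~78%
   more pixels per frame than at 1080p, and a 120+ FPS target
   multiplies that throughput requirement accordingly. */
int main(void)
{
    const double px_1080p = 1920.0 * 1080.0;   /* 2,073,600 pixels */
    const double px_1440p = 2560.0 * 1440.0;   /* 3,686,400 pixels */

    printf("1440p/1080p pixel ratio: %.2fx\n", px_1440p / px_1080p);
    printf("1440p at 120FPS: %.0fM pixels/s\n", px_1440p * 120.0 / 1e6);
    return 0;
}
```

That 1.78x pixel load is demanding but within reach of current mid-to-high-end GPUs, where 4K's 4x-over-1080p load still isn't.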

Liquid cooling has become infinitely more accessible with plug-and-play AIO solutions, but those lack some of the efficacy and all of the aesthetics. Open loop liquid cooling is alive and well in the enthusiast market; it's a niche of a niche, and one that's satisfied by few manufacturers. We had a chance to stop over at Thermaltake's offices while making the City of Industry circuit last week, and used some of that time to film a brief tutorial on hard tube bending.

It felt like filming a cooking show, at times. The format was similar, but it worked well for this process. Open loop liquid cooling is done with either soft tubing or hard tubing, the latter of which must be heated (with a heat gun) to make necessary bends within the system. Soft tubing is more easily manipulated and is as “plug and play” as it gets with an open loop, though “plug and play” isn't really desirable with open loops. Once you're this deep in cooling, best to go all the way.

PETG hard tubing is more leak-resistant by nature of its mounting: hard tubes are less likely to slip off their barbs with age or transport (fluid between a soft tube and its mounting point can lubricate the tube, causing slippage and slow leakage). The downside, as with the rest of open loop cooling, is the time requirement and cost increase. Granted, compared to the rest of the loop, hard tubing cost can start to feel negligible.

We might soon be building a wet bench for open loop liquid cooling, as we're starting to receive GPUs with water blocks for testing. Today, we've got a brief hard tube bending tutorial with Thermaltake's Thermal Mike to lead us into our future open loop content. Take a look at that below:

Gigabyte GTX 1080 Xtreme Water Force Tear-Down

Published September 20, 2016 at 9:00 am

As we board planes for our impending trip to Southern California (office tours upcoming), we've just finalized our tear-down coverage of the Gigabyte GTX 1080 Xtreme Water Force. The card makes use of a similar cooling philosophy as the EVGA GTX 1080 FTW Hybrid, which we recently tore down and reviewed vs. the Corsair Hydro GFX.

Gigabyte's using a closed-loop liquid cooler to deal with the heat generation on the GP104-400 GPU, but isn't taking the "hybrid" approach of its competitors. There's no VRM/VRAM blower fan on this unit; instead, the power and memory components are cooled by additional copper and aluminum heatsinks, bridged by a heatpipe. The copper plate (mounted atop the VRAM) transfers its heat to the coldplate of what we believe to be a Cooler Master CLC, which then sinks everything for dissipation by the 120mm radiator.

