AMD was first to market with Doom-ready drivers, but exhibited exceptionally poor performance with a few of its cards. The R9 390X was one of those, being outperformed massively (~40%) by the GTX 970 and nearly matched by the GTX 960 at 1080p. If it's not apparent from the price difference between the two, that's unacceptable; the R9 390X's hardware should effortlessly outperform the GTX 960, a budget-class card, and it just wasn't happening. Shortly after the game launched and AMD posted its initial driver set (16.5.2), a hotfix (16.5.2.1) was released to resolve performance issues on R9 390-series cards.

Between our GTX 1080 Hybrid experiment and our current travel to Asia, we had a moment to re-benchmark DOOM using the latest drivers. The good news: the R9 390X's performance has improved substantially – by about 26% in some tests. Other cards were unaffected by this hotfix (though we did test them), so don't expect a performance gain out of your 380X, Fury X, or similar non-390-series device.

Note: These charts now include the GTX 1080 and its overclocked performance.

GN's embarking on its most ambitious trip yet: Taipei, then Shenzhen, China and neighboring countries, then back to Taipei. There are many reasons we're doing the Asia tour, but it's all rooted in one of the world's largest consumer electronics shows. Computex rivals CES (hosted annually in Las Vegas) in size, and arguably has a bigger desktop hardware / component presence. This year, we're attending – it should be a good show.

Here's a quick recap of what PC hardware to expect at Computex 2016.

All the pyrotechnics in the world couldn't match the gasconade with which GPU & CPU vendors announce their new architectures. You'd halfway expect this promulgation of multipliers and gains and reductions (but only where smaller is better) to mark the end-times for humankind; surely, if some device were crafted to the standards by which it were announced, The Aliens would descend upon us.

But, every now and then, those bombastic announcements have something behind them – there's substance, and the potential for a genuinely exciting piece of technology. nVidia's consumer-grade Pascal architecture debuts with GP104, the first of its non-accelerator GPUs built on TSMC's new 16nm FinFET process node. That GPU lands on the GTX 1080 Founders Edition video card first, and will later be disseminated through AIB partners with custom cooling and PCB solutions. If the Founders Edition nomenclature confuses you, don't let it – it's a replacement for nVidia's old “Reference” card naming, as we described here.

Anticipation is high for GP104's improvements over Maxwell, particularly in asynchronous compute and command queuing. As the industry pushes further into DirectX 12 and Vulkan, compute preemption and dynamic task management become the gatekeepers to performance gains in these new APIs. That shift also pushes LDA & AFR out the door as frames grow more interdependent through post-FX, with implications for multi-card configurations: expect less optimization support for them going forward.
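For a sense of what this looks like at the API level, here's a minimal sketch – not code from nVidia or from our test suite – that uses Vulkan's queue-family query to check whether a GPU exposes compute-capable queues alongside or apart from its graphics queue. A separate compute-capable family is the plumbing that asynchronous compute scheduling hangs off of.

```c
/* Hypothetical illustration: list each GPU's queue families and whether they
 * support graphics and/or compute. Requires the Vulkan SDK; link with -lvulkan. */
#include <stdio.h>
#include <stdlib.h>
#include <vulkan/vulkan.h>

int main(void) {
    VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                              .apiVersion = VK_API_VERSION_1_0 };
    VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                 .pApplicationInfo = &app };
    VkInstance instance;
    if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "No Vulkan driver available.\n");
        return 1;
    }

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, NULL);
    VkPhysicalDevice *gpus = malloc(gpuCount * sizeof(*gpus));
    vkEnumeratePhysicalDevices(instance, &gpuCount, gpus);

    for (uint32_t g = 0; g < gpuCount; ++g) {
        uint32_t famCount = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(gpus[g], &famCount, NULL);
        VkQueueFamilyProperties *fams = malloc(famCount * sizeof(*fams));
        vkGetPhysicalDeviceQueueFamilyProperties(gpus[g], &famCount, fams);

        for (uint32_t i = 0; i < famCount; ++i) {
            int graphics = (fams[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0;
            int compute  = (fams[i].queueFlags & VK_QUEUE_COMPUTE_BIT)  != 0;
            printf("GPU %u, family %u: graphics=%d compute=%d queues=%u\n",
                   (unsigned)g, (unsigned)i, graphics, compute,
                   (unsigned)fams[i].queueCount);
        }
        free(fams);
    }
    free(gpus);
    vkDestroyInstance(instance, NULL);
    return 0;
}
```

Whether work submitted to those queues actually overlaps – and how gracefully graphics work gets preempted when it doesn't – is where the architectural differences between Maxwell, Pascal, and GCN show up.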

Our nVidia GeForce GTX 1080 Founders Edition review benchmarks the card's FPS performance, thermals, noise levels, and overclocking vs. the 980 Ti, 980, Fury X, and 390X. This nearly-10,000-word review lays out the architecture from the SM level, talks asynchronous compute changes in Pascal / GTX 1080, provides a quick “how to” primer for overclocking the GTX 1080, and talks simultaneous multi-projection. We've also got new thermal throttle analysis, and we're excited to show it.

The Founders Edition version of the GTX 1080 costs $700, though MSRP for AIBs starts at $600. We expect to see that market fill-in over the next few months. Public availability begins on May 27.

First, the embedded video review and specs table:

Following our GTX 1080 coverage of DOOM – and preceding the eventual review – we took the time to run GPU benchmarks of id Software's DOOM. The new FPS boasts high-fidelity visuals and fast-paced, Quake-era gameplay mechanics. Histrionic explosions dot Doom's hellscape, outdone only by its omnipresent red tint and magma flows. The game is heavy on particle effects and post-processing, performing much of its crunching toward the back of the GPU pipeline (after geometry and rasterization).

Geometry isn't particularly complex, with the game's indoor settings composed almost entirely of labyrinthine corridors and rooms. Framerate fluctuates heavily: the more lighting effects and particle simulation in the camera frustum, the greater the swings in FPS as players emerge into or depart from lava-filled chambers and other areas of post-FX interest.

In this Doom graphics card benchmark, we test the framerate (FPS) of various GPUs in the new Doom “4” game, including the GTX 980 Ti, 980, 970, Fury X, 390X, 380X, and more. We'll briefly define game graphics settings first; game graphics definitions include brief discussion on TSSAA, directional occlusion quality, shadows, and more.

Note: Doom will soon add support for Vulkan. It's not here yet, but we've been told to expect Vulkan support within a few weeks of launch. All current tests were executed with OpenGL. We will revisit for Vulkan once the API is enabled.

We spoke exclusively with the Creative Assembly team about its game engine optimization for the upcoming Total War: Warhammer. Major moves to optimize and refactor the game engine include DirectX 12 integration, better CPU thread management (decoupling the logic and render threads), and GPU-assigned processing to lighten the CPU load.

The interview with Al Bickham, Studio Communications Manager at Creative Assembly, can be found in its entirety below. We hope to soon visit the topic of DirectX 12 support within the Total War: Warhammer engine.

PAX East 2016 has a strong hardware presence, and the number of zero-hour announcements backs that up. MSI, Corsair, AMD (a first-time exhibitor at East), nVidia, Intel, Cooler Master, Kingston, and a handful of other hardware vendors have all made an appearance at this year's show, ever flanked by gaming giants.

Today's initial news coverage focuses on the MSI Aegis desktop computer, Corsair's updated K70 & K65 keyboards, and the AMD Wraith cooler's arrival to lower-end SKUs. Find out more in the video below:

The AMD Athlon X4 880K is the CPU we've been waiting for. Since the A10-7870K and A10-7860K APU reviews, our conclusions have generally been pointing in this direction. For the dGPU-using gaming audience, it makes better sense for budget buyers to grab a cheap CPU and dGPU than to buy an APU alone. There is a place for the APUs – ultra-budget, tiny, quiet HTPCs capable of video streaming and moderate gaming – but for more “core” gaming, the CPU + dGPU move currently does yield major gains. Even just throwing a 250X at an APU has, in some of our tests, nearly doubled gaming performance. For such a dirt-cheap video card, that's a big gain to be had.

And so AMD's Athlon X4 880K enters the scene. The price is all over the map right now. MSRP is $95 from AMD, but the X4 880K isn't (as of this writing) available through major first-party retailers like Amazon and Newegg. We've seen it for $104 from third-party Newegg sellers, but as low as $90 from sites we've never heard of, if you count those. In theory, though, the X4 880K will eventually come to rest at $95.

The new CPU is effectively a step between the 7870K and 7890K, but with the IGP disabled. This lowers validation cost while offering effectively equivalent CPU performance. AMD's X4 880K operates on a two-module, four-core Steamroller architecture with a stock clock-rate of 4.0GHz (4.2GHz boosted). For comparison, the A10-7890K runs 4.1 to 4.3GHz, a 100MHz advantage over the X4 880K – easily negated with overclocking, as the 880K is a K-SKU, multiplier-unlocked chip. The 880K has a 95W TDP and is paired with AMD's 125W near-silent (NS) cooler.
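To put a number on that “easily negated” claim, here's a back-of-the-envelope calculation. The 100MHz reference clock is standard for FM2+, but the 44x multiplier is purely an illustrative target – actual overclocking headroom varies by chip and cooling.

```c
/* Illustrative only: multiplier x reference clock for a hypothetical overclock. */
#include <stdio.h>

int main(void) {
    const double ref_clock_mhz = 100.0;  /* FM2+ reference clock */
    const int stock_multiplier = 40;     /* 40 x 100MHz = 4.0GHz X4 880K base */
    const int oc_multiplier    = 44;     /* hypothetical overclock target */

    printf("Stock base clock: %.1f GHz\n", stock_multiplier * ref_clock_mhz / 1000.0);
    printf("Overclocked:      %.1f GHz\n", oc_multiplier * ref_clock_mhz / 1000.0);
    return 0;
}
```

Even a modest multiplier bump closes the 100MHz stock gap to the 7890K, assuming the silicon and cooler cooperate.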

This review and benchmark of the Athlon X4 880K tests thermals, gaming (FPS) performance, and compares against higher-end i3 & i5 CPUs, APUs, and the old X4 760K.

This fifteenth episode of Ask GN brings a few quick-hitter questions, plus a couple that require greater depth than our episodic format allows. Those longer questions will be explored further in future content pieces.

For today, we're looking at the future of AMD's Zen for the company, forecasting HDR and monitor tech, discussing IGP and CPU performance gains, and talking thermals in laptops. As always, one bonus question at the end.

Timestamps are below the embedded video.

We’re covering the GPU Technology Conference (GTC) in San Jose this week – a show overflowing with low-level information on graphics silicon and VR – and so have themed our Ask GN episode 14 around silicon products.

This week’s episode talks CPU thread assignment & simultaneous multi-threading, VR-ready IGPs, the future of the IGP & CPU, and Dx12 topics. We also briefly talk Linux gaming, but that requires a lengthier, future video for proper depth.

If you’ve got questions for next week’s episode, as always, leave them below or on the video comments section (which is where we check first).

Stutter as a result of V-Sync (which was itself made to fix screen tearing – another problem) has been a consistent nuisance in PC gaming since the technology’s inception. We’ve talked about how screen tearing and stutter interact here.

Despite the fact that in-game framerates can fluctuate dramatically, monitors have long been stuck at a fixed refresh rate. Then nVidia’s G-Sync cropped up. G-Sync was the first desktop technology to eliminate both stutter and screen tearing by synchronizing the display’s refresh to the GPU’s fluctuating frame output. Shortly after nVidia showed off G-Sync, AMD released its competing technology: FreeSync. G-Sync and FreeSync are the only adaptive refresh rate technologies currently available to consumers at large.
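To see why a fixed refresh rate produces stutter when framerate fluctuates, here's a toy calculation – not our benchmarking methodology – that quantizes a few hypothetical GPU frame times to a 60Hz V-Sync interval and compares them against an adaptive-sync display that simply refreshes when the frame is ready.

```c
/* Toy model: V-Sync defers presentation to the next vertical blank, so delivered
 * frame times snap to multiples of the refresh interval; adaptive sync does not.
 * Build with: cc vsync_toy.c -lm */
#include <stdio.h>
#include <math.h>

int main(void) {
    const double refresh_ms  = 1000.0 / 60.0;               /* fixed 60Hz panel */
    const double render_ms[] = { 14.0, 18.0, 25.0, 15.5 };  /* fluctuating GPU frame times */
    const int n = sizeof(render_ms) / sizeof(render_ms[0]);

    for (int i = 0; i < n; ++i) {
        double vsync_ms    = ceil(render_ms[i] / refresh_ms) * refresh_ms;
        double adaptive_ms = render_ms[i];
        printf("render %5.1f ms -> v-sync %5.1f ms, adaptive %5.1f ms\n",
               render_ms[i], vsync_ms, adaptive_ms);
    }
    return 0;
}
```

That snapping to multiples of ~16.7ms is the stutter; adaptive refresh removes it by letting the panel track the GPU rather than the other way around.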

