Khronos Group today released the Vulkan 1.1 and SPIR-V 1.3 updates. Adoption of both Vulkan and DX12 has been limited, so the overall purpose of this update is described as “Building Vulkan’s Future.”

Episode 33 of Ask GN answers a few more questions than we normally take, resulting in a 20-minute run-time for the episode. We talk about hyperthreading and its impact on specific games, DirectX 12 performance, scaling of Dx11 and Dx12 performance on AMD and nVidia GPUs, Furmark, and a few other topics. One of those topics is recording software's impact on framerate, covering Shadowplay, OBS, FRAPS, and GVR.

You can find the embedded Q&A video below. We've been running these for a bit more than a year now, making Ask GN our longest-running series by a long shot. As for content this week, because we often take Ask GN articles as an opportunity to discuss what's coming, keep an eye out for more EVGA VRM updates, BF1's performance with various RAM speeds, and a couple of smaller items.

Part 1 of our interview with AMD's RTG SVP & Chief Architect went live earlier this week, where Raja Koduri talked about shader intrinsic functions that eliminate abstraction layers between hardware and software. In this second and final part of our discussion, we continue on the subject of hardware advancements and limitations of Moore's law, the burden on software to optimize performance to meet hardware capabilities, and GPUOpen.

The conversation started with GPUOpen and new, low-level APIs – DirectX 12 and Vulkan, mainly – which were a key point of discussion during our recent Battlefield 1 benchmark. Koduri emphasized that these low-overhead APIs kick-started an internal effort to open the black box that is the GPU, and begin the process of removing “black magic” (read: abstraction layers) from the game-to-GPU pipeline. The effort was spearheaded by Mantle, now subsumed by Vulkan, and has continued through GPUOpen.

All the pyrotechnics in the world couldn't match the gasconade with which GPU & CPU vendors announce their new architectures. You'd halfway expect this promulgation of multipliers and gains and reductions (but only where smaller is better) to mark the end-times for humankind; surely, if some device were crafted to the standards by which it were announced, The Aliens would descend upon us.

But, every now and then, those bombastic announcements have something behind them – there's substance there, and potential for an adequately exciting piece of technology. nVidia's consumer-grade debut of the Pascal architecture begins with GP104, the first of its non-Accelerator GPUs to use TSMC's new 16nm FinFET process node. That GPU lands on the GTX 1080 Founders Edition video card first, later to be disseminated through AIB partners with custom cooling or PCB solutions. If the Founders Edition nomenclature confuses you, don't let it – it's a replacement for nVidia's old “Reference” card naming, as we described here.

Anticipation is high for GP104's improvements over Maxwell, particularly in the areas of asynchronous compute and command queuing. As the industry pushes further into DirectX 12 and Vulkan, compute preemption and dynamic task management become the gatekeepers to performance advancements in these new APIs. It also means that LDA & AFR start getting pushed out as frames become more interdependent through post-FX, with implications for multi-card configurations that point toward less optimization support going forward.

Our nVidia GeForce GTX 1080 Founders Edition review benchmarks the card's FPS performance, thermals, noise levels, and overclocking vs. the 980 Ti, 980, Fury X, and 390X. This nearly 10,000-word review lays out the architecture at the SM level, covers asynchronous compute changes in Pascal and the GTX 1080, provides a quick “how to” primer for overclocking the GTX 1080, and discusses simultaneous multi-projection. We've also got new thermal throttle analysis, and we're excited to show it.

The Founders Edition version of the GTX 1080 costs $700, though MSRP for AIBs starts at $600. We expect to see that market fill-in over the next few months. Public availability begins on May 27.

First, the embedded video review and specs table:

Steam's hardware survey reports a 1.57% month-over-month increase in Windows 10 64-bit adoption, marking a growth trend favoring the move to DirectX 12. Presently, the major Dx12-ready titles include Rise of the Tomb Raider, Hitman, Ashes of the Singularity, and the forthcoming Total War: Warhammer; you can learn about Warhammer's unique game engine technology over here.

In Steam's survey, Windows 7 is broken into just “Windows 7” and “Windows 7 64-bit,” the two totaling 41.43% of the users responding to the optional survey. The survey also breaks Windows 10 into a “64-bit” and an unspecified version, together totaling 41.4% (40.01% for the 64-bit line-item alone).

Tabulated results are below:

We spoke exclusively with the Creative Assembly team about its game engine optimization for the upcoming Total War: Warhammer. Major moves to optimize and refactor the game engine include DirectX 12 integration, better CPU thread management (decoupling the logic and render threads), and GPU-assigned processing to lighten the CPU load.
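The "decoupled logic and render threads" optimization described above is a classic producer/consumer arrangement: the simulation advances on one thread and publishes finished frame states, while a separate thread consumes and renders them. A minimal, purely illustrative sketch (function and variable names are our own, not Creative Assembly's):

```python
import queue
import threading

def run_engine(num_ticks=5):
    """Simulate a decoupled logic/render loop for `num_ticks` frames."""
    frames = queue.Queue()  # completed simulation states handed off to the renderer
    rendered = []

    def logic_thread():
        state = 0
        for _ in range(num_ticks):
            state += 1           # advance the simulation independently of rendering
            frames.put(state)    # publish the finished state for the render thread
        frames.put(None)         # sentinel: no more frames to render

    def render_thread():
        while True:
            state = frames.get()  # block until the logic thread produces a state
            if state is None:
                break
            rendered.append(f"frame:{state}")

    logic = threading.Thread(target=logic_thread)
    render = threading.Thread(target=render_thread)
    logic.start(); render.start()
    logic.join(); render.join()
    return rendered
```

The point of the split is that a slow renderer no longer stalls game logic (and vice versa), which is what makes it a CPU-side optimization independent of the DirectX 12 work.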

The interview with Al Bickham, Studio Communications Manager at Creative Assembly, can be found in its entirety below. We hope to soon visit the topic of DirectX 12 support within the Total War: Warhammer engine.

This week's episode of Ask GN (previous here) delves into reader questions pertaining to initial DirectX 12 performance, Star Citizen CPU thread allocation, “weird” computer issues, and more. Timestamps and video are below.

Star Citizen is an interesting one, and we'd recommend this interview with Chris Roberts as a follow-up for more depth. CPU thread allocation currently leans heavily on the third thread/core, as Star Citizen is still in development, but the game aims to eventually utilize as many threads as it's assigned. CryEngine is technically capable of spawning 8 threads and could utilize hyperthreaded CPUs and 8-core AMD CPUs, if games were built to distribute load in such a fashion. Learn more about that here.
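The caveat above is that spawning 8 threads only helps if the game's workload is actually split into independent tasks. A hedged, generic sketch of that idea, using Python's standard thread pool (names and the stand-in workload are hypothetical, not from CryEngine):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(task_costs, num_threads):
    """Run independent per-frame tasks on a pool of `num_threads` workers.

    Results come back in submission order regardless of thread count, so
    correctness is unchanged; only wall-clock time scales with the pool size
    and with how evenly the tasks divide.
    """
    def simulate(cost):
        return cost * 2  # stand-in for real per-task work (AI, physics, etc.)

    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return list(pool.map(simulate, task_costs))
```

An engine that instead funnels most work through one "main" task sees no benefit from extra cores, which is the situation the interview describes for builds in development.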

Video card drivers are almost as important as the hardware with which they interface; without stable and ongoing driver support, a GPU can't be fully utilized to a level that exercises its strengths in the field. AMD has long battled to improve perception of its drivers – a fight we endorsed upon the release of Catalyst successor Radeon Settings – and has continued that battle at GDC 2016.

“For a long time, people keep saying, 'well, AMD has great hardware – what about our drivers?'” AMD Corporate VP Roy Taylor told us in an interview, “I don't want to hear that anymore, all right?” The response was given in our interview following AMD's Capsaicin event, which featured industry luminaries in game development and VR.

AMD just announced a partnership with Total War developers Creative Assembly, highlighting the game developer's move to implement DirectX 12 with the upcoming Total War: Warhammer Grand Strategy game.

It's been a few months since our “Ask GN” series had its last installment. We got eleven episodes deep, then proceeded to plunge into the non-stop game testing and benchmarking of the fourth quarter. Alas, following fan requests and interest, we've proudly resurrected the series – not the only thing resurrected this week, either.

So, amidst Games for Windows Live and RollerCoaster Tycoon's re-re-announcement of mod support, we figured we'd brighten the week with something more promising: DirectX & Vulkan cherry-picked topics, classic GPU battles, and power supply testing questions. There's a bonus question at the end, too.
