Khronos Group today released the Vulkan 1.1 and SPIR-V 1.3 updates. Adoption of both Vulkan and DX12 has been limited, so the overall purpose of this update is described as “Building Vulkan’s Future.”

Part 1 of our interview with AMD's RTG SVP & Chief Architect went live earlier this week, where Raja Koduri talked about shader intrinsic functions that eliminate abstraction layers between hardware and software. In this second and final part of our discussion, we continue on the subject of hardware advancements and limitations of Moore's law, the burden on software to optimize performance to meet hardware capabilities, and GPUOpen.

The conversation started with GPUOpen and new, low-level APIs – DirectX 12 and Vulkan, mainly – which were a key point of discussion during our recent Battlefield 1 benchmark. Koduri emphasized that these low-overhead APIs kick-started an internal effort to open the black box that is the GPU, and begin the process of removing “black magic” (read: abstraction layers) from the game-to-GPU pipeline. The effort was spearheaded by Mantle, now subsumed by Vulkan, and has continued through GPUOpen.

Benchmarking in Vulkan or Dx12 is still a bit of a pain in the NAS, but PresentMon makes it possible to conduct accurate FPS and frametime tests without reliance upon FRAPS. July 11 marks DOOM's introduction of the Vulkan API in addition to its existing OpenGL 4.3/4.5 programming interfaces. Between the nVidia and AMD press events the last few months, we've seen id Software surface a few times to talk big about their Vulkan integration – but it's taken a while to finalize.
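A capture from a tool like PresentMon is essentially a CSV of per-frame present intervals, and the headline numbers (average FPS, 1% lows) fall out of a few lines of arithmetic. Here's a minimal Python sketch, assuming a `MsBetweenPresents`-style frametime column and entirely hypothetical sample data:

```python
import csv
import io
import statistics

def frametime_stats(csv_text, column="MsBetweenPresents"):
    """Compute average FPS and 1% low FPS from per-frame frametimes
    (in milliseconds), as exported by frame capture tools like PresentMon."""
    rows = csv.DictReader(io.StringIO(csv_text))
    frametimes = [float(r[column]) for r in rows]
    avg_fps = 1000.0 / statistics.mean(frametimes)
    # "1% lows" average the slowest 1% of frames, converted back to FPS.
    worst = sorted(frametimes, reverse=True)
    one_pct = worst[: max(1, len(worst) // 100)]
    low_1pct_fps = 1000.0 / statistics.mean(one_pct)
    return avg_fps, low_1pct_fps

# Hypothetical capture: 99 smooth 10 ms frames and one 50 ms stutter.
sample = "MsBetweenPresents\n" + "\n".join(["10.0"] * 99 + ["50.0"])
avg, low = frametime_stats(sample)
print(round(avg, 1), round(low, 1))  # the single stutter drags the 1% low to 20 FPS
```

The point of reporting lows alongside averages is visible even in this toy data: one stutter barely moves the average but dominates the 1% low.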

As we're in the midst of GTX 1060 benchmarking and other ongoing hardware reviews, this article is being kept short. Our test passes look only at the RX 480, GTX 1080, and GTX 970, so we're strictly looking at scalability on the new Polaris and Pascal architectures. The GTX 970 was thrown in to see if there are noteworthy improvements for Vulkan when moving from Maxwell to Pascal.


This test is not meant to show if one video card is “better” than another (as our original Doom benchmark did), but is instead meant to show OpenGL → Vulkan scaling within a single card and architecture. Note that, as with any game, Doom is indicative only of performance and scaling within Doom. The results in other Vulkan games, like The Talos Principle, will not necessarily mirror these. The new APIs are complex enough that developers must carefully implement them (Vulkan or Dx12) to best exploit the low-level access. We spoke about this with Chris Roberts a while back, who offered relevant insight on the topic.

The only widespread implementation of Vulkan that presently exists is The Talos Principle, which offers both the Vulkan and DirectX 11 APIs. We've mostly seen negative scaling in the Talos Principle when switching to Vulkan, but id Software's DOOM promises gains in framerate by switching from OpenGL (4.3 & 4.5) to Vulkan.
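The "scaling" we report is just the percent FPS change between APIs on the same card. A trivial Python sketch, with purely hypothetical numbers (not measured results):

```python
def api_scaling(baseline_fps, new_fps):
    """Percent FPS change when switching APIs on the same card and scene.
    Positive values are gains (what id Software promises for OpenGL -> Vulkan);
    negative values are regressions (as often seen in The Talos Principle)."""
    return (new_fps - baseline_fps) / baseline_fps * 100.0

# Illustrative only: a card going from 100 FPS (OpenGL) to 125 FPS (Vulkan)
print(round(api_scaling(100.0, 125.0), 1))  # 25.0 -> +25% scaling
print(round(api_scaling(100.0, 92.0), 1))   # -8.0 -> negative scaling
```

Because the comparison is within a single card, the number isolates the API and driver path rather than the hardware.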

All the pyrotechnics in the world couldn't match the gasconade with which GPU & CPU vendors announce their new architectures. You'd halfway expect this promulgation of multipliers and gains and reductions (but only where smaller is better) to mark the end-times for humankind; surely, if some device were crafted to the standards by which it were announced, The Aliens would descend upon us.

But, every now and then, those bombastic announcements have something behind them – there's substance there, and potential for an adequately exciting piece of technology. NVidia's debut of consumer-grade Pascal architecture begins with GP104, the first of its non-Accelerator GPUs to host the new 16nm FinFET process node from TSMC. That GPU lands on the GTX 1080 Founders Edition video card first, later to be disseminated through AIB partners with custom cooling or PCB solutions. If the Founders Edition nomenclature confuses you, don't let it – it's a replacement for nVidia's old “Reference” card naming, as we described here.

Anticipation is high for GP104's improvements over Maxwell, particularly in the area of asynchronous compute and command queuing. As the industry pushes ever into DirectX 12 and Vulkan, compute preemption and dynamic task management become the gatekeepers to performance advancements in these new APIs. It also means that LDA & AFR start getting pushed out as frames become more interdependent with post-FX, and so suddenly there are implications for multi-card configurations that point toward increasingly less optimization support going forward.

Our nVidia GeForce GTX 1080 Founders Edition review benchmarks the card's FPS performance, thermals, noise levels, and overclocking vs. the 980 Ti, 980, Fury X, and 390X. This nearly 10,000-word review lays out the architecture at the SM level, discusses asynchronous compute changes in Pascal / GTX 1080, provides a quick “how to” primer for overclocking the GTX 1080, and covers simultaneous multi-projection. We've also got new thermal throttle analysis, and we're excited to show it.
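Throttle analysis boils down to counting samples where the observed core clock dips below what the card should sustain. A rough Python sketch of that dip-counting, where the 25 MHz tolerance and the log format are our assumptions for illustration, not the review's actual methodology:

```python
def throttle_events(clocks_mhz, rated_boost_mhz, tolerance_mhz=25):
    """Return indices of clock samples that fall more than `tolerance_mhz`
    below the rated boost clock -- a crude flag for thermal throttle dips."""
    floor = rated_boost_mhz - tolerance_mhz
    return [i for i, c in enumerate(clocks_mhz) if c < floor]

# Hypothetical per-second clock log around the GTX 1080's 1733 MHz rated boost:
log = [1860, 1847, 1835, 1700, 1695, 1822, 1810]
print(throttle_events(log, 1733))  # -> [3, 4]: two samples dipped under the floor
```

In practice the interesting signal is when these dips cluster over time as the card heats up, which is what a time-series plot of the flagged samples reveals.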

The Founders Edition version of the GTX 1080 costs $700, though MSRP for AIBs starts at $600. We expect to see that market fill-in over the next few months. Public availability begins on May 27.

First, the embedded video review and specs table:

Video card drivers are almost as important as the hardware with which they interface; without stable and ongoing driver support, a GPU can't be fully utilized to a level that exercises its strengths in the field. AMD has long battled to improve perception of its drivers – a fight we endorsed upon the release of Catalyst successor Radeon Settings – and has continued that battle at GDC 2016.

“For a long time, people keep saying, 'well, AMD has great hardware – what about our drivers?'” AMD Corporate VP Roy Taylor told us in an interview, “I don't want to hear that anymore, all right?” The response was given in our interview following AMD's Capsaicin event, which featured industry luminaries in game development and VR.

It's been a few months since our “Ask GN” series had its last installment. We got eleven episodes deep, then proceeded to plunge into the non-stop game testing and benchmarking of the fourth quarter. Alas, following fan requests and interest, we've proudly resurrected the series – not the only thing resurrected this week, either.

So, amidst Games for Windows Live and RollerCoaster Tycoon's re-re-announcement of mod support, we figured we'd brighten the week with something more promising: DirectX & Vulkan cherry-picked topics, classic GPU battles, and power supply testing questions. There's a bonus question at the end, too.

Here's the thing: That comment in the headline has always been shortsighted, and will always be shortsighted. We've seen it a few times lately – the majority of comments on our DirectX 12 explicit multi-GPU and Vulkan benchmarks have been positive – but a stand-out few have explicitly scolded our efforts for testing new APIs on games which are known to be incomplete. Our articles and videos contain massive sections that fully detail just how far along any new tech is, disclaiming the possibility that – like with Vulkan – it may be under-performing due to an early build state.

But that's not a reason to leave something untested, and to think as such is a mix of denial and naivety.

Here's an example comment: “None of these tests matter right now as Vulcan is not fully optimized.” [sic]

These comments are rooted in a denial that results from the marketing build-up for the new APIs. Anything short of game-changing is seen as an indication that it is “too early” to test, and is disregarded as unimportant. But it's not too early to test; these early adopter games are living pieces of software, and they change constantly – that makes them perfect for building test data for a new API.

Ashes of the Singularity has become the poster child for early DirectX 12 benchmarking, if only because it was first to market with ground-up DirectX 12 and DirectX 11 support. Just minutes ago, the game officially updated its early build to include its DirectX 12 Benchmark Version 2, making critical changes that include cross-brand multi-GPU support. The benchmark also made updates to improve reliability and reproduction of results, primarily by giving all units 'god mode,' so inconsistent deaths don't impact workload.
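Why does determinism like 'god mode' matter? Because run-to-run spread is what it reduces, and that spread is easy to quantify. A sketch in Python using the coefficient of variation, with purely hypothetical FPS numbers (not Ashes results):

```python
import statistics

def run_consistency(fps_runs):
    """Coefficient of variation (stdev / mean, as a percent) across repeated
    benchmark runs. Lower means more reproducible results."""
    mean = statistics.mean(fps_runs)
    return statistics.stdev(fps_runs) / mean * 100.0

before = [58.2, 61.9, 55.4, 63.1]  # hypothetical: unit deaths vary the workload
after = [59.6, 60.1, 59.8, 60.3]   # hypothetical: deterministic scene
print(round(run_consistency(before), 1), round(run_consistency(after), 1))
```

When the workload is deterministic, differences between test passes can be attributed to the hardware or driver under test rather than to the scene itself.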

For this benchmark, we tested explicit multi-GPU functionality by using AMD and nVidia cards at the same time, something we're calling “SLIFire” for ease. The benchmark specifically uses MSI R9 390X Gaming 8G and MSI GTX 970 Gaming 4G cards vs. 2x GTX 970s, 1x GTX 970, and 1x R9 390X for baseline comparisons.

NVidia and AMD had a bit of a back-and-forth with day-one Vulkan announcements, with nVidia taking a few shots at AMD's beta driver launch. “OpenGL Next” became Vulkan, which consumed parts of AMD's Mantle API in its move toward accommodating developers with lower-level access to hardware. The phrase “closer to the metal” applies to Mantle, Vulkan, and DirectX 12 in similar capacities; these APIs bypass overhead created by DirectX 11 and more directly tune for GPU hardware, off-loading parallelized tasks from the CPU to the GPU. In a previous interview with Star Citizen's Chris Roberts, we talked about some of the developer side of Vulkan & DirectX 12 programming, learning that it's not as easy as just flipping an 'API switch.'

For this benchmark, we ran Vulkan vs. DirectX 11 (D3D11) benchmarks in the Talos Principle to determine which API is presently 'best.' There's a giant disclaimer here, though, and we've dedicated an entire section of the article to that (see: “Read This First!”). Testing used an R9 390X, GTX 980 Ti, and i7-5930K; we hope to add low-end CPUs to determine the true advantage of low-level APIs, but are waiting until the driver set and software further iterate on Vulkan integration.


We moderate comments on a ~24-48 hour cycle. There will be some delay after submitting a comment.

