The press drivers we used in our recent DirectX 12 benchmark have been officially released to the public. AMD's new 16.2 beta drivers include DirectX 12 optimizations that coincide with the Ashes of the Singularity Version 2 DirectX 12 benchmark; we strongly recommend these drivers for AMD users hoping to test their systems with DirectX 12.

AMD's 16.2 beta drivers deliver a number of important updates for graphics devices. Performance and “quality improvements” have been made for Rise of the Tomb Raider and for CrossFire configurations running The Division or XCOM 2. Additional fixes address black screens, “choppy gameplay,” and display corruption.

The new drivers are available here.

Full list of changes:

Ashes of the Singularity has become the poster child for early DirectX 12 benchmarking, if only because it was first to market with ground-up DirectX 12 and DirectX 11 support. Just minutes ago, the game officially updated its early build to include its DirectX 12 Benchmark Version 2, making critical changes that include cross-brand multi-GPU support. The benchmark was also updated to improve reliability and reproducibility of results, primarily by giving all units 'god mode,' so inconsistent deaths don't impact the workload.

For this benchmark, we tested explicit multi-GPU functionality by using AMD and nVidia cards at the same time, something we're calling “SLIFire” for ease. The benchmark specifically pits MSI R9 390X Gaming 8G and MSI GTX 970 Gaming 4G cards against 2x GTX 970s, 1x GTX 970, and 1x R9 390X for baseline comparisons.

Last week primarily featured initial Vulkan benchmarks – a stepping stone toward full integration of the new API within games – and major silicon manufacturer news. Intel declared plans to ship 10nm chips by 2H17, nVidia boasted record revenue of $1.4B for its fiscal quarter, and AMD pushed improved Linux drivers to the public. The Intel push is the most interesting, with the company definitively indicating that it will not delay 10nm chip manufacturing past 2017. As the silicon manufacturers near the lower limit of current technology and processes, each of these iterative jaunts toward (what we'd expect to be) something like 1nm carbon nanotubes gets increasingly difficult. Seeing single-digit percentage point increases in overall performance (gaming, production) isn't quite as impressive as the reduction in power and significantly increased transistor count.

Learn about each of these items in more depth here:

For years now, VR has seemed to be right around the corner, but consumer VR is (finally) becoming a reality with the HTC Vive and Oculus Rift soon hitting retailers. Unfortunately, the system requirements for VR – to the woe of my wallet – are fairly demanding.

The Oculus Rift officially recommends an nVidia GTX 970 or AMD R9 290, an i5-4590, and 8GB+ of RAM. By comparison, the Vive lists the same recommended specs with the exception of memory, where it recommends only 4GB.

nVidia and AMD had a bit of a back-and-forth with day-one Vulkan announcements, with nVidia taking a few shots at AMD's beta driver launch. “OpenGL Next” became Vulkan, which consumed parts of AMD's Mantle API in its move toward accommodating developers with lower-level access to hardware. The phrase “closer to the metal” applies to Mantle, Vulkan, and DirectX 12 in similar capacities; these APIs bypass overhead created by DirectX 11 and more directly tune for GPU hardware, off-loading parallelized tasks from the CPU to the GPU. In a previous interview with Star Citizen's Chris Roberts, we discussed the developer side of Vulkan & DirectX 12 programming, learning that it's not as easy as just 'throwing an API call' switch.

For this benchmark, we ran Vulkan vs. DirectX 11 (D3D11) benchmarks in The Talos Principle to determine which API is presently 'best.' There's a giant disclaimer here, though, and we've dedicated an entire section of the article to that (see: “Read This First!”). Testing used an R9 390X, GTX 980 Ti, and i7-5930K; we hope to add low-end CPUs to determine the true advantage of low-level APIs, but are waiting until the drivers and software further iterate on Vulkan integration.

The past week of hardware news is mostly industry-driven, with few noteworthy product announcements outside of a handful of small items. Several critical industry stories did emerge, though: further Samsung vs. nVidia proceedings, Micron's GDDR5X memory (theoretically replacing GDDR5), Unity's Steam VR support, AMD/HP FreeSync laptops, and AMD Zen details revealed through CERN, the particle and nuclear research group.

We've rounded up this week's news in the below video. You can find quick, bulleted recaps of each item below the video, if you'd prefer that format.

Learn more below!

This morning's press embargo on the official Vulkan 1.0 API ratification lifted at 9am, when our post and video went live. The major news was AMD's Vulkan beta drivers, which developers were welcomed to download for initial testing of the new low-level API; AMD's Vulkan beta drivers can also be used for the Talos Principle.

In an nVidia announcement one hour after the embargo lifted, the company contacted us about its own Vulkan support – not shy to take a few shots at AMD's hour-prior news release. In its email to us, nVidia made the following between-the-lines statement (emphasis theirs):

The Vulkan API has effectively absorbed AMD's low-level Mantle application programming interface, and stands as somewhat of a peer to Microsoft's DirectX 12.

It's a competitive space. Mantle tried to push the industry toward more console-like programming – and we mean that in positive ways – by getting developers “close to the metal.” Low-level APIs that bypass the substantial overhead of DirectX 11 are the key to unlocking the full potential of modern hardware; DirectX 12 and Vulkan both get us closer to this, primarily by shifting draw calls off the CPU and reducing bottlenecking. GPUs have grown so powerful in their parallel processing that they can assume significant workloads once placed upon the CPU – this benefits gamers in particular, since the majority of our workloads are more easily pushed through the GPU.
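To illustrate the draw-call argument above, here's a toy cost model (all numbers are hypothetical, not measurements): an immediate-mode API pays full driver overhead on the CPU for every draw call, while a command-buffer API records calls cheaply and submits them once. This is a minimal sketch of the reasoning, not a representation of any real driver.

```python
# Toy model of CPU-side draw-call overhead. All costs are hypothetical
# microsecond values chosen only to illustrate the scaling difference.
PER_CALL_OVERHEAD_US = 25   # assumed per-call driver/validation cost (D3D11-style)
PER_CALL_RECORD_US = 2      # assumed cost to record one call into a command buffer
SUBMIT_OVERHEAD_US = 50     # assumed one-time cost to submit a pre-built buffer

def immediate_mode_cost(draw_calls: int) -> int:
    """Every draw call pays the full CPU-side overhead (DirectX 11 model)."""
    return draw_calls * PER_CALL_OVERHEAD_US

def command_buffer_cost(draw_calls: int) -> int:
    """Calls are recorded cheaply, then submitted once (Vulkan/DX12 model)."""
    return draw_calls * PER_CALL_RECORD_US + SUBMIT_OVERHEAD_US

for n in (1_000, 10_000):
    print(f"{n} draws: immediate={immediate_mode_cost(n)}us, "
          f"command buffer={command_buffer_cost(n)}us")
```

The point is the slope, not the absolute numbers: per-call CPU cost dominates in the immediate model, which is why low-level APIs scale to far more draw calls on the same processor.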

AMD just announced a partnership with IO Interactive for inclusion of its forthcoming “Hitman” title in the “Gaming Evolved” program. The involvement boasts “top-flight effects and performance optimizations for PC gamers,” further underscoring a focus on DirectX 12 workload management for increased overall quality.

We've got to give it to marketing – “Wraith” is a good name; certainly better than “Banshee,” which is what the previous AMD cooler should have been named for its shrill wailing. The Wraith cooler substantially improves the noise-to-thermals ratio for AMD's stock units, and is a cooler we hope to see shipping with future Zen products.

At its max 2900 RPM, the Wraith produces thermals that are effectively identical to what the old cooler accomplishes at ~5500 RPM (see chart below). Running the old cooler at a comparable 2900 RPM produces temperatures ~14.3% warmer than the Wraith. This is all noted in our thermal review of the Wraith. What we didn't note, however, was the dBA / noise output. In this video, we compare the noise levels of AMD's two stock coolers for the FX-8370 CPU -- the Wraith and the 'old' unit.
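For readers curious how a “~14.3% warmer” figure is derived, the quick sketch below uses hypothetical temperatures (not our measured data) to show the math: thermal deltas are taken over ambient, not as absolute temperatures.

```python
# Hypothetical numbers to show how a percent-warmer delta is computed;
# these are NOT the measured values from our Wraith review.
ambient_c = 26.0
wraith_load_c = 61.0      # assumed load temp with the Wraith at 2900 RPM
old_cooler_load_c = 66.0  # assumed load temp with the old cooler at 2900 RPM

# Deltas over ambient normalize out room-temperature differences between runs.
wraith_delta = wraith_load_c - ambient_c       # 35.0 C over ambient
old_delta = old_cooler_load_c - ambient_c      # 40.0 C over ambient

pct_warmer = (old_delta - wraith_delta) / wraith_delta * 100
print(f"old cooler runs {pct_warmer:.1f}% warmer")  # prints 14.3% with these inputs
```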

