Game Benchmarks

Benchmarking in Vulkan or Dx12 is still a bit of a pain in the NAS, but PresentMon makes it possible to conduct accurate FPS and frametime tests without relying on FRAPS. July 11 marks DOOM's introduction of the Vulkan API in addition to its existing OpenGL 4.3/4.5 programming interfaces. Between the nVidia and AMD press events of the last few months, we've seen id Software surface a few times to talk big about its Vulkan integration – but it's taken a while to finalize.
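
PresentMon logs per-present frame times to a CSV, which can then be reduced to average FPS and 1% low figures with a few lines of scripting. Below is a minimal sketch of that reduction; it assumes PresentMon's MsBetweenPresents frametime column, and the capture file name is purely hypothetical.

```python
# Minimal sketch: reduce a PresentMon CSV capture to average FPS and 1% lows.
# Assumes a per-frame "MsBetweenPresents" column; the file name is hypothetical.
import csv

def summarize(csv_path):
    frametimes_ms = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                frametimes_ms.append(float(row["MsBetweenPresents"]))
            except (KeyError, ValueError):
                continue  # skip malformed rows

    avg_fps = 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))
    # 1% low: average FPS over the slowest 1% of frames (one common definition).
    slowest = sorted(frametimes_ms, reverse=True)[: max(1, len(frametimes_ms) // 100)]
    low_1pct_fps = 1000.0 / (sum(slowest) / len(slowest))
    return avg_fps, low_1pct_fps

avg, low = summarize("doom_vulkan_run1.csv")  # hypothetical capture file
print(f"AVG: {avg:.1f} FPS | 1% low: {low:.1f} FPS")
```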

As we're in the midst of GTX 1060 benchmarking and other ongoing hardware reviews, this article is being kept short. Our test passes look only at the RX 480, GTX 1080, and GTX 970, so we're primarily looking at scalability on the new Polaris and Pascal architectures. The GTX 970 was thrown in to see whether Vulkan yields noteworthy improvements when moving from Maxwell to Pascal.

This test is not meant to show whether one video card is “better” than another (as our original Doom benchmark did), but instead to show OpenGL → Vulkan scaling within a single card and architecture. Note that, as with any game, Doom is indicative only of performance and scaling within Doom. The results in other Vulkan games, like The Talos Principle, will not necessarily mirror these. The new APIs are complex enough that developers must carefully implement them (Vulkan or Dx12) to best exploit the low-level access. We spoke about this with Chris Roberts a while back, who offered a relevant take on exactly that point.
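
The scaling figure itself is simple arithmetic – the percent FPS change for the same card and settings when switching from OpenGL to Vulkan. A minimal sketch, with placeholder numbers rather than measured results:

```python
# Percent FPS change when moving a single card from OpenGL to Vulkan.
# The inputs below are placeholders, not measured results.
def api_scaling_pct(opengl_fps: float, vulkan_fps: float) -> float:
    return (vulkan_fps - opengl_fps) / opengl_fps * 100.0

print(f"Vulkan vs. OpenGL: {api_scaling_pct(100.0, 130.0):+.1f}%")  # +30.0%
```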

Mirror's Edge – the first game – had some of the most intensive graphics of its time. Enabling PhysX alone was enough to bring most systems to their knees, particularly when choppers unloaded their miniguns into glass to create countless tiny shards. The new game has just come out, and it aims to bring optimized, high-fidelity visuals to the series.

Our Mirror's Edge Catalyst graphics card benchmark tests FPS performance on the GTX 1080, 1070, 970, 960, AMD R9 Fury X, 390X, 380X, and more. We're trying to add more cards as we continue to circumvent the DRM activation restrictions – which we're mostly doing by purchasing the game on multiple accounts (update: we were able to get around the limitations with two codes, and it seems that the activation limitation expires after just 24 hours). The video card benchmark looks at performance scaling between High, Ultra, and “Hyper” settings, and runs the tests for 1080p (Ultra), 1440p (Ultra), and 4K (High), with a splash of 1080p/Hyper tests.

We've also looked briefly into VRAM consumption (further below) and have defined some of the core game graphics settings.

AMD was first to market with Doom-ready drivers, but exhibited exceptionally poor performance with a few of its cards. The R9 390X was one of those, being outperformed massively (~40%) by the GTX 970 and nearly matched by the GTX 960 at 1080p. If it's not apparent from the price difference between the two, that's unacceptable: the R9 390X's hardware should effortlessly outperform the GTX 960, a budget-class card, and it just wasn't happening. Shortly after the game launched and AMD posted its initial driver set (16.5.2), a hotfix (16.5.2.1) was released to resolve performance issues on the R9 390-series cards.

We had a moment to re-benchmark DOOM using the latest drivers between our GTX 1080 Hybrid experiment and our current travel to Asia. The good news: the R9 390X's performance has improved substantially – by about 26% in some tests – and the card seems to be doing better overall. Other cards were unaffected by this hotfix (though we did test them), so don't expect a performance gain out of your 380X, Fury X, or similar non-390-series device.

Note: These charts now include the GTX 1080 and its overclocked performance.

Following our GTX 1080 coverage of DOOM – and preempting the eventual review – we took the time to execute GPU benchmarks of id Software's DOOM. The new FPS boasts high-fidelity visuals and fast-paced, Quake-era gameplay mechanics. Histrionic explosions dot Doom's hellscape, overblown only by its omnipresent red tint and magma flows. The game is heavy on particle effects and post-processing, performing much of its crunching toward the back of the GPU pipeline (after geometry and rasterization).

Geometry isn't particularly complex, with the game's indoor settings composed almost entirely of labyrinthine corridors and rooms. Framerate fluctuates heavily; the more lighting effects and particle simulation in the camera frustum, the greater the swings in FPS as players emerge into or depart from lava-filled chambers and other areas of post-FX interest.

In this Doom graphics card benchmark, we test the framerate (FPS) of various GPUs in the new Doom “4” game, including the GTX 980 Ti, 980, 970, Fury X, 390X, 380X, and more. We'll briefly define game graphics settings first; game graphics definitions include brief discussion on TSSAA, directional occlusion quality, shadows, and more.

Note: Doom will soon add support for Vulkan. It's not here yet, but we've been told to expect Vulkan support within a few weeks of launch. All current tests were executed with OpenGL. We will revisit for Vulkan once the API is enabled.

Ashes of the Singularity has become the poster child for early DirectX 12 benchmarking, if only because it was the first to market with ground-up DirectX 12 and DirectX 11 support. Just minutes ago, the game officially updated its early build to include its DirectX 12 Benchmark Version 2, making critical changes that include cross-brand multi-GPU support. The benchmark also received updates to improve reliability and reproducibility of results, primarily by giving all units 'god mode,' so inconsistent deaths don't impact the workload.

For this benchmark, we tested explicit multi-GPU functionality by using AMD and nVidia cards at the same time, something we're calling “SLIFire” for ease. The benchmark specifically uses MSI R9 390X Gaming 8G and MSI GTX 970 Gaming 4G cards vs. 2x GTX 970s, 1x GTX 970, and 1x R9 390X for baseline comparisons.

Cloud Imperium Games' Star Citizen achieved a major milestone with the distribution of its Alpha 2.0 package, allowing multiplayer exploration in addition to existing dog-fighting and free flight. This release gives players the first glimpse of the game's open world intentions, presenting environments forged in Sci-Fi influence.

There's not much in the way of gameplay just yet, but Alpha 2.0 has been made available to all backers for initial bug- and stress-testing. We decided to conduct a test of our own, specifically looking at GPU performance and preset scaling across multiple “game modes.” Right now, because the pre-release game is composed of several disjointed modules, there's no one “Play Star Citizen” button – it's split into parts. Racing, free flight, and dog-fighting are in one module (Arena Commander), the Hangar stands alone, and online testing with ArcCorp and Crusader was just released.

For our Star Citizen video card benchmark, we look at GPU vs. GPU performance in the race, delta performance scaling on ArcCorp and in the hangar or free flight, and talk methodology. The game isn't done and has yet to undergo performance optimizations and official driver support, so we won't be making our usual “best graphics cards for [game]” recommendations this time.

Rico's back in town. This time, the vigilante who saves the people by blowing up The People's Water Tower arrives with high-fidelity graphics and a focus on lighting FX and water tech. Just Cause 3 revisits a partnership with nVidia's GameWorks development kit, making use of the WaveWorks tech previously found in Just Cause 2 (a 2010 release). The game's graphics settings are fairly simple for anyone following our game benchmarks, but we'll recap potential points of confusion further down.

Our Just Cause 3 GPU benchmark puts nVidia & AMD graphics cards to the test at 1080, 1440, and 4K resolutions, using “Very High” and “High” settings for FPS testing. Among others, the video card benchmark includes the 980 Ti (+ SLI), 980, 970, 960, et al., and AMD's 390X, 380X (+ CrossFire), 290X, 270X, et al.

We've noticed some curiosities with Just Cause 3's implementation of water detail scaling and will cover that further down.

Forthcoming team shooter Overwatch is Blizzard's first new IP in years, fusing familiar FPS and team-based elements with MOBA-like playable characters. At its core, that's what we'd call a “team shooter” – a genre popularized most recently by Team Fortress 2.

The game is still going through closed beta testing, with select Battle.net accounts receiving invites to play-test the game over a few weekends. This weekend's test was, according to Overwatch PR Manager Steven Khoo, an attempt at learning “how Overwatch runs on your system” and a reach-out for “technical feedback.” We figured we'd throw ten video cards at the game and see how it does.

Overwatch isn't particularly GPU intensive, but it does make use of some advanced shadow and reflection techniques that can impact FPS. We performed some initial settings analysis – shown further down – to determine top-level performance impact on a per-setting basis. This is the basis of our eventual graphics optimization guide (see: Black Ops equivalent), something we'll finalize at the game's launch. For now, the goal was to provide a foundation upon which to base our GPU test methodology with Overwatch. This graphics card benchmark looks at the best GPUs for Overwatch (beta), testing 1080p, 1440p, and 4K resolutions across “Epic” and “Ultra” settings.
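
As a rough illustration of that per-setting analysis, the sketch below compares FPS with each individual setting lowered one notch against a fixed baseline. Every number here is a placeholder for illustration, not a measured Overwatch result.

```python
# Minimal sketch of a per-setting impact analysis: percent FPS change when a
# single setting is reduced from the baseline preset. All values are
# placeholders, not measured Overwatch numbers.
baseline_fps = 120.0  # all settings at the preset under test

fps_with_one_setting_reduced = {
    "Shadow Detail": 131.0,
    "Reflections": 128.0,
    "Ambient Occlusion": 124.0,
}

for setting, fps in sorted(fps_with_one_setting_reduced.items(),
                           key=lambda kv: kv[1], reverse=True):
    impact_pct = (fps - baseline_fps) / baseline_fps * 100.0
    print(f"{setting}: +{impact_pct:.1f}% FPS when reduced one step")
```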

Battlefront is one of the best-optimized games right now, strictly looking at the graphics-versus-framerate output across multiple GPUs. The game fronts brilliant shading, lighting, and post-FX, leveraging what appears to be some form of physically-based rendering (PBR) – though we're not positive – to create a more realistic aesthetic without hammering draw calls and polys.

That was all tested on an X99 platform, though, so we figured it'd be worth a look at Battlefront's fluidity across our (still limited) CPU suite. We benchmarked Battlefront with the Intel lineup (G3258 to i7) and some of AMD's FX CPUs, including one APU + dGPU combination. If a CPU isn't present here, it means one of two things: we either don't have it, or it's presently being used for another benchmark – which accounts for quite a few CPUs, given game launch season.

Assassin's Creed: Unity was the last AC title we benched, and it led us to an exciting discovery: some games, AC Unity among them, will present a sizable performance disparity between 2GB and 4GB models of the same video card. The Assassin's Creed series has long been heralded as a standard bearer for lighting, shading, and FX technologies, emboldening its geometrically complex environments with supporting filtering and post-processing. Lighting and shadows propel games like Assassin's Creed to a point in visual fidelity where, without bloating poly count beyond reason, the models and meshes look smooth and sculpted beyond the geometry's own means, were it unassisted by lighting.

Ubisoft's Assassin's Creed Syndicate was made available to us last night, at which point it was immediately used for benchmarking AMD's brand new R9 380X GPU. This graphics card benchmark of Assassin's Creed Syndicate tests the game at Ultra and Medium settings across 1080p, 1440p, and 4K resolutions. All the latest cards are present in our Syndicate GPU benchmark – the GTX 980 Ti, 980, 970, 960 4GB vs. 2GB, 950, and 750 Ti from nVidia; the R9 390X ($240), R9 380X, R9 290X, R9 285, and R9 270X from AMD.

Most notably, AC Syndicate carries Unity's legacy of truly accentuating the 2GB vs. 4GB VRAM gap in the GTX 960 cards, something that should, theoretically, propagate to other multi-size models (like the R9 380, if we had them).

