Game Benchmarks

Ubisoft's newest dystopian effort starts strong with allusions to modern-day challenges pertaining to privacy and "cyber warfare," working to build up our character as a counter-culture hacker. And, as with Ubisoft's other AAA titles, the game builds this world with high-resolution textures, geometrically complex and dense objects, taxing shadow/lighting systems, and an emphasis on graphics quality.

Watch Dogs 2 is a demanding title to run on modern hardware. We spent the first 1-2 hours of our time in Watch Dogs 2 simply studying the impact of various settings on performance, then studying locales and their performance hits. Areas with grass and foliage, we found, hit framerate the hardest. Nightfall and dark rain take a toll on FPS as well, particularly when running high reflection qualities and headlight shadows.

We look at performance of 11 GPUs in this Watch Dogs 2 video card benchmark, including the RX 480 vs. GTX 1060, GTX 1070, GTX 1080, RX 470, R9 Fury X, and more.

We've been through Battlefield 1 a few times now. First were the GPU benchmarks, then the HBAO vs. SSAO benchmark, then the CPU benchmark. This time it's RAM, and the methodology remains mostly the same. Note that these results are not comparable to previous results because (1) the game has received updates, (2) the memory spec has changed for this test, and (3) we have updated our graphics drivers. The test platforms and memory vary for this test, with the rest remaining similar to what we've done in the past; everything is defined in the methodology below.

Our CPU benchmark had us changing frequencies between test platforms as we worked out our test patterns, methodology, and bench specs for the endeavor. During that exploratory process, we noticed that memory speeds of 3200MHz were noticeably faster in informal testing than speeds of, say, 2400MHz. That was just done by eye, though; it wasn't an official benchmark, and we wanted to dedicate a separate piece to the question.

This content benchmarks memory performance in Battlefield 1, focusing on RAM speed (e.g. 1600MHz, 1866MHz, 2133MHz, 2400MHz, and so forth) and capacity. We hope to answer whether 8GB is "enough" and find a sweet spot for price-performance in memory selection.

This benchmark took a while to complete. We first started benchmarking CPUs with Battlefield 1 just after our GPU content was published, but ran into questions that took some back-and-forth with AMD to sort out. Some of that conversation will be recapped here.

Our Battlefield 1 CPU benchmark is finally complete. We tested most of the active Skylake suite (i7-6700K down to i3-6300), the FX-8370, -8320E, and some Athlon CPUs. Unfortunately, we ran out of activations before getting to Haswell or last-gen CPUs, but those may be visited at some point in the future. Our next goal is to look into the impact of memory speed on BF1 performance, or determine if there is any at all.

Back on track, though: Today's feature piece is to determine at what point a CPU will begin bottlenecking performance elsewhere in the system when playing Battlefield 1. Our previous two content pieces related to Battlefield 1 are linked below:

The goal of this content is to show that choosing between HBAO and SSAO has a negligible impact on Battlefield 1 performance. This benchmark arose following our Battlefield 1 GPU performance analysis, which demonstrated consistent frametimes and frame delivery on both AMD and nVidia devices when using DirectX 11. Two of our YouTube commenters asked if HBAO would create a performance swing that would favor nVidia over AMD and, although we've discussed this topic with several games in the past, we decided to revisit it for Battlefield 1. This time, we'll also spend a bit of time defining what ambient occlusion actually is, explaining how screen-space occlusion relies on information strictly within the z-buffer, and then looking at the performance cost of HBAO in BF1.
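To make the "z-buffer only" point more concrete ahead of the full definitions, here's a minimal sketch of the screen-space occlusion idea in Python/numpy. This is not BF1's (or any engine's) actual shader – real implementations run on the GPU with randomized hemisphere kernels and view-space reconstruction – and the function name, offsets, and constants below are purely illustrative assumptions.

```python
# Minimal sketch: estimating screen-space ambient occlusion from nothing but
# a depth (z) buffer, which is the defining trait of SSAO-style techniques.
# Darken a pixel based on how many nearby depth samples sit in front of it.
import numpy as np

def ssao_from_depth(depth, radius=4, bias=0.002, strength=1.0):
    """depth: 2D array of normalized depths (0 = near, 1 = far)."""
    occlusion = np.zeros_like(depth)
    # Fixed sample offsets in a ring around each pixel (a real shader would
    # use a randomized kernel to avoid banding).
    offsets = [(-radius, 0), (radius, 0), (0, -radius), (0, radius),
               (-radius, -radius), (radius, radius),
               (-radius, radius), (radius, -radius)]
    for dy, dx in offsets:
        # Shift the depth buffer so each pixel lines up with its neighbor.
        # np.roll wraps at the screen edges, which is fine for a sketch.
        shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        # A sample "occludes" the center if it is closer to the camera.
        occlusion += (depth - shifted > bias).astype(depth.dtype)
    occlusion /= len(offsets)
    # Ambient term: 1.0 = fully lit, lower = more occluded.
    return np.clip(1.0 - strength * occlusion, 0.0, 1.0)

# Example: a step edge in depth darkens pixels on the far side of the edge.
demo = np.full((64, 64), 0.8, dtype=np.float32)
demo[:, :32] = 0.2               # near geometry on the left half
ao = ssao_from_depth(demo)
print(ao.min(), ao.max())        # pixels near the depth edge get ao < 1.0
```

The only input is depth – no normals, no scene geometry – which is why techniques in this family (SSAO, HBAO) cost roughly the same regardless of how complex the scene actually is.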

We'd also recommend our previous graphics technology deep-dive for folks who want a more technical explanation of what's going on with various AO technologies. Portions of this new article also appear in that deep-dive.

Battlefield 1 marks the arrival of another title with DirectX 12 support – sort of. The game still supports DirectX 11, and thus Windows 7 and 8, but makes efforts to shift DICE and EA toward the new world of low-level APIs. This move comes at a bit of a cost, though; our testing of Battlefield 1 has uncovered some frametime variance issues on both nVidia and AMD devices, resolvable by reverting to DirectX 11. We'll explore that in this content.

In today's Battlefield 1 benchmark, we're strictly looking at GPU performance using DirectX 12 and DirectX 11, including the recent RX 400 series, GTX 10 series, GTX 900 series, and RX 300 series GPUs. Video cards tested include the RX 480, RX 470, RX 460, R9 390X, and R9 Fury X from AMD, and the GTX 1080, 1070, 1060, 970, and 960 from nVidia. We've got a couple of others in there, too. We may separately look at CPU performance, but not today.

This BF1 benchmark comes with extensive testing methodology, as always, fully detailed within the methodology section below. Please check that section for any questions regarding drivers, test tools, measurement methodology, or GPU choices. Note also that, as with all Origin titles, we were limited to five device changes per game code per day (24 hours). We've got three codes, which allowed us up to 15 total device tests within our test period.

We had a clerical error in our original Gears of War 4 GPU benchmark, but that's been fully rectified with this content. The error stemmed from a mix of several variables, primarily having three different people working on the benchmarks on a game with about 40 graphics settings. We were also using our custom Python script (which works perfectly) for interpreting PresentMon, a new tool for FPS capture, and that threw enough production changes into the mix that we had to unpublish the content and correct it.

All of our tests, though, were good. That's the good news. The error was in chart generation, where nVidia and AMD cards were put on the same charts using different settings, creating an unintentional misrepresentation of our data. And as a reminder, that data was valid and accurate – it just wasn't put in the right place. My apologies for that. Thankfully, we caught that early and have fixed everything.

I've been in communication with AMD and nVidia all morning, so everyone is clear on what's going on. Our 4K charts were completely accurate, but the others needed a rework. We've corrected the charts and have added several new, accurately presented tests to add some value to our original benchmark. Some of that includes, for instance, new tests that properly compare Ultra performance on nVidia vs. AMD, tests that look at the 3GB vs. 6GB GTX 1060, and more. Gears of War 4 is one of the titles distributed to both PC and Xbox, generally leveraging UWP as a link.

Gears of War 4 is a DirectX 12 title. To this end, the game requires Windows 10 to play – the Anniversary Update, to be specific about what Microsoft forces users to install – and grants lower-level access to the GPU via the engine. Asynchronous compute is now supported in Gears of War 4, useful for both nVidia and AMD, and dozens of graphics settings make for a brilliantly complex assortment of options for PC enthusiasts. In this regard, The Coalition has done well to deliver a PC title of high flexibility, going the next step further by meticulously detailing the options with CPU-, GPU-, and memory-intensive indicators. Configure the game in an ambitious way, and it'll warn the user of any specific setting that may cause issues on the detected hardware.

That's incredible, honestly. This takes what GTA V did by adding a VRAM slider and pushes it several steps further. We cannot commend The Coalition enough for not only supporting PC players, but for doing so in a way that is so explicitly built for fine-tuning and maximizing the hardware on the market.

In this benchmark of Gears of War 4, we'll test the FPS of various GPUs at Ultra and High settings (4K, 1440p, 1080p), furthering our tests by splashing in an FPS scaling chart across Low, Medium, High, and Ultra graphics. The benchmarks include the GTX 1080, 1070, 1060, RX 480, 470, and 460, and then further include last gen's GTX 980 Ti, 970, 960, and 950 with AMD's R9 Fury X, R9 390X, and R9 380X.

Benchmarking in Vulkan or Dx12 is still a bit of a pain in the NAS, but PresentMon makes it possible to conduct accurate FPS and frametime tests without reliance upon FRAPS. July 11 marks DOOM's introduction of the Vulkan API in addition to its existing OpenGL 4.3/4.5 programming interfaces. Between the nVidia and AMD press events the last few months, we've seen id Software surface a few times to talk big about their Vulkan integration – but it's taken a while to finalize.
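For the curious, the rough shape of that PresentMon workflow looks something like the sketch below – an illustrative stand-in, not our actual interpreter script. PresentMon logs one CSV row per presented frame, with the MsBetweenPresents column carrying the frametime in milliseconds; the file name and the exact 1%/0.1% low definitions here are assumptions for the example.

```python
# Rough sketch of turning a PresentMon CSV capture into the numbers we chart:
# average FPS plus 1% and 0.1% low framerates.
import csv
import statistics

def summarize_presentmon(csv_path):
    frametimes_ms = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                frametimes_ms.append(float(row["MsBetweenPresents"]))
            except (KeyError, ValueError):
                continue  # skip malformed rows

    if not frametimes_ms:
        raise ValueError("no frametime samples found in capture")

    avg_fps = 1000.0 / statistics.mean(frametimes_ms)

    # "1% low" style metrics: average the worst 1% (or 0.1%) of frametimes,
    # then convert back to FPS. Exact definitions vary between outlets.
    worst_first = sorted(frametimes_ms, reverse=True)
    def low_fps(fraction):
        n = max(1, int(len(worst_first) * fraction))
        return 1000.0 / statistics.mean(worst_first[:n])

    return {
        "frames": len(frametimes_ms),
        "avg_fps": avg_fps,
        "1%_low_fps": low_fps(0.01),
        "0.1%_low_fps": low_fps(0.001),
    }

# Usage (hypothetical capture file):
# print(summarize_presentmon("doom_vulkan_rx480_1080p.csv"))
```

Because this works from presentation events rather than hooking the renderer, the same capture-and-parse approach applies across OpenGL, Vulkan, and DirectX 12, which is exactly why PresentMon replaces FRAPS for these tests.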

As we're in the midst of GTX 1060 benchmarking and other ongoing hardware reviews, this article is being kept short. Our test passes look only at the RX 480, GTX 1080, and GTX 970, so we're strictly looking at scalability on the new Polaris and Pascal architectures. The GTX 970 was thrown in to see if there are noteworthy improvements for Vulkan when moving from Maxwell to Pascal.

This test is not meant to show if one video card is “better” than another (as our original Doom benchmark did), but is instead meant to show OpenGL → Vulkan scaling within a single card and architecture. Note that, as with any game, Doom is indicative only of performance and scaling within Doom. The results in other Vulkan games, like The Talos Principle, will not necessarily mirror these. The new APIs are complex enough that developers must carefully implement them (Vulkan or Dx12) to best exploit the low-level access. We spoke about this with Chris Roberts a while back, who offered up this relevant quote:

Mirror's Edge – the first game – had some of the most intensive graphics of its time. Just enabling PhysX alone was enough to bring most systems to their knees, particularly when choppers unloaded their miniguns into glass to create infinitesimal shards. The new game just came out, and aims to bring optimized, high-fidelity visuals to the series.

Our Mirror's Edge Catalyst graphics card benchmark tests FPS performance on the GTX 1080, 1070, 970, 960, AMD R9 Fury X, 390X, 380X, and more. We're trying to add more cards as we continue to circumvent the DRM activation restrictions – which we're mostly doing by purchasing the game on multiple accounts (update: we were able to get around the limitations with two codes, and it seems that the activation limitation expires after just 24 hours). The video card benchmark looks at performance scaling between High, Ultra, and “Hyper” settings, and runs the tests for 1080p (Ultra), 1440p (Ultra), and 4K (High), with a splash of 1080p/Hyper tests.

We've also looked briefly into VRAM consumption (further below) and have defined some of the core game graphics settings.

AMD was first to market with Doom-ready drivers, but exhibited exceptionally poor performance with a few of its cards. The R9 390X was one of those, being outperformed massively (~40%) by the GTX 970 and nearly matched by the GTX 960 at 1080p. If it's not apparent from the price difference between the two, that's unacceptable; the hardware of the R9 390X should effortlessly outperform the GTX 960, a budget-class card, and it just wasn't happening. Shortly after the game launched and AMD posted its initial driver set (16.5.2), a hotfix (16.5.2.1) was released to resolve performance issues on the R9 390 series cards.

We had a moment to re-benchmark DOOM using the latest drivers between our GTX 1080 Hybrid experiment and current travel to Asia. The good news: AMD's R9 390X has improved substantially – about 26% in some tests. Other cards were unaffected by this hotfix (though we did test them), so don't expect a performance gain out of your 380X, Fury X, or similar non-390-series device.

Note: These charts now include the GTX 1080 and its overclocked performance.

Following our GTX 1080 coverage of DOOM – and preempting the eventual review – we spent the time to execute GPU benchmarks of id Software's DOOM. The new FPS boasts high-fidelity visuals and fast-paced, Quake-era gameplay mechanics. Histrionic explosions dot Doom's hellscape, overblown only by its omnipresent red tint and magma flows. The game is heavy on particle effects and post-processing, performing much of its crunching toward the back of the GPU pipeline (after geometry and rasterization).

Geometry isn't particularly complex, with the game's indoor settings composed almost entirely of labyrinthine corridors and rooms. Framerate fluctuates heavily; the more lighting effects and particle simulation in the camera frustum, the greater the swings in FPS as players emerge into or depart from lava-filled chambers and other areas of post-FX interest.

In this Doom graphics card benchmark, we test the framerate (FPS) of various GPUs in the new Doom “4” game, including the GTX 980 Ti, 980, 970, Fury X, 390X, 380X, and more. We'll briefly define game graphics settings first; game graphics definitions include brief discussion on TSSAA, directional occlusion quality, shadows, and more.

Note: Doom will soon add support for Vulkan. It's not here yet, but we've been told to expect Vulkan support within a few weeks of launch. All current tests were executed with OpenGL. We will revisit for Vulkan once the API is enabled.
