Final Fantasy XV is shaping up to be intensely demanding of GPU hardware, with greater deltas developing between nVidia & AMD devices at High settings than at Medium settings. The implication is that, although other graphics settings (LOD, draw distance) change between High and Medium, the most significant change is the set of GameWorks options. HairWorks, shadow libraries, and heavy ground tessellation are all toggled on with High and off with Medium. The ground tessellation is one of the most impactful to performance, particularly on AMD hardware; that said, although nVidia fares better, the 10-series GPUs still struggle with frametime consistency when running all the GameWorks options. This is something we’re investigating further, as we’ve (since writing this benchmark) discovered how to toggle graphics settings individually, something natively disabled in the FFXV benchmark. Stay tuned for that content.

In the meantime, we still have some unique GPU benchmarks and technical graphics analysis for you. One of our value-adds is 1440p benchmarks, which are, for some inexplicable reason, disabled in the native FFXV benchmark client. We automated and scripted our benchmarks, enabling us to run tests at alternative resolutions. Another value-add is that we control our benchmark variables; although it is admirable and interesting that Square Enix is collecting and aggregating user benchmark data, that data is also poisoned. The card hierarchy makes little sense at times, and that’s because users run benchmarks with all manner of variables – none of which are accounted for (or even publicly logged) in the FFXV benchmark utility.

Separately, we also confirmed with Square Enix that the graphics settings are the same for all default resolutions, something that we had previously questioned.

This content piece will explore the performance anomalies and command line options for the Final Fantasy XV benchmark, with later pieces going into detail on CPU and GPU benchmarks. Prior to committing to massive GPU and CPU benchmarks, we always pretest the game to understand its performance behaviors and scaling across competing devices. For FFXV, we’ve already detailed the FPS impact of benchmark duration, the impact of graphics settings and resolution on scaling, the use of command line options to automate and custom-configure benchmarks, and poor frametime performance under certain benchmarking conditions.
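
For readers curious about the automation side, here is a minimal sketch of the wrapper concept: launch the benchmark executable repeatedly from a script and archive a frametime log for each pass. The executable path, argument names, and log location below are placeholders for illustration, not the benchmark's real options (those are covered in the command line discussion).

```python
# Minimal sketch of a benchmark automation wrapper. The executable path,
# argument names, and log location below are placeholders for illustration,
# not the benchmark's actual command line options.
import shutil
import subprocess
from pathlib import Path

BENCH_EXE = Path(r"C:\FFXVBenchmark\ffxv-benchmark.exe")   # placeholder path
CAPTURE_LOG = Path(r"C:\capture\frametimes.csv")           # placeholder log path
RESULTS_DIR = Path(r"C:\bench-results")
PASSES = 4  # the first pass gets discarded later (see run-to-run variance notes)

def run_pass(pass_index, extra_args):
    """Launch one benchmark pass, then archive the frametime log it produced."""
    subprocess.run([str(BENCH_EXE), *extra_args], check=True)
    shutil.copy(CAPTURE_LOG, RESULTS_DIR / f"pass_{pass_index}.csv")

if __name__ == "__main__":
    RESULTS_DIR.mkdir(exist_ok=True)
    # Placeholder switches; the real options are documented in the command
    # line section of this article.
    args = ["-resolution", "2560x1440", "-quality", "high"]
    for i in range(PASSES):
        run_pass(i, args)
```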

We started out by testing for run-to-run variance, which helps us locate outliers and determine how many test passes we need to conduct per device. In this frametime plot, you can see that the first test pass, illustrated on a GTX 1070 with the settings in the chart, exhibits significantly more volatile frametimes than subsequent passes. The frame-to-frame interval occasionally slams into a wall during the first 6-minute test pass, causing noticeable, visible stutters in gameplay.

We’ve been working on our Final Fantasy XV benchmarking and already have multiple machines going, including both CPU and GPU testing. This process included the discovery of run-to-run variance, owing to slow initialization of game resources during the first test pass. We can correct for this by running additional test passes and eliminating the first pass from the data pool.
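
To illustrate that filtering step, the sketch below summarizes each pass and excludes the first from the usable pool. It assumes one CSV of frame-to-frame intervals per pass; the column name and file layout are assumptions for illustration, not the output of any particular capture tool.

```python
# Sketch: summarize per-pass frametime behavior and drop the first pass.
# Assumes one CSV of frame-to-frame intervals (in ms) per pass; the column
# name "ms_between_presents" is an assumption, not a fixed capture format.
import csv
import statistics
from pathlib import Path

def load_frametimes(path):
    with path.open(newline="") as f:
        return [float(row["ms_between_presents"]) for row in csv.DictReader(f)]

passes = sorted(Path(r"C:\bench-results").glob("pass_*.csv"))
for i, p in enumerate(passes):
    ft = load_frametimes(p)
    print(f"pass {i}: avg {statistics.mean(ft):.2f} ms, "
          f"stdev {statistics.stdev(ft):.2f} ms")

# Pass 0 is the warm-up pass (asset initialization); exclude it from the pool.
usable_passes = passes[1:]
```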

One of the downsides to Final Fantasy XV’s benchmark is that there is no customization for graphics settings: You’ve got High, “Middle,” and “Lite.” Critically, the medium settings seem to disable most of the nVidia GameWorks graphics options, which will affect relative performance between nVidia and AMD cards. We spoke with AMD about a driver update for the game, and have been informed that updated drivers will ship closer to the game’s launch. In the meantime, we’ll be testing High and Medium settings alike, building a database for relative performance scaling between AMD and nVidia. That content is due out soon.

While we’ve been working on programming our benchmark, reddit user “randomstranger454” pulled the quality settings that make up Final Fantasy XV’s presets. We will bold the settings we believe to be most interesting:

As everyone begins running the Final Fantasy XV PC benchmark, we’d like to notify the userbase that, on our test platform, we have observed some run-to-run variance in frame-to-frame intervals from one pass to the next. This seems to stem entirely from the first pass of the benchmark, where the game is likely still loading all of the assets into memory. After the first pass, we’ve routinely observed improved performance on runs two, three, and onward. We attribute this to first-time launcher initialization of all the game assets.

The short answer to the headline is “sometimes,” but it’s more complicated than just FPS over time. To really address this question, we first have to explain the oddity of FPS as a metric: Frames per second is inherently an average – if we tell you something is operating at a variable framerate, but is presently at 60FPS, what does that really mean? Because framerate is an average over a period of time, any spot-measurement expressed in frames per second is inherently flawed. All that stated, the industry has accepted frames per second as a standard measure of game performance, and it remains one of the most user-friendly ways to convey the actual, underlying metric: frametime, or the frame-to-frame interval, measured in milliseconds.
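
To make that relationship concrete, the sketch below converts a list of frame-to-frame intervals into the figures we chart: an average FPS value and 1% / 0.1% lows, here computed as the average of the slowest 1% and 0.1% of frames expressed as FPS. This is a simplified illustration of the math, not our exact analysis pipeline.

```python
# Sketch: derive average FPS and 1% / 0.1% low figures from frametimes (ms).
# Simplified for illustration; not our exact analysis pipeline.

def average_fps(frametimes_ms):
    # Average FPS is total frames divided by total time, not the mean of
    # per-frame instantaneous rates.
    return 1000.0 * len(frametimes_ms) / sum(frametimes_ms)

def low_fps(frametimes_ms, percent):
    # Take the slowest N% of frames and express their average as FPS.
    worst = sorted(frametimes_ms, reverse=True)
    count = max(1, int(len(worst) * percent / 100.0))
    slowest = worst[:count]
    return 1000.0 * len(slowest) / sum(slowest)

frametimes = [16.7, 16.5, 17.0, 33.1, 16.8, 16.6, 41.9, 16.7]  # example data
print(f"AVG:      {average_fps(frametimes):.1f} FPS")
print(f"1% LOW:   {low_fps(frametimes, 1.0):.1f} FPS")
print(f"0.1% LOW: {low_fps(frametimes, 0.1):.1f} FPS")
```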

Today, we’re releasing publicly some internal data that we’ve collected for benchmark validation. This data looks specifically at benchmark duration optimization – min-maxing test accuracy and card count against the minimum time required to retain that accuracy.

Before we publish any data for a benchmark – whether that’s gaming, thermals, or power – we run internal-only testing to validate our methods and thought process. This is often where we discover flaws in our methods, which allows us to refine them prior to publishing any review data. There are a few things we traditionally research for each game: benchmark duration requirements, the load level of a particular area of the game, the best- and worst-case performance scenarios in the game, and the average expected performance for the user. We also regularly find shortcomings in test design – that’s the nature of working on a test suite for a year at a time. As with most things in life, the goal is to develop something good, then iterate on it as we learn from the process.
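
As one illustration of the duration question specifically, the sketch below checks how long a pass must run before its average FPS settles within a tolerance of the full-pass average; the step size and tolerance are placeholder values, not the criteria we ultimately settled on.

```python
# Sketch: estimate the shortest benchmark duration whose average FPS lands
# within a tolerance of the full-pass average. The 10-second step and the
# 0.5 FPS tolerance are illustrative values, not our final criteria.

def fps_over_first_seconds(frametimes_ms, seconds):
    elapsed_ms, frames = 0.0, 0
    for ft in frametimes_ms:
        if elapsed_ms >= seconds * 1000.0:
            break
        elapsed_ms += ft
        frames += 1
    return 1000.0 * frames / elapsed_ms

def minimum_duration(frametimes_ms, tolerance_fps=0.5):
    full_avg = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
    total_seconds = int(sum(frametimes_ms) / 1000.0)
    for s in range(10, total_seconds, 10):
        if abs(fps_over_first_seconds(frametimes_ms, s) - full_avg) <= tolerance_fps:
            # A stricter check would also require every longer duration to
            # stay within tolerance, not just the first one that qualifies.
            return s
    return total_seconds
```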

Prior to the Ryzen launch, we discovered an issue with GTA V testing that would cause high-speed CPUs of a particular variety to stutter when achieving high framerates. Our first video didn’t conclude with a root cause, but we now believe the game is running into engine constraints – present in other RAGE engine games – that trigger choppy behavior on those CPUs. Originally, we only saw this on the best i5s – older-generation i5 CPUs were not affected, as they were not fast enough to exceed the framerate limiter in GTA V (~187FPS), and so never encountered the stutters. The newest i5 CPUs, like the 7600K and 6600K, would post high framerates, but lose consistency in frametimes. As an end user, the solution would be (interestingly) to increase your graphics quality or resolution, or otherwise bring FPS down to around the 120-165 mark.

Then Ryzen came out, and then Ryzen 5 came out. With R5, we encountered a few stutters in GTA V when SMT was enabled and the CPU was operating under conditions permitting it to achieve the same high framerates as Intel’s i5-7600K CPUs. To better illustrate, we can actually turn down graphics settings to the point of forcing framerates to the max on 4C/8T R5 CPUs, relinquishing some of the performance constraint, and then encounter hard stuttering. In short: a higher framerate overall would result in a much worse experience for the player, both on i5 and R5 CPUs. The 4C/8T R5 CPUs exhibited this same stutter performance (as i5 CPUs) most heavily when SMT was disabled, at which point we spit out a graph like this:

 

Mirror's Edge – the first game – had some of the most intensive graphics of its time. Just enabling PhysX alone was enough to bring most systems to their knees, particularly when choppers unloaded their miniguns into glass to create infinitesimal shards. The new game just came out, and aims to bring optimized, high-fidelity visuals to the series.

Our Mirror's Edge Catalyst graphics card benchmark tests FPS performance on the GTX 1080, 1070, 970, 960, AMD R9 Fury X, 390X, 380X, and more. We're trying to add more cards as we continue to circumvent the DRM activation restrictions – which we're mostly doing by purchasing the game on multiple accounts (update: we were able to get around the limitations with two codes, and it seems that the activation limitation expires after just 24 hours). The video card benchmark looks at performance scaling between High, Ultra, and “Hyper” settings, and runs the tests for 1080p (Ultra), 1440p (Ultra), and 4K (High), with a splash of 1080p/Hyper tests.

We've also looked briefly into VRAM consumption (further below) and have defined some of the core game graphics settings.
