This content piece started with Buildzoid’s suggestion that we install a custom VBIOS on our RX 570 for timing tuning tests. Our card proved temperamental with the custom VBIOS, so we ended up instead – for now – testing AMD’s built-in timing level options in the drivers. AMD’s GPU drivers have a drop-down option featuring “automatic,” “timing level 1,” and “timing level 2” settings for Radeon cards, all of which lack any formal definition within the drivers. We ran an RX 570 and a Vega 56 through most of our tests with these timing options, using dozens of test passes across the 3DMark suite (for each line item) to minimize error margins and help narrow the range of statistically significant results. We also ran “real” gaming workloads in addition to these 3DMark passes.
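Aggregating dozens of passes per line item works out to a simple mean plus a confidence interval; the sketch below shows the idea, with illustrative (not measured) scores:

```python
import statistics

def summarize_passes(scores, z=1.96):
    """Mean and approximate 95% confidence interval for repeated benchmark passes."""
    mean = statistics.mean(scores)
    # Standard error of the mean shrinks with more passes,
    # which is why dozens of runs tighten the error margins
    sem = statistics.stdev(scores) / len(scores) ** 0.5
    return mean, (mean - z * sem, mean + z * sem)

# Illustrative 3DMark-style graphics scores across six passes
scores = [12110, 12150, 12095, 12180, 12135, 12120]
mean, (ci_lo, ci_hi) = summarize_passes(scores)
print(f"mean={mean:.0f}, 95% CI=({ci_lo:.0f}, {ci_hi:.0f})")
```

When the intervals for two timing levels stop overlapping, the difference can be treated as statistically significant rather than run-to-run noise.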
Were we to step it up, the next goal would be to use third-party tools to manually tune the memory timings, whether GDDR5 or HBM2, or custom VBIOSes on cards that are more stable. For now, we’ll focus on AMD’s built-in options.
Sea of Thieves, the multiplayer-adventure-survival-pirate simulator from Rare, has finally been released after months of betas and stress tests. Judging by the difficulty they’ve had keeping the servers up after all that preparation, it seems like it’s been pretty popular. This comparison looks at Sea of Thieves Xbox One X vs. PC graphics quality, equalized graphics settings, and framerate/frametime performance on the Xbox.
SoT is also one of the first really big multiplayer titles added to the Xbox Play Anywhere program, meaning it’s playable on both Xbox One and Windows 10 with a single purchase (yes, it’s a Windows 10-exclusive DX11 game). Xbox and PC players are also free to encounter each other in-game, or even party up together, with the only obvious downside being forced interaction with the Windows 10 store and Xbox app. Together, these two aspects make a PC vs. Xbox comparison particularly interesting, since any player who owns both a PC and an Xbox could easily switch.
Final Fantasy XV recently released on PC and, given the attention we drew to the benchmark’s LOD and HairWorks issues, it’s only fair that we take a look at the finished product. Prior to the PC release, the best playable version of the game (cracked Origin preload aside) was the Xbox One X version, so our baseline for this graphics comparison is the Xbox at 4K using the “high” preset.
To match our PC settings to the Xbox version, we first selected the default choice for every option, which got us 90% of the way there. That includes “Average” settings for Model LOD, Anisotropic Filtering, Lighting, Shadows, Ambient Occlusion, and Filtering. Assets (high-quality asset pack), Geomapping (ground tessellation), and all NVIDIA features were turned off, anti-aliasing was set to TAA, and motion blur was turned on. Although this wasn’t a performance test, we limited framerate to the Xbox’s cap of 30FPS for good measure, and set resolution scaling to 100% (since dynamic resolution isn’t available on PC). This is a pretty close approximation of what the Xbox is capable of, and it’s an encouraging sight: the Xbox’s “High” is the PC’s “Average” in almost every category.
This will be a quick one. There is some required viewing/reading before diving in: Previously, with the FFXV standalone benchmark release, we found significant culling deficiencies of objects in the game, including both GameWorks and non-GameWorks objects. This suggested overall inefficiency and hasty development, as opposed to some sort of malfeasance. Square Enix later tweeted a rather direct acknowledgement of the benchmark’s issues, and began work to optimize the game (and the GameWorks integration) for launch.
Today’s test is a quick one. Square Enix launched a playable demo of Final Fantasy XV and, although it’s still not the complete game, we wanted to see if any of the object culling issues had been addressed. We were primarily interested in HairWorks LOD scaling, as that was previously an issue responsible for causing performance loss on both nVidia and AMD hardware – even when no HairWorks objects were anywhere remotely close to the player.
PlayerUnknown’s Battlegrounds was officially released on PC this past December, but it’s been playable via Steam Early Access for nearly a year now. In all that time, none of us have played the game, despite many requests for benchmarks. Games that are in active development don’t make for easy testing, and neither do exclusively multiplayer games with tons of variance. Even Overwatch has the ability to play against bots.
Now that PUBG is 1.0 on PC and sort-of-released on Xbox, though, we have extra motivation to buckle down and start testing. We chose to start with the Xbox One X version, since the lack of graphics options makes things simpler. It’s listed as both 4K HDR ready and Xbox One X Enhanced, so our primary testing was done at 4K, with additional Xbox One X benchmarking at 1080p for PUBG. Technically, it’s a “Game Preview,” but the list of other titles in this category makes it look like something that was created expressly for PUBG. It also costs full PC price, $30.
This deep-dive looks at PUBG framerate and frametime performance (which is shockingly bad for a console), along with graphics analysis of the game’s visuals. Although the article covers testing and benchmarking in slightly more depth, we’d also strongly recommend watching the video, as it contains visual representation of what’s happening in-game.
Despite having just called the FFXV benchmark “useless” and “misleading,” we did still have some data left over that we wanted to publish before moving on. We were in the middle of benchmarking all of our CPUs when we discovered the game’s two separate culling and LOD issues (which Square Enix has acknowledged and is fixing), and we stopped all tests upon that discovery. That said, we still had some interesting data collected on SMT and Hyperthreading, and we wanted to publish that before shelving the game until launch.
We started testing with the R7 1700 and i7-8700K a few days ago, looking at numThreads=X settings in command line to search for performance deltas. Preliminary testing revealed that these settings provided performance uplift up to 8 threads, with diminishing returns above or below that count.
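A sweep like this amounts to generating one launch command per thread count and logging the result of each run. The sketch below shows the structure; the executable name is a placeholder, though the numThreads=X argument matches the option described above:

```python
# Build one command line per numThreads value to test.
# "ffxv_benchmark.exe" is a stand-in name, not the real binary path.
def build_sweep(exe, thread_counts):
    """Return the list of command lines for a thread-count sweep."""
    return [[exe, f"numThreads={n}"] for n in thread_counts]

for cmd in build_sweep("ffxv_benchmark.exe", [4, 6, 8, 12, 16]):
    print(" ".join(cmd))
    # In practice, each command would be launched via subprocess.run(cmd)
    # and its average FPS logged for later comparison against n=8
```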
Update: Square Enix is aware of this issue, has acknowledged its existence, and is working on an update for launch.
Although we don't believe this to be intentional, the Final Fantasy XV benchmark is among the most misleading we’ve encountered in recent history. This is likely a result of restrictive development timelines and a resistance to delaying product launch and, ultimately, that developers see this as "just" a benchmark. That said, the benchmark is what's used for folks to get an early idea of how their graphics cards will perform in the game. From what we've seen, that's not accurate to reality. Not only does the benchmark lack technology shown in tech demonstrations (we hope these will be added later, like strand deformation), but it is still taking performance hits for graphics settings that fail to materialize as visual fidelity improvements. Much of this stems from GameWorks settings, so we've been in contact with nVidia over these findings for the past few days.
As we discovered after hours of testing the utility, the FFXV benchmark is disingenuous in its execution, rendering load-intensive objects outside the camera frustum and resulting in a lower reported performance metric. We accessed the hexadecimal graphics settings for manual GameWorks setting tuning, made easier by exposing .INI files via a DLL, then later entered noclip mode to dig into some performance anomalies. On our own, we’d discovered that HairWorks toggling (on/off) had performance impact in areas where no hair existed. The only reason this would happen, aside from anomalous bugs or improper use of HairWorks (also likely, and not mutually exclusive), would be if the single hair-endowed creature in the benchmark were drawn at all times.
The benchmark is rendering creatures that use HairWorks even when they’re miles away from the character and the camera. Again, this became evident while running benchmarks in a zone with no HairWorks objects whatsoever – zero, none – at which point we realized, by accessing the game’s settings files, that disabling HairWorks would still improve performance even with no HairWorks objects on screen. Validation is easy, too: testing the custom graphics settings file by toggling each setting, we're able to individually confirm (1) when Flow is disabled (the fire effect changes), (2) when Turf is disabled (grass strands become textures or, potentially, particle meshes), (3) when Terrain is enabled (shows tessellation of the ground at the demo start; terrain is pushed down and deformed, while protrusions are pulled up), and (4) when HairWorks is disabled (buffalo hair becomes a planar alpha texture). We're also able to confirm, by testing the default "High," "Standard," and "Low" settings, that the game's default GameWorks configuration is set to the following (High settings):
- VXAO: Off
- Shadow libs: Off
- Flow: On
- HairWorks: On
- TerrainTessellation: On
- Turf: On
Benchmarking custom settings matching the above results in identical performance to the benchmark launcher window, validating that these are the stock settings. We must use the custom settings approach, as going between Medium and High offers no settings customization, and also changes multiple settings simultaneously. To isolate whether a performance change is from GameWorks versus view distance and other settings, we must individually test each GameWorks setting from a baseline configuration of "High."
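The isolation procedure reduces to generating one custom configuration per GameWorks option, each differing from the stock "High" baseline in exactly that option. A minimal sketch, using the setting names from the list above as hypothetical config keys:

```python
# Stock "High" GameWorks configuration, per the validated list above.
HIGH_BASELINE = {
    "VXAO": False,
    "ShadowLibs": False,
    "Flow": True,
    "HairWorks": True,
    "TerrainTessellation": True,
    "Turf": True,
}

def isolation_configs(baseline):
    """One config per setting, flipping only that setting from baseline.

    Any performance delta vs. the baseline run is then attributable
    to that single GameWorks option, not to view distance or other
    settings that change between presets.
    """
    configs = {}
    for key in baseline:
        cfg = dict(baseline)
        cfg[key] = not cfg[key]
        configs[key] = cfg
    return configs

for name, cfg in isolation_configs(HIGH_BASELINE).items():
    print(name, "->", "On" if cfg[name] else "Off")
```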
Final Fantasy XV is shaping up to be intensely demanding of GPU hardware, with greater deltas developing between nVidia & AMD devices at High settings than Medium settings. The implication is that, although other graphics settings (LOD, draw distance) change between High and Medium, the most significant change is that of GameWorks options. HairWorks, Shadow libraries, and heavy ground tessellation are all toggled on with High and off with Medium. The ground tessellation is one of the most impactful to performance, particularly on AMD hardware; that said, although nVidia fares better, the 10-series GPUs still struggle with frametime consistency when running all the GameWorks options. This is something we’re investigating further, as we’ve (since writing this benchmark) discovered how to toggle graphics settings individually, something natively disabled in the FFXV benchmark. Stay tuned for that content.
In the meantime, we still have some unique GPU benchmarks and technical graphics analysis for you. One of our value-adds is 1440p benchmarks, which are, for some inexplicable reason, disabled in the native FFXV benchmark client. We automated and scripted our benchmarks, enabling us to run tests at alternative resolutions. Another value-add is that we’re controlling our benchmarks; although it is admirable and interesting that Square Enix is collecting and aggregating user benchmark data, that data is also poisoned. The card hierarchy makes little sense at times, and that’s because users run benchmarks with any manner of variables – none of which are accounted for (or even publicly logged) in the FFXV benchmark utility.
Separately, we also confirmed with Square Enix that the graphics settings are the same for all default resolutions, something that we had previously questioned.
This content piece will explore the performance anomalies and command line options for the Final Fantasy XV benchmark, with later pieces going into detail on CPU and GPU benchmarks. Prior to committing to massive GPU and CPU benchmarks, we always pretest the game to understand its performance behaviors and scaling across competing devices. For FFXV, we’ve already detailed the FPS impact of benchmark duration and the impact of graphics settings and resolution on scaling, used the command line to automate and custom-configure benchmarks, and discovered poor frametime performance under certain benchmarking conditions.
We started out by testing for run-to-run variance, which would be used to help locate outliers and determine how many test passes we need to conduct per device. In this frametime plot, you can see that the first test pass, illustrated on a GTX 1070 with the settings in the chart, exhibits significantly more volatile frametimes. The frame-to-frame interval occasionally slams into a wall during the first 6-minute test pass, causing noticeable, visible stutters in gameplay.
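Spotting those wall-slamming frames in a log is straightforward: flag any frametime far above the run's median. A minimal sketch, with an illustrative (not measured) frametime log:

```python
import statistics

def find_stutters(frametimes_ms, factor=2.0):
    """Flag frames whose frametime exceeds factor x the run's median.

    The median is robust to the spikes themselves, so a few bad
    frames don't mask their own detection.
    """
    median = statistics.median(frametimes_ms)
    return [(i, ft) for i, ft in enumerate(frametimes_ms)
            if ft > factor * median]

# Illustrative log: mostly ~16.7 ms (60 FPS) frames with two spikes
log = [16.7, 16.6, 16.8, 95.0, 16.7, 16.5, 60.2, 16.7]
for index, ft in find_stutters(log):
    print(f"frame {index}: {ft} ms")
```

Counting flagged frames per pass gives a quick, comparable measure of how much more volatile the first pass is than later ones.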