FFXV Hyperthreading & SMT On vs. Off Benchmarks

Published February 07, 2018 at 5:28 pm

Despite having just called the FFXV benchmark “useless” and “misleading,” we still had some data left over that we wanted to publish before moving on. We were in the middle of benchmarking all of our CPUs when we discovered the game’s two separate culling and LOD issues (which Square Enix has acknowledged and is fixing), and we stopped all tests upon that discovery. That said, we had already collected some interesting data on SMT and Hyperthreading, and we wanted to publish that before shelving the game until launch.

We started testing with the R7 1700 and i7-8700K a few days ago, looking at numThreads=X settings on the command line to search for performance deltas. Preliminary testing revealed that these settings provided performance uplift up to 8 threads; above or below that count, we observed diminishing returns.
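For readers replicating this, a minimal sketch of the sweep is below; the executable name is a hypothetical placeholder, and frametime capture happens externally (the script only launches one run per thread count):

```python
# Minimal sketch of a numThreads sweep, assuming a hypothetical
# benchmark executable name. Frametime capture runs externally;
# this script only launches one benchmark pass per thread count.
import subprocess

BENCHMARK_EXE = "ffxv-benchmark.exe"  # assumption: placeholder binary name

for threads in (2, 4, 6, 8, 12, 16):
    print(f"Launching benchmark with numThreads={threads}")
    subprocess.run([BENCHMARK_EXE, f"numThreads={threads}"], check=True)
```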

We recently published a deep-dive that discovered a lack of lower LOD scaling for HairWorks effects in FFXV, an issue we attributed to Square Enix and flagged to nVidia. We further noted that it wasn’t just GameWorks effects: entire models were being drawn when miles away from the player. Following the report, Square Enix’s official FFXV Twitter account (@FFXVEN) released a series of tweets about the issue, noting: “A Level of Detail (LOD) issue has been discovered that affects the benchmark scores. The benchmark also suffers from stuttering; both of the issues will be addressed in the shipping game.”

Samsung Confirms ASIC Miner Production

Published February 05, 2018 at 5:02 pm

Samsung has officially confirmed that it is producing ASICs (Application-Specific Integrated Circuits) intended for cryptocurrency mining, sold to unnamed clients for use in ASIC mining machines. These machines are distinct from GPU miners and do not meaningfully affect desktop GPU supply.

As the name implies, ASICs are chips designed for a single purpose. There’s nothing unusual about producing ASICs, but mining-specific ones have been the domain of TSMC until now, primarily with client Bitmain. Samsung won’t be doing the mining itself, just supplying the hardware: TechPowerUp suggests the order was placed by the “Chinese clients” mentioned in a recent earnings report. Our understanding is that the cryptocurrencies ASICs can effectively mine are ones now beyond the capabilities of home mining operations, like Bitcoin, so they’re used by massive currency farms. Currencies built on SHA-256 algorithms are best mined with ASICs.
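As a toy illustration of why ASICs dominate here, the sketch below implements the double-SHA-256 proof-of-work loop used by Bitcoin-style coins; an ASIC is effectively this one loop baked into silicon, which is why general-purpose GPU shaders can’t compete on these algorithms:

```python
# Toy double-SHA-256 proof-of-work loop, the workload that SHA-256
# ASICs execute in fixed-function hardware. Difficulty is kept
# trivially low so the example terminates quickly.
import hashlib

def mine(header: bytes, difficulty_zero_bytes: int) -> int:
    """Find a nonce whose double-SHA-256 digest starts with N zero bytes."""
    target_prefix = b"\x00" * difficulty_zero_bytes
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if digest.startswith(target_prefix):
            return nonce
        nonce += 1

print(mine(b"example block header", 2))  # real targets are astronomically harder
```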

Update: Square Enix has acknowledged this issue and is working on an update for launch.

Although we don't believe this to be intentional, the Final Fantasy XV benchmark is among the most misleading we’ve encountered in recent history. This is likely a result of restrictive development timelines, a resistance to delaying the product launch, and, ultimately, the fact that developers see this as "just" a benchmark. That said, the benchmark is what folks use to get an early idea of how their graphics cards will perform in the game. From what we've seen, that's not accurate to reality. Not only does the benchmark lack technology shown in tech demonstrations (we hope these will be added later, like strand deformation), but it also takes performance hits for graphics settings that fail to materialize as visual fidelity improvements. Much of this stems from GameWorks settings, so we've been in contact with nVidia over these findings for the past few days.

As we discovered after hours of testing the utility, the FFXV benchmark is disingenuous in its execution, rendering load-intensive objects outside the camera frustum and resulting in a lower reported performance metric. We accessed the hexadecimal graphics settings for manual GameWorks setting tuning, made easier by exposing .INI files via a DLL, then later entered noclip mode to dig into some performance anomalies. On our own, we’d discovered that toggling HairWorks (on/off) had a performance impact in areas where no hair existed. The only reason this would happen, aside from anomalous bugs or improper use of HairWorks (also likely, and not mutually exclusive), would be if the single hair-endowed creature in the benchmark were drawn at all times.

The benchmark is rendering creatures that use HairWorks even when they’re miles away from the character and the camera. Again, this was made evident while running benchmarks in a zone with no HairWorks objects whatsoever – zero, none – at which point we realized, by accessing the game’s settings files, that disabling HairWorks would still improve performance even when no HairWorks objects were on screen. Validation is easy, too: Testing the custom graphics settings file by toggling each setting, we're able to (1) individually confirm when Flow is disabled (the fire effect changes), (2) when Turf is disabled (grass strands become textures or, potentially, particle meshes), (3) when Terrain is enabled (shows tessellation of the ground at the demo start; terrain is pushed down and deformed, while protrusions are pulled up), and (4) when HairWorks is disabled (buffalo hair becomes a planar alpha texture). We're also able to confirm, by testing the default "High," "Standard," and "Low" settings, that the game's default GameWorks configuration is set to the following (High settings):

  • VXAO: Off
  • Shadow libs: Off
  • Flow: On
  • HairWorks: On
  • TerrainTessellation: On
  • Turf: On

Benchmarking custom settings matching the above results in identical performance to the benchmark launcher window, validating that these are the stock settings. We must use the custom settings approach: switching between Medium and High offers no settings customization and changes multiple settings simultaneously. To isolate whether a performance change comes from GameWorks versus view distance and other settings, we must individually test each GameWorks setting from a baseline configuration of "High."
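A minimal sketch of that isolation process follows, assuming a hypothetical file path and a plain key=value format (the real settings required hex editing, so treat this as the logic rather than the literal file format):

```python
# One-variable-at-a-time isolation from the "High" GameWorks baseline.
# The file path and key=value format are assumptions for illustration.
from pathlib import Path

SETTINGS_FILE = Path("GraphicsConfig.ini")  # hypothetical path

HIGH_BASELINE = {
    "VXAO": "Off",
    "ShadowLibs": "Off",
    "Flow": "On",
    "HairWorks": "On",
    "TerrainTessellation": "On",
    "Turf": "On",
}

def write_config(settings: dict) -> None:
    SETTINGS_FILE.write_text(
        "\n".join(f"{key}={value}" for key, value in settings.items())
    )

# Flip exactly one option per run so any FPS delta is attributable
# to that option alone, rather than to preset-wide changes.
for key in HIGH_BASELINE:
    trial = dict(HIGH_BASELINE)
    trial[key] = "Off" if trial[key] == "On" else "On"
    write_config(trial)
    print(f"Config written with {key} toggled; run the benchmark pass now.")
```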

Final Fantasy XV is shaping up to be intensely demanding of GPU hardware, with greater deltas developing between nVidia and AMD devices at High settings than at Medium settings. The implication is that, although other graphics settings (LOD, draw distance) change between High and Medium, the most significant change is that of the GameWorks options. HairWorks, Shadow libraries, and heavy ground tessellation are all toggled on with High and off with Medium. The ground tessellation is one of the most impactful settings for performance, particularly on AMD hardware; although nVidia fares better, the 10-series GPUs still struggle with frametime consistency when running all the GameWorks options. This is something we’re investigating further, as we’ve (since writing this benchmark) discovered how to toggle graphics settings individually, something natively disabled in the FFXV benchmark. Stay tuned for that content.

In the meantime, we still have some unique GPU benchmarks and technical graphics analysis for you. One of our value-adds is 1440p benchmarks, which are, for some inexplicable reason, disabled in the native FFXV benchmark client. We automated and scripted our benchmarks, enabling us to run tests at alternative resolutions. Another value-add is that we’re controlling our benchmarks: although it is admirable and interesting that Square Enix is collecting and aggregating user benchmark data, that data is also poisoned. The card hierarchy makes little sense at times, and that’s because users run benchmarks with any manner of variables – none of which are accounted for (or even publicly logged) in the FFXV benchmark utility.

Separately, we also confirmed with Square Enix that the graphics settings are the same for all default resolutions, something that we had previously questioned.

This content piece will explore the performance anomalies and command line options for the Final Fantasy XV benchmark, with later pieces going into detail on CPU and GPU benchmarks. Prior to committing to massive GPU and CPU benchmarks, we always pretest the game to understand its performance behaviors and scaling across competing devices. For FFXV, we’ve already detailed the FPS impact of benchmark duration and of graphics settings and resolution on scaling, used the command line to automate and custom-configure benchmarks, and discovered poor frametime performance under certain benchmarking conditions.

We started out by testing for run-to-run variance, which helps us locate outliers and determine how many test passes we need to conduct per device. In this frametime plot, you can see that the first test pass, illustrated on a GTX 1070 with the settings in the chart, exhibits significantly more volatile frametimes. The frame-to-frame interval occasionally slams into a wall during the first 6-minute test pass, causing noticeable, visible stutters in gameplay.

We’ve been working on our Final Fantasy XV benchmarking and already have multiple machines going, including both CPU and GPU testing. This process included the discovery of run-to-run variance, attributable to slow initialization of game resources during the first test pass. We can correct for this with additional test passes and by eliminating the first test pass from the data pool.
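A minimal sketch of that first-pass exclusion, with illustrative frametime data standing in for real captures:

```python
# Compute a per-pass average, then aggregate over passes 2..N only.
# Frametimes are in milliseconds; the data below is illustrative.
from statistics import mean

def average_fps(frametimes_ms):
    return 1000.0 / mean(frametimes_ms)

passes = [
    [18.1, 16.9, 34.2, 17.0, 29.8],  # pass 1: asset-loading spikes
    [16.6, 16.8, 16.7, 16.9, 16.5],  # pass 2: warmed caches
    [16.7, 16.6, 16.8, 16.6, 16.7],  # pass 3
]

steady_state = passes[1:]  # drop the first pass before aggregating
print(f"All passes:   {mean(average_fps(p) for p in passes):.1f} FPS")
print(f"Excl. pass 1: {mean(average_fps(p) for p in steady_state):.1f} FPS")
```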

One of the downsides to Final Fantasy XV’s benchmark is that there is no customization for graphics settings: You’ve got High, “Middle,” and “Lite.” Critically, the medium settings seem to disable most of the nVidia GameWorks graphics options, which will affect relative performance between nVidia and AMD cards. We spoke with AMD about a driver update for the game, and have been informed that updated drivers will ship closer to the game’s launch. In the meantime, we’ll be testing High and Medium settings alike, building a database for relative performance scaling between AMD and nVidia. That content is due out soon.

While we’ve been working on programming our benchmark, reddit user “randomstranger454” pulled the quality settings that define Final Fantasy XV’s presets. We will bold the settings we believe to be most interesting:

As everyone begins running the Final Fantasy XV PC benchmark, we’d like to notify the userbase that, on our test platform, we have observed some run-to-run variance in frame-to-frame intervals from one pass to the next. This seems to stem entirely from the first pass of the benchmark, where the game is likely still loading all of the assets into memory. After the first pass, we’ve routinely observed improved performance on runs two, three, and onward. We attribute this to first-time launcher initialization of all the game assets.

The short answer to the headline is “sometimes,” but it’s more complicated than just FPS over time. To really address this question, we have to first explain the oddity of FPS as a metric: Frames per second is inherently an average – if we tell you something is operating at a variable framerate, but is presently at 60FPS, what does that really mean? Because framerate is an average over a period of time, deriving spot-measurements in frames per second is fundamentally flawed. All this stated, the industry has accepted frames per second as a rating measure of performance for games, and it is one of the most user-friendly means to convey the actual, underlying metric: frametime, or the frame-to-frame interval, measured in milliseconds.
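As a toy demonstration, the sketch below builds two frametime traces with identical average FPS – one smooth, one stuttering – and shows how a 1% low-style metric exposes the difference (both the data and the exact metric definition are illustrative):

```python
# Two traces with the same mean frametime: a single FPS average hides
# the stutter, while the worst-1% frametimes reveal it. Illustrative data.
from statistics import mean

smooth = [16.7] * 60                  # steady ~60 FPS
stutter = [10.0] * 50 + [50.0] * 10   # same mean, visible hitches

for name, trace in (("smooth", smooth), ("stutter", stutter)):
    avg_fps = 1000.0 / mean(trace)
    worst_1pct = sorted(trace)[-max(1, len(trace) // 100):]  # slowest frames
    low_fps = 1000.0 / mean(worst_1pct)
    print(f"{name}: avg {avg_fps:.0f} FPS, 1% low {low_fps:.0f} FPS")
```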

Today, we’re publicly releasing some internal data that we’ve collected for benchmark validation. This data looks specifically at benchmark duration optimization tests, which min-max for maximum accuracy and card count against the minimum time required to retain that accuracy.

Before we publish any data for a benchmark – whether that’s gaming, thermals, or power – we run internal-only testing to validate our methods and thought process. This is often where we discover flaws in our methods, allowing us to refine them prior to publishing any review data. There are a few things we traditionally research for each game: benchmark duration requirements, the load level of a particular area of the game, the best- and worst-case performance scenarios in the game, and the average expected performance for the user. We also regularly find shortcomings in test design – that’s the nature of working on a test suite for a year at a time. As with most things in life, the goal is to develop something good, then iterate on it as we learn from the process.
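As one example of how a duration requirement test can work, a simplified sketch: truncate one full-length frametime capture at increasing durations and check where the average converges on the full-run value (synthetic data stands in for a real capture):

```python
# Truncate a full frametime capture at increasing durations and compare
# each window's average FPS to the full run. Synthetic data for illustration.
import random
from statistics import mean

random.seed(7)
full_run = [16.7 + random.uniform(-4.0, 4.0) for _ in range(20_000)]  # ~5.5 min
full_fps = 1000.0 / mean(full_run)

def truncate(trace_ms, seconds):
    """Keep only the frames that fit within the first N seconds."""
    total, kept = 0.0, []
    for ft in trace_ms:
        if total >= seconds * 1000:
            break
        kept.append(ft)
        total += ft
    return kept

for seconds in (30, 60, 120, 300):
    fps = 1000.0 / mean(truncate(full_run, seconds))
    print(f"{seconds:>3}s window: {fps:.2f} FPS (full run: {full_fps:.2f} FPS)")
```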

To everyone’s confusion, a review copy of Dragon Ball FighterZ for Xbox One showed up in our mailbox a few days ago. We’ve worked with Bandai Namco in the past, but never on console games. They must have cast a wide net with review samples--and judging by the SteamCharts stats, it worked.

It’d take some digging through the site archives to confirm, but we might never have covered a real fighting game before. None of us play them, we’ve tapered off doing non-benchmark game reviews, and they generally aren’t demanding enough to be hardware testing candidates (recommended specs for FighterZ include a 2GB GTX 660). For the latter reason, it’s a good thing they sent us the Xbox version. It’s “Xbox One X Enhanced,” but not officially listed as 4K, although that’s hard to tell at a glance: the resolution it outputs on a 4K display is well above 1080p, and the clear, bold lines of the cel-shaded art style make it practically indistinguishable from native 4K even during gameplay. Digital Foundry claims it’s 3264 x 1836 pixels, or 85% of 4K in each dimension.

Today, we’re using Dragon Ball FighterZ to test our new console benchmarking tools, and further iterate upon them for -- frankly -- bigger future launches. This will enable us to run console vs. PC testing in greater depth going forward.
