This content piece explores the performance anomalies and command line options for the Final Fantasy XV benchmark, with later pieces going into detail on CPU and GPU benchmarks. Prior to committing to massive GPU and CPU benchmarks, we always pretest the game to understand its performance behaviors and scaling across competing devices. For FFXV, we’ve already detailed the FPS impact of benchmark duration, the impact of graphics settings and resolution on scaling, our use of the command line to automate and custom-configure benchmarks, and the poor frametime performance we discovered under certain benchmarking conditions.

We started out by testing for run-to-run variance, which helps us locate outliers and determine how many test passes we need to conduct per device. In this frametime plot, you can see that the first test pass, illustrated on a GTX 1070 with the settings in the chart, exhibits significantly more volatile frametimes. The frame-to-frame interval occasionally slams into a wall during the first 6-minute test pass, causing visible stutters in gameplay.

We’ve been working on our Final Fantasy XV benchmarking and already have multiple machines going, including both CPU and GPU testing. This process included the discovery of run-to-run variance, attributable to slow initialization of game resources during the first test pass. We can solve for this by running additional test passes and eliminating the first pass from the data pool.
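
To make that concrete, below is a minimal sketch of the approach, with a hypothetical data layout and placeholder numbers rather than our actual tooling or logs: record the frame-to-frame intervals for each pass, discard the first pass, and average the rest.

```c
#include <stdio.h>

#define PASSES 4      /* hypothetical: total benchmark passes logged */
#define FRAMES 1000   /* hypothetical: frametimes recorded per pass */

/* Mean frame-to-frame interval (ms) for one pass. */
static double mean_frametime(const double *ft_ms, int count) {
    double sum = 0.0;
    for (int i = 0; i < count; i++)
        sum += ft_ms[i];
    return sum / count;
}

int main(void) {
    /* passes[p][f] = frame-to-frame interval in milliseconds. */
    static double passes[PASSES][FRAMES];

    /* Placeholder fill; in practice these come from the logged benchmark data. */
    for (int p = 0; p < PASSES; p++)
        for (int f = 0; f < FRAMES; f++)
            passes[p][f] = 16.7;

    /* Skip pass 0: it is skewed by first-time asset loading. */
    double sum = 0.0;
    for (int p = 1; p < PASSES; p++)
        sum += mean_frametime(passes[p], FRAMES);
    double avg_ms = sum / (PASSES - 1);

    printf("Average frametime over passes 2-%d: %.2f ms (%.1f FPS)\n",
           PASSES, avg_ms, 1000.0 / avg_ms);
    return 0;
}
```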

One of the downsides to Final Fantasy XV’s benchmark is that there is no customization for graphics settings: You’ve got High, “Middle,” and “Lite.” Critically, the medium settings seem to disable most of the nVidia GameWorks graphics options, which will affect relative performance between nVidia and AMD cards. We spoke with AMD about a driver update for the game, and have been informed that updated drivers will ship closer to the game’s launch. In the meantime, we’ll be testing High and Medium settings alike, building a database for relative performance scaling between AMD and nVidia. That content is due out soon.

While we’ve been working on programming our benchmark, reddit user “randomstranger454” pulled the quality settings that build Final Fantasy XV’s presets. We have bolded the settings we believe to be most interesting:

As everyone begins running the Final Fantasy XV PC benchmark, we’d like to notify the userbase that, on our test platform, we have observed some run-to-run variance in frame-to-frame intervals from one pass to the next. This seems to stem entirely from the first pass of the benchmark, where the game is likely still loading all of the assets into memory. After the first pass, we’ve routinely observed improved performance on runs two, three, and onward. We attribute this to first-time launcher initialization of all the game assets.

The short answer to the headline is “sometimes,” but it’s more complicated than just FPS over time. To really address this question, we have to first explain the oddity of FPS as a metric: Frames per second is inherently an average – if we tell you something is operating at a variable framerate, but is presently 60FPS, what does that really mean? Because framerate is, by definition, an average over some period of time, deriving spot-measurements in frames per second is fundamentally flawed. All this stated, the industry has accepted frames per second as a measure of game performance, and it is one of the most user-friendly ways to convey the actual, underlying metric: frametime, or the frame-to-frame interval, measured in milliseconds.
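
As a quick illustration, here is a minimal sketch with made-up frametimes (not captured data): a window whose average FPS looks healthy can still hide a single 50 ms frame.

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical frame-to-frame intervals in milliseconds (not captured data). */
    double frametimes_ms[] = {14.0, 14.0, 14.0, 14.0, 14.0,
                              14.0, 14.0, 14.0, 14.0, 50.0};
    int n = sizeof(frametimes_ms) / sizeof(frametimes_ms[0]);

    double total_ms = 0.0, worst_ms = 0.0;
    for (int i = 0; i < n; i++) {
        total_ms += frametimes_ms[i];
        if (frametimes_ms[i] > worst_ms)
            worst_ms = frametimes_ms[i];
    }

    /* FPS is frames divided by elapsed time: an average by definition. */
    printf("Average over window: %.1f FPS\n", 1000.0 * n / total_ms);
    /* The worst single frame is what the user feels as a stutter. */
    printf("Worst frame: %.1f ms (%.1f FPS equivalent)\n",
           worst_ms, 1000.0 / worst_ms);
    return 0;
}
```

In this example, the window averages roughly 57 FPS, yet the 50 ms frame is what a player actually feels as a stutter – which is why frametime is the more honest metric to log.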

Today, we’re making public some internal data that we’ve collected for benchmark validation. This data looks specifically at benchmark duration optimization: min-maxing for maximum accuracy and card count against the minimum run time required to retain that accuracy.

Before we publish any data for a benchmark – whether that’s gaming, thermals, or power – we run internal-only testing to validate our methods and thought process. This is often where we discover flaws in our methods, allowing us to refine them prior to publishing any review data. There are a few things we traditionally research for each game: benchmark duration requirements, the load level of a particular area of the game, the best- and worst-case performance scenarios in the game, and the average expected performance for the user. We also regularly find shortcomings in test design – that’s the nature of working on a test suite for a year at a time. As with most things in life, the goal is to develop something good, then iterate on it as we learn from the process.
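
The sketch below is a simplified illustration of that kind of duration check, with placeholder numbers rather than our actual logs or scripts: given a full-length run’s frametime log, find how long the benchmark has to run before its running-average FPS lands within a set tolerance of the full-run average.

```c
#include <math.h>
#include <stdio.h>

/* Returns the elapsed seconds at which the running-average FPS first falls
 * within `tolerance` (fractional, e.g. 0.01 for 1%) of the full-run average.
 * Simplified: a stricter version would also require it to stay within
 * tolerance for the remainder of the run. */
static double sufficient_duration(const double *ft_ms, int n, double tolerance) {
    double total_ms = 0.0;
    for (int i = 0; i < n; i++)
        total_ms += ft_ms[i];
    double full_fps = 1000.0 * n / total_ms;

    double elapsed_ms = 0.0;
    for (int i = 0; i < n; i++) {
        elapsed_ms += ft_ms[i];
        double fps_so_far = 1000.0 * (i + 1) / elapsed_ms;
        if (fabs(fps_so_far - full_fps) / full_fps <= tolerance)
            return elapsed_ms / 1000.0;
    }
    return total_ms / 1000.0;  /* never converged early; needs the full run */
}

int main(void) {
    /* Placeholder log: a heavy opening segment, then steadier frametimes. */
    double log_ms[600];
    for (int i = 0; i < 600; i++)
        log_ms[i] = (i < 60) ? 25.0 : 16.7;

    printf("Duration needed for 1%% accuracy: %.1f s\n",
           sufficient_duration(log_ms, 600, 0.01));
    return 0;
}
```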

To everyone’s confusion, a review copy of Dragon Ball FighterZ for Xbox One showed up in our mailbox a few days ago. We’ve worked with Bandai Namco in the past, but never on console games. They must have cast a wide net with review samples--and judging by the SteamCharts stats, it worked.

It’d take some digging through the site archives to confirm, but we might never have covered a real fighting game before. None of us play them, we’ve tapered off doing non-benchmark game reviews, and they generally aren’t demanding enough to be hardware testing candidates (recommended specs for FighterZ include a 2GB GTX 660). For the latter reason, it’s a good thing they sent us the Xbox version. It’s “Xbox One X Enhanced,” but not officially listed as 4K, although that’s hard to tell at a glance: the resolution it outputs on a 4K display is well above 1080p, and the clear, bold lines of the cel-shaded art style make it practically indistinguishable from native 4K even during gameplay. Digital Foundry claims it’s 3264 x 1836 pixels, or 85% of 4K in height/width.
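
The arithmetic is easy enough to check; the quick sketch below just plugs in the Digital Foundry figures:

```c
#include <stdio.h>

int main(void) {
    double w = 3264.0, h = 1836.0;     /* reported render resolution */
    double w4k = 3840.0, h4k = 2160.0; /* native 4K (UHD) */

    printf("Linear scale: %.0f%% of 4K width, %.0f%% of 4K height\n",
           100.0 * w / w4k, 100.0 * h / h4k);
    printf("Pixel count: %.0f%% of 4K total pixels\n",
           100.0 * (w * h) / (w4k * h4k));
    return 0;
}
```

That 85% linear scale works out to roughly 72% of the pixel count of native 4K; for comparison, 1080p is 25% of 4K’s pixel count.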

Today, we’re using Dragon Ball FighterZ to test our new console benchmarking tools, and further iterate upon them for -- frankly -- bigger future launches. This will enable us to run console vs. PC testing in greater depth going forward.

It’s been nearly a month since news broke on Meltdown and Spectre, but the tech industry is still swarming like an upturned anthill as patches have been tumultuous, hurting performance, causing reboots, and then getting halted and replaced, while major manufacturers try to downplay the problem. Yes, that sentence was almost entirely about Intel, but they aren’t the only ones affected. We now return to the scene of the crime, looking at the Meltdown and Spectre exploits with the assistance of several research teams behind the discovery of these attacks.

To summarize the summary of our previous article: Meltdown is generally agreed to be more severe, but limited to Intel, while Spectre has to do with a fundamental aspect of CPUs made in the past 20 years. Both involve an important technique used by modern CPUs to increase efficiency, called “speculative execution,” which allows a CPU to preemptively queue up tasks it speculates will occur next. Sometimes, these cycles are wasted, as the predicted actions never occur; most of the time, however, speculating on incoming jobs greatly improves the efficiency of the processor by preemptively computing the inbound instructions. That’s not the focus of this article, but this Medium article provides a good intermediate-level explanation of the mechanics, as do the Spectre and Meltdown whitepapers themselves. For now, it’s important to know that although “speculative execution” is a buzzword being tossed around a lot, it isn’t in itself an exploit--the exploits just take advantage of it.
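
The pattern Spectre Variant 1 abuses looks roughly like the snippet below, adapted from the kind of example given in the Spectre whitepaper; the array names and cache-line spacing here are illustrative. The bounds check is supposed to protect array1, but a mistrained branch predictor can speculatively execute the body with an out-of-bounds x, and the speculative load into array2 leaves a cache footprint that a timing side channel can later recover, even though the architectural results are discarded.

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
uint8_t array2[256 * 4096];  /* spacing spreads each value across cache lines */

void victim_function(size_t x, size_t array1_size) {
    if (x < array1_size) {
        /* May run speculatively even when x is out of bounds; the value read
         * from array1[x] determines which region of array2 gets cached. */
        volatile uint8_t tmp = array2[array1[x] * 4096];
        (void)tmp;
    }
}

int main(void) {
    /* Benign, in-bounds call; the danger is in how a predictor trained on
     * calls like this treats a later out-of-bounds x. */
    victim_function(0, sizeof(array1));
    return 0;
}
```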

The most comprehensive hub of information on Meltdown and Spectre is the website hosted by Graz University of Technology in Austria, home of one of the research teams that discovered and reported them to Intel. That’s “one of” because there are no fewer than three other teams acknowledged by Graz that independently discovered and reported these vulnerabilities over the past few months. We’ve assembled a rough timeline of events, with the aid of WIRED’s research:

This week's hardware news recap teases some of our upcoming content pieces, including a potential test of Dragon Ball FighterZ, along with pending-publication interviews of key Spectre & Meltdown researchers. In addition to that, as usual, we discuss the major hardware news of the past few days. The headline item is the most notable, and pertains to Samsung's GDDR6 memory entering mass production, nearing readiness for deployment in future products. This will almost certainly include GPU products, alongside the expected mobile device deployments. We also talk AMD's new hires and RTG restructure, its retiring of the implicit primitive discard accelerator for Vega, and SilverStone's new low-profile air cooler.

Show notes are below the embedded video.

A GTX 1080 Ti today costs the same as an entire PC build in 2017 – and one containing said 1080 Ti, at that. RAM today costs 2-4x its price in 2016 and 2017. SSDs, at best, have stagnated; at worst, some have increased in price marginally.

Today, we’re benchmarking a 2017 “MSRP” PC build versus a 2018 current-price PC build, using a $1500 budget. Our objective was to see how far we could push performance at around $1500, using only new components, when comparing the best prices of yesteryear versus the prices of today. If there must be a point to this content, the primary takeaway is to avoid purchasing new GPUs at prices so far beyond MSRP that they enter old flagship categories.

As for components, we’re using Intel as a baseline, as platform scalability makes more sense when tested between the same architectures (going to Ryzen, for instance, would make for better performance in Blender, but worse performance in games, thus killing the point of a like-for-like dollar-stretching benchmark). Intel has also had the most severe price swings in the past year: AMD has remained largely steady at launch Ryzen prices, often dipping well under them, while Intel has stayed at or north of MSRP.

Our builds are as follows:

MSI GTX 1070 Ti DUKE Review: Thermals & Overclocking

Published January 25, 2018 at 11:07 pm

We recently bought the MSI GTX 1070 Ti Duke for a separate PC build, and decided we’d go ahead and review the card while at it. The MSI GTX 1070 Ti Duke graphics card uses a three-fan cooler, which MSI now seems to be officially calling the “tri-frozr” cooler, and was among the more affordable GTX 1070 Ti cards on the market. That reign has ended as GPU prices have re-skyrocketed, but perhaps it’ll return to $480. Until then, we’ll write this assuming that price. Beyond $480, it’s obviously not worth it, just to spell that out right now.

The MSI GTX 1070 Ti Duke has one of the thinner heatsinks of the 10-series cards, and a lot of that comes down to card form factor: The Duke fits in a 2-slot form factor, but runs a three-fan cooler. This mixture necessitates a thin, wide heatsink, which means relatively limited surface area for dissipation, but potentially quieter fans from the three-fan solution.

NOTE: We wrote this review before CES. Card prices have since skyrocketed. Do not buy any 1070 Ti for >$500. This card was reviewed assuming a $470-$480 price-point. Anything more than that, it's not worth it.

Ask GN returns! We're now on Episode 68, having taken a brief hiatus for CES. A lot of questions piled up in that time, and we dedicated the usual ~25 minutes to address as many as we could in that window. Of note, a large number of you have been asking about GPU and RAM prices, and we recently had our own encounter with severe price surges at a Microcenter location. We also received questions pertaining to component failure, proper handling of components (ESD, oil/grease, etc.), GPU failure, and must-have tools for our arsenal.

Oh, and as a bonus, we take a question on "dream testing methods" that are unachievable (presently) due to time/money/practicality limitations. Great questions this week from the community. Find the video and timestamps below:
