The past week of hardware news primarily centers on nVidia and AMD, both of which are launching new GPUs under names shared with existing lines. This struck a chord with us, because the new GT 1030 silently launched by nVidia follows the exact same pattern AMD took when rebranding RX 460s as “RX 560s,” despite significant hardware changes underneath.

To be very clear, we strongly object to releasing a new, worse product under the same name and badging as its predecessor. How close that product comes to the original in performance is entirely irrelevant: it’s not the same product, and that’s all that matters. It deserves a different name.

We spend most of the news video ranting about GPU naming by both companies, but also include a couple of other industry topics. Find the show notes below, or check the video for the more detailed story.


This week's hardware news recap follows GTC 2018, where we had a host of nVidia and GPU-adjacent news to discuss. That's all recapped heavily in the video portion, as most of it was off-the-top reporting just after the show ended. For the rest, we talk 4K 144Hz displays, 3DMark's raytracing demo, AMD's Radeon Rays, the RX Vega 56 Red Devil card, and CTS Labs updates.

As for this week, we're back to lots of CPU testing, as we've been doing for the past few weeks now. We're also working on some secret projects that we'll more fully reveal soon. For the immediate future, we'll be at PAX East on Friday, April 6, and will be on a discussion panel with Bitwit Kyle and Corsair representatives. We're planning to record the panel for online viewing.

At last week’s GTC, the nVidia-hosted GPU Technology Conference, nVidia CEO Jensen Huang showcased the new TITAN W graphics card, which had been revealed to press under embargo. The Titan W is nVidia’s first dual-GPU card in many years, and follows the compute-focused Titan V from 2017.

The nVidia Titan W graphics card hosts two V100 GPUs and 32GB of HBM2 memory, claiming a TDP of 500W and a price of $8,000.

“I’m really just proving to shareholders that I’m healthy,” Huang laughed after his fifth consecutive hour of talking about machine learning. “I could do this all day – and I will,” the CEO said, with a nod to PR, who immediately locked the doors to the room.

If, to you, the word "unpredictable" sounds like a positive attribute for a graphics card, ASRock has something you may want. ASRock used words like “unpredictable” and “mysterious” in its official Phantom Gaming trailer to describe an upcoming series of AMD Radeon-equipped graphics cards. This is ASRock’s first entry into the graphics card space, where the company’s PCB designers will face new challenges with AMD RX Vega GPUs (and future architectures).

The branding is for “Phantom” graphics cards, and the first-teased card appears to use a fairly standard dual-axial fan design with a traditional aluminum finstack and ~6mm heatpipes. A single 8-pin header is shown on the rendered teaser card, but as it is only a render, we’re not sure what the actual product will look like.

As part of our new and ongoing “Bench Theory” series, we are publishing a year’s worth of internal-only data that we’ve used to drive our 2018 GPU test methodology. We haven’t yet implemented the 2018 test suite, but will soon. The goal of this series is to help viewers and readers understand what goes into test design, and we aim to underscore the level of accuracy that GN demands for its publication. Our first information dump focused on benchmark duration, addressing when it’s appropriate to use 30-second runs, 60-second runs, and longer. As we stated in the first piece, we ask that any content creators leveraging this research in their own testing properly credit GamersNexus for its findings.

Today, we’re looking at standard deviation and run-to-run variance in tested games. The games on our bench cycle regularly, so the purpose is less to establish game-specific standard deviation (something we’re now addressing at game launch) and more to build an overall understanding of how games deviate run-to-run. This is also why conducting multiple, shorter test passes (see: benchmark duration) is often preferable to conducting fewer, longer passes; after all, we are all bound by the laws of time.

Looking at statistical dispersion can help us understand whether a game itself is consistent enough for hardware benchmarks. If a game is inaccurate or varies wildly from one run to the next, we have to determine whether that variance is driver-, hardware-, or software-related. If it’s just the game, we must then ask the philosophical question of whether we’re testing the game or testing the hardware. Sometimes, a game with highly variable performance can still be worth testing – primarily if it’s a game people want to play, like PUBG, despite its questionable performance. Other times, the game should be tossed. If the goal is a hardware benchmark and a game behaves like an outlier while also going largely unplayed, it becomes suspect as a test platform.
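To make the dispersion idea concrete, below is a minimal sketch of how run-to-run standard deviation and coefficient of variation might be computed from repeated benchmark passes. This is illustrative only: the game names and FPS figures are invented, and this is not GN’s internal tooling.

```python
# Illustrative sketch: run-to-run dispersion for repeated benchmark passes.
# Game names and FPS values are made up for demonstration purposes.
import statistics

runs = {
    "Game A": [92.4, 93.1, 91.8, 92.9, 93.3],   # tightly grouped passes
    "Game B": [60.2, 66.8, 58.9, 64.1, 61.7],   # wider run-to-run spread
}

for game, fps in runs.items():
    mean = statistics.mean(fps)
    stdev = statistics.stdev(fps)        # sample standard deviation
    cov = stdev / mean * 100.0           # coefficient of variation, in %
    print(f"{game}: mean={mean:.1f} FPS, stdev={stdev:.2f}, CoV={cov:.1f}%")
```

A game whose coefficient of variation stays low across many passes is a better hardware benchmark candidate than one that swings several percent between otherwise identical runs.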

Our latest Ask GN episode talks methodology and benchmarking challenges with GPU boost, GDDR6 availability, "mining" on AMD's Radeon SSG, and more. This is also our first episode to come with a new accompaniment, released in the form of a Patreon-only Ask GN. The separate video is visible to Patreon backers, and answers a couple of extra questions that were submitted via the Patreon Discord.

As usual, timestamps are provided below the embedded video. The major focus is on some quick GDDR6 news, then some discussion on GPU benchmarking approaches.

Find out more below:

Newegg today revoked its affiliate commission for video cards, a change the company's sub-affiliate networks attribute to "Bitcoin's unexpected popularity." This statement is composed primarily of a misunderstanding or misattribution of the market (or bullshit, in other words), although it does contain some truth. By "Bitcoin," we must first assume that the company really means "cryptocurrency," seeing as Bitcoin is functionally unminable on GPUs. Even with that assumption, cryptocurrency still does not account for the GPU price increase; the increase, as we've discussed on numerous occasions, is mostly a result of GPU memory prices rising while GPU memory availability falls. In recent interviews with manufacturers, we learned that 8GB of GDDR5 has increased in manufacturing cost, raising BOM by $20-$30. From what we understand, GDDR5 price movements are typically on a scale of +/- $5, but the $20-$30 hike forced some vendors to officially raise GPU MSRP (not just third-party retail price, but actual MSRP).
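As a rough, back-of-the-envelope illustration of how a memory BOM increase can ripple into MSRP, here is a short sketch. Apart from the reported $20-$30 GDDR5 range, every figure (base MSRP, channel markup) is an assumption for demonstration, not vendor data.

```python
# Hypothetical example: propagating a GDDR5 BOM increase into MSRP.
# Only the $20-$30 BOM range comes from reporting; the rest is assumed.
base_msrp = 239.0            # assumed 8GB card MSRP before the memory hike
memory_bom_increase = 25.0   # midpoint of the reported $20-$30 increase
channel_markup = 1.15        # assumed multiplier applied through the channel

new_msrp = base_msrp + memory_bom_increase * channel_markup
print(f"Estimated adjusted MSRP: ${new_msrp:.2f}")   # ~$267.75
```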

Update: Square Enix is aware of this issue, has acknowledged its existence, and is working on an update for launch.

Although we don't believe this to be intentional, the Final Fantasy XV benchmark is among the most misleading we’ve encountered in recent history. This is likely a result of restrictive development timelines, a resistance to delaying the product launch, and, ultimately, the fact that developers see this as "just" a benchmark. That said, the benchmark is what folks use to get an early idea of how their graphics cards will perform in the game, and from what we've seen, it is not accurate to reality. Not only does the benchmark lack technology shown in tech demonstrations (we hope these will be added later, like strand deformation), but it also takes performance hits for graphics settings that fail to materialize as visual fidelity improvements. Much of this stems from GameWorks settings, so we've been in contact with nVidia over these findings for the past few days.

As we discovered after hours of testing the utility, the FFXV benchmark is disingenuous in its execution, rendering load-intensive objects outside the camera frustum and resulting in a lower reported performance metric. We accessed the hexadecimal graphics settings for manual GameWorks setting tuning, made easier by exposing .INI files via a DLL, then later entered noclip mode to dig into some performance anomalies. On our own, we’d discovered that HairWorks toggling (on/off) had performance impact in areas where no hair existed. The only reason this would happen, aside from anomalous bugs or improper use of HairWorks (also likely, and not mutually exclusive), would be if the single hair-endowed creature in the benchmark were drawn at all times.
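For context on the frustum point above: a renderer normally culls objects whose bounding volumes fall entirely outside the camera frustum before doing expensive work like HairWorks simulation. The sketch below is a conceptual illustration of that check with hypothetical plane and object data; it is not the FFXV engine's code.

```python
# Conceptual sketch of frustum culling: an object fully outside any frustum
# plane should be skipped before expensive effects (e.g., HairWorks) run.
# Planes and positions here are hypothetical.
from dataclasses import dataclass

@dataclass
class Plane:
    # Plane n·p + d = 0, with the normal pointing toward the inside.
    nx: float
    ny: float
    nz: float
    d: float

def sphere_in_frustum(center, radius, planes):
    """Return False when the bounding sphere lies fully behind any plane."""
    cx, cy, cz = center
    for p in planes:
        signed_dist = p.nx * cx + p.ny * cy + p.nz * cz + p.d
        if signed_dist < -radius:
            return False   # completely outside this plane: cull the object
    return True

# A hair-endowed creature far behind the camera should be culled, so its
# HairWorks cost never reaches the GPU.
frustum = [Plane(0.0, 0.0, 1.0, 0.0)]   # near plane only, for brevity
print(sphere_in_frustum((0.0, 0.0, -500.0), 2.0, frustum))   # -> False
```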

The benchmark is rendering creatures that use HairWorks even when they’re miles away from the character and the camera. Again, this became evident while running benchmarks in a zone with no HairWorks objects whatsoever – zero, none – at which point we realized, by accessing the game’s settings files, that disabling HairWorks would still improve performance even when no HairWorks objects were on screen. Validation is easy, too: Testing the custom graphics settings file by toggling each setting, we're able to confirm (1) when Flow is disabled (the fire effect changes), (2) when Turf is disabled (grass strands become textures or, potentially, particle meshes), (3) when Terrain is enabled (shows tessellation of the ground at the demo start; terrain is pushed down and deformed, while protrusions are pulled up), and (4) when HairWorks is disabled (buffalo hair becomes a planar alpha texture). We're also able to confirm, by testing the default "High," "Standard," and "Low" settings, that the game's default GameWorks configuration is set to the following (High settings):

  • VXAO: Off
  • Shadow libs: Off
  • Flow: On
  • HairWorks: On
  • TerrainTessellation: On
  • Turf: On

Benchmarking custom settings matching the above results in identical performance to the benchmark launcher window, validating that these are the stock settings. We must use the custom settings approach, as going between Medium and High offers no settings customization, and also changes multiple settings simultaneously. To isolate whether a performance change is from GameWorks versus view distance and other settings, we must individually test each GameWorks setting from a baseline configuration of "High." 
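Below is a minimal sketch of that isolation approach: start from the "High" baseline and flip one GameWorks setting at a time, so any performance delta can be attributed to a single option. The dictionary mirrors the settings listed above, but the run_benchmark() hook and the way settings are written to disk are hypothetical stand-ins, not the actual FFXV configuration interface.

```python
# Sketch of one-setting-at-a-time isolation from the "High" baseline.
# run_benchmark() is a placeholder for writing the settings file and
# launching a benchmark pass; it is not a real FFXV API.
import copy

HIGH_BASELINE = {
    "VXAO": "Off",
    "ShadowLibs": "Off",
    "Flow": "On",
    "HairWorks": "On",
    "TerrainTessellation": "On",
    "Turf": "On",
}

def run_benchmark(settings: dict) -> float:
    # Placeholder: apply `settings` to the game's config, run one pass,
    # and return the average FPS of that pass.
    return 0.0

def isolate_settings() -> dict:
    baseline_fps = run_benchmark(HIGH_BASELINE)
    deltas = {}
    for key, value in HIGH_BASELINE.items():
        variant = copy.deepcopy(HIGH_BASELINE)
        variant[key] = "Off" if value == "On" else "On"
        deltas[key] = run_benchmark(variant) - baseline_fps
    return deltas
```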

Final Fantasy XV is shaping up to be intensely demanding of GPU hardware, with greater deltas developing between nVidia & AMD devices at High settings than at Medium settings. The implication is that, although other graphics settings (LOD, draw distance) change between High and Medium, the most significant change is that of the GameWorks options. HairWorks, Shadow libraries, and heavy ground tessellation are all toggled on with High and off with Medium. The ground tessellation is one of the most impactful to performance, particularly on AMD hardware; that said, although nVidia fares better, the 10-series GPUs still struggle with frametime consistency when running all the GameWorks options. This is something we’re investigating further, as we’ve (since writing this benchmark) discovered how to toggle graphics settings individually, something natively disabled in the FFXV benchmark. Stay tuned for that content.

In the meantime, we still have some unique GPU benchmarks and technical graphics analysis for you. One of our value-adds is 1440p benchmarks, which are, for some inexplicable reason, disabled in the native FFXV benchmark client. We automated and scripted our benchmarks, enabling us to run tests at alternative resolutions. Another value-add is that we control our benchmarks: although it is admirable and interesting that Square Enix is collecting and aggregating user benchmark data, that data is also poisoned. The card hierarchy makes little sense at times, and that’s because users run benchmarks with any manner of variables – none of which are accounted for (or even publicly logged) in the FFXV benchmark utility.
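As a simplified sketch of what that scripting can look like, the loop below runs repeated passes at several resolutions; the executable name and command-line flags are hypothetical placeholders, since the benchmark's actual automation hooks aren't documented here.

```python
# Sketch of scripted benchmark passes at resolutions the launcher UI hides.
# The executable name and flags are hypothetical placeholders.
import subprocess

RESOLUTIONS = ["1920x1080", "2560x1440", "3840x2160"]
PASSES = 4   # multiple shorter passes, per the benchmark-duration findings

for res in RESOLUTIONS:
    width, height = res.split("x")
    for _ in range(PASSES):
        subprocess.run([
            "ffxv_benchmark.exe",      # hypothetical executable name
            f"--width={width}",        # hypothetical flag
            f"--height={height}",      # hypothetical flag
            "--preset=custom",         # hypothetical flag
        ], check=True)
```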

Separately, we also confirmed with Square Enix that the graphics settings are the same for all default resolutions, something that we had previously questioned.

We recently bought the MSI GTX 1070 Ti Duke for a separate PC build, and decided we’d go ahead and review the card while at it. The MSI GTX 1070 Ti Duke uses a three-fan cooler, which MSI now seems to officially call the “Tri-Frozr” cooler, and was among the more affordable GTX 1070 Ti cards on the market. That reign has ended as GPU prices have re-skyrocketed, but perhaps the card will return to $480. Until then, we’ll write this review assuming that price. Beyond $480, it’s obviously not worth it, just to spell that out right now.

The MSI GTX 1070 Ti Duke has one of the thinner heatsinks of the 10-series cards, and a lot of that comes down to card form factor: The Duke fits in a 2-slot form factor, but runs a three-fan cooler. This mixture necessitates a thin, wide heatsink, which means relatively limited surface area for dissipation, but potentially quieter fans from the three-fan solution.

NOTE: We wrote this review before CES. Card prices have since skyrocketed. Do not buy any 1070 Ti for >$500. This card was reviewed assuming a $470-$480 price-point. Anything more than that, it's not worth it.


