If, to you, the word "unpredictable" sounds like a positive attribute for a graphics card, ASRock has something you may want. In its official Phantom Gaming trailer, ASRock used words like “unpredictable” and “mysterious” to describe an upcoming series of AMD Radeon-equipped graphics cards. This is ASRock’s first time entering the graphics card space, where the company’s PCB designers will face new challenges for AMD RX Vega GPUs (and future architectures).

The branding is for “Phantom” graphics cards, and the first-teased card appears to use a fairly standard dual-axial fan design with a traditional aluminum finstack and ~6mm heatpipes. A single 8-pin header is shown on the teaser card but, as it is only a render, we’re not sure what the actual product will look like.

The latest Ask GN brings us to episode #70. We’ve been running this series for a few years now, but the questions remain top-notch. For this past week, viewers asked about nVidia’s “Ampere” and “Turing” architectures – or the rumored ones, anyway – and what we know of the naming. For other core component questions, Raven Ridge received a quick note on out-of-box motherboard support and BIOS flashing.

Non-core questions pertained to cooling, like the “best” CLCs when normalizing for fans, or hybrid-cooled graphics VRM and VRAM temperatures. Mousepad engineering got an interesting offshoot question, for which we recruited engineers at Logitech for insight into mouse sensor interaction with surfaces.

More at the video below, or find our Patreon special here.

As part of our new and ongoing “Bench Theory” series, we are publishing a year’s worth of internal-only data that we’ve used to drive our 2018 GPU test methodology. We haven’t yet implemented the 2018 test suite, but will soon. The goal of this series is to help viewers and readers understand what goes into test design, and we aim to underscore the level of accuracy that GN demands for its publication. Our first information dump focused on benchmark duration, addressing when it’s appropriate to use 30-second runs, 60-second runs, and more. As we stated in the first piece, we ask that any content creators leveraging this research in their own testing properly credit GamersNexus for its findings.

Today, we’re looking at standard deviation and run-to-run variance in tested games. Games on our bench cycle regularly, so the purpose is less about game-specific standard deviation (something we’re now addressing at game launch) and more about an overall understanding of how games deviate run-to-run. This is why conducting multiple, shorter test passes (see: benchmark duration) is often preferable to conducting fewer, longer passes; after all, we are all bound by the laws of time.

Looking at statistical dispersion can help us understand whether a game itself is accurate enough for hardware benchmarks. If a game is inaccurate or varies wildly from one run to the next, we have to look at whether that variance is driver-, hardware-, or software-related. If it’s just the game, we must then ask the philosophical question of whether it’s the game we’re testing or the hardware. Sometimes, testing a game with highly variable performance can still be valuable – primarily if it’s a game people want to play, like PUBG, despite its questionable performance. Other times, the game should be tossed. If the goal is a hardware benchmark and a game behaves like an outlier, and is also largely unplayed, then it becomes suspect as a test platform.
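For readers who want to run the same check on their own data, below is a minimal Python sketch of the dispersion math in question: mean, sample standard deviation, and coefficient of variation across repeated passes. The function name and FPS values are purely illustrative, not our internal tooling.

```python
import statistics

def summarize_runs(avg_fps_per_run):
    """Return mean, sample standard deviation, and coefficient of variation (%)."""
    mean = statistics.mean(avg_fps_per_run)
    stdev = statistics.stdev(avg_fps_per_run)  # sample (n-1) standard deviation
    cv_percent = (stdev / mean) * 100          # dispersion relative to the mean
    return mean, stdev, cv_percent

# Example: four 30-second passes of one test scene (hypothetical numbers)
runs = [96.2, 94.8, 95.5, 97.1]
mean, stdev, cv = summarize_runs(runs)
print(f"Mean: {mean:.1f} FPS | StDev: {stdev:.2f} FPS | CV: {cv:.2f}%")
```

A low coefficient of variation across passes suggests the game is stable enough to resolve small hardware differences; a high one means more passes, or a different test scene, is needed.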

APU reviews have historically proven binary: Either it’s better to buy a dGPU and dirt-cheap CPU, or it’s actually a good deal. There is zero room for middle-ground in a market that’s targeting $150-$180 purchases. There’s no room to be wishy-washy, and no room for if/but/then arguments: It’s either better value than a dGPU + CPU, or it’s not worthwhile.

Ahead of our impending Raven Ridge 2400G benchmarks, we decided to test the G4560 and R3 1200 with the best GPU money can buy – because it’s literally the only GPU you can buy right now. That’d be the GT 1030. Coupled with the G4560 (~$72), we land at ~$160 for both parts, depending on the momentary fluctuations of retailers. With the R3 1200, we land at about $180 for both. The 2400G is priced at $170, or thereabouts, and lands between the two.

(Note: The 2400G & 2200G already appear to be listed at retailers, despite the fact that, at the time of writing, the embargo is still on.)

Our latest Ask GN episode talks methodology and benchmarking challenges with GPU boost, GDDR6 availability, "mining" on AMD's Radeon SSG, and more. This is also our first episode that comes with a new accompaniment, released in the form of a Patreon-only Ask GN. The separate video is visible to Patreon backers, and answers a couple of extra questions that were submitted via the Patreon Discord.

As usual, timestamps are provided below the embedded video. The major focus is on some quick GDDR6 news, then some discussion on GPU benchmarking approaches.

Find out more below:

This hardware news round-up covers the past week in PC hardware, including information on AMD's Ryzen+Vega amalgam, CPU "shortage" sensationalism, Newegg commission changes, and more. As usual, our HW News series is written as a video, but we publish show notes alongside the video. We'll leave those below the embed.

The big news for the week was AMD's 2400G & 2200G APUs, which are due out on Monday of next week. The higher-end APU will be priced around $170, and will primarily compete with low-end CPU+GPU combinations (e.g. GT 1030 and low-end R3). Of course, the APUs also carve an interesting niche in a market with limited dGPU supply. Strategically, this is a good launch window for AMD APUs.

Newegg today revoked its affiliate commission for video cards, which the company's sub-affiliate networks declare to be a change pursuant to "Bitcoin's unexpected popularity." This statement, of course, consists primarily of a misunderstanding or misattribution of the market (or bullshit, in other words), although it does contain some truth. By "Bitcoin," we must first assume that the company really means "cryptocurrency," seeing as Bitcoin is functionally unminable on GPUs. Making this assumption still does not account for the GPU price increase, though; the price increase, as we've discussed on numerous occasions, is mostly a result of GPU memory prices and GPU memory availability moving in opposite directions. In recent interviews with manufacturers, we learned that 8GB of GDDR5 has increased in manufacturing cost, raising BOM by $20-$30. From what we understand, GDDR5 price movements are typically on a scale of +/- $5, but the $20-$30 hike forced some vendors to officially raise GPU MSRP (not just third-party retail price, but actual MSRP).

Update: Square Enix is aware of this issue, has acknowledged its existence, and is working on an update for launch.

Although we don't believe this to be intentional, the Final Fantasy XV benchmark is among the most misleading we’ve encountered in recent history. This is likely a result of restrictive development timelines, a resistance to delaying the product launch, and, ultimately, the fact that developers see this as "just" a benchmark. That said, the benchmark is what folks use to get an early idea of how their graphics cards will perform in the game and, from what we've seen, it isn't accurate to reality. Not only does the benchmark lack technology shown in tech demonstrations (we hope these will be added later, like strand deformation), but it also takes performance hits for graphics settings that fail to materialize as visual fidelity improvements. Much of this stems from GameWorks settings, so we've been in contact with nVidia over these findings for the past few days.

As we discovered after hours of testing the utility, the FFXV benchmark is disingenuous in its execution, rendering load-intensive objects outside the camera frustum and resulting in a lower reported performance metric. We accessed the hexadecimal graphics settings for manual GameWorks setting tuning, made easier by exposing .INI files via a DLL, then later entered noclip mode to dig into some performance anomalies. On our own, we’d discovered that HairWorks toggling (on/off) had performance impact in areas where no hair existed. The only reason this would happen, aside from anomalous bugs or improper use of HairWorks (also likely, and not mutually exclusive), would be if the single hair-endowed creature in the benchmark were drawn at all times.
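For context, a renderer that culls properly tests each object's bounds against the view frustum before paying the simulation and draw cost for it. The sketch below is a generic bounding-sphere culling check, purely illustrative and not FFXV's code, showing the kind of step the benchmark appears to skip for its HairWorks creatures; plane extraction from the view-projection matrix is omitted, and all data formats here are assumptions.

```python
def sphere_outside_plane(center, radius, plane):
    """plane = (a, b, c, d) for ax + by + cz + d = 0, with the normal pointing
    toward the inside of the frustum."""
    a, b, c, d = plane
    x, y, z = center
    return a * x + b * y + c * z + d < -radius

def should_process(center, radius, frustum_planes):
    """Skip the object (and any HairWorks-style simulation for it) when its
    bounding sphere lies entirely outside any one frustum plane."""
    return not any(sphere_outside_plane(center, radius, p) for p in frustum_planes)
```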

The benchmark is rendering creatures that use HairWorks even when they’re miles away from the character and the camera. Again, this was made evident while running benchmarks in a zone with no HairWorks whatsoever – zero, none – at which point we realized, by accessing the game’s settings files, that disabling HairWorks would still improve performance even when no HairWorks objects were on screen. Validation is easy, too: Testing the custom graphics settings file by toggling each setting, we're able to (1) individually confirm when Flow is disabled (the fire effect changes), (2) when Turf is disabled (grass strands become textures or, potentially, particle meshes), (3) when Terrain is enabled (shows tessellation of the ground at the demo start; terrain is pushed down and deformed, while protrusions are pulled up), and (4) when HairWorks is disabled (buffalo hair becomes a planar alpha texture). We're also able to confirm, by testing the default "High," "Standard," and "Low" settings, that the game's default GameWorks configuration is set to the following (High settings):

  • VXAO: Off
  • Shadow libs: Off
  • Flow: On
  • HairWorks: On
  • TerrainTessellation: On
  • Turf: On

Benchmarking custom settings matching the above results in identical performance to the benchmark launcher window, validating that these are the stock settings. We must use the custom settings approach, as going between Medium and High offers no settings customization, and also changes multiple settings simultaneously. To isolate whether a performance change is from GameWorks versus view distance and other settings, we must individually test each GameWorks setting from a baseline configuration of "High." 
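A rough sketch of that one-setting-at-a-time approach follows, using the stock "High" configuration above as the baseline. The dictionary keys mirror the list above, but the config-writing and benchmark-launching steps are left as placeholders, since they would need to match the benchmark's actual (undocumented) interface.

```python
# Baseline mirrors the stock "High" GameWorks configuration listed above.
BASELINE_HIGH = {
    "VXAO": False,
    "ShadowLibs": False,
    "Flow": True,
    "HairWorks": True,
    "TerrainTessellation": True,
    "Turf": True,
}

def one_off_configs(baseline):
    """Yield (label, config) pairs with exactly one GameWorks setting flipped."""
    for key in baseline:
        config = dict(baseline)
        config[key] = not config[key]
        yield f"{key}_{'on' if config[key] else 'off'}", config

for label, config in one_off_configs(BASELINE_HIGH):
    print(label, config)  # in practice: write the config file, run a pass, log the FPS
```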

Final Fantasy XV is shaping up to be intensely demanding of GPU hardware, with greater deltas developing between nVidia & AMD devices at High settings than at Medium settings. The implication is that, although other graphics settings (LOD, draw distance) change between High and Medium, the most significant change is that of GameWorks options. HairWorks, Shadow libraries, and heavy ground tessellation are all toggled on with High and off with Medium. The ground tessellation is one of the most impactful to performance, particularly on AMD hardware; that said, although nVidia fares better, the 10-series GPUs still struggle with frametime consistency when running all the GameWorks options. This is something we’re investigating further, as we’ve (since running these benchmarks) discovered how to toggle graphics settings individually, something natively disabled in the FFXV benchmark. Stay tuned for that content.

In the meantime, we still have some unique GPU benchmarks and technical graphics analysis for you. One of our value-adds is 1440p benchmarks, which are, for some inexplicable reason, disabled in the native FFXV benchmark client. We automated and scripted our benchmarks, enabling us to run tests at alternative resolutions. Another value-add is that we’re controlling our benchmarks; although it is admirable and interesting that Square Enix is collecting and aggregating user benchmark data, that data is also poisoned. The card hierarchy makes little sense at times, and that’s because users run benchmarks with any manner of variables – none of which are accounted for (or even publicly logged) in the FFXV benchmark utility.
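For the curious, the automation boils down to looping over target resolutions and launching a pass for each. The snippet below is a simplified sketch, not our actual script; the executable name and command-line flags are assumptions, and a real run may instead require editing a config file between passes.

```python
import subprocess

RESOLUTIONS = [(1920, 1080), (2560, 1440), (3840, 2160)]

for width, height in RESOLUTIONS:
    # Hypothetical invocation; verify how the benchmark actually accepts
    # resolution parameters before relying on this.
    subprocess.run(
        ["ffxv_benchmark.exe", f"--width={width}", f"--height={height}"],
        check=True,
    )
```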

Separately, we also confirmed with Square Enix that the graphics settings are the same for all default resolutions, something that we had previously questioned.

We recently bought the MSI GTX 1070 Ti Duke for a separate PC build, and decided we’d go ahead and review the card while at it. The MSI GTX 1070 Ti Duke graphics card uses a three-fan cooler, which MSI now seems to be officially calling the “Tri-Frozr” cooler, and was among the more affordable GTX 1070 Ti cards on the market. That reign has ended as GPU prices have re-skyrocketed, but perhaps it’ll return to $480. Until then, we’ll write this assuming that price. Beyond $480, it’s obviously not worth it, just to spell that out right now.

The MSI GTX 1070 Ti Duke has one of the thinner heatsinks of the 10-series cards, and a lot of that comes down to form factor: The Duke fits in a 2-slot design, but runs a three-fan cooler. This mixture necessitates a thin, wide heatsink, which means relatively limited surface area for dissipation, but potentially quieter fans from the three-fan solution.

NOTE: We wrote this review before CES. Card prices have since skyrocketed. Do not buy any 1070 Ti for >$500. This card was reviewed assuming a $470-$480 price point. Anything more than that, and it's not worth it.
