AMD's new Ryzen R5 2400G & R3 2200G APUs, codenamed "Raven Ridge," are available for sale ahead of the official embargo lift. We've earmarked the pages for anyone interested in getting a jump on the APUs. Note that, as always, we recommend waiting on reviews before purchase -- but we'll make it easier for you to find the listings. Our reviews of the 2200G & 2400G are pending arrival of parts, likely today or tomorrow, and we've pre-published some GT 1030 & low-end CPU testing. We'll finalize that content once the APUs are in.

For now, you can find the new APUs at these links:

Amazon

Amazon R3 2200G listing (public - at time of posting, this was $99)

Amazon R5 2400G listing (private - will go live closer to 9AM EST)

Newegg

Newegg R5 2400G listing (public - at time of posting, this was $190, a bit over MSRP)

Newegg R3 2200G listing (public - at time of posting, this was $130)

APU reviews have historically proven binary: Either it’s better to buy a dGPU and a dirt-cheap CPU, or the APU is actually a good deal. There is zero room for middle ground in a market targeting $150-$180 purchases. There’s no room to be wishy-washy, and no room for if/but/then arguments: The APU is either better value than a dGPU + CPU, or it’s not worthwhile.

Ahead of our impending Raven Ridge 2400G benchmarks, we decided to test the G4560 and R3 1200 with the best GPU money can buy – because it’s literally the only GPU you can buy right now: the GT 1030. Coupled with the G4560 (~$72), we land at roughly $160 for both parts, depending on momentary retailer price fluctuations. With the R3 1200, we land at about $180 for both. The 2400G is priced at $170, or thereabouts, and lands between the two.

(Note: The 2400G & 2200G appear to already be listed at retailers, despite the fact that, at time of writing, the embargo is still in effect.)

This hardware news round-up covers the past week in PC hardware, including information on AMD's Ryzen+Vega amalgam, CPU "shortage" sensationalism, Newegg commission changes, and more. As usual, our HW News series is written as a video, but we publish show notes alongside the video. We'll leave those below the embed.

The big news for the week was AMD's 2400G & 2200G APUs, which are due out on Monday of next week. The higher-end APU will be priced around $170, and will primarily compete with low-end CPU+GPU combinations (e.g. GT 1030 and low-end R3). Of course, the APUs also carve an interesting niche in a market with limited dGPU supply. Strategically, this is a good launch window for AMD APUs.

Despite having just called the FFXV benchmark “useless” and “misleading,” we still had some data left over that we wanted to publish before moving on. We were in the middle of benchmarking all of our CPUs when we discovered the game’s two separate culling and LOD issues (which Square Enix has acknowledged and is fixing), and we stopped all tests upon that discovery. That said, we still had some interesting data collected on SMT and Hyperthreading, and we wanted to publish it before shelving the game until launch.

We started testing with the R7 1700 and i7-8700K a few days ago, looking at numThreads=X settings in the command line to search for performance deltas. Preliminary testing revealed that these settings provided a performance uplift up to a point of 8 threads, beyond or below which we observed diminishing returns.
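For readers curious how a sweep like that can be scripted, here is a minimal sketch of the approach. Only the numThreads=X setting comes from our testing; the executable name, the result log, and its format are assumptions for illustration, not the actual FFXV benchmark interface.

```python
# Hypothetical thread-count sweep. The benchmark executable name, log path,
# and result parsing are assumptions -- only the numThreads=X flag is taken
# from the testing described above.
import re
import subprocess

BENCHMARK_EXE = "ffxv_benchmark.exe"   # assumed name
RESULT_LOG = "benchmark_result.log"    # assumed output location

def run_with_threads(num_threads: int) -> float:
    """Run one benchmark pass at a given thread count and return its score."""
    subprocess.run([BENCHMARK_EXE, f"numThreads={num_threads}"], check=True)
    with open(RESULT_LOG, encoding="utf-8") as log:
        # Assume the log contains a line like "Score: 5123"
        match = re.search(r"Score:\s*(\d+)", log.read())
    return float(match.group(1)) if match else 0.0

if __name__ == "__main__":
    for threads in (2, 4, 6, 8, 12, 16):
        print(f"numThreads={threads}: score {run_with_threads(threads):.0f}")
```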

Update: Square Enix is aware of this issue, has acknowledged its existence, and is working on an update for launch.

Although we don't believe this to be intentional, the Final Fantasy XV benchmark is among the most misleading we’ve encountered in recent history. This is likely a result of restrictive development timelines, a resistance to delaying the product launch, and, ultimately, the fact that developers see this as "just" a benchmark. That said, the benchmark is what folks use to get an early idea of how their graphics cards will perform in the game, and from what we've seen, it is not accurate to reality. Not only does the benchmark lack technology shown in tech demonstrations (we hope these will be added later, like strand deformation), but it also takes performance hits for graphics settings that fail to materialize as visual fidelity improvements. Much of this stems from GameWorks settings, so we've been in contact with nVidia about these findings for the past few days.

As we discovered after hours of testing the utility, the FFXV benchmark is disingenuous in its execution, rendering load-intensive objects outside the camera frustum and thereby producing a lower reported performance metric. We accessed the hexadecimal graphics settings for manual GameWorks tuning, made easier by exposing .INI files via a DLL, then later entered noclip mode to dig into some performance anomalies. On our own, we’d discovered that toggling HairWorks (on/off) had a performance impact in areas where no hair existed. The only reason this would happen, aside from anomalous bugs or improper use of HairWorks (also likely, and not mutually exclusive), would be if the single hair-endowed creature in the benchmark were drawn at all times.

The benchmark is rendering creatures that use HairWorks even when they’re miles away from the character and the camera. Again, this was made evident while running benchmarks in a zone with no HairWorks objects whatsoever – zero, none – at which point we realized, by accessing the game’s settings files, that disabling HairWorks would still improve performance even when no HairWorks objects were on screen. Validation is easy, too: Testing the custom graphics settings file by toggling each setting, we're able to (1) individually confirm when Flow is disabled (the fire effect changes), (2) when Turf is disabled (grass strands become textures or, potentially, particle meshes), (3) when Terrain is enabled (shows tessellation of the ground at the demo start; terrain is pushed down and deformed, while protrusions are pulled up), and (4) when HairWorks is disabled (buffalo hair becomes a planar alpha texture). We're also able to confirm, by testing the default "High," "Standard," and "Low" settings, that the game's default GameWorks configuration is set to the following (High settings):

  • VXAO: Off
  • Shadow libs: Off
  • Flow: On
  • HairWorks: On
  • TerrainTessellation: On
  • Turf: On

Benchmarking custom settings matching the above results in performance identical to the benchmark launcher window, validating that these are the stock settings. We must use the custom settings approach, as switching between Medium and High offers no settings customization and changes multiple settings simultaneously. To isolate whether a performance change comes from GameWorks rather than view distance and other settings, we must individually test each GameWorks setting against a baseline configuration of "High."
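As a rough illustration of that isolation process, the sketch below starts from the stock "High" GameWorks configuration and flips one toggle at a time. The settings file name, the exact key spellings, and the run_benchmark() callable are assumptions for illustration; only the approach and the list of GameWorks toggles come from the testing described above.

```python
# Minimal sketch: flip one GameWorks setting at a time against the stock
# "High" baseline and record the performance delta for each toggle.
import configparser
import copy

BASELINE_HIGH = {          # stock "High" defaults reported above
    "VXAO": "Off",
    "ShadowLibs": "Off",
    "Flow": "On",
    "HairWorks": "On",
    "TerrainTessellation": "On",
    "Turf": "On",
}

def write_settings(settings: dict, path: str = "GraphicsConfig.ini") -> None:
    """Write a custom settings file for the benchmark to read (path is assumed)."""
    config = configparser.ConfigParser()
    config.optionxform = str           # preserve key capitalization
    config["GameWorks"] = settings
    with open(path, "w", encoding="utf-8") as f:
        config.write(f)

def isolate_gameworks(run_benchmark) -> dict:
    """Measure each GameWorks toggle against the unmodified High baseline."""
    write_settings(BASELINE_HIGH)
    baseline_fps = run_benchmark()
    deltas = {}
    for key in BASELINE_HIGH:
        trial = copy.deepcopy(BASELINE_HIGH)
        trial[key] = "Off" if trial[key] == "On" else "On"
        write_settings(trial)
        deltas[key] = run_benchmark() - baseline_fps
    return deltas
```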

Final Fantasy XV is shaping up to be intensely demanding of GPU hardware, with greater deltas developing between nVidia & AMD devices at High settings than at Medium settings. The implication is that, although other graphics settings (LOD, draw distance) change between High and Medium, the most significant change is that of the GameWorks options. HairWorks, Shadow libraries, and heavy ground tessellation are all toggled on with High and off with Medium. The ground tessellation is one of the most impactful settings to performance, particularly on AMD hardware; that said, although nVidia fares better, the 10-series GPUs still struggle with frametime consistency when running all the GameWorks options. This is something we’re investigating further, as we’ve (since writing this benchmark) discovered how to toggle graphics settings individually, something natively disabled in the FFXV benchmark. Stay tuned for that content.

In the meantime, we still have some unique GPU benchmarks and technical graphics analysis for you. One of our value-adds is 1440p benchmarks, which are, for some inexplicable reason, disabled in the native FFXV benchmark client. We automated and scripted our benchmarks, enabling us to run tests at alternative resolutions. Another value-add is that we’re controlling our benchmarks: although it is admirable and interesting that Square Enix is collecting and aggregating user benchmark data, that data is also poisoned. The card hierarchy makes little sense at times, and that’s because users run benchmarks under any manner of variables – none of which are accounted for (or even publicly logged) in the FFXV benchmark utility.
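To illustrate what that scripting looks like in principle, here is a hypothetical sketch of a resolution sweep. The settings key and the write_setting()/run_benchmark() callables are placeholders; since the native benchmark client doesn't expose 1440p, our actual automation necessarily differs from this.

```python
# Hypothetical resolution sweep: run one controlled pass per resolution and
# collect average FPS. Key name and callables are assumptions for illustration.
RESOLUTIONS = ["1920x1080", "2560x1440", "3840x2160"]

def sweep_resolutions(write_setting, run_benchmark) -> dict:
    results = {}
    for res in RESOLUTIONS:
        write_setting("Resolution", res)   # assumed settings key
        results[res] = run_benchmark()
    return results
```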

Separately, we also confirmed with Square Enix that the graphics settings are the same for all default resolutions, something that we had previously questioned.

The short answer to the headline is “sometimes,” but it’s more complicated than just FPS over time. To really address this question, we first have to explain the oddity of FPS as a metric: Frames per second is inherently an average – if we tell you something is operating at a variable framerate, but is presently at 60FPS, what does that really mean? Because framerate is an average over a period of time, deriving spot measurements in frames per second is inherently flawed. All that stated, the industry has accepted frames per second as a measure of performance for games, and it is one of the most user-friendly ways to convey the actual, underlying metric: frametime, or the frame-to-frame interval, measured in milliseconds.
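A small worked example makes the distinction concrete. The frametime trace below is fabricated for illustration only; it shows how an average FPS figure can hide stutter that the underlying frametimes reveal.

```python
# Fabricated trace: one second of mostly smooth frames with three long stalls.
frametimes_ms = [16.7] * 57 + [100.0] * 3

total_time_s = sum(frametimes_ms) / 1000.0
average_fps = len(frametimes_ms) / total_time_s
print(f"average FPS: {average_fps:.1f}")            # ~48 FPS -- looks merely "slower"

print(f"worst frametime: {max(frametimes_ms):.1f} ms")  # 100 ms -- a visible hitch the average hides

# 1% low: the average of the slowest 1% of frames, expressed as FPS
slowest = sorted(frametimes_ms, reverse=True)[: max(1, len(frametimes_ms) // 100)]
one_percent_low_fps = 1000.0 / (sum(slowest) / len(slowest))
print(f"1% low: {one_percent_low_fps:.1f} FPS")
```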

Today, we’re publicly releasing some internal data that we’ve collected for benchmark validation. This data looks specifically at benchmark duration optimization tests, where we min-max for maximum accuracy and card count against the minimum run time required to retain that accuracy.

Before we publish any data for a benchmark – whether that’s gaming, thermals, or power – we run internal-only testing to validate our methods and thought process. This is often where we discover flaws in our methods, which allows us to refine them prior to publishing any review data. There are a few things we traditionally research for each game: benchmark duration requirements, the load level of a particular area of the game, the best- and worst-case performance scenarios in the game, and the average expected performance for the user. We also regularly find shortcomings in test design – that’s the nature of working on a test suite for a year at a time. As with most things in life, the goal is to develop something good, then iterate on it as we learn from the process.
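As one example of how a duration requirement can be validated, the sketch below takes a single long frametime capture and finds the shortest window whose average stays within a tolerance of the full-run result. The helper names, the 1% tolerance, and the 10-second floor are assumptions for illustration, not our internal tooling.

```python
# Sketch: find the shortest capture window whose mean FPS stays within a
# tolerance of the full-run mean (and keeps doing so for every longer window).
def mean_fps(frametimes_ms: list[float]) -> float:
    return len(frametimes_ms) / (sum(frametimes_ms) / 1000.0)

def minimum_duration(frametimes_ms: list[float],
                     tolerance: float = 0.01,
                     floor_s: float = 10.0) -> float:
    full_fps = mean_fps(frametimes_ms)
    elapsed_ms = 0.0
    candidate = None
    for i, ft in enumerate(frametimes_ms, start=1):
        elapsed_ms += ft
        within = abs(mean_fps(frametimes_ms[:i]) - full_fps) / full_fps <= tolerance
        if within and candidate is None and elapsed_ms >= floor_s * 1000:
            candidate = elapsed_ms       # first window that is long enough and accurate
        elif not within:
            candidate = None             # drifted back out of tolerance -- reset
    return (candidate or elapsed_ms) / 1000.0
```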

This week's hardware news recap teases some of our upcoming content pieces, including a potential test of Dragonball FighterZ, along with pending-publication interviews with key Spectre & Meltdown researchers. In addition to that, as usual, we discuss major hardware news for the past few days. The headline item is the most notable, and pertains to Samsung's GDDR6 memory entering mass production, nearing readiness for deployment in future products. This will almost certainly include GPU products, alongside the expected mobile device deployments. We also talk about AMD's new hires and RTG restructuring, its retiring of the implicit primitive discard accelerator for Vega, and SilverStone's new low-profile air cooler.

Show notes are below the embedded video.

GamersNexus secured an early exclusive with the new Gigabyte Gaming 7 motherboard at CES 2018, equipped with what one could confidently assume is an AMD X470 chipset. Given information from AMD on launch timelines, it would also be reasonable to expect the new motherboards around April of this year, alongside AMD’s Ryzen CPU refresh. This is all information learned from AMD’s public data. As for the Gigabyte Gaming 7 motherboard itself, the first thing we noticed is that it has real heatsinks on the VRMs, and that it appears to be running a higher-end configuration for what we assume is the new Ryzen launch.

Starting with the heatsink, Gigabyte has taken pride in listening to media and community concerns about VRM heatsinks, and has now added an actual fin stack atop its 10-phase Vcore VRM. To give an idea of the difference, we saw significant thermal improvement on the EVGA X299 DARK motherboard with just the finned heatsinks, not even using the built-in fans – upwards of a 20-degree Celsius improvement over the fat blocks in some cases, since solid blocks provide minimal surface area.

There’s been a lot of talk of an “Intel bug” lately, to which we paid close attention upon the explosion of our Twitter, email, and YouTube accounts. The “bug” most commonly discussed refers to a new attack vector that can break the isolation boundaries of virtualized environments, including virtual machines and virtual memory, and has been named “Meltdown.” This attack is known primarily to affect Intel at this time, with indeterminate effect on AMD and ARM. Another attack, “Spectre,” works through side channels in speculative execution and branch prediction, and is capable of fetching sensitive user information stored in physical memory. Both attacks are severe, and between the two of them, nearly every CPU on the market is affected in at least some capacity. The severity of the impact remains to be seen, and will be largely unveiled upon embargo lift on January 9th, at which time the companies will all be discussing solutions and shortcomings.

For this content piece, we’re focusing on coverage from a strict journalism and reporting perspective, as security and low-level processor exploits are far outside of our area of expertise. That said, a lot of you wanted to know our opinions or thoughts on the matter, so we decided to compile a report of research from around the web. Note that we are not providing opinion here, just facts, as we are not knowledgeable enough in the subject matter to hold strong opinions (well, outside of “this is bad”).

