Our leading story for this week is AMD's semi-custom Gonzalo APU for consoles, getting finalized now, although we also share some of that lead-story limelight with Der8auer. Der8auer, the world's favorite delidder and second favorite overclocker (we won't say who's first) has handily beaten our high score in the 3DMark Hall of Fame, and we now must respond to his challenge.
Plenty of other news for the week, too, like Intel's new Optane SSDs, IDC and Gartner reporting on CPU shortages, and the Spoiler exploit.
Elgato’s 4K60 Pro capture card is an internal PCIe x4 capture card capable of handling resolutions up to 3840x2160 at 60 frames per second, as the name implies. It launched in November with an MSRP of $400, and has remained around that price since.
The Amazon reviews for the 4K60 Pro are almost worthless, because Amazon considers the 4K60 Pro and Elgato’s 1080p-capable HD60 Pro to be varieties of the same product and groups their reviews together. There are only twenty-something reviews of the 4K60 compared to nearly two thousand for the HD60, so that may skew the results slightly. Of the three single-star reviews that are actually for the 4K60, one is from a gentleman who was expecting a seven-inch-long PCIe card to work in a laptop. As of this writing, nobody at all has reviewed it on Newegg, and it’s on sale for $12 off in both locations.
It doesn’t seem like these are flying off the shelves, which probably speaks more to the current demand for 4K 60FPS streaming than the product itself--it’s the cheapest of a very small number of 4K60-capable capture cards, and there’s not any consumer-level competition to speak of. $400 may seem like a lot, but the existing alternatives are much more expensive, like the Magewell Pro Capture HDMI 4K Plus, which (besides having an awful name) costs around $800-$900. The Magewell does have a heatsink and a fan, though, which the 4K60 Pro does not--more on that later.
This Elgato 4K60 Pro review looks at the capture card’s quality and capabilities for both console and PC capture, and also walks through some thermal measurements taken with thermocouples.
Sea of Thieves, the multiplayer-adventure-survival-pirate simulator from Rare, has finally been released after months of betas and stress tests. Judging by the difficulty they’ve had keeping the servers up after all that preparation, it seems like it’s been pretty popular. This comparison looks at Sea of Thieves Xbox One X vs. PC graphics quality, equalized graphics settings, and framerate/frametime performance on the Xbox.
SoT is also one of the first really big multiplayer titles to be added to the “Xbox Play Anywhere” program. That means it’s playable on both Xbox One and Windows 10 with a single purchase (yes, it’s a Windows 10-exclusive DX11 game). Xbox and PC players are also free to encounter each other ingame or even party up together, with the only obvious downside being forced interaction with the Windows 10 store and Xbox app. Together, these two aspects make PC vs. Xbox a very interesting comparison, since any player who owns both a PC and an Xbox could easily switch.
Final Fantasy XV recently released on PC, and given the attention we drew to the benchmark’s LOD and HairWorks issues, it’s only fair that we take a look at the finished product. Prior to the PC release, the cracked Origin preload aside, the best playable version of the game was the Xbox One X version, so our baseline for this graphics comparison is the Xbox at 4K using the “high” preset.
To match our PC settings to the Xbox version, we first selected the default choice for every option, which got us 90% of the way there. That includes “Average” settings for Model LOD, Anisotropic Filtering, Lighting, Shadows, Ambient Occlusion, and Filtering. Assets (high-quality asset pack), Geomapping (ground tessellation), and all NVIDIA features were turned off, anti-aliasing was set to TAA, and motion blur was turned on. Although this wasn’t a performance test, we limited framerate to the Xbox’s cap of 30FPS for good measure, and set resolution scaling to 100% (since dynamic resolution isn’t available on PC). This is a pretty close approximation of what the Xbox is capable of, and it’s an encouraging sight--the Xbox’s “High” is the PC’s “Average” in almost every category.
Consoles don’t offer many upgrade paths, but HDDs, like the ones that ship in the Xbox One X, are one of the few items that can be exchanged for a standard part with higher performance. Since 2013, there have been quite a few benchmarks done with SSDs vs. HDDs in various SKUs of the Xbox One, but not so many with the Xbox One X--so we’re doing our own. We’ve seen some abysmal load times in Final Fantasy and some nasty texture loading in PUBG, so there’s definitely room for improvement somewhere.
The 1TB drive that was shipped in our Xbox One X is a Seagate 2.5” hard drive, model ST1000LM035. This is absolutely, positively a 5400RPM drive, as we said in our teardown, and not a 7200RPM drive (as some suggest online). Even taking the 140MB/s peak transfer rate listed in the drive’s data sheet completely at face value, it’s nowhere near bottlenecking on the internal SATA III interface. The SSD is up against SATA III (or USB 3.1 Gen1) limitations, but will still give us a theoretical sequential performance uplift of 4-5x -- and that’s assuming peak bursted speeds on the hard drive.
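As a back-of-the-envelope check on that uplift claim (the SATA III SSD ceiling and sustained-HDD figures below are typical assumed values, not numbers from the article):

```python
# Rough check on the theoretical sequential uplift of a SATA SSD vs. the stock HDD.
hdd_peak_mb_s = 140       # Seagate ST1000LM035 data sheet peak transfer rate
hdd_sustained_mb_s = 110  # typical sustained rate for a 5400RPM 2.5" drive (assumed)
sata3_ssd_mb_s = 550      # practical SATA III ceiling for an SSD (assumed)

uplift_vs_peak = sata3_ssd_mb_s / hdd_peak_mb_s            # ~3.9x
uplift_vs_sustained = sata3_ssd_mb_s / hdd_sustained_mb_s  # 5.0x
print(round(uplift_vs_peak, 1), round(uplift_vs_sustained, 1))
```

That spread is where the "4-5x" figure lands, depending on whether you credit the hard drive with its bursted peak or its sustained rate.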
This benchmark tests game load times on an external SSD for the Xbox One X, versus internal HDD load times for Final Fantasy XV (FFXV), Monster Hunter World, PUBG (incl. texture pop-in), Assassin's Creed: Origins, and more.
PlayerUnknown’s Battlegrounds was officially released on PC this past December, but it’s been playable via Steam Early Access for nearly a year now. In all that time, none of us have played the game, despite many requests for benchmarks. Games that are in active development don’t make for easy testing, and neither do exclusively multiplayer games with tons of variance. Even Overwatch has the ability to play against bots.
Now that PUBG is 1.0 on PC and sort-of-released on Xbox, though, we have extra motivation to buckle down and start testing. We chose to start with the Xbox One X version, since the lack of graphics options makes things simpler. It’s listed as both 4K HDR ready and Xbox One X Enhanced, so our primary testing was done at 4K, with additional Xbox One X benchmarking at 1080p for PUBG. Technically, it’s a “Game Preview,” but the list of other titles in this category makes it look like something that was created expressly for PUBG. It also costs full PC price, $30.
This deep-dive looks at PUBG framerate and frametime performance (which is shockingly bad for a console), along with graphics analysis of the game’s visuals. Although the article covers testing and benchmarking in slightly more depth, we’d also strongly recommend watching the video, as it contains visual representation of what’s happening in-game.
To everyone’s confusion, a review copy of Dragon Ball FighterZ for Xbox One showed up in our mailbox a few days ago. We’ve worked with Bandai Namco in the past, but never on console games. They must have cast a wide net with review samples--and judging by the SteamCharts stats, it worked.
It’d take some digging through the site archives to confirm, but we might never have covered a real fighting game before. None of us play them, we’ve tapered off doing non-benchmark game reviews, and they generally aren’t demanding enough to be hardware testing candidates (recommended specs for FighterZ include a 2GB GTX 660). For the latter reason, it’s a good thing they sent us the Xbox version. It’s “Xbox One X Enhanced,” but not officially listed as 4K, although that’s hard to tell at a glance: the resolution it outputs on a 4K display is well above 1080p, and the clear, bold lines of the cel-shaded art style make it practically indistinguishable from native 4K even during gameplay. Digital Foundry claims it’s 3264 x 1836 pixels, or 85% of 4K in height/width.
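Digital Foundry's numbers check out against native 4K, for what it's worth:

```python
# Comparing the quoted render resolution against native 4K (3840x2160).
render_w, render_h = 3264, 1836
native_w, native_h = 3840, 2160

print(render_w / native_w)  # 0.85 of native width
print(render_h / native_h)  # 0.85 of native height
print((render_w * render_h) / (native_w * native_h))  # ~0.72 of the total pixel count
```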
Today, we’re using Dragon Ball FighterZ to test our new console benchmarking tools, and further iterate upon them for -- frankly -- bigger future launches. This will enable us to run console vs. PC testing in greater depth going forward.
PC versus console is an ancient debate, long discussed by the wisest and most scholarly of YouTube commenters. PCs are described as expensive, bulky, and difficult to assemble or work with, while consoles are called underpowered, underperforming systems that hold game development back for the duration of each generation. The pro-console responses to our first Xbox One X tests usually boiled down to: “it’s still better than a $500 PC.”
It’s a reasonable argument, and it’s the basis on which consoles are sold these days. By popular demand*, then, we’ve built a $500 PC to compare to the Xbox One X (list price: $500) in performance. We tested whether the 4K-capable Xbox One X is “better” than an equivalently priced PC, judging by framerates in two of the Xbox’s first batch of 4K-enabled games, Destiny 2 and Assassin’s Creed: Origins.
Given the recent insanoland surge in RAM and GPU prices, the argument is more pertinent than ever. DIY PCs stand to lose marketshare if people can’t afford to build a cheap machine, so we thought we’d use our new in-house software to benchmark a low-end PC and an Xbox One X.
Testing the Xbox One X for frametime and framerate performance marks an exciting step for GamersNexus. This is the first time we’ve been able to benchmark console frame pacing, and we’re doing so by deploying new, in-house software for analysis of lossless gameplay captures. At a high level, we’re analyzing the pixels temporally, aiming to determine whether there’s a change between frames. We then run some checks to validate those numbers, followed by additional processing to derive framerates and frametimes. That’s the simplest, most condensed version of what we’re doing. Our Xbox One X tear-down set the stage for this.
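To illustrate the duplicate-frame detection idea, here is a minimal sketch, assuming frames arrive as flat lists of pixel values from a fixed 60FPS capture -- this is not GN's actual tool, just the core concept:

```python
def frametimes_from_capture(frames, capture_fps=60.0, threshold=0.0):
    """Estimate game frametimes from a lossless fixed-rate capture.

    A captured frame counts as 'new' when its mean absolute pixel
    difference from the previous captured frame exceeds `threshold`;
    otherwise it's a duplicate (the game missed that capture interval).
    """
    frametimes_ms = []
    held = 1   # capture intervals the current game frame has persisted
    prev = None
    for frame in frames:
        if prev is not None:
            diff = sum(abs(a - b) for a, b in zip(frame, prev)) / len(frame)
            if diff > threshold:
                # New game frame: the previous one lasted `held` intervals.
                frametimes_ms.append(held * 1000.0 / capture_fps)
                held = 1
            else:
                held += 1
        prev = frame
    return frametimes_ms
```

A game running at a locked 30FPS under a 60FPS capture holds every frame for two intervals, producing a flat ~33.3ms frametime line; hitches show up as longer holds.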
Outside of this, additional testing includes K-type thermocouple measurements from behind the APU (rear-side of the PCB), with more measurements from a logging plugload meter. The end result is three charts that combine to provide a fairly complete picture of the Xbox One X’s gaming performance. As an aside, note that we discovered an effective Tcase Max of ~85C on the silicon surface, at which point the console shuts down. We were unable to force a shutdown during typical gameplay, but could achieve one by intentionally torturing the APU thermals.
The Xbox One X uses an AMD Jaguar APU, which combines 40 CUs (4 more than an RX 480/580) at 1172MHz (~168MHz slower than an RX 580 Gaming X). The CPU component is an 8C processor (no SMT), and is the same as on previous Xbox One devices, just with a higher frequency of 2.3GHz. As for memory, the device is using 12GB of GDDR5 memory, all shared between the CPU and GPU. The memory operates at an actual memory speed of 1700MHz, with memory bandwidth at 326GB/s. For point of comparison, an RX 580 offers about 256GB/s bandwidth. The Xbox One X, by all accounts, is an impressive combination of hardware that functionally equates to a mid-range gaming PC. The PSU is another indication of this, with a 245W supply, at least a few watts of which are provided to the aggressive cooling solution (using a ~112mm radial fan).
Microsoft has, rather surprisingly, made it easy to get into and maintain the Xbox One X. The refreshed console uses just two screws to secure the chassis – two opposing, plastic jackets for the inner frame – and then uses serial numbering to identify the order of parts removal. For a console, we think the Xbox One X’s modularity of design is brilliant and, even if it’s just for Microsoft’s internal RMA purposes, it makes things easier for the enthusiast audience to maintain. We pulled apart the new Xbox One X in our disassembly process, walking through the VRM, APU, cooling solution, and overall construction of the unit.
Before diving in, a note on the specs: The Xbox One X uses an AMD Jaguar APU, to which is affixed an AMD Polaris GPU with 40 CUs. This CU count is greater than the RX 580’s 36 CUs (and so yields 2560 SPs vs. 2304 SPs), but runs at a lower clock speed. Enter our errata from the video: The clock speed of the integrated Polaris GPU in the Xbox One X is purportedly 1172MHz (some early claims indicated 1700MHz, but that proved to be the memory speed); at 1172MHz, the integrated Polaris GPU is about 100MHz slower than the original reference Boost of the RX 480, or about 168MHz slower than some of the RX 580 partner models. Consider this a correction of those numbers – we ended up citing the 1700MHz figure in the video, but that is actually incorrect; the correct figure is 1172MHz core, 1700MHz memory (6800MHz effective). The memory operates a 326GB/s bandwidth on its 384-bit bus. As for the rest, 40 CUs means 160 TMUs, giving a texture fill-rate of 188 GTexels/s.
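Those figures hang together arithmetically, given 64 SPs and 4 TMUs per GCN CU, and GDDR5 transferring four bits per pin per clock:

```python
# Deriving the quoted Xbox One X GPU specs from the CU count and clocks.
cus = 40
sps = cus * 64    # 64 stream processors per GCN CU -> 2560
tmus = cus * 4    # 4 texture mapping units per GCN CU -> 160

core_mhz = 1172
tex_fill_gtexels = tmus * core_mhz / 1000.0  # ~187.5, i.e. ~188 GTexels/s

mem_mhz = 1700    # actual clock; 6800MHz effective (GDDR5 quad data rate)
bus_bits = 384
bandwidth_gb_s = mem_mhz * 4 * bus_bits / 8 / 1000.0  # 326.4 GB/s

print(sps, tmus, round(tex_fill_gtexels, 1), bandwidth_gb_s)
```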