This content piece was highly requested by our audience, though its findings come with a major caveat. Following the disclosure of the Meltdown and Spectre exploits last week, Microsoft pushed a Windows security update that sought to close some of the gaps, an update speculated to cause a performance dip of between 5% and 30%. As of this writing, Intel has not yet released its microcode update, which means it is largely folly to treat the benchmarks in this content piece as final – that said, there is merit to them, provided the task is viewed from the right perspective.

From the perspective of advancing knowledge and building a baseline for the next round of tests – those which will, unlike today’s, factor in microcode patches – we must eventually run the tests being run today. This gives us a performance baseline and grants two critical opportunities: (1) we may benchmark baseline, pre-Windows-patch performance, and (2) we can benchmark post-patch, pre-microcode performance. Together, these allow us to isolate the impact of Intel’s firmware update from that of Microsoft’s software update. This alone makes the endeavor worthwhile – particularly because our CPU suite is automated anyway, so the time cost is low, despite CES looming.

Speaking of CES, we only had time to run one CPU through the suite, and only with a few games. This is enough for now, though, and should sate some of the demand and interest.

Microsoft has, rather surprisingly, made it easy to get into and maintain the Xbox One X. The refreshed console uses just two screws to secure the chassis – two opposing, plastic jackets for the inner frame – and then uses serial numbering to identify the order of parts removal. For a console, we think the Xbox One X’s modularity of design is brilliant and, even if it’s just for Microsoft’s internal RMA purposes, it makes things easier for the enthusiast audience to maintain. We pulled apart the new Xbox One X in our disassembly process, walking through the VRM, APU, cooling solution, and overall construction of the unit.

Before diving in, a note on the specs: The Xbox One X uses a custom AMD APU that pairs Jaguar CPU cores with an integrated Polaris-class GPU carrying 40 CUs. This CU count is greater than the RX 580’s 36 CUs (and so yields 2560 SPs vs. 2304 SPs), but the GPU runs at a lower clock speed. Enter our errata from the video: The clock speed of the integrated Polaris GPU in the Xbox One X is purportedly 1172MHz (some early claims indicated 1700MHz, but that proved to be the memory speed); at 1172MHz, the integrated GPU is about 100MHz slower than the original reference Boost of the RX 480, or about 168MHz slower than some RX 580 partner models. Consider this a correction of those numbers – we cited the 1700MHz figure in the video, but that is incorrect; the correct figures are 1172MHz core and 1700MHz memory (6800MHz effective). The memory operates at 326GB/s bandwidth on its 384-bit bus. As for the rest, 40 CUs means 160 TMUs, giving a texture fill-rate of about 188GT/s.
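As a quick sanity check on those numbers, the figures follow from the standard GCN ratios of 64 stream processors and 4 TMUs per CU – an assumption on our part, since Microsoft’s part is semi-custom:

```python
# Sanity-check of the Xbox One X GPU figures quoted above,
# assuming standard GCN ratios (64 SPs and 4 TMUs per CU).
cus = 40
sps = cus * 64                        # 2560 stream processors
tmus = cus * 4                        # 160 texture mapping units

core_mhz = 1172
tex_fill = tmus * core_mhz / 1000     # texture fill-rate in GT/s (~188)

bus_bits = 384
mem_effective_mhz = 6800
bandwidth = bus_bits / 8 * mem_effective_mhz / 1000  # 326.4 GB/s

print(sps, tmus, round(tex_fill), bandwidth)
```

Every quoted spec is internally consistent once the 1172MHz core / 1700MHz memory correction is applied.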

Our hardware news round-up for the past week is live, detailing some behind-the-scenes / early information on our thermal and power testing for the i9-7900X, the Xbox One X hardware specs, Threadripper's release date, and plenty of other news. Additional coverage includes final word on Acer's Predator 21 X, Samsung's 64-layer NAND finalization, GlobalFoundries' 7nm FinFET for 2018, and some extras.

We anticipate a slower news week for non-Intel/non-AMD entities this week, as Intel launched X299/Skylake-X and AMD is making waves with Epyc. Given the command these two companies have over consumer news, other vendors will likely hold their press releases until next week.

Find the show notes below, written by Eric Hamilton, along with the embedded video.

The right-to-repair bills (otherwise known as “Fair Repair”) making their way through a few different states are facing staunch opposition from the Entertainment Software Association, a trade organization that includes Sony, Microsoft, and Nintendo, as well as many video game developers and publishers. The proposed legislation would make it easier for consumers to fix not only consoles, but electronics in general, including cell phones. Bills have been introduced in Nebraska, Minnesota, New York, Massachusetts, and Kansas. The Nebraska bill is currently the furthest along, and it is there that the ESA has concentrated its lobbying efforts.

Console makers have been a notable enemy of aftermarket repair, but they are far from alone; both Apple and John Deere have vehemently opposed this kind of legislation. In a letter to the Copyright Office, John Deere asserted – among other spectacular claims, such as that owners hold only an implied license to operate their tractors – that allowing owners to repair, tinker with, or modify their tractors would “make it possible for pirates, third-party developers, and less innovative competitors to free-ride off the creativity, unique expression and ingenuity of vehicle software.”

This issue has been driving us crazy for weeks. All of our test machines connect to shared drives on a central terminal running Windows 10. As tests are completed, we open a File Explorer window and navigate to \\COMPUTER-NAME\data to drop our results into the system. This setup allows rapid file sharing across our internal gigabit network, rather than going through cumbersome USB keys or bloating our NAS with small test files.

Unfortunately, updating our primary test benches to the Windows 10 Anniversary Update broke this functionality. We’d normally enter \\COMPUTER-NAME\data to access the shared drive over the network, but that started returning an “incorrect username or password” error – despite the credentials being correct – after the update. We worked around the issue for a few weeks, but it finally became annoying enough to warrant some quick research.
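Two workarounds commonly reported for share-access failures after major Windows 10 updates – we are not confirming either as the specific fix we landed on – are clearing stale entries in Credential Manager (via `cmdkey /list` and `cmdkey /delete:COMPUTER-NAME`) and, where the share is exposed without credentials, re-enabling insecure guest logons. The latter can be applied as a registry fragment:

```reg
Windows Registry Editor Version 5.00

; Allows the SMB client to access shares using guest (credential-less) logons.
; Security trade-off: guest logons are unauthenticated; use only on trusted LANs.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
"AllowInsecureGuestAuth"=dword:00000001
```

A reboot, or a restart of the Workstation service, may be required for the change to take effect.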

Our full OCAT content piece is still pending publication, as we ran into some blocking issues when working with AMD’s OCAT benchmarking utility. In speaking with the AMD team, we learned those are being worked out behind the scenes for this pre-release software, and are still being actively documented. For now, we decided to push a quick overview of OCAT: what it does, and how the tool should make it easier for all users to perform Dx12 & Vulkan benchmarks going forward. We’ll revisit with a performance and overhead analysis once the tool works out some of its bugs.

The basics, then: AMD has only built the interface and overlay here, relying on PresentMon – the existing, open-source Intel and Microsoft tool – to perform the hooking and performance interception. We’ve been detailing PresentMon in our benchmarking methods for a few months now, using it to monitor low-level API performance and crunching the data with Python and Perl scripts built by GN. That’s the thing, though – PresentMon isn’t necessarily easy to understand, and our usage model revolves entirely around the command line. We use the preset commands established by the tool’s developers, then crunch the data with spreadsheets and scripts. That’s not user-friendly for a casual audience.

Just deploying the tool requires Visual Studio packages and a rudimentary understanding of CMD – not hard to figure out, but not exactly fit for easy, casual benchmarking. Even for technical media, an out-of-box PresentMon isn’t the fastest tool to work with.
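For reference, here is a minimal sketch of the kind of post-processing our scripts perform on PresentMon’s CSV output. The sample data and the `summarize` helper are hypothetical illustrations; only the `MsBetweenPresents` column name comes from PresentMon itself:

```python
import csv
import io
import statistics

# Hypothetical PresentMon capture, truncated to the frametime column we
# care about; real captures carry many more columns per present event.
sample = """Application,MsBetweenPresents
game.exe,16.7
game.exe,16.6
game.exe,33.4
game.exe,16.8
game.exe,16.5
"""

def summarize(csv_text):
    """Return (average FPS, 1% low FPS) from PresentMon-style CSV text."""
    rows = csv.DictReader(io.StringIO(csv_text))
    frametimes = [float(r["MsBetweenPresents"]) for r in rows]
    avg_fps = 1000.0 / statistics.mean(frametimes)
    # "1% low" style metric: FPS at the 99th-percentile (slowest) frametime.
    worst = sorted(frametimes)[int(len(frametimes) * 0.99)]
    return avg_fps, 1000.0 / worst

avg, low = summarize(sample)
print(f"Average: {avg:.1f} FPS, 1% low: {low:.1f} FPS")
```

None of this is difficult for a technical user, but it illustrates why a packaged front-end like OCAT lowers the barrier for casual benchmarking.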

The Coalition's Gears of War 4 demonstrated the capabilities of nVidia's new GTX 1070-enabled notebooks, operating at 4K with fully maxed-out graphics options. View our Pascal notebook article for more information on the specifics of the hardware. While at the event in England, we recorded the game's complete graphics settings and noted how some of them impact the GPU and CPU. The Coalition may roll out additional settings by the game's October launch.

We tested Gears of War 4 on the new MSI GT73 notebook with 120Hz display and a GTX 1070 (non-M) GPU. The notebook was capable of pushing maxed settings at 1080p and, a few pre-release bugs aside (pre-production hardware and an unfinished game), gameplay ran in excess of 60FPS.

We've got an early look at Gears of War 4's known graphics settings, elevated framerate, async compute, and dynamic resolution support. Note that the Gears team has promised “more than 30 graphics settings,” so we'll likely see a few more in the finished product. Here are our photos of the graphics options menu:

UWP – Microsoft’s Universal Windows Platform, which underpins Windows 10’s games distribution – has previously forced V-Sync onto users, but it has become toggleable for Gears of War 4, The Coalition’s Mike Rayner told Eurogamer. Among other technical changes, Gears of War 4 appears to be shaping up as a proper benchmark title for our future GPU reviews. The game will host a benchmark mode – always a plus – while unlocking the framerate and adding super-resolution support. That means, like Shadow of Mordor and similar games, players will be able to run the game at whatever resolution they want. It’s similar to DSR/VSR in that the game renders at the higher resolution, then downscales to fit the display. This results in greater pixel density and increased clarity.

Steam's hardware survey reports a +1.57% increase month-over-month in Windows 10 64-bit adoption, marking a growth trend favoring the move to DirectX 12. Presently, the major Dx12-ready titles include Rise of the Tomb Raider, Hitman, Ashes of the Singularity, and forthcoming Total War: Warhammer; you can learn about Warhammer's unique game engine technology over here.

In Steam's survey, Windows 7 is broken into just “Windows 7” and “Windows 7 64-bit,” the two totaling 41.43% of the users responding to the optional survey. The survey also breaks Windows 10 into a “64-bit” and an unspecified version, totaling 41.4% (or 40.01% for the specific 64-bit line-item).

Tabulated results are below:

5MB of storage once required 50 spinning platters and a dedicated computer, demanding a 16 square-foot area for its residence. The first hard drive wasn't particularly fast – 1200RPM, with seek latencies through the roof (imagine a head assembly seeking across 50 platters) – but it was the most advanced storage of its time.

That device was the IBM 305 RAMAC; its inflation-adjusted cost works out to a $30,000 monthly lease, and a single instruction took between 30ms and 50ms to execute (IRW phases). The IBM 305 RAMAC moved roughly 100,000 bits per second, or 0.0125MB/s. Today, the average 128GB microSD card costs ~$50 – one time – and executes read/write instructions at 671,000,000 bits per second, or 80MB/s. And this is one of our slowest forms of flash storage. The microSD card is roughly the size of a fingernail (15x11x1.0mm), and filling a 16 square-foot area with them would yield terabytes upon terabytes of storage.
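The unit conversions above check out, with one wrinkle worth noting: the quoted figures mix decimal megabytes for the RAMAC with binary megabytes for the microSD card.

```python
# Sanity-check of the throughput figures quoted above.
ramac_bps = 100_000                      # IBM 305 RAMAC: bits per second
ramac_mbs = ramac_bps / 8 / 1_000_000    # 0.0125 MB/s (decimal megabytes)

microsd_mbs = 80                         # modern microSD card, MB/s
microsd_bps = microsd_mbs * 1024**2 * 8  # 671,088,640 bits/s (binary megabytes)

speedup = microsd_bps / ramac_bps        # roughly 6,700x faster
print(ramac_mbs, microsd_bps, round(speedup))
```

Either way the units are sliced, even a budget flash card outruns the RAMAC by three to four orders of magnitude.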


The 305 RAMAC was a creation of 1956. Following last week's GTC conference, we had the opportunity to see the RAMAC and other early computing creations at the Computer History Museum in Mountain View, California. The museum encompasses most of computing history, from the abacus to early Texas Instruments machines (like the TI-99), and previously housed a working mechanical Babbage Engine built from 19th-century designs. In our recent tour, we focused on the predecessors of modern computing – the first hard drive, first supercomputers, first transistorized computers, mercury and core memory, and vacuum tube computing.



