Hardware Guides

A show floor crawling with tens of thousands of people is an interesting environment for a PC tear-down – certainly more chaotic than our labs. Still, whenever we get an opportunity to take something apart during an unveiling, we take it. MSI's recently unveiled Aegis (video below) fancies itself a barebones machine that borders on a display unit, mounted atop a power-supply-enshrining pedestal that resembles Hermes' winged shoes.

While at PAX East, we put our toolkit and cameras to work disassembling MSI's Aegis barebones gaming PC, which includes a custom case, motherboard, and unique CPU cooler. Side panels came off, the video card was removed, and we more closely examined the custom cooler that MSI's packed into its compact enclosure.

To Broadwell-E or not to Broadwell-E. That is the question!

If you're an enthusiast and that Nehalem or Sandy Bridge setup you built years ago is ready for a replacement, you might be considering an X99 motherboard build. The operative question then becomes, "should I wait for Broadwell-E or just buy Haswell-E and be done with it?" After a weekend at PAX East talking to several system integrators (SIs), Intel employees, and all the other hardware vendors, we were able to get a few bits and pieces of information that may help you make your decision – but first, let's look at some numbers.

One of our most commonly received Ask GN questions is “which video card manufacturer is 'the best?'” (scare quotes added). The truth of the matter is, as we've often said, they're all similar in the most critical matter – the GPU is the same. If MSI sells an R9 380X and PowerColor sells an R9 380X, they're both using the same GPU (Tonga) and silicon; core performance will be nearly identical. The same is true for the GTX cards – EVGA and PNY both sell GTX 960 video cards, and all of their models implement the same GM206 GPU. The differences are generally rooted in pre-overclocking, cooling units, support and warranties, and aesthetics.

Across all of our content, we've spent hours and tens of thousands of words talking about which video cards perform the best in various categories. That's great – but sometimes it's fun to do something different. This video allows each GPU manufacturer one minute to explain who makes the best graphics cards for gaming. It's a speed-round, to be sure.

U.2 (pronounced "U dot 2," lest Bono exercise legal force) has made a major appearance in PC platform updates from motherboard vendors, including Gigabyte with new X99 and Z170X motherboards at PAX. The form factor was previously known as SFF-8639 (SSD Form Factor) and was targeted almost entirely at server and enterprise markets. In a move toward greater user-friendliness, the interface has been rebranded as "U.2," which is easier to remember now that the M.2 interface is also proliferating across the market.

This “TLDR” article explains the U.2 vs. M.2 vs. SATA Express differences, with a focus on PCI-e lane assignment and speeds or throughputs.
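To put rough numbers behind that comparison, the sketch below tallies the theoretical ceiling of each interface. It assumes SATA Express and the M.2/U.2 slots are wired to PCIe 3.0 lanes (some boards route SATA Express over PCIe 2.0, which roughly halves its figure), and real drives land below these limits due to protocol and controller overhead.

```python
# Rough interface ceilings (back-of-the-envelope only; real-world drive throughput
# lands below these numbers due to protocol and controller overhead).

PCIE3_LANE_MBPS = 8e9 * (128 / 130) / 8 / 1e6    # 8 GT/s per lane, 128b/130b encoding ~= 985 MB/s

interfaces = {
    "SATA III (AHCI)":         6e9 * (8 / 10) / 8 / 1e6,  # 6 Gb/s line rate, 8b/10b ~= 600 MB/s
    "SATA Express (PCIe x2)":  2 * PCIE3_LANE_MBPS,       # ~= 1,970 MB/s (assuming PCIe 3.0 lanes)
    "M.2 / U.2 (PCIe 3.0 x4)": 4 * PCIE3_LANE_MBPS,       # ~= 3,940 MB/s
}

for name, mbps in interfaces.items():
    print(f"{name:26s} ~{mbps:,.0f} MB/s")
```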

"VR is a fad" was the pull-quote which propagated through the internet when Warren Spector made the comment last year, reinforcing it at ECGC a few days ago. The veteran designer indicated a belief that virtual reality could generate "interest among hardcore gamers," but remained cautious to grant too much early praise given personal experience with earlier VR attempts. Spector's decades-long industry experience grants weight to the statement, and made us curious what some long-time colleagues of Spector's might believe. Richard Garriott is one of those – friend and former employer of Spector – and has previously spoken to us about a history of effectively inventing MMOs, new graphics techniques, and more.

Richard Garriott joined us at PAX East 2016 for an impromptu discussion on the viability of virtual reality. The conversation started as small talk – "what do you think of VR?" – but evolved into an in-depth look at the challenges faced by the emergent technology. We rolled with it; you can find the video and some of the transcript below:

5MB of storage once required 50 spinning platters and a dedicated computer, demanding a 16 square-foot area for its residence. The first hard drive wasn't particularly fast – 1200RPM, with seek latencies through the roof (imagine a head seeking across 50 platters) – but it was the most advanced storage of its time.

That device was the IBM 305 RAMAC, its inflation-adjusted cost was a $30,000 monthly lease, and single instruction execution required between 30ms and 50ms (IRW phases). The IBM 305 RAMAC did roughly 100,000 bits per second, or 0.0125MB/s. Today, the average 128GB microSD card costs ~$50 – one time – and executes read/write instructions at 671,000,000 bits per second, or roughly 80MB/s. And that's one of the slowest forms of flash storage available today. The microSD card is roughly the size of a fingernail (15x11x1mm), and filling a 16 square-foot area with them would yield terabytes upon terabytes of storage.
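For readers who want to check the math, here's the same comparison worked out from the figures quoted above – roughly a 6,700x throughput gap, before even touching faster flash like SATA or NVMe SSDs:

```python
# Reproducing the back-of-the-envelope comparison above (numbers taken from the text).
ramac_bits_per_s   = 100_000        # IBM 305 RAMAC, ~100,000 bits/s
microsd_bits_per_s = 671_000_000    # modern 128GB microSD card

ramac_mb_s   = ramac_bits_per_s / 8 / 1_000_000      # 0.0125 MB/s
microsd_mb_s = microsd_bits_per_s / 8 / 1_000_000    # ~83.9 MB/s, i.e. roughly 80 MB/s

print(f"RAMAC:   {ramac_mb_s} MB/s")
print(f"microSD: {microsd_mb_s:.1f} MB/s")
print(f"Speed-up: ~{microsd_bits_per_s / ramac_bits_per_s:,.0f}x")   # ~6,710x
```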


The 305 RAMAC was a creation of 1956. Following last week's GTC conference, we had the opportunity to see the RAMAC and other early computing creations at the Computer History Museum in Mountain View, California. The museum encompasses most of computing history, including the abacus and early Texas Instruments machines (like the TI-99), and previously housed a mechanical Babbage difference engine, a design dating to the 1800s. In our recent tour of the Computer History Museum, we focused on the predecessors to modern computing – the first hard drive, first supercomputers, first transistorized computers, mercury and core memory, and vacuum tube computing.

Stutter as a result of V-Sync (which was made to fix screen tearing -- another problem) has been a consistent nuisance in PC gaming since its inception. We’ve talked about how screen-tearing and stutter interact here.

Despite the fact that FPS in games can fluctuate dramatically, monitors have long been stuck using a fixed refresh rate. Then nVidia's G-Sync cropped up. G-Sync was the first technology to eliminate both stutter and screen-tearing on desktop PCs, achieved by synchronizing the display's refresh rate to the GPU's fluctuating frame output. Shortly after nVidia showed off G-Sync, AMD released its competing technology: FreeSync. G-Sync and FreeSync are the only adaptive refresh rate technologies currently available to consumers at large.
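As a purely conceptual sketch – not any vendor's actual implementation – the toy simulation below shows why a fixed 60Hz refresh turns fluctuating frame times into uneven presentation intervals, while an adaptive refresh simply presents each frame as it completes. The frame times are hypothetical.

```python
# Conceptual sketch only (not G-Sync's or FreeSync's real implementation): why a
# fixed refresh stutters when frame times fluctuate, and how adaptive refresh avoids it.

REFRESH_INTERVAL_MS = 16.7   # 60Hz fixed-refresh panel
frame_times_ms = [14.0, 18.5, 15.2, 21.0, 16.0]   # hypothetical, fluctuating GPU frame times

def vsync_display_times(frame_times):
    """With V-Sync, a frame that misses a refresh waits for the next tick -> visible stutter."""
    clock, shown = 0.0, []
    for ft in frame_times:
        clock += ft
        ticks = -(-clock // REFRESH_INTERVAL_MS)   # round up to the next fixed refresh tick
        shown.append(ticks * REFRESH_INTERVAL_MS)
    return shown

def adaptive_display_times(frame_times):
    """With adaptive refresh, the panel refreshes when the frame is ready (within its range)."""
    clock, shown = 0.0, []
    for ft in frame_times:
        clock += ft
        shown.append(clock)
    return shown

print("V-Sync presents at:  ", vsync_display_times(frame_times_ms))
print("Adaptive presents at:", adaptive_display_times(frame_times_ms))
```

Run it and the V-Sync timeline shows one presentation interval doubling to 33.4ms (a visible hitch), while the adaptive timeline tracks the frame times directly.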

It takes our technicians minutes to build a computer these days – a learned skill – but even a first-time build is completable within a span of hours. Cable management and "environment setup" (OS, software) generally take the longest; the build process itself is surprisingly trivial. Almost anyone can build a computer. The DIY approach saves money and feels rewarding, but it also prepares system owners for future troubleshooting and builds a useful technical skillset.

Parts selection can be initially intimidating and late-night troubleshooting sometimes proves frustrating; the process in between, though – the actual assembly – is easy. A few screws, some sockets that live by the "if it doesn't fit, don't force it" mantra, and a handful of cables.

This "How to Build a Gaming Computer" guide offers a step-by-step tutorial for PC part selection, compatibility checking, assembly, and basic troubleshooting resources. The goal of this guide is to teach the correct steps for the entire process: we won't be giving you tools that automatically pick parts based on compatibility; our goal is to teach the why and the how of PC building. You'll be capable of picking compatible parts and assembling builds fully independently after completing this walkthrough.
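As a taste of what the parts-selection portion covers, here's a hypothetical sketch of the kinds of checks you'll learn to do by hand – socket match, memory type, and rough PSU headroom. The part data and the ~40% headroom rule of thumb are illustrative assumptions, not figures from this guide.

```python
# Hypothetical illustration of the compatibility checks this guide walks through
# manually (socket, memory type, rough PSU headroom); the part data is made up.

build = {
    "cpu":   {"name": "Example CPU",   "socket": "LGA1151", "tdp_w": 91, "memory": "DDR4"},
    "board": {"name": "Example Board", "socket": "LGA1151", "memory": "DDR4"},
    "ram":   {"name": "Example Kit",   "type": "DDR4"},
    "gpu":   {"name": "Example GPU",   "tdp_w": 180},
    "psu":   {"name": "Example PSU",   "watts": 550},
}

def check_build(b):
    issues = []
    if b["cpu"]["socket"] != b["board"]["socket"]:
        issues.append("CPU and motherboard sockets do not match.")
    if b["ram"]["type"] != b["board"]["memory"]:
        issues.append("RAM type is not supported by the motherboard.")
    # Rule of thumb (assumption): leave ~40% headroom over CPU + GPU TDP for the rest of the system.
    if b["psu"]["watts"] < (b["cpu"]["tdp_w"] + b["gpu"]["tdp_w"]) * 1.4:
        issues.append("PSU wattage is likely too low for this CPU/GPU combination.")
    return issues or ["No obvious compatibility problems found."]

for line in check_build(build):
    print(line)
```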

GDC 2016 marks further advancement in game graphics technology, including a somewhat uniform platform update across the three major game engines. That'd be CryEngine (now updated to version V), Unreal Engine, and Unity, of course, all synchronously pushing improved game fidelity. We were able to speak with nVidia to get in-depth and hands-on with some of the industry's newest gains in video game graphics, particularly involving voxel-accelerated ambient occlusion, frustum tracing, and volumetric lighting. Anyone who's gained from our graphics optimization guides for Black Ops III, The Witcher, and GTA V should enjoy the new game graphics knowledge in this post.

The major updates come down the pipe through nVidia's GameWorks SDK version 3.1 update, which is being pushed to developers and engines in the immediate future. nVidia's GameWorks team is announcing five new technologies at GDC:

  • Volumetric Lighting algorithm update

  • Voxel-Accelerated Ambient Occlusion (VXAO)

  • High-Fidelity Frustum-Traced Shadows (HFTS)

  • Flow (combustible fluid, fire, and smoke simulation on a dynamic grid, with rendering in DX11/12)

  • GPU Rigid Body tech

This article introduces the new technologies and explains how, at a low-level, VXAO (voxel-accelerated ambient occlusion), HFTS (high-fidelity frustum-traced shadows), volumetric lighting, Flow (CFD), and rigid bodies work.
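To ground the ambient occlusion discussion before the full explanation, here's a heavily simplified, conceptual sketch of voxel-based AO – emphatically not nVidia's VXAO implementation, which traces cones through a voxelized scene rather than sampling rays as below. The idea it illustrates: voxelize the scene into an occupancy grid, then estimate how "blocked" the hemisphere above each surface point is. The scene, sample counts, and distances are arbitrary.

```python
# Conceptual illustration of voxel-based ambient occlusion (NOT nVidia's VXAO).
# The scene is voxelized into an occupancy grid; occlusion at a point is estimated
# by sampling directions in the hemisphere above the surface normal and checking
# whether nearby voxels are occupied.

import math, random

GRID = 32
voxels = [[[False] * GRID for _ in range(GRID)] for _ in range(GRID)]

# Hypothetical scene: a ground slab near the bottom, plus a block that casts occlusion.
for x in range(GRID):
    for z in range(GRID):
        for y in range(0, 4):
            voxels[x][y][z] = True          # ground slab
for x in range(10, 16):
    for y in range(4, 12):
        for z in range(10, 16):
            voxels[x][y][z] = True          # occluding block

def occupied(p):
    x, y, z = (math.floor(c) for c in p)
    if 0 <= x < GRID and 0 <= y < GRID and 0 <= z < GRID:
        return voxels[x][y][z]
    return False

def ambient_occlusion(point, normal, samples=64, max_dist=8):
    """Fraction of open sky: 1.0 = fully unoccluded, lower = darker shading."""
    hits = 0
    for _ in range(samples):
        # Random direction, flipped into the hemisphere around the surface normal.
        d = [random.gauss(0, 1) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in d))
        d = [c / length for c in d]
        if sum(dc * nc for dc, nc in zip(d, normal)) < 0:
            d = [-c for c in d]
        # March a few steps along the direction, checking voxel occupancy.
        for t in range(1, max_dist + 1):
            if occupied([p + dc * t for p, dc in zip(point, d)]):
                hits += 1
                break
    return 1.0 - hits / samples

print("AO next to the block:", round(ambient_occlusion([17, 4, 12], [0, 1, 0]), 2))
print("AO in open ground:   ", round(ambient_occlusion([28, 4, 28], [0, 1, 0]), 2))
```

The point beside the block returns a lower value (more occluded, rendered darker) than the point in the open, which is the basic effect ambient occlusion contributes to a scene.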

Readers interested in this technology may also find AMD's HDR display demo a worthy look.

Before digging in, our thanks to nVidia's Rev Lebaredian for his patient, engineering-level explanation of these technologies.

We welcomed AMD's Scott Wasson on-camera at the company's Capsaicin event, where we also spoke to Roy Taylor about driver criticism and covered roadmap updates. Wasson was eager to discuss new display technology demonstrated at the event and highlighted a critical shift toward greater color depth and vibrancy. We saw early samples of HDR screens at CES, but the Capsaicin display was far more advanced.

But that's not all we spoke about. As a site that prides itself on testing frame delivery consistency (metrics we call "frametime lows" – the 1% and 0.1% lows), it made perfect sense to speak with frametime testing pioneer Scott Wasson about the importance of this measurement.

For the few unaware, Wasson founded The Tech Report and served as the site's Editor-in-Chief until January, when he departed for a role at AMD. Wasson helped pioneer "frametime testing," detailed in his "Inside the Second" article, which we'd strongly recommend reading.
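For readers new to the metric, the sketch below shows one common way to derive 1% and 0.1% lows from a captured frametime log (e.g., FRAPS or PresentMon output). It mirrors the general idea rather than reproducing our or The Tech Report's exact methodology, and the capture data is hypothetical.

```python
# Minimal sketch: average FPS plus 1% and 0.1% "low" FPS from a list of frame
# times in milliseconds (a generic approach, not any one outlet's exact method).

def low_metrics(frametimes_ms):
    ordered = sorted(frametimes_ms)                  # slowest frames sit at the end
    def fps_of_worst(fraction):
        worst = ordered[-max(1, int(len(ordered) * fraction)):]
        return 1000.0 / (sum(worst) / len(worst))    # mean of the worst slice, expressed as FPS
    avg_fps = 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))
    return avg_fps, fps_of_worst(0.01), fps_of_worst(0.001)

# Hypothetical capture: mostly ~16.7ms frames with a few slow outliers (spikes).
capture = [16.7] * 2000 + [40.0] * 5 + [70.0] * 2
avg, low_1, low_01 = low_metrics(capture)
print(f"Average: {avg:.1f} FPS | 1% low: {low_1:.1f} FPS | 0.1% low: {low_01:.1f} FPS")
```

The point of the metric is visible in the output: a handful of frametime spikes barely moves the average, but they drag the 1% and 0.1% lows down sharply – which is what you actually feel as stutter.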

