Hardware Guides

U.2 (pronounced "U dot 2," lest Bono exercise legal force) has made a major appearance in PC platform updates from motherboard vendors, including Gigabyte with new X99 and Z170X motherboards at PAX. The form factor used to be called SFF-8639 (SSD Form Factor) and was targeted almost entirely at server and enterprise markets. In a move toward greater user-friendliness, the interface has been rebranded as "U.2," which is easier to remember now that the M.2 interface is also proliferating across the market.

This “TLDR” article explains the U.2 vs. M.2 vs. SATA Express differences, with a focus on PCI-e lane assignment and throughput.
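
As a rough preview of the throughput side, theoretical bandwidth falls out of lane count, transfer rate, and encoding overhead. Here's a back-of-the-envelope sketch, assuming U.2 and M.2 links at PCI-e 3.0 x4 and SATA Express at PCI-e 2.0 x2 (implementations vary):

```python
# Back-of-the-envelope theoretical bandwidth per interface.
# PCI-e 3.0: 8 GT/s per lane, 128b/130b encoding -> ~0.985 GB/s usable per lane.
# PCI-e 2.0: 5 GT/s per lane, 8b/10b encoding    -> ~0.5 GB/s usable per lane.

def pcie_gbps(lanes, gt_per_s, encoding_efficiency):
    """Usable GB/s for a PCI-e link (transfer rate x encoding efficiency / 8 bits)."""
    return lanes * gt_per_s * encoding_efficiency / 8

interfaces = {
    "U.2 / M.2 (PCI-e 3.0 x4)":    pcie_gbps(4, 8.0, 128 / 130),
    "SATA Express (PCI-e 2.0 x2)": pcie_gbps(2, 5.0, 8 / 10),
    "SATA III (6Gb/s, 8b/10b)":    6.0 * (8 / 10) / 8,
}

for name, gbps in interfaces.items():
    print(f"{name}: ~{gbps:.2f} GB/s theoretical")
```

That roughly 4GB/s ceiling on a four-lane Gen3 link is why U.2 and M.2 drives can so thoroughly outrun SATA devices.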

"VR is a fad" was the pull-quote which propagated through the internet when Warren Spector made the comment last year, reinforcing it at ECGC a few days ago. The veteran designer indicated a belief that virtual reality could generate "interest among hardcore gamers," but remained cautious to grant too much early praise given personal experience with earlier VR attempts. Spector's decades-long industry experience grants weight to the statement, and made us curious what some long-time colleagues of Spector's might believe. Richard Garriott is one of those – friend and former employer of Spector – and has previously spoken to us about a history of effectively inventing MMOs, new graphics techniques, and more.

Richard Garriott joined us at PAX East 2016 for an impromptu discussion on the viability of virtual reality. The conversation started as small talk – "what do you think of VR?" – but evolved into an in-depth look at the challenges faced by the emergent technology. We rolled with it; you can find the video and some of the transcript below:

5MB of storage once required 50 spinning platters and a dedicated computer, demanding a 16 square-foot area for its residence. The first hard drive wasn't particularly fast – 1200RPM, with seek latencies through the roof (imagine a head seeking across 50 platters) – but it was the most advanced storage of its time.

That device was the IBM 305 RAMAC, its cost, converted to today's dollars, was a $30,000 monthly lease, and single instruction execution required between 30ms and 50ms (IRW phases). The IBM 305 RAMAC did roughly 100,000 bits per second, or 0.0125MB/s. Today, the average 128GB microSD card costs ~$50 – one time – and executes read/write instructions at 671,000,000 bits per second, or 80MB/s. And this is one of our slowest forms of Flash storage. The microSD card is roughly the size of a fingernail (32x24x2.1mm), and filling a 16 square-foot area with them would yield terabytes upon terabytes of storage.
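
The math behind that comparison is worth a quick sanity check; here's a short sketch using the figures above (note that the 80MB/s conversion treats a megabyte as 2^20 bytes):

```python
# Throughput comparison using the figures above.
ramac_bps   = 100_000       # IBM 305 RAMAC: ~100,000 bits per second
microsd_bps = 671_000_000   # ~80MB/s microSD: ~671,000,000 bits per second

print(f"RAMAC:   {ramac_bps / 8 / 10**6:.4f} MB/s")   # 0.0125 MB/s (decimal MB)
print(f"microSD: {microsd_bps / 8 / 2**20:.1f} MB/s") # ~80.0 MB/s (2^20-byte MB)
print(f"Speedup: ~{microsd_bps / ramac_bps:,.0f}x")   # ~6,710x faster
```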


The 305 RAMAC was a creation of 1956. Following last week's GTC conference, we had the opportunity to see the RAMAC and other early computing creations at the Computer History Museum in Mountain View, California. The museum encompasses most of computing history, including the abacus, early Texas Instruments calculators (like the TI-99), and previously housed a mechanical Babbage Difference Engine, designed in the 1800s. In our recent tour of the Computer History Museum, we focused on the predecessors to modern computing – the first hard drive, first supercomputers, first transistorized computers, mercury and core memory, and vacuum tube computing.

Stutter as a result of V-Sync (which was made to fix screen tearing – itself another problem) has been a consistent nuisance in PC gaming since its inception. We've talked about how screen tearing and stutter interact here.

Despite the fact that FPS in games can fluctuate dramatically, monitors have long been stuck at a fixed refresh rate. Then nVidia's G-Sync cropped up. G-Sync was the first technology to eliminate both stutter and screen tearing on desktop PCs by synchronizing the display's refresh to the GPU's fluctuating framerate. Quickly after nVidia showed off G-Sync, AMD released its competing technology: FreeSync. G-Sync and FreeSync are the only adaptive refresh rate technologies currently available to consumers at large.
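
To illustrate why a fixed refresh rate stutters, here's a toy model (purely illustrative – not any vendor's implementation): with V-Sync on a 60Hz panel, a frame that misses a refresh window waits for the next one, so on-screen frame pacing quantizes to multiples of ~16.7ms; with adaptive refresh, the panel draws each frame as soon as it's ready.

```python
# Toy model: when each rendered frame actually appears on screen.
# Fixed refresh (V-Sync): a frame waits for the next refresh boundary.
# Adaptive refresh (G-Sync/FreeSync): the panel refreshes when the frame is ready.
import math

REFRESH_MS = 1000 / 60  # 60Hz panel: ~16.7ms per refresh window

def display_times(frametimes_ms, adaptive):
    shown, t = [], 0.0
    for ft in frametimes_ms:
        t += ft  # time the GPU finishes rendering this frame
        if adaptive:
            shown.append(t)  # drawn immediately
        else:
            # wait for the next vsync boundary at or after completion
            shown.append(math.ceil(t / REFRESH_MS) * REFRESH_MS)
    return shown

# GPU output fluctuating around ~50FPS (frametimes in ms)
frames = [18, 22, 19, 25, 17, 21]
vsync = display_times(frames, adaptive=False)
gsync = display_times(frames, adaptive=True)

# Frame-to-frame gaps the viewer perceives; uneven gaps read as stutter.
gaps = lambda ts: [round(b - a, 1) for a, b in zip(ts, ts[1:])]
print("V-Sync gaps:  ", gaps(vsync))  # 16.7ms with periodic 33.3ms spikes
print("Adaptive gaps:", gaps(gsync))  # matches actual frametimes
```

Under V-Sync, most gaps sit at 16.7ms but every few frames one stretches to 33.3ms – that periodic double-length frame is the stutter – while the adaptive gaps simply match the real frametimes.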

It takes our technicians minutes to build a computer these days – a learned skill – but even a first-time build is completable within a span of hours. Cable management and "environment setup" (OS, software) generally take the longest, but the build process is surprisingly trivial. Almost anyone can build a computer. The DIY approach saves money and feels rewarding, but also prepares system owners for future troubleshooting and builds a useful, technical skillset.

Parts selection can be initially intimidating and late-night troubleshooting sometimes proves frustrating; the process in between, though – the actual assembly – is easy. A few screws, some sockets that live by the "if it doesn't fit, don't force it" mantra, and a handful of cables.

This “How to Build a Gaming Computer” guide offers a step-by-step tutorial for PC part selection, compatibility checking, assembly, and basic troubleshooting resources. The goal of this guide is to teach the correct steps for the entire process: we won't be giving you tools that automatically pick parts based on compatibility; our goal is to teach the why and the how of PC building. You'll be capable of picking compatible parts and assembling builds fully independently after completing this walkthrough.

GDC 2016 marks further advancement in game graphics technology, including a somewhat uniform platform update across the three major game engines. That'd be CryEngine (now updated to version V), Unreal Engine, and Unity, of course, all synchronously pushing improved game fidelity. We were able to speak with nVidia to get in-depth and hands-on with some of the industry's newest gains in video game graphics, particularly involving voxel-accelerated ambient occlusion, frustum tracing, and volumetric lighting. Anyone who's benefited from our graphics optimization guides for Black Ops III, the Witcher, and GTA V should hopefully enjoy new game graphics knowledge from this post.

The major updates come down the pipe in nVidia's GameWorks SDK 3.1, which is being pushed to developers and engines in the immediate future. nVidia's GameWorks team is announcing five new technologies at GDC:

  • Volumetric Lighting algorithm update

  • Voxel-Accelerated Ambient Occlusion (VXAO)

  • High-Fidelity Frustum-Traced Shadows (HFTS)

  • Flow (combustible fluid, fire, smoke, dynamic grid simulator, and rendering in Dx11/12)

  • GPU Rigid Body tech

This article introduces the new technologies and explains how, at a low-level, VXAO (voxel-accelerated ambient occlusion), HFTS (high-fidelity frustum-traced shadows), volumetric lighting, Flow (CFD), and rigid bodies work.
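
To make the voxel-based approach concrete before the explanations that follow, here's a heavily simplified, hypothetical CPU sketch of the two-stage idea behind VXAO: voxelize the scene into an occupancy grid, then integrate occlusion by marching sample rays through that grid. This is illustrative only – the real implementation traces cones on the GPU, and nothing below reflects nVidia's actual code:

```python
# Toy voxel AO (illustrative only): occupancy grid + hemisphere ray marching.
import math, random

random.seed(0)
N = 32  # grid resolution: N^3 voxels spanning a unit cube

# Stage 1: "voxelize" a trivial scene – a floor slab plus a box sitting on it.
grid = [[[False] * N for _ in range(N)] for _ in range(N)]
for x in range(N):
    for y in range(N):
        for z in range(N):
            floor_slab = z < 2
            box = 10 <= x < 18 and 10 <= y < 18 and 2 <= z < 12
            grid[x][y][z] = floor_slab or box

def occupied(px, py, pz):
    i, j, k = int(px * N), int(py * N), int(pz * N)
    if not (0 <= i < N and 0 <= j < N and 0 <= k < N):
        return False  # outside the grid = open sky
    return grid[i][j][k]

# Stage 2: AO at a surface point – march rays over the upper hemisphere;
# a nearby hit occludes more strongly than a distant one.
def ambient_occlusion(p, samples=256, max_dist=0.5, steps=16):
    occlusion = 0.0
    for _ in range(samples):
        d = [random.gauss(0, 1) for _ in range(3)]
        d[2] = abs(d[2])  # keep directions in the z+ hemisphere
        norm = math.sqrt(sum(c * c for c in d))
        d = [c / norm for c in d]
        for s in range(1, steps + 1):
            t = max_dist * s / steps
            if occupied(*(p[i] + d[i] * t for i in range(3))):
                occlusion += 1.0 - t / max_dist  # distance falloff
                break
    return 1.0 - occlusion / samples  # 1.0 = open, 0.0 = fully occluded

print(ambient_occlusion((0.60, 0.44, 0.08)))  # beside the box: darker
print(ambient_occlusion((0.90, 0.90, 0.08)))  # open floor: brighter
```

The real pipeline swaps the rays for cones and runs entirely on the GPU, but the voxelize-then-gather structure is the part to keep in mind.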

Readers interested in this technology may also find AMD's HDR display demo worth a look.

Before digging in, our thanks to nVidia's Rev Lebaredian for his patient, engineering-level explanation of these technologies.

We welcomed AMD's Scott Wasson on-camera at the company's Capsaicin event, where we also spoke to Roy Taylor about driver criticism and covered roadmap updates. Wasson was eager to discuss new display technology demonstrated at the event and highlighted a critical shift toward greater color depth and vibrancy. We saw early samples of HDR screens at CES, but the Capsaicin display was far more advanced.

But that's not all we spoke about. As a site which prides itself on testing frame delivery consistency (we report this as 1% and 0.1% low framerates, derived from frametimes), it made perfect sense to speak with frametime testing pioneer Scott Wasson about the importance of the metric.

For the few unaware, Wasson founded the Tech Report and worked as the site's Editor-in-Chief up until January, at which time he departed as EIC and made a move to AMD. Wasson helped pioneer “frametime testing,” detailed in his “Inside the Second” article, and we'd strongly recommend a read.
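
For readers new to the metric: the general approach is to log every frame's render time, then average the slowest 1% (or 0.1%) of frames and express that as FPS. A minimal sketch of that idea follows – our assumption of the common calculation, not GN's or AMD's exact tooling:

```python
# Compute average FPS plus 1% and 0.1% low FPS from a frametime log (ms).
def low_metrics(frametimes_ms):
    n = len(frametimes_ms)
    avg_fps = 1000 / (sum(frametimes_ms) / n)
    worst = sorted(frametimes_ms, reverse=True)  # slowest frames first

    def low_fps(fraction):
        count = max(1, int(n * fraction))        # slowest 1% / 0.1% of frames
        return 1000 / (sum(worst[:count]) / count)

    return avg_fps, low_fps(0.01), low_fps(0.001)

# e.g., a mostly-smooth run with a few long frames (hitches)
log = [16.7] * 990 + [40.0] * 9 + [80.0]
avg, low1, low01 = low_metrics(log)
print(f"AVG: {avg:.0f} FPS | 1% low: {low1:.0f} FPS | 0.1% low: {low01:.0f} FPS")
```

The example shows why the metric matters: the run averages ~59FPS, but the 1% low of ~23FPS and 0.1% low of ~13FPS expose the hitching an average would hide.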

We recently proved the viability of UltraWide (21:9) monitors on an equally ultra-wide range of video cards, with performance benchmarking indicative of playability on lower-end hardware than might be expected. 21:9 displays have seen a resurgence of interest lately, driven by slowly dropping prices at the low end and increasing GPU performance.

We've lately used the ~$1300 Acer Predator X34 and ~$540 Samsung 29” Ultra-wide; following more hands-on gaming experience, it made sense to address a new question: Do UltraWide (21:9) monitors give an advantage in gaming? We'll be looking at this topic from two angles – the competitive and the immersive aspects – using a 2560x1080 and 3440x1440 set of UltraWide displays.
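
On the competitive side, the geometry is worth spelling out: in games that scale horizontally ("Hor+" behavior, which is common but game-dependent), a 21:9 panel renders a wider horizontal field of view at the same vertical FOV. A quick sketch of that math, assuming Hor+ scaling:

```python
# Horizontal FOV under Hor+ scaling: hFOV = 2*atan(tan(vFOV/2) * aspect).
import math

def horizontal_fov(vfov_deg, aspect):
    v = math.radians(vfov_deg)
    return math.degrees(2 * math.atan(math.tan(v / 2) * aspect))

vfov = 60  # a common default vertical FOV
for name, aspect in [("16:9 (2560x1440)", 16 / 9), ("21:9 (3440x1440)", 3440 / 1440)]:
    print(f"{name}: ~{horizontal_fov(vfov, aspect):.1f} deg horizontal")
```

At a 60-degree vertical FOV, that works out to roughly 91.5 degrees horizontal at 16:9 versus about 108 degrees at 21:9 – extra peripheral vision in the competitive case, more screen-filling imagery in the immersive one.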

It's been a few months since our “Ask GN” series had its last installment. We got eleven episodes deep, then plunged into the non-stop game testing and benchmarking of the fourth quarter. At last, following fan requests and interest, we've proudly resurrected the series – not the only thing resurrected this week, either.

So, amidst Games for Windows Live and RollerCoaster Tycoon's re-re-announcement of mod support, we figured we'd brighten the week with something more promising: cherry-picked DirectX & Vulkan topics, classic GPU battles, and power supply testing questions. There's a bonus question at the end, too.

Thermal testing for cases, coolers, CPUs, and GPUs requires very careful attention to methodology and test execution. Without proper controls for ambient and other variables within a lab/room environment, it's exceedingly easy for tests to vary to a degree that effectively invalidates the results. Cases and coolers are often fighting over one degree Celsius or less of separation, so strict tolerances for ambient, alongside active measurement of diode temperatures and of air at intake/exhaust, help ensure accurate data.

We recently put our methodology to the test by borrowing time on a local thermal chamber – a controlled environment – and checking our delta measurements against it. GN's thermal testing is conducted in a lab on an open-loop HVAC system; we measure ambient constantly (second-to-second) with thermocouples, then subtract those readings from diode readings to create a delta value. For the thermal chamber, we performed identical methodology within a more tightly controlled environment. The goal was to determine whether the delta value within the chamber paralleled the delta value achieved in our own open-air lab, within reasonable margin of error; if so, we'd know our testing properly accounts for ambient and other variables.
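
The delta methodology itself is simple to express. Below is a minimal sketch of the calculation, assuming time-aligned, 1Hz logs from a component diode and an ambient thermocouple (the data and structure here are hypothetical, not our actual logging pipeline):

```python
# Delta-T: subtract second-to-second ambient from diode readings,
# so results are comparable across days with different room temps.
diode_c   = [52.1, 53.0, 53.4, 53.2, 53.8]  # e.g., CPU diode, 1Hz samples
ambient_c = [20.9, 21.0, 21.1, 21.0, 21.2]  # intake thermocouple, same timestamps

deltas = [d - a for d, a in zip(diode_c, ambient_c)]
avg_delta = sum(deltas) / len(deltas)
print(f"Average delta-T: {avg_delta:.2f}C over ambient")
```

Because ambient is subtracted sample-by-sample, a room that drifts by half a degree mid-test doesn't contaminate the result the way raw diode readings would.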

The chamber used has climate control functions that include temperature settings. We set the chamber to match our normal lab temps (20C), then carefully checked where the intake and exhaust are set up within the chamber. This particular unit has a slow, steady intake from the top that helps circulate air by pushing it down to an exhaust vent at the bottom. It'd otherwise just turn into an oven, as the system's rising temperatures would increase ambient. This still happens to some degree, but a control module on the thermal chamber helps adjust and regulate the 20C target as the internal temperature demands. The control module is also the most expensive part; our chaperone told us that these units cost upwards of $10,000 – and that's for a 'budget-friendly' approach.

 
