Steve started GamersNexus back when it was just a cool name, and now it's grown into an expansive website with an overwhelming number of features. He recalls his first difficult decision with GN's direction: "I didn't know whether or not I wanted 'Gamers' to have a possessive apostrophe. I mean, grammatically it should, but I didn't like it in the name. It was ugly. I also had people who were typing apostrophes into the address bar. Sigh. It made sense to just leave it as 'Gamers.'"
First world problems, Steve. First world problems.
This content marks the beginning of our in-depth VR testing efforts, part of an ongoing test pattern that aims to determine distinct advantages and disadvantages of today’s hardware. VR hasn’t been a major content focus for us, but we believe it’s an important one for this release of Kaby Lake & Ryzen CPUs: both brands have boasted high VR performance, “VR Ready” tags, and other marketing that hasn’t been validated, mostly because it’s hard to do so. We’re leveraging a hardware capture rig to intercept frames to the headsets, FCAT VR, and a suite of five games across the Oculus Rift & HTC Vive to benchmark the R7 1700 vs. the i7-7700K. This testing includes benchmarks at stock and overclocked configurations, totaling four devices under test (DUTs) across two headsets and five games. Although this is “just” 20 total tests (with multiple passes each), the process takes significantly longer than testing our entire suite of GPUs. Executing 20 of these VR benchmarks, ignoring parity tests, takes several days; we could run the same count for a GPU suite and have it done in a day.
VR benchmarking is hard, as it turns out, and there are a number of imperfections in any existing test methodology for VR. We’ve got a good solution to testing that has proven reliable, but in no way do we claim that it’s perfect. Fortunately, by combining hardware and software capture, we’re able to validate numbers for each test pass. Using multiple test passes over the past five months of working with FCAT VR, we’ve also been able to build up a database that gives us a clear margin of error; to this end, we’ve added error bars to the bar graphs to help illustrate when results are within usual variance.
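To illustrate how repeated test passes translate into error bars, here’s a minimal Python sketch. The FPS numbers are invented for illustration (GN’s real data comes from FCAT VR capture logs), and drawing error bars as mean ± one sample standard deviation is our assumption, not a description of GN’s exact pipeline:

```python
import statistics

# Hypothetical average-FPS results from five passes of one VR test.
# (Invented numbers; real data would come from FCAT VR output files.)
passes = [89.2, 90.1, 88.7, 89.8, 89.5]

mean_fps = statistics.mean(passes)
stdev_fps = statistics.stdev(passes)  # sample standard deviation across passes

# Error bars drawn as mean +/- one standard deviation:
error_low = mean_fps - stdev_fps
error_high = mean_fps + stdev_fps

print(f"{mean_fps:.1f} FPS +/- {stdev_fps:.2f}")
```

With enough passes banked per configuration, two results whose error bars overlap can be treated as being within usual run-to-run variance rather than a real performance difference.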
On the heels of the media world referring to the Titan X (Pascal) as Titan XP – mostly to reduce confusion versus the previous Titan X – nVidia today announced its actual Titan Xp (lowercase ‘p,’ very important) successor to the Titan XP. Lest Titan X, Titan X, and Titan X be too confusing, we’ll be referring to these as Titan XM [Maxwell], Titan X (Pascal), and Titan Xp. We really should apologize to Nintendo for making fun of their naming scheme, as nVidia now seems to be competing; next, we’ll have the New Titan Xp (early 2017).
Someone at nVidia is giddy over taking the world’s Titan XP name and changing it, we’re sure.
This first revisit to Ryzen’s performance comes earlier than most, given the tempestuous environment surrounding AMD’s latest uarch. In recent weeks, we’ve seen claims that Windows updates promise a significant boon to Ryzen performance, as has also been said of memory overclocking, and we were previously instructed that EFI updates alone should bolster performance. Perhaps not unrelatedly, game updates to major titles may also have impacted performance, amounting to a significant number of variables for a revisit.
Today’s content piece aims to isolate each of these items as much as is reasonable – not all can be isolated, like game updates – to better determine the performance impact of the individual changes and updates. We’ll then progress cumulatively through charts as updates are applied. Our final set of charts will contain Windows version bxxx.970, version 1002 EFI on the CH6, and memory overclocking efforts.
This 47th episode of Ask GN features questions pertaining to test execution and planning for multitasking benchmarks, GPU binning, Ryzen CPU binning, X300 mITX board availability, and more. We provide some insights as to plans for near-future content, like our impending i7-2600K revisit, and quote a few industry sources who answered questions in this week's episode.
Of note, we worked with VSG of Thermal Bench to talk heatpipe size vs. heatpipe count, then spoke with EVGA and ASUS about their GPU allocation and pretesting processes (popularly, "binning," though not quite the same). Find the new episode below, with timestamps to follow the embed:
Benchmarking Mass Effect: Andromeda immediately revealed a few considerations for our finalized testing. Frametimes, for instance, were markedly lower on the first test pass. The game also prides itself on casting players into a variety of environs, including ship interiors, planet surfaces of varying geometric complexity (generally simpler), and space stations with high poly density. Given all these gameplay options, we prefaced our final benchmarking with an extensive study period to research the game’s performance in various areas, then determine which area best represented the whole experience.
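When the first pass behaves differently from subsequent passes (shader compilation and asset caching are common culprits), a simple mitigation is to treat it as a warm-up and average only the remaining passes. A minimal Python sketch, with invented frametime numbers and the warm-up-discard policy as our assumption rather than GN’s stated procedure:

```python
import statistics

# Hypothetical per-pass average frametimes (ms) for one test area.
# The first pass often differs from the rest (shader compilation,
# asset caching), so it's treated as a warm-up and excluded here.
passes_ms = [14.9, 16.2, 16.1, 16.3]

warmup, measured = passes_ms[0], passes_ms[1:]
avg_frametime = statistics.mean(measured)
avg_fps = 1000.0 / avg_frametime  # ms per frame -> frames per second

print(f"warm-up discarded: {warmup} ms; "
      f"reported: {avg_frametime:.2f} ms ({avg_fps:.1f} FPS)")
```

The same discard-then-average approach applies regardless of which in-game area is chosen as representative.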
Our Mass Effect: Andromeda benchmark starts with definitions of settings (like framebuffer format), then goes through research, then the final benchmarks at 4K, 1440p, and 1080p.