This content marks the beginning of our in-depth VR testing efforts, part of an ongoing effort to determine the distinct advantages and disadvantages of today’s hardware in VR. VR hasn’t been a high-performance content topic for us, but we believe it’s an important one for this release of Kaby Lake & Ryzen CPUs: both brands have boasted high VR performance, “VR Ready” tags, and other marketing that hasn’t been validated, largely because it’s hard to do so. We’re leveraging a hardware capture rig to intercept frames to the headsets, FCAT VR, and a suite of five games across the Oculus Rift & HTC Vive to benchmark the R7 1700 vs. the i7-7700K. This testing includes benchmarks at stock and overclocked configurations, totaling four devices under test (DUTs) across two headsets and five games. Although this is “just” 20 total tests (with multiple passes each), the process takes significantly longer than testing our entire suite of GPUs: executing 20 of these VR benchmarks, ignoring parity tests, takes several days, whereas we could run the same count on a GPU suite in a day.
VR benchmarking is hard, as it turns out, and there are a number of imperfections in any existing test methodology for VR. We’ve got a solution that has proven reliable, but in no way do we claim that it’s perfect. Fortunately, by combining hardware and software capture, we’re able to validate the numbers for each test pass. Using multiple test passes over the past five months of working with FCAT VR, we’ve also been able to build up a database that gives us a clear margin of error; to this end, we’ve added error bars to the bar graphs to help illustrate when results are within usual variance.
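To illustrate how repeated test passes translate into error bars, here is a minimal sketch in Python. The pass data and the 95% confidence approach are our own illustrative assumptions, not the actual dataset or formula behind the graphs:

```python
import math
import statistics

# Hypothetical average-FPS results from repeated passes of one test
# (illustrative numbers only, not real benchmark data).
passes = [89.4, 90.1, 88.7, 89.9, 90.3]

def margin_of_error(samples, z=1.96):
    """Half-width of an approximate 95% confidence interval,
    based on the sample standard deviation across test passes."""
    stdev = statistics.stdev(samples)
    return z * stdev / math.sqrt(len(samples))

mean = statistics.mean(passes)
moe = margin_of_error(passes)
print(f"{mean:.1f} FPS +/- {moe:.1f}")  # bounds drawn as the error bar
```

A result landing inside another configuration’s error bar would then be treated as within usual variance rather than a meaningful lead.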
Not long ago, we opened discussion about AMD’s new OCAT tool, a software overhaul of PresentMon that we had beta tested for AMD pre-launch. In the interim, and for the past five or so months, we’ve also been silently testing a new version of FCAT that adds functionality for VR benchmarking. This benchmark suite tackles the significant challenges of intercepting VR performance data, further offering new means of analyzing warp misses and dropped frames. Finally, after several months of testing, we can talk about the new FCAT VR hardware and software capture utilities.
This tool functions in two pieces: software capture and hardware capture.
This is a test that's been put through its paces for just about every generation of PCI Express, and it's worth refreshing now that the newest line of high-end GPUs has hit the market. The curiosity is this: Will a GPU be bottlenecked by PCI-e 3.0 x8, and how much impact does PCI-e 3.0 x16 have on performance?
We decided to test that question for internal research, but ended up putting together a small report for publication.
Futuremark has pushed an update to its popular 3DMark benchmarking software, now adding a proper "stress test" mode to the tool. Previously, the closest option 3DMark offered to a stress test was a looped FireStrike run, which has two issues we've pointed out in past methodology discussions: (1) The loop-back is interrupted by a brief black screen and a restart of the bench, and (2) the test run does not equally load the GPU, and so power draw fluctuates (we saw a range of ~40W on the 1080). Neither of these is ideal for real burn-in testing.
The new Stress Test benchmark runs uninterrupted (up to 40 hours in the pro version, 10 minutes in the free version) and should more evenly load the GPU and CPU. To us, the best feature is a frame-rate stability check, which issues a pass/fail based on FPS performance during OC stability benchmarking. In theory, the tool should analyze for FPS consistency, which will give users an idea of potential OC limitations (like voltage or TDP).
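A check like this can be reasoned about simply: compare each loop's average FPS against the best loop, and fail the run if the worst loop dips too far below it. The sketch below is our own illustration of that idea; the 97% threshold and the loop scores are assumptions, not Futuremark's published formula:

```python
# Sketch of a frame-rate stability pass/fail in the spirit of 3DMark's
# stress test. Threshold and loop data are illustrative assumptions.

def stability(loop_fps):
    """Ratio of the worst loop's average FPS to the best loop's."""
    return min(loop_fps) / max(loop_fps)

def stress_test_pass(loop_fps, threshold=0.97):
    """Pass if no loop dips more than (1 - threshold) below the best.
    A throttling or unstable OC tends to sag in later loops and fail."""
    return stability(loop_fps) >= threshold

loops = [62.1, 61.8, 61.9, 60.2]  # hypothetical avg FPS per loop
print(f"stability: {stability(loops):.1%}")
print("PASS" if stress_test_pass(loops) else "FAIL")
```

An overclock constrained by voltage or TDP would show progressively lower loop scores as the card heats up and throttles, dragging the stability ratio below the threshold.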