No Man's Sky Frametime Performance Review: Not Even on a Titan XP
Sunday, 14 August 2016
It looks like that 4K screenshot we were provided was created in some magical developer environment. No Man's Sky is the least stable game that we've worked on in years. Most games with poor optimization are still playable, and are consistent in their pitfalls; consistency permits some level of comparative benchmarking. With No Man's Sky, we're seeing nearly constant stutters and spectacular frame latency spikes in excess of 4000ms. The game exhibits severe stuttering that makes it unplayable at times and, by extension, impossible to accurately benchmark.
We've generated a few sets of benchmark data on a Titan X Pascal, 980 Ti, and RX 480 specifically to demonstrate just how wildly unpredictable the performance is. This is not an instance where we can just test anyway and produce charts as normal, because the FPS range is so wide that you'd end up with performance results that make no sense – like a GTX 1080 performing equally to a 980 Ti in averages in one test, but the opposite in another. That's due to variance introduced from somewhat unpredictable frame latency fluctuations, something we explain in part in this video.
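To illustrate why spikes of that magnitude wreck comparative averages, below is a minimal sketch using hypothetical frametime values (not our captured data): a single multi-second hitch drags a run's average FPS well below what the game delivers the rest of the time, so two otherwise identical passes can land on very different averages.

```python
# Hypothetical frametime trace (ms): mostly ~16.7ms frames with one 4000ms hitch.
# These are placeholder values for illustration, not captured data.
frametimes_ms = [16.7] * 600 + [4000.0] + [16.7] * 600

frames = len(frametimes_ms)
total_seconds = sum(frametimes_ms) / 1000.0
avg_fps = frames / total_seconds                 # average FPS over the whole run
hitch_fps = 1000.0 / max(frametimes_ms)          # instantaneous rate during the spike

print(f"Frames: {frames}, run length: {total_seconds:.1f}s")
print(f"Average FPS: {avg_fps:.1f}")             # ~50 FPS despite ~60 FPS most of the time
print(f"Instantaneous FPS during the 4000ms spike: {hitch_fps:.2f}")
```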
Game-specific GPU benchmarks serve a single purpose: hierarchically ranking the “best” products for each graphics configuration, resolution, and budget. The very heart of game benchmarking is to produce an objective comparative analysis between components. We have decided to present our findings with No Man's Sky, but have opted out of an immediate graphics card benchmark, because in our eyes such a benchmark would not be fair to the GPUs. The variance in results is so great that the listings end up chaotic, and we'd end up benchmarking the poor performance of No Man's Sky rather than the performance of the cards themselves. The game is inadequate as a test platform, and cannot be trusted to generate reliable, replicable data from one test iteration to the next.
This GPU performance analysis of No Man's Sky looks at stutters and frame drops (“hitching”), poor optimization, screen flickering, and low FPS.
GTX 1060 “SLI” Benchmark – Outperforms GTX 1080 with Explicit Multi-GPU
Tuesday, 19 July 2016
Just to be clear straight away: This test was conducted largely in the spirit of “because we can.” For the full, in-depth GTX 1060 review, check this article. Also note that this test does not make use of the Scalable Link Interface, and so we're throwing scare quotes around “SLI” just for clarity. The GTX 1060s do not have SLI fingers and can only communicate via the PCIe bus, without a bridge, thereby demanding that applications support MDA (Multi-Display Adapter) or LDA Explicit (Linked Display Adapter) to actually leverage both cards. nVidia does not officially support dual GTX 1060s. This was just something we wanted to do. We also do not recommend purchasing two GTX 1060s for use in a single gaming system.
All that stated, this test pairs an MSI GTX 1060 Gaming X with the GTX 1060 Founders Edition card, then pits them against a single GTX 1060, GTX 1080, GTX 1070, and RX 480s (+ CF). This is mostly a curiosity and an experiment to learn from, not a comprehensive benchmark or product review. Again, the full review is here.
Ashes of the Singularity supports explicit multi-GPU and has been coded by the developers to take advantage of this DirectX 12 functionality, which would also allow cross-brand video cards to be paired. We already tested that with the 970 and 390X. Testing was done at 1080p and 4K, mostly at high settings. The Multi-GPU toggle was checked for Dx12 testing. We've also listed the results as AVG ms frametimes, as another means of conveying the information.
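For readers juggling both presentations, average frametime and average FPS are reciprocal views of the same data. A quick sketch of the conversion (not tied to any specific result here):

```python
def fps_to_frametime_ms(fps: float) -> float:
    """Average frametime (ms) corresponding to an average FPS figure."""
    return 1000.0 / fps

def frametime_ms_to_fps(frametime_ms: float) -> float:
    """Average FPS corresponding to an average frametime (ms)."""
    return 1000.0 / frametime_ms

# 60 FPS corresponds to ~16.7ms per frame; 120 FPS to ~8.3ms per frame.
print(f"{fps_to_frametime_ms(60):.1f} ms, {fps_to_frametime_ms(120):.1f} ms")
```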
SilverStone acts as something of a boutique manufacturer within the US market. The products are often unique or risk-taking, sometimes bench-topping or just plain competitive – but the brand also has lower visibility when compared against US juggernaut Corsair or global market contender Cooler Master.
One of the newest SilverStone cases competes at the ~$70 price-point, directly matched against recently reviewed cases (Phanteks P400, Rosewill Gungnir, Corsair 400C). The SilverStone Kublai KL05-BW is on the bench for review today, including a case walkthrough, thermal / temperature benchmarking, cable management, and build quality analysis. The enclosure diverges from recent trends by opting out of a PSU shroud, keeping the optical drive bays, and taking a minimalistic-but-effective approach to cooling. More on that below.
For years now, VR has seemed to be right around the corner, but consumer VR is (finally) becoming a reality with the HTC Vive and Oculus Rift soon hitting retailers. Unfortunately, the system requirements for VR – to the woe of my wallet – are fairly demanding.
The Oculus Rift officially recommends an nVidia 970 or AMD 290, an i5-4590, and 8GB+ of RAM. In comparison, the Vive has the same recommended specs with the exception of memory, where the Vive recommends only 4GB.
AMD R9 390 CrossFire vs. SLI GTX 970 Benchmark, Ft. Devil 13 Dual-Core 390
Thursday, 21 January 2016
Our last head-to-head GPU comparison benchmarked the performance of a single GTX 980 Ti versus two GTX 970s in SLI. Following some astute reader suggestions, we've acquired a PowerColor Devil 13 dual-core R9 390 – two GPUs on one card – to test as a CrossFire stand-in against SLI GTX 970s. Performance analysis is accompanied by power draw and thermal tests, though a proper, full review on the Devil 13 card will follow this content in short order.
For today, the focus is on this head-to-head comparison. FPS benchmarks look at performance of 2x CrossFire R9 390s vs. 2x SLI GTX 970s, including supporting data from a GTX 980 Ti, 980, and R9 390X. We'll also work toward answering the question of whether CrossFire and SLI are worth it in this particular scenario, as opposed to investing in a single, more expensive GPU.
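One way to frame the “worth it” question is cost per average frame. The sketch below uses hypothetical prices and FPS figures purely for illustration, not the measured results discussed in this article:

```python
# Hypothetical street prices (USD) and average FPS for one title/resolution.
# Placeholder numbers for illustration only, not measured results.
configs = {
    "2x GTX 970 (SLI)":      {"price": 660, "avg_fps": 95.0},
    "2x R9 390 (CrossFire)": {"price": 700, "avg_fps": 92.0},
    "GTX 980 Ti (single)":   {"price": 650, "avg_fps": 85.0},
}

for name, c in configs.items():
    cost_per_frame = c["price"] / c["avg_fps"]
    print(f"{name:24s} ${c['price']}  {c['avg_fps']:.0f} FPS  ${cost_per_frame:.2f}/frame")
```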
Scalable multi-card configurations from both nVidia and AMD have improved in performance over the years, with both companies investing additional resources into driver optimizations for multi-card users. The value of SLI or CrossFire has always been debatable, particularly for day-one system builders (rather than someone upgrading), but is worth investigating further. With all the year's newest titles – and some mainstays with well-tested performance – we did that investigation, specifically comparing a single 980 Ti vs. 2x 970s in SLI, a 980, a single 970, and an R9 390X as an AMD baseline.
Today's GTX 970 SLI vs. single 980 Ti test benchmarks average FPS and 1% / 0.1% low performance, presenting data in a few different chart types: Usual AVG, 1% low, & 0.1% low head-to-head performance; delta value (percent advantage) between the 970s in SLI and 980 Ti; delta value (percent gain) between the 2x 970s and a single GTX 970.
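For those curious how the 1% / 0.1% low and delta figures are derived, here is a simplified sketch of the general approach (hypothetical frametime logs, not our exact tooling or data):

```python
def avg_fps(frametimes_ms):
    """Average FPS over a run, derived from per-frame times in milliseconds."""
    return 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))

def low_fps(frametimes_ms, fraction):
    """Average FPS of the slowest `fraction` of frames (0.01 = 1% lows)."""
    slowest = sorted(frametimes_ms, reverse=True)
    count = max(1, int(len(slowest) * fraction))
    worst = slowest[:count]
    return 1000.0 / (sum(worst) / len(worst))

def percent_delta(a_fps, b_fps):
    """Percent advantage of configuration A over configuration B."""
    return (a_fps - b_fps) / b_fps * 100.0

# Hypothetical frametime logs (ms) for two configurations; placeholders only.
sli_970s  = [10.5, 11.0, 10.8, 25.0, 10.9, 11.2, 10.7, 30.0]
gtx_980ti = [12.0, 12.3, 12.1, 14.0, 12.2, 12.4, 12.0, 15.5]

print(f"970 SLI: {avg_fps(sli_970s):.1f} AVG, {low_fps(sli_970s, 0.01):.1f} 1% low")
print(f"980 Ti:  {avg_fps(gtx_980ti):.1f} AVG, {low_fps(gtx_980ti, 0.01):.1f} 1% low")
print(f"970 SLI advantage (AVG): {percent_delta(avg_fps(sli_970s), avg_fps(gtx_980ti)):+.1f}%")
```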
Star Citizen Alpha 2.0 Graphics Card Benchmark - ArcCorp, Free Flight, & Racing
Monday, 21 December 2015
Cloud Imperium Games' Star Citizen achieved a major milestone with the distribution of its Alpha 2.0 package, allowing multiplayer exploration in addition to existing dog-fighting and free flight. This release gives players the first glimpse of the game's open world intentions, presenting environments forged in Sci-Fi influence.
There's not much in the way of gameplay just yet, but Alpha 2.0 has been made available to all backers for initial bug- and stress-testing. We decided to conduct a test of our own, specifically looking at GPU performance and preset scaling across multiple “game modes.” Right now, because the pre-release game is composed of several disjointed modules, there's no one “Play Star Citizen” button – it's split into parts. Racing, free flight, and dog-fighting are in one module (Arena Commander), the Hangar stands alone, and online testing with ArcCorp and Crusader was just released.
For our Star Citizen video card benchmark, we look at GPU vs. GPU performance in the race, delta performance scaling on ArcCorp and in the hangar or free flight, and talk methodology. The game isn't done and has yet to undergo performance optimizations and official driver support, so we won't be making our usual “best graphics cards for [game]” recommendations this time.
Just Cause 3 Video Card Benchmark - Anomalous Performance, but Overall Reasonable
Tuesday, 01 December 2015
Rico's back in town. This time, the vigilante who saves the people by blowing up The People's Water Tower returns with high-fidelity graphics and a focus on lighting FX and water tech. Just Cause 3 revisits a partnership with nVidia's GameWorks development kit, making use of the WaveWorks tech that was previously found in Just Cause 2 (a 2010 release). The game's graphics settings are fairly simple for anyone following our game benchmarks, but we'll recap potential points of confusion further down.
Our Just Cause 3 GPU benchmark puts nVidia & AMD graphics cards to the test at 1080, 1440, and 4K resolutions, using “Very High” and “High” settings for FPS testing. Among others, the video card benchmark includes the 980 Ti (+ SLI), 980, 970, 960, et al., and AMD's 390X, 380X (+ CrossFire), 290X, 270X, et al.
We've noticed some curiosities with Just Cause 3's implementation of water detail scaling and will cover that further down.
Overwatch Video Card Benchmark – A Scalable Title Tested at 1080, 1440, 4K
Monday, 23 November 2015
Forthcoming team shooter Overwatch is Blizzard's first new IP in years, fusing familiar FPS and team-based elements with MOBA-like playable characters. That, at its core, is what we'd call a “team shooter,” a genre that's been popularized most recently by Team Fortress 2.
The game is still going through closed beta testing, with select Battle.net accounts receiving invites to play-test the game over a few weekends. This weekend's test was, according to Overwatch PR Manager Steven Khoo, an attempt at learning “how Overwatch runs on your system” and a reach-out for “technical feedback.” We figured we'd throw ten video cards at the game and see how it does.
Overwatch isn't particularly GPU intensive, but it does make use of some advanced shadow and reflection techniques that can impact FPS. We performed some initial settings analysis – shown further down – to determine top-level performance impact on a per-setting basis. This is the basis of our eventual graphics optimization guide (see: Black Ops equivalent), something we'll finalize at the game's launch. For now, the goal was to provide a foundation upon which to base our GPU test methodology with Overwatch. This graphics card benchmark looks at the best GPUs for Overwatch (beta), testing 1080p, 1440p, and 4K resolutions across “Epic” and “Ultra” settings.
Star Wars Battlefront CPU Benchmark – When Does the GPU Bottleneck?
Sunday, 22 November 2015
Battlefront is one of the best-optimized games right now, strictly looking at the graphics-versus-framerate output across multiple GPUs. The game fronts brilliant shading, lighting, and post-FX, leveraging what appears to be some form of PBR (physically-based rendering, though we're not positive) to create a more realistic aesthetic without hammering draw calls and polys.
That was all tested on an X99 platform, though, so we figured it'd be worth a look at Battlefront's fluidity across our (still limited) CPU suite. We benchmarked Battlefront with the Intel lineup (G3258 to i7) and some of AMD's FX CPUs, including one APU + dGPU combination. Anything not present here means one of two things: We either don't have it or it is presently being used for another benchmark, which accounts for quite a few CPUs, given game launch season.
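As a rough illustration of how “when does the GPU bottleneck” falls out of this kind of data: once average FPS stops scaling with a faster CPU, the GPU has become the limiter. The sketch below uses hypothetical FPS figures, not the article's measured results:

```python
# Hypothetical average FPS per CPU with the same GPU and settings; placeholder
# numbers for illustration, not the article's measured results.
cpu_results = [
    ("G3258",    71.0),
    ("FX-8370",  96.0),
    ("i5-6600K", 118.0),
    ("i7-5930K", 121.0),
]

# If moving to a faster CPU gains less than ~3%, the GPU has likely become the limiter.
THRESHOLD_PERCENT = 3.0
for (slower_cpu, fps_a), (faster_cpu, fps_b) in zip(cpu_results, cpu_results[1:]):
    gain = (fps_b - fps_a) / fps_a * 100.0
    state = "likely GPU-bound" if gain < THRESHOLD_PERCENT else "still CPU-bound"
    print(f"{slower_cpu} -> {faster_cpu}: +{gain:.1f}% ({state})")
```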