Forthcoming shooter Overwatch is Blizzard's first new IP in years, fusing familiar FPS and team-based elements with MOBA-like playable characters. That fusion, at its core, is what we'd call a "team shooter," a genre popularized most recently by Team Fortress 2.

The game is still going through closed beta testing, with select accounts receiving invites to play-test the game over a few weekends. This weekend's test was, according to Overwatch PR Manager Steven Khoo, an attempt to learn "how Overwatch runs on your system" and a request for "technical feedback." We figured we'd throw ten video cards at the game and see how it does.

Overwatch isn't particularly GPU intensive, but it does make use of some advanced shadow and reflection techniques that can impact FPS. We performed some initial settings analysis – shown further down – to determine top-level performance impact on a per-setting basis. This is the basis of our eventual graphics optimization guide (see: Black Ops equivalent), something we'll finalize at the game's launch. For now, the goal was to provide a foundation upon which to base our GPU test methodology with Overwatch. This graphics card benchmark looks at the best GPUs for Overwatch (beta), testing 1080p, 1440p, and 4K resolutions across “Epic” and “Ultra” settings.

Battlefront is one of the best-optimized games right now, strictly looking at the graphics-versus-framerate output across multiple GPUs. The game fronts brilliant shading, lighting, and post-FX, leveraging what appears to be some form of PBR (physically-based rendering) – though we're not positive – to create a more realistic aesthetic without hammering draw calls and polys.

That was all tested on an X99 platform, though, so we figured it'd be worth a look at Battlefront's fluidity across our (still limited) CPU suite. We benchmarked Battlefront with the Intel lineup (G3258 to i7) and some of AMD's FX CPUs, including one APU + dGPU combination. Anything not present here means one of two things: We either don't have it or it is presently being used for another benchmark, which accounts for quite a few CPUs, given game launch season.

We've been conducting CPU benchmarks on Star Wars Battlefront over the past few days and, thanks to DRM install limitations, it's taken a lot longer than normally. The testing has finally progressed to low-end CPUs, including the A10-7870K ($128) and popular Intel Pentium G3258 ($50) dual-core processor. The 7870K posed no issues with Battlefront – performance is nothing phenomenal, but it works – but the G3258 didn't work at all.

This limitation was a result of Battlefront's forced CPU requirements: the game demands a quad-core CPU, and the Pentium G3258 produces a black screen when launching Battlefront. Interestingly, the beta seemed to work on the G3258 just fine – again, not the best FPS, but it worked – but that has ceased with the full launch. We're seeing a black screen with max FPS (200, capped) that accepts console input, but doesn't actually output video. This threw a flag that the game should still work with the G3258, even if poorly, and we decided to do some research.

Assassin's Creed: Unity was the last AC title we benched, and it led us to an exciting discovery: Some games, like AC Unity did, will present a sizable performance disparity between 2GB and 4GB models of the same video card. The Assassin's Creed series has long been heralded as a standard bearer for lighting, shading, and FX technologies, emboldening its geometrically complex environments with supporting filtration and processing. Lighting and shadows propel games like Assassin's Creed to a point in visual fidelity where, without bloating poly-count beyond reason, the models and meshes look smooth and sculpted beyond the geometry's own means, were it unassisted by lighting.

Ubisoft's Assassin's Creed Syndicate was made available to us last night, at which point it was immediately used for benchmarking AMD's brand new R9 380X GPU. This graphics card benchmark of Assassin's Creed Syndicate tests the game at Ultra and Medium settings across 1080p, 1440p, and 4K resolutions. All the latest cards are present in our Syndicate GPU benchmark – the GTX 980 Ti, 980, 970, 960 4GB vs. 2GB, 950, and 750 Ti from nVidia; the R9 390X ($240), R9 380X, R9 290X, R9 285, and R9 270X from AMD.

Most notably, AC Syndicate carries Unity's legacy of truly accentuating the 2GB vs. 4GB VRAM gap in the GTX 960 cards, something that should, theoretically, propagate to other multi-size models (like the R9 380, if we had them).

October 9 saw the conclusion of a beta week full of GPU benchmarking, with initial performance numbers that were agreeable for nearly all modern, $120+ GPUs. We put everything from the 750 Ti to the 390X – and above – to the test, and ultimately concluded that the ideal GPU selections were the R9 380 & GTX 960 for 1080p and the GTX 970 & R9 390X for 1440p. But that was the beta, something we indicated amply in the first benchmark, and those numbers had potential to shift as the game approached launch.

Star Wars Battlefront is now officially out for PC. Our refreshed Star Wars Battlefront GPU benchmark tests FPS output at 1080p, 1440p, and 4K, using Ultra, High, and Medium settings. The tests, as always, bench the GTX 980 Ti vs. the 980, 970, 960, and downward, alongside AMD's R9 390X vs. the R9 290X, 285, 270X, and 250X.

Below is a look at the game's graphics settings maxed-out at 4K, followed by a quick recap of the Battlefront graphics settings and what they do.

During GPU and CPU benchmarking for Battlefront, we encountered a 200FPS framerate cap / lock when using high-end configurations. For accurate benchmarking, this FPS limit had to be removed to allow the "natural," or absolute, performance of the hardware on-bench. Thankfully, because Battlefront runs on the same Frostbite engine as previous Battlefield titles, the console commands are all nearly identical to Battlefield 4's.

Here's the command you want to unlock FPS and disable the FPS limit in Star Wars Battlefront:

Fallout 4 – now an entire day old – is nearing the end of its content lifecycle on our test bench. We'll soon move on to the next major hardware and games, but for now, we've got a few tests left in Bethesda's latest title. This next benchmark looks at the game's CPU performance, a greater methodological challenge than our preceding GPU benchmark, volumetric lighting benchmark, and texture quality comparison.

Our Fallout 4 CPU benchmark compares FPS across the Intel & AMD lineups, including an i3-4130, i5-4690K, i5-6600K, some i7 CPUs, and AMD's FX-8370E, 8320E, & 9590. Other CPUs were tested as well, like the popular G3258 dual-core Pentium and A10-7870K APU, and are all included on the benchmark charts. The top-level goal was to find the best CPU for Fallout 4, but we also wanted to identify performance disparities and anomalies across models.

Following our Fallout 4 GPU benchmark and the follow-up Volumetric Lighting Benchmark, we're now looking toward Bethesda's odd texture quality presentation. Fallout 4 includes a Texture Quality option, which should dictate the resolution of textures as applied to game elements. The setting scales from medium to ultra, with one step ("high") in between. Usually – especially in Black Ops III, as we found – texture resolution can have a profound impact on performance and VRAM consumption, leading us to run some texture tests in Fallout.
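For readers who want a rough, at-home look at VRAM consumption while flipping the Texture Quality setting, one simple approach on nVidia cards is to poll nvidia-smi from a small script. The sketch below is illustrative only – it is not the logging setup behind our charts, and it assumes nvidia-smi is available on the system PATH:

# Bare-bones VRAM sampler. Illustrative only; assumes an nVidia GPU and that
# nvidia-smi is on the system PATH.
import subprocess
import time

def vram_used_mib() -> int:
    # Ask the driver for dedicated memory currently in use, in MiB (first GPU listed).
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=memory.used",
        "--format=csv,noheader,nounits",
    ])
    return int(out.decode().strip().splitlines()[0])

samples = []
for _ in range(60):  # sample once per second for a minute of gameplay
    samples.append(vram_used_mib())
    time.sleep(1)

print(f"Peak: {max(samples)} MiB | Average: {sum(samples) / len(samples):.0f} MiB")

Run it once per texture setting during a comparable gameplay segment and compare the peaks; dedicated tools like GPU-Z or MSI Afterburner will report the same figure with less effort.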

Here's an example of what we're used to when it comes to texture quality comparisons:

NVidia's implementation of volumetric lighting utilizes tessellation to render light shafts and the illumination of the air. This approach allows for better lighting when light sources are occluded by objects, or when part of a light source is obscured, but it requires that the GPU perform tessellation crunching to draw the light effects to the screen. NVidia handles tessellation well thanks to its architecture and specific optimizations, but AMD isn't as strong at it – Team Red regularly struggles with nVidia-implemented technologies that lean on tessellation for visual fidelity, as seen with HairWorks in The Witcher 3.

When benchmarking Fallout 4 on our lineup of GPUs, we noticed that the R9 390X was outclassed by the GTX 970 at 1080p with ultra settings. This set off a few red flags and prompted further investigation; we tuned each setting individually and ultimately found that the 970 always led the 390X in our tests – no matter the configuration. Some settings, like shadow distance, can produce massive performance deltas (about 16-17% here), but still conclude with the 970 in the lead. It isn't until resolution is increased to 1440p that the 390X takes charge, which is somewhat expected given AMD's ability to handle raw pixel throughput at the high end.
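For clarity on how a delta like that is expressed, it's a simple percent change against the heavier setting as the baseline. The numbers below are purely hypothetical, used only to show the arithmetic, not values pulled from our charts:

# Hypothetical average FPS values for illustration only, not figures from our charts.
fps_shadow_ultra = 60.0   # heavier shadow distance setting (hypothetical)
fps_shadow_high = 70.0    # lighter shadow distance setting (hypothetical)

# Percent difference relative to the heavier baseline setting.
delta_pct = (fps_shadow_high - fps_shadow_ultra) / fps_shadow_ultra * 100
print(f"Shadow distance delta: {delta_pct:.1f}%")  # ~16.7% with these inputs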

Further research was required.

