As we’ve done in the past for GTA V and Watch_Dogs 2, we’re now taking a look at Destiny 2’s texture resolution settings. Our other recent Destiny 2 content includes our GPU benchmark and CPU benchmark.
All settings other than texture resolution were loaded from the highest preset and left untouched for these screenshots. There are five degrees of quality, but only highest, medium, and lowest are shown here to make differences more obvious. The blanks between can easily be filled in.
Blizzard announced in January that Overwatch had surpassed the 25 million player milestone, but despite being nearly a year old, there’s still no standardized way to benchmark the game. We’ve developed our own method instead, which we’re debuting with this GPU optimization guide.
Overwatch is an unusual title for us to benchmark. As a first-person shooter, the priority for many players is sustained high framerates rather than overall graphical quality. Although Overwatch isn’t incredibly demanding (original recommended specs were a GTX 660 or a Radeon HD 7950), users with mid-range hardware might have a hard time staying above 60FPS at the highest presets. This Overwatch GPU optimization guide is for those users, with some graphics settings explanations straight from Blizzard to GN.
We’re on to Episode 43 of Ask GN, which means we’ve passed 42 – which means we missed the perfect opportunity to answer questions about “life, the universe, and everything.” Ah, well.
In episode 43, we’re talking through how to figure out whether your CPU is bottlenecking your GPU (or vice versa), laptop thermal refurbishment (copper shims, thermal pads, thermal paste), and more. A good few minutes of the video are spent addressing a question about “Temporal Filtering,” one of the new-ish settings that’s appeared in a few Ubisoft games lately – most recently in Watch Dogs 2. We define that here.
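As a rough illustration of that bottleneck check, the sketch below samples per-core CPU load alongside GPU load while a game runs: a pegged core with an underutilized GPU suggests a CPU bottleneck, and the reverse suggests a GPU bottleneck. It assumes an nVidia card (for nvidia-smi) and the psutil package, and the 95% thresholds are illustrative, not definitive:

```python
# Rough CPU-vs-GPU bottleneck check: sample per-core CPU load and GPU load
# while the game runs. A pegged core with an underutilized GPU suggests a
# CPU bottleneck; a saturated GPU with CPU headroom suggests a GPU bottleneck.
import subprocess
import psutil  # pip install psutil

def gpu_utilization():
    # Queries the first GPU's load percentage via nvidia-smi (nVidia only).
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"])
    return float(out.decode().strip().splitlines()[0])

def diagnose(samples=10):
    gpu, core = [], []
    for _ in range(samples):
        # Games often pin one main thread, so watch the busiest core,
        # not the all-core average.
        core.append(max(psutil.cpu_percent(interval=1.0, percpu=True)))
        gpu.append(gpu_utilization())
    avg_gpu, avg_core = sum(gpu) / samples, sum(core) / samples
    if avg_gpu > 95:
        return f"GPU-bound: GPU {avg_gpu:.0f}%, busiest core {avg_core:.0f}%"
    if avg_core > 95:
        return f"Likely CPU-bound: busiest core {avg_core:.0f}%, GPU {avg_gpu:.0f}%"
    return f"No clear bottleneck: GPU {avg_gpu:.0f}%, busiest core {avg_core:.0f}%"

print(diagnose())
```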
For written content today, check out our revised WD Blue vs. Black vs. Red guide that defines Western Digital’s rainbow of hard drives. It’s been updated a bit since our original piece.
The goal of this content is to show that choosing between HBAO and SSAO has negligible impact on Battlefield 1 performance. This benchmark arose following our Battlefield 1 GPU performance analysis, which demonstrated consistent frametimes and frame delivery on both AMD and nVidia devices when using DirectX 11. Two of our YouTube commenters asked if HBAO would create a performance swing favoring nVidia over AMD and, although we've discussed this topic with several games in the past, we decided to revisit it for Battlefield 1. This time, we'll also spend a bit of time defining what ambient occlusion actually is and how screen-space occlusion relies on information strictly within the z-buffer, then look at the performance cost of HBAO in BF1.
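To make the z-buffer point concrete, here's a toy sketch of the core screen-space idea: for each pixel, sample nearby depth values and darken the pixel when neighbors sit meaningfully closer to the camera. Real SSAO/HBAO implementations work in view space with hemisphere or horizon-based kernels in a shader; this depth-only NumPy version is purely illustrative:

```python
# Toy screen-space ambient occlusion: occlusion is estimated purely from the
# depth (z) buffer -- no scene geometry is consulted, which is why SSAO-family
# effects cost roughly the same regardless of scene complexity.
import numpy as np

def ssao(depth, radius=4, samples=16, bias=0.02):
    rng = np.random.default_rng(0)
    offsets = rng.integers(-radius, radius + 1, size=(samples, 2))
    occlusion = np.zeros_like(depth)
    for dy, dx in offsets:
        # Shift the whole depth buffer so every pixel reads its neighbor at once.
        # (np.roll wraps at screen edges -- acceptable for a toy version.)
        neighbor = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        # A neighbor meaningfully closer to the camera occludes this pixel.
        occlusion += (depth - neighbor > bias).astype(depth.dtype)
    return 1.0 - occlusion / samples  # 1 = fully lit, 0 = fully occluded

# Usage: ao = ssao(np.random.rand(1080, 1920).astype(np.float32))
```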
We'd also recommend our previous graphics technology deep-dive for folks who want a more technical explanation of what's going on in various AO technologies. Portions of this new article also appear in that deep-dive.
A frame's arrival on the display is predicated on an unseen pipeline of command processing within the GPU. The game's engine calls the shots and dictates what's happening instant-to-instant, while the GPU is tasked with drawing the triangles and geometry, applying textures, rendering lighting and post-processing effects, and dispatching the packaged frame to the display.
The process repeats dozens of times per second – ideally 60 or higher, as in 60 FPS – and is only feasible through joint efforts by GPU vendors (IHVs) and engine, tools, and game developers (ISVs). The canonical view of game graphics rendering starts with the geometry pipeline, where the 3-dimensional model is created. Lighting is then applied to the scene, textures and post-processing follow, and the finished frame is compiled and “shipped” for the gamer's viewing. We'll walk through the GPU rendering and game graphics pipeline in this “how it works” article, with detailed information provided by nVidia Director of Technical Marketing Tom Petersen.
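As a mental model only – the stage names below are hypothetical stand-ins rather than a real graphics API – that canonical ordering can be sketched like this:

```python
# Conceptual frame pipeline: geometry first, then lighting, then
# post-processing, then presentation. Stage bodies are trivial stand-ins.
def geometry_pass(scene):
    # Assemble the triangles that make up the scene's 3D models.
    return [tri for mesh in scene["meshes"] for tri in mesh]

def lighting_pass(triangles, lights):
    # Shade each triangle using the scene's light sources.
    return [(tri, sum(lights)) for tri in triangles]

def post_process(shaded):
    return shaded  # tone mapping, anti-aliasing, bloom, etc. would go here

def present(frame):
    print(f"frame shipped to display: {len(frame)} shaded triangles")

scene = {"meshes": [[("tri", 0), ("tri", 1)], [("tri", 2)]]}
present(post_process(lighting_pass(geometry_pass(scene), lights=[0.8, 0.4])))
```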
We spoke exclusively with the Creative Assembly team about its game engine optimization for the upcoming Total War: Warhammer. Major moves to optimize and refactor the game engine include DirectX 12 integration, better CPU thread management (decoupling the logic and render threads), and GPU-assigned processing to lighten the CPU load.
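To illustrate what decoupling the logic and render threads means in practice, here's a minimal, hypothetical sketch (not Creative Assembly's actual code): the simulation thread publishes state snapshots at a fixed tick rate, while the render thread consumes the newest snapshot and draws as fast as it can, so neither stalls the other:

```python
# Hypothetical sketch of a decoupled logic/render loop: the logic thread
# simulates at a fixed tick rate and publishes state snapshots; the render
# thread draws the newest snapshot as fast as it can.
import queue
import threading
import time

snapshots = queue.Queue(maxsize=1)  # holds only the freshest world state
running = True

def logic_thread():
    tick = 0
    while running:
        tick += 1                    # advance the simulation one step
        try:
            snapshots.put_nowait({"tick": tick})
        except queue.Full:
            pass                     # renderer hasn't consumed the last one; drop it
        time.sleep(1 / 60)           # fixed 60 Hz logic rate

def render_thread():
    latest = None
    while running:
        try:
            latest = snapshots.get_nowait()  # grab the newest simulation state
        except queue.Empty:
            pass                     # nothing new; re-render the last snapshot
        # ...draw calls using `latest` would go here...
        time.sleep(1 / 144)          # stand-in for per-frame GPU work

threads = [threading.Thread(target=logic_thread),
           threading.Thread(target=render_thread)]
for t in threads:
    t.start()
time.sleep(0.5)
running = False
for t in threads:
    t.join()
```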
The interview with Al Bickham, Studio Communications Manager at Creative Assembly, can be found in its entirety below. We hope to revisit the topic of DirectX 12 support within the Total War: Warhammer engine soon.
Our East Coast Game Conference coverage kicks off with Epic Games' rendering technology, specifically as it pertains to implementation within the upcoming MOBA “Paragon.” Epic Games artist Zak Parrish covered topics relating to hair, skin, eyes, and cloth, providing a top-level look at game graphics rendering techniques and pipelines.
The subject was Sparrow, a playable archer hero with intensely detailed, braided hair and lighting. Parrish used Sparrow to demonstrate each of his rendering points – but we'll start with sub-surface scattering, which may be a bit of a throwback for readers of our past screen-space subsurface scattering article (more recently, our Black Ops III graphics guide).
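As a quick refresher on the screen-space variant, the toy sketch below blurs a diffuse lighting buffer only between pixels at similar depth, approximating light bleeding beneath skin. Real implementations run profile-weighted separable blurs in a shader; this NumPy version, and its constants, are illustrative only:

```python
# Toy screen-space subsurface scattering: blur the diffuse lighting buffer,
# but only between pixels at similar depth, approximating light bleeding
# beneath skin. One horizontal pass shown; real separable blurs run a
# vertical pass too, with per-channel scattering profiles.
import numpy as np

def sss_blur(diffuse, depth, width=5, depth_cutoff=0.01):
    out = np.copy(diffuse)
    weight_sum = np.ones_like(diffuse)
    for dx in range(1, width + 1):
        falloff = np.exp(-dx * dx / (2.0 * width))  # Gaussian-ish distance weight
        for direction in (dx, -dx):
            sample = np.roll(diffuse, direction, axis=1)
            sample_depth = np.roll(depth, direction, axis=1)
            # Reject samples across depth discontinuities (different surfaces).
            valid = np.abs(depth - sample_depth) < depth_cutoff
            out += sample * falloff * valid
            weight_sum += falloff * valid
    return out / weight_sum

# Usage: blurred = sss_blur(np.random.rand(720, 1280), np.ones((720, 1280)))
```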
In our latest graphics technology interview – one of many from GDC 2016 – we spoke with Crytek's Frank Vitz about CryEngine's underlying graphics tech. Included in the discussion is a brief-but-technical overview of DirectX 12 integration (and why it's more than just a wrapper), particle FX, audio tech, and more.
This follows Crytek's announcement of CryEngine V (published here), where we highlighted the company's move to fully integrate DirectX 12 into the new CryEngine. As an accompaniment to this interview, we'd strongly encourage a read-through of our 2,000-word document on new graphics technologies shown at GDC 2016.
GDC 2016 marks further advancement in game graphics technology, including a somewhat uniform platform update across the three major game engines – CryEngine (now updated to version V), Unreal Engine, and Unity – all synchronously pushing improved game fidelity. We were able to speak with nVidia to get in-depth and hands-on with some of the industry's newest gains in video game graphics, particularly involving voxel-accelerated ambient occlusion, frustum tracing, and volumetric lighting. Anyone who's benefited from our graphics optimization guides for Black Ops III, the Witcher, and GTA V should hopefully enjoy the new game graphics knowledge in this post.
The major updates come down the pipe through nVidia's GameWorks SDK version 3.1 update, which is being pushed to developers and engines in the immediate future. NVidia's GameWorks team is announcing five new technologies at GDC:
Volumetric Lighting algorithm update
Voxel-Accelerated Ambient Occlusion (VXAO)
High-Fidelity Frustum-Traced Shadows (HFTS)
Flow (combustible fluid, fire, and smoke – a dynamic grid simulator and renderer in Dx11/12)
GPU Rigid Body tech
This article introduces the new technologies and explains how, at a low level, VXAO (voxel-accelerated ambient occlusion), HFTS (high-fidelity frustum-traced shadows), volumetric lighting, Flow (CFD), and rigid bodies work.
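For a taste of the VXAO concept before the full explanation, the hypothetical sketch below reduces voxel-based occlusion to its simplest form: the scene becomes a 3D occupancy grid, and occlusion at a point is estimated by stepping rays through that grid. NVidia's actual implementation cone-traces a mip-mapped voxel hierarchy on the GPU; the names and the per-voxel blocking factor here are illustrative stand-ins:

```python
# Drastically simplified stand-in for VXAO: the scene is voxelized into an
# occupancy grid, then occlusion at a point is estimated by stepping rays
# through the grid. The 0.5 per-voxel blocking factor is made up.
import numpy as np

def voxel_ao(occupancy, point, directions, steps=8):
    occlusion = 0.0
    for d in directions:
        transmittance = 1.0
        for i in range(1, steps + 1):
            p = np.round(point + d * i).astype(int)  # walk one voxel per step
            if (p < 0).any() or (p >= np.array(occupancy.shape)).any():
                break                    # left the grid: open sky in this direction
            if occupancy[tuple(p)]:
                transmittance *= 0.5     # each occupied voxel blocks some light
        occlusion += 1.0 - transmittance
    return 1.0 - occlusion / len(directions)  # 1 = open, 0 = fully occluded

# Usage: grid = np.zeros((32, 32, 32), bool); grid[10:20, 10:20, 10:20] = True
# ao = voxel_ao(grid, np.array([16.0, 16.0, 8.0]), [np.array([0.0, 0.0, 1.0])])
```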
Readers interested in this technology may also find AMD's HDR display demo a worthy look.
Before digging in, our thanks to nVidia's Rev Lebaredian for his patient, engineering-level explanation of these technologies.
NVidia's implementation of volumetric lighting utilizes tessellation to render light shafts and the illumination of air. This approach allows better lighting when light sources are occluded by objects, or when part of a light source is obscured, but requires that the GPU perform tessellation crunching to draw the light effects to the screen. NVidia is good at tessellation thanks to its architecture and specific optimizations; AMD isn't as strong here – Team Red regularly struggles with nVidia-implemented technologies that drive tessellation for visual fidelity, as seen with the Witcher's hair.
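For readers who want the intuition behind the effect itself: volumetric lighting estimates how much light the air along each view ray scatters toward the camera. The sketch below is a generic single-scattering ray march under that definition – an illustration of the computation, not nVidia's tessellation-based method; the occluder test and constants are hypothetical:

```python
# Generic single-scattering ray march -- a conceptual stand-in for what
# volumetric lighting computes (light scattered toward the camera by air),
# not nVidia's tessellated-light-volume implementation described above.
import numpy as np

def ray_march_scattering(camera, view_dir, light_pos, in_shadow,
                         steps=32, max_dist=50.0, density=0.02):
    scattered = 0.0
    for i in range(steps):
        # Sample points along the view ray through the air volume.
        p = camera + view_dir * (max_dist * (i + 0.5) / steps)
        if in_shadow(p):
            continue                      # occluded sample: no light to scatter
        dist = np.linalg.norm(light_pos - p)
        scattered += density / (1.0 + dist * dist)  # inverse-square falloff
    return scattered * (max_dist / steps)           # integrate over the ray

# Usage with a trivial occluder (a wall shadows everything past x = 10):
wall = lambda p: p[0] > 10.0
shaft = ray_march_scattering(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                             np.array([5.0, 5.0, 0.0]), wall)
```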
When benchmarking Fallout 4 on our lineup of GPUs, we noticed that the R9 390X was outclassed by the GTX 970 at 1080p with ultra settings. This raised red flags that warranted further investigation; we tuned each setting individually and ultimately found that the 970 always led the 390X in our tests, no matter the configuration. Some settings, like shadow distance, can produce massive performance deltas (about 16-17% here), but still conclude with the 970 in the lead. It isn't until resolution is increased to 1440p that the 390X takes charge – somewhat expected, given AMD's ability to handle raw pixel counts at the high end.
Further research was required.