GDC 2016 marks further advancement in game graphics technology, including a fairly uniform platform update across the three major game engines: CryEngine (now updated to version V), Unreal Engine, and Unity, all synchronously pushing improved game fidelity. We spoke with nVidia to get in-depth and hands-on with some of the industry's newest gains in video game graphics, particularly voxel-accelerated ambient occlusion, frustum tracing, and volumetric lighting. Anyone who's used our graphics optimization guides for Black Ops III, The Witcher, and GTA V should find plenty of new game graphics knowledge in this post.

The major updates come down the pipe through nVidia's GameWorks SDK 3.1, which is being pushed to developers and engines in the immediate future. NVidia's GameWorks team is announcing five new technologies at GDC:

  • Volumetric Lighting algorithm update

  • Voxel-Accelerated Ambient Occlusion (VXAO)

  • High-Fidelity Frustum-Traced Shadows (HFTS)

  • Flow (combustible fluid, fire, and smoke simulation and rendering on a dynamic grid in DX11/12)

  • GPU Rigid Body tech

This article introduces the new technologies and explains, at a low level, how VXAO (voxel-accelerated ambient occlusion), HFTS (high-fidelity frustum-traced shadows), volumetric lighting, Flow (CFD), and rigid bodies work.

Readers interested in this technology may also find AMD's HDR display demo a worthy look.

Before digging in, our thanks to nVidia's Rev Lebaredian for his patient, engineering-level explanation of these technologies.

NVidia's implementation of volumetric lighting uses tessellation to generate light shafts and illuminate the air. This approach improves lighting when light sources are occluded by objects or when part of a light source is obscured, but it requires that the GPU perform tessellation work to draw the light effects to the screen. NVidia handles tessellation well thanks to its architecture and specific optimizations, but AMD isn't as strong here – Team Red regularly struggles with nVidia-implemented technologies that lean on tessellation for visual fidelity, as seen with The Witcher 3's HairWorks.
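To give a rough sense of what "illuminating the air" actually computes, here is a minimal, CPU-side single-scattering sketch in Python. It marches along the view ray and accumulates light scattered toward the camera; this is a generic illustration of the scattering idea, not nVidia's tessellation-based light-shaft extrusion, and every name and constant in it is hypothetical.

```python
# Minimal single-scattering sketch: march through the air between the camera
# and a surface point, accumulating light that scatters toward the viewer.
# Generic illustration only -- NOT nVidia's tessellation-based approach.

def in_shadow(sample_pos, light_pos):
    # Placeholder occlusion test; a real renderer would sample a shadow map.
    return False

def single_scattering(camera_pos, surface_pos, light_pos, steps=32, density=0.02):
    scattered = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps
        # Point along the view ray, between the camera and the shaded surface.
        sample = [c + (s - c) * t for c, s in zip(camera_pos, surface_pos)]
        if not in_shadow(sample, light_pos):
            # Light reaching this bit of air scatters toward the camera;
            # density controls how "thick" the participating medium is.
            scattered += density / steps
    return scattered

print(single_scattering([0, 0, 0], [0, 0, 10], [5, 5, 0]))
```

Occluded samples contribute nothing, which is why light shafts appear to stream around the objects blocking the source.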

When benchmarking Fallout 4 on our lineup of GPUs, we noticed that the R9 390X was outclassed by the GTX 970 at 1080p with ultra settings. This raised a few red flags and prompted further investigation; we tuned each setting individually and ultimately found that the 970 always led the 390X in our tests, no matter the configuration. Some settings, like shadow distance, can produce massive performance deltas (about 16-17% here), but still leave the 970 in the lead. It isn't until resolution is increased to 1440p that the 390X takes the lead, which is somewhat expected given AMD's strength in handling raw pixel counts at the high end.

Further research was required.

During the GTA V craze, we posted a texture resolution comparison that showcased the drastic change in game visuals from texture settings. The GTA content also revealed VRAM consumption and the effectively non-existent framerate impact of the texture setting. The Witcher 3 has a similar “texture quality” setting in its graphics options, something we briefly mentioned in our Witcher 3 GPU benchmark.

This Witcher 3 ($60) texture quality comparison shows screenshots at the Ultra, High, Normal, and Low settings at 4K resolution. We also measured maximum VRAM consumption for each setting, hoping to determine whether VRAM-limited devices could benefit from dropping texture quality. Finally, in-game FPS was measured to determine the “cost” of higher quality textures.
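For readers curious how per-setting VRAM use can be captured on their own systems, a minimal sketch follows. It polls nvidia-smi (a real utility shipped with nVidia's drivers) once per second and records the peak reading; the loop, interval, and output handling are our own illustration, not the exact procedure used for this comparison.

```python
# Poll GPU memory usage once per second via nvidia-smi while a game runs.
# nvidia-smi and its --query-gpu flags are real; the loop and interval here
# are illustrative, not the exact method used for this article.
import subprocess, time

def vram_used_mib():
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=memory.used",
        "--format=csv,noheader,nounits",
    ])
    return int(out.decode().splitlines()[0])

peak = 0
try:
    while True:
        peak = max(peak, vram_used_mib())
        time.sleep(1)
except KeyboardInterrupt:
    print(f"Peak VRAM observed: {peak} MiB")
```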

“Tessellation” isn't an entirely new technology – Epic and Crytek have been talking about it for years, alongside nVidia's own pushes – but it's been getting more visibility in modern games. GTA V, for instance, has a dedicated tessellation toggle that can be tweaked for performance. As with most settings found in a graphics menu, the general understanding of tessellation is nebulous at best; it's one of those settings that, perhaps like ambient occlusion or anti-aliasing, has a loose tool-tip of a definition but rarely gets broken down with much depth.

As part of our efforts to expand our game graphics settings glossary, we sat down with Epic Games Senior Technical Artist Alan Willard, a 17-year veteran of the company. Willard provided a basic overview of tessellation, how it's used in game graphics, its GPU load and performance impact, and implementation techniques.
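As a rough, CPU-side illustration of what the tessellator does geometrically, the sketch below subdivides one coarse triangle into progressively smaller triangles, which a displacement step could then push around for added surface detail. It is illustrative Python only, not hull/domain shader code or Epic's implementation.

```python
# Conceptual sketch of tessellation: subdivide a coarse triangle into smaller
# triangles that can later be displaced for added detail. Illustration only.

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2.0 for i in range(3))

def subdivide(tri, levels):
    """Recursively split one triangle into four per level."""
    if levels == 0:
        return [tri]
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    for t in [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]:
        out.extend(subdivide(t, levels - 1))
    return out

coarse = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(len(subdivide(coarse, 3)))  # 64 triangles from one input triangle
```

The cost scales quickly – three levels of subdivision already turn one triangle into 64 – which is why tessellation factors are usually tied to distance from the camera.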

Physically-based rendering promises photorealistic lighting in 3D environments by offering a mathematical, less production-intensive approach to the rendering of light. The three-letter acronym – “PBR” – has circulated lately as industry frontrunners like Chris Roberts (Star Citizen) have touted its presence in triple-A titles. A few game engines come to mind when we think of advanced, hyper-realistic graphics capabilities; one of those engines is CryEngine, developed and maintained by Crytek and best known for advancing PC graphics with Crysis.

While at GDC, we had the opportunity to ask a pair of CryEngine developers to explain what PBR is, how it affects gameplay, and whether it has any performance or production impact on games. Joining us in the video below are 3D Environment Artist Sina Els and Engine Programmer Scott Peter, who provide a top-level definition of physically-based rendering and its uses.
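For readers who want a concrete anchor, below is a simplified scalar sketch of the microfacet specular term (GGX distribution, Schlick Fresnel, Smith-style visibility) commonly described in PBR literature. It is a generic illustration, not CryEngine's shading code, and it omits many production details such as energy-conserving diffuse terms and image-based lighting.

```python
# Generic microfacet specular term as commonly described for physically-based
# rendering. Simplified scalar sketch for illustration; not CryEngine's code.
import math

def d_ggx(n_dot_h, roughness):
    a2 = roughness ** 4  # remapped roughness (alpha = roughness^2)
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def fresnel_schlick(v_dot_h, f0):
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def g_smith(n_dot_v, n_dot_l, roughness):
    k = (roughness + 1.0) ** 2 / 8.0
    gv = n_dot_v / (n_dot_v * (1.0 - k) + k)
    gl = n_dot_l / (n_dot_l * (1.0 - k) + k)
    return gv * gl

def specular_brdf(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0=0.04):
    d = d_ggx(n_dot_h, roughness)
    f = fresnel_schlick(v_dot_h, f0)
    g = g_smith(n_dot_v, n_dot_l, roughness)
    return (d * f * g) / (4.0 * n_dot_l * n_dot_v)

print(specular_brdf(0.7, 0.8, 0.95, 0.9, roughness=0.3))
```

The appeal for artists is that roughness and a base reflectance (f0) behave consistently under any lighting, rather than being hand-tuned per scene.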

The only perceivable competitive threat faced by the world’s most successful silicon company, Intel, is the one posed by ARM. For an understanding of just how large Intel is, we can use market capitalization as a relative measurement: AMD sits under $3B these days, NVIDIA (for point of reference) is marked at $12.19B, ARM has grown to $25.5B, and Intel’s market cap rests near a staggering $161B. AMD is a non-threat, but ARM has continually ensured fierce competition in the mobile and integrated devices markets with its low-TDP, high-performance processors.

ARM wasn’t at GDC to talk about its CPUs, though.

Part of our daily routine includes extensive graphics benchmarking of various video cards and games, often involving configuration, overclocking, and performance tweaks. As part of these benchmarks, we publish tables comparing FPS across the most popular graphics cards, ultimately helping determine the true requirements for gaming at a high FPS.

Although our test methodology includes extra steps to ensure an isolated, clean operating environment for benchmarking, the basics of testing can be executed on everyday gaming systems. This article explains how to benchmark your graphics card, framerate (FPS), and video games to determine whether your PC can play a game. Note that we've simplified our methodology for implementation outside of a more professional environment.
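As a small example of the arithmetic involved, the sketch below turns a frametime log (milliseconds per frame, one value per line) into an average FPS and a 1% low figure. The file name is hypothetical; logging tools such as FRAPS or PresentMon can export frametimes in this general form.

```python
# Compute average FPS and 1% low from a frametime log (ms per frame, one
# value per line). The file name is hypothetical.

def summarize(frametimes_ms):
    fps = [1000.0 / ms for ms in frametimes_ms]
    avg = sum(fps) / len(fps)
    # 1% low: average of the slowest 1% of frames, a common stutter metric.
    slowest = sorted(fps)[:max(1, len(fps) // 100)]
    low_1 = sum(slowest) / len(slowest)
    return avg, low_1

with open("frametimes.csv") as f:  # hypothetical log file
    times = [float(line) for line in f if line.strip()]

avg_fps, one_pct_low = summarize(times)
print(f"Average: {avg_fps:.1f} FPS, 1% low: {one_pct_low:.1f} FPS")
```

The gap between the average and the 1% low is often more telling than the average alone, since it exposes stutter that a single FPS number hides.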

As part of our new website design – pending completion before CES – we've set forth on a mission to define several aspects of GPU technology with greater specificity than we have previously. One of those aspects is texture fill-rate (or filter rate) and the role of the TMU, or Texture Mapping Unit.

When listing GPU specifications, we often enumerate the clockrate and TMU count, among other specs. These two items are directly related to one another, each used to extrapolate the “texture filter rate” of the GPU. The terms “Texture Fill-Rate” and “Texture Filter Rate” can be used interchangeably. For demonstration purposes, here is a specifications table for the GTX 980 (just because it's recent):
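Whichever figures the table lists, the relationship between them reduces to simple multiplication: TMU count times core clock. A minimal sketch, assuming the GTX 980's commonly published figures of 128 TMUs and an 1126MHz base clock; substitute your own card's specs as needed.

```python
# Texture fill-rate falls out of two listed specs: TMU count and core clock.
# GTX 980 figures (128 TMUs, ~1126MHz base clock) assumed for the example.

def texture_fill_rate_gtexels(tmus, core_clock_mhz):
    # Each TMU can sample one texel per clock, so the peak rate is simply
    # TMUs * clock; dividing by 1000 converts MTexels/s to GTexels/s.
    return tmus * core_clock_mhz / 1000.0

print(texture_fill_rate_gtexels(128, 1126))  # ~144.1 GTexels/s
```

128 TMUs at 1126MHz works out to roughly 144.1 GTexels/s, in line with the figure typically quoted for the card.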

We're currently in the process of GPU benchmarking Lords of the Fallen, a game that our own Nick Pinkerton previewed back at PAX Prime 2013. The game hosts impressive graphics technology thanks in part to a partnership with nVidia, which offers its GameWorks graphics SDK freely to game developers.


Developers CI Games and Deck13 utilized GameWorks (detailed here) to introduce physics-responsive particle effects, soft-body (cloth, fabric) physics effects, volumetric lighting that responds to transparency and surface opacity / reflectivity, and destructible environment effects.

There's been a lot of discussion about Titanfall's performance lately. Our most recent Titanfall GPU performance benchmark showed that the game still exhibits serious issues on certain devices: nVidia cards showed severe stuttering, SLI exhibited micro-stuttering (and works better disabled), and the game is simply needlessly large. All of this taken into account, the performance issues feel almost unjustified for the visuals -- the game looks fine, sure, but it's not melt-your-GPU graphics and certainly isn't spectacular to look at. It's another Source Engine game with adequate graphics. And I'm not saying that's a bad thing, so please don't get me wrong -- just that the performance isn't perfectly tuned, at least not yet. More drivers and patches will smooth that out.


I don't want to come off as too harsh, though. The mechanics are enjoyable for certain types of players and the game overall seems 'good'; it's just experiencing some (now-standard) launch issues with PC optimization. All is survivable.

 
