The Future of Game Graphics: NVIDIA Hair, Fire, Smoke, & Volumetric FX

By Steve Burke | Published March 26, 2014 at 3:01 pm

Day one of GTC saw the presentation of nVidia’s refreshed lineup of VisualFX SDK tools, including GameWorks, FaceWorks, HairWorks, WaveWorks, and other *Works software. These software development kits aid in the creation of games, particularly the optimization of graphically-intensive dynamic elements like fur, fire, and smoke. Graphics and CPU technology companies are often behind many of the game industry’s visual advancements, but as PC gamers, we don’t see much of it actually in-game for several years; development time, adoption rates, and consoles are each partly responsible for the lag.


Let’s look at some of nVidia’s more recent changes for character faces, real-time smoke and fire effects, pre-baked lighting effects, subsurface scattering, deep scattering, and fur/hair technology. It seems pertinent to recommend watching Epic’s Unreal Engine tech demo as well, since it utilizes many of these technologies and render methods; you can read our full article on UE4 here.

NVidia HairWorks (formerly ‘FurWorks’) – Half-a-Million Hairs Rendered

Hair and fur have historically been among the most difficult things to simulate graphically. Hair simulation can involve hundreds of thousands of objects, each requiring independent physics calculations for movement, collision, lighting interactions, and so forth. Because of this, we don’t see individual hairs on our characters or fur on wolves; we generally get a texture that employs tricks, maybe with a few patches of fur. In the case of TES games, that means a giant, clumpy model of hair with a few strays sticking out to bounce around.

HairWorks’ feature set includes the following:

  • Support for off-the-shelf grooming tools.
  • Support for collision shaping (hairs having collision).
  • Shaping & styling control.
  • Self-shadowing hair/fur.
  • Body-to-hair shadow casting.
  • Wind interaction.
  • Levels of detail (can be toned down for lower-end PCs).
  • Scalability.

Before diving into HairWorks, some games that use or will use HairWorks in their PC versions include:

  • The Witcher 3.
  • Call of Duty: Ghosts.

HairWorks was first demoed as “FurWorks” about 4-5 years ago, where we saw a red-haired 3D model with very shiny, shampoo-commercial-esque hair. Things have been relatively quiet since then, other than the odd AMD TressFX announcement, but there have still been advancements.

The SDK originally focused entirely on fur, but has grown to cover longer hairs and other hair-like objects. In theory, an SDK like HairWorks brings us a full coat of hair with individually collisioned, lit, self-shadowed, dynamic objects for each hair. Because rendering half-a-million hairs on a single wolf would be abusive on a GPU (not to mention the other things on the screen that need to be drawn), we can deploy a more limited set of control curves (“guides,” think of these as “guide hairs”) atop the surface/growth mesh of the model by using standard tools – like Max or Maya, or whatever else suits the developer’s fancy.

Once these guide hairs are applied, the data is instanced in nVidia’s DirectX runtime, which then “generates more hairs on-the-fly and simulates their motion on the GPU.” The runtime and GPU work together to interpret the guide hairs and extrapolate additional hairs, created in real-time on the object. Even though they’re generated live, the hairs filling in the patches still have individual object collision and other attributes. Each hair can move dynamically and independently of the others, but will not noclip through adjacent hairs.
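To make that concrete, here’s a minimal sketch of how render hairs could be interpolated from guide hairs. The structure and names are illustrative assumptions, not nVidia’s actual code; HairWorks does this work on the GPU in the DX11 pipeline, detailed below.

```cpp
// Minimal sketch: three guide hairs at a triangle's corners are blended
// with barycentric weights (u + v + w == 1) to spawn a render hair
// anywhere on that triangle.
#include <array>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
using GuideHair = std::vector<Vec3>;  // polyline of control vertices

GuideHair interpolateRenderHair(const std::array<GuideHair, 3>& guides,
                                float u, float v, float w) {
    GuideHair hair(guides[0].size());
    for (size_t i = 0; i < hair.size(); ++i) {
        hair[i].x = u * guides[0][i].x + v * guides[1][i].x + w * guides[2][i].x;
        hair[i].y = u * guides[0][i].y + v * guides[1][i].y + w * guides[2][i].y;
        hair[i].z = u * guides[0][i].z + v * guides[1][i].z + w * guides[2][i].z;
    }
    return hair;
}

int main() {
    // Three trivial two-point guides standing straight up from a triangle.
    std::array<GuideHair, 3> guides = {{
        {{0, 0, 0}, {0, 1, 0}},
        {{1, 0, 0}, {1, 1, 0}},
        {{0, 0, 1}, {0, 1, 1}},
    }};
    GuideHair h = interpolateRenderHair(guides, 0.5f, 0.25f, 0.25f);
    std::printf("interpolated root: (%.2f, %.2f, %.2f)\n", h[0].x, h[0].y, h[0].z);
}
```

Spawn many such hairs per triangle at varied weights, and a few thousand guides fan out into hundreds of thousands of strands.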

Thickness, stiffness, density, and length of the hair are all independently-controlled parameters for creature models.

For real-world samples, you can turn toward the Witcher 3’s horse and Call of Duty: Ghosts’ wolf. NVidia is presently working on more advanced collision handling for HairWorks, so it’s got a ways to improve yet.

[Image: nv-sdk-fur1]

Guide hairs are placed on every vertex of the mesh, and the system is ultimately capable of deriving upwards of 500,000 hairs from something like 10,000 guides. This means the GPU works to render 10k guide hairs and extrapolates the rest, severely reducing GPU cycle and other system resource consumption. As far as the viewer is concerned, they’re looking at 500,000 individual strands; as far as the GPU is concerned, only 10,000 hairs are stored in its buffer, with the rest filled in by the runtime + GPU pipeline.

Straight from the presentation, nVidia’s Tae-Yong Kim explained the HairWorks Runtime as follows; my translation is [in brackets]:


  • Update skinning on the growth mesh and hair root position. [We’re preparing the mesh for where the hairs will be located].
  • Use the skinning data to compute updated ‘target shape’ of hair curves. [Now we are telling the system to define the shape of the hair based on the quad lines; more on this below].
  • Simulate guide curves (DX11 Compute Shader). [Our guides for where hairs will ‘sprout’ are placed; see the simulation sketch after this list].
  • Convert guide curves into more refined spline curves (4x more CVs). [More vertices are added to give greater detail to the hairs].
  • For each triangle of the mesh, tessellate/interpolate many more render hairs (DX11 Tessellation). [Extrapolate based on the guide hairs and draw more hairs in the area].
  • Render shadows and final hair color along with choices on AA. [Self-shadowing hairs and other shadows are drawn; the hair is painted with a color].
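For the guide curve simulation step above, here’s a minimal CPU-side sketch of one common approach (Verlet integration with a length constraint). This is an assumption about the general technique, not HairWorks’ actual compute shader.

```cpp
// Minimal sketch: each guide is a chain of points; the root (index 0) is
// pinned to the growth mesh, gravity is integrated with Verlet, and
// segment lengths are re-enforced so the hair doesn't stretch.
#include <cmath>
#include <vector>

struct P { float x, y, z; };

struct Guide {
    std::vector<P> pos, prev;  // current and previous positions
    float segLen;              // rest length between adjacent points
};

void simulateGuide(Guide& g, float dt, float gravity) {
    // 1. Verlet integration: velocity is inferred from the previous frame.
    for (size_t i = 1; i < g.pos.size(); ++i) {
        P cur = g.pos[i];
        g.pos[i].x += cur.x - g.prev[i].x;
        g.pos[i].y += cur.y - g.prev[i].y - gravity * dt * dt;
        g.pos[i].z += cur.z - g.prev[i].z;
        g.prev[i] = cur;
    }
    // 2. Distance constraint: pull each point back to its rest distance
    //    from its parent point.
    for (size_t i = 1; i < g.pos.size(); ++i) {
        float dx = g.pos[i].x - g.pos[i-1].x;
        float dy = g.pos[i].y - g.pos[i-1].y;
        float dz = g.pos[i].z - g.pos[i-1].z;
        float len = std::sqrt(dx*dx + dy*dy + dz*dz);
        if (len > 0.0f) {
            float s = g.segLen / len;
            g.pos[i].x = g.pos[i-1].x + dx * s;
            g.pos[i].y = g.pos[i-1].y + dy * s;
            g.pos[i].z = g.pos[i-1].z + dz * s;
        }
    }
}
```

Stiffness, wind, and hair-to-hair collision would be additional constraint passes in the same loop.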


Here’s a look at the hair shader pipeline, for those of you who are more interested in the technical details of the underlying SDK:

[Image: nv-sdk-fur2]

Every line is converted into a quad. This allows the hairs to have varied length and attributes at different points along the individual strand: if a developer wanted a wolf’s hair to be skinny and pointy toward the tip but thicker at the base, this enables that. HairWorks also looks at how other hairs are casting and diffusing light, then ensures they each interact with one another appropriately.
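As a rough illustration of that line-to-quad expansion (my sketch, not nVidia’s code), each segment can be widened perpendicular to the view, with an independent width per endpoint, which is what permits the thick-base, fine-tip taper:

```cpp
// Minimal sketch: expand one hair segment (a, b) into a camera-facing
// quad. 'right' is a unit vector perpendicular to both the strand and
// the view ray; widthA/widthB let the strand taper along its length.
struct V3 { float x, y, z; };

void expandSegmentToQuad(V3 a, V3 b, V3 right,
                         float widthA, float widthB, V3 quad[4]) {
    float ha = widthA * 0.5f, hb = widthB * 0.5f;
    quad[0] = { a.x - right.x * ha, a.y - right.y * ha, a.z - right.z * ha };
    quad[1] = { a.x + right.x * ha, a.y + right.y * ha, a.z + right.z * ha };
    quad[2] = { b.x + right.x * hb, b.y + right.y * hb, b.z + right.z * hb };
    quad[3] = { b.x - right.x * hb, b.y - right.y * hb, b.z - right.z * hb };
}
```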

NVidia’s software engineers are presently experimenting with longer hair. We were told that “good collisioning is essential” to producing a complete graphics experience. We will slowly see this tech move toward character model hair (hopefully getting past the ugly presets we’ve been stuck with for decades). As for environmental effects – like wet hair and dirty hair – nVidia said that the rendering is already there, we “just” need simulation; hair clumping is already integrated and hair can be made shinier with heavier specular mapping/effects, but wetness and dirt are still on the table for development.

Most of the technology is processed within the Pixel Shader; shadow sampling is pushed to the Domain Shader. Moving on to light scattering and FaceWorks.

NVidia’s FaceWorks – Subsurface Scattering; Deep Scattering

Nathan Reed walked us through GameWorks and FaceWorks. Ira, nVidia’s resident uncanny valley model, showed off FaceWorks last year to demonstrate the progression toward facial realism convincing enough that our brains begin to question what they’re seeing. Here’s a recap to remind you:

New subsurface scattering and branching deep scattering techniques were shown at GTC 14. Subsurface scattering is the manner in which light bounces around within a 3D model (normally using a 1-pass method); as an example, look at the grittiness of a face without SSS, then add SSS to notice an appearance of smoothness, light reflectivity of the skin (on the cheeks, for instance, where the sun’s light would hit), and a somewhat greasy appearance, as human skin tends to have. Deep Scattering is new: DS is a branch of subsurface scattering that illuminates thinner objects from within, scattering the light beams within (under) the surface. The example provided was that of a human ear or nose, where bright light shining through from the side opposing the camera yielded the dim red color, with underlying veins, that you’d see in the real world.
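A minimal sketch of the intuition behind deep scattering, assuming a simple Beer-Lambert falloff (FaceWorks’ actual model is more sophisticated): light crossing a thin feature is attenuated per color channel, and red survives the longest, which is where the dim red glow of a backlit ear comes from.

```cpp
// Minimal sketch: transmittance of light through 'thicknessMm' of skin.
// The thickness would typically be estimated from a shadow-map depth
// difference. Absorption coefficients are hypothetical placeholders.
#include <cmath>

struct RGB { float r, g, b; };

RGB deepScatterTransmittance(float thicknessMm) {
    const RGB sigma = { 0.4f, 1.2f, 1.8f };  // red absorbs the least
    return { std::exp(-sigma.r * thicknessMm),
             std::exp(-sigma.g * thicknessMm),
             std::exp(-sigma.b * thicknessMm) };
}
```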

[Image: nv-sdk-sss1]

The above shows Ira with and without subsurface scattering. If developers multiply the scattered result by the skin texture, we’ll see veins, bones, and other sub-surface structures revealed in appropriate amounts.

This works only with D3D11 and Windows in its current state, but the presenters were keen to continually say “but other APIs will be added in the future.” OpenGL is on the list of potentially-supported options, as is Dx12 (obviously). No word on Mantle, but it seems highly unlikely.

[Image: nv-sdk-sss2]

The components of subsurface scattering, as defined by Reed, are (1) geometric curvature, (2) normal maps, (3) shadow edges, and (4) ambient light. The new FaceWorks SDK can also diffuse light into the darkness of a shadow thanks to SSS, producing smoother shadows overall and shadows with light ‘cutting through’ their edges.
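Here’s a minimal sketch of how the curvature component can feed a scattering term, in the spirit of pre-integrated skin shading; the mapping below is a hypothetical placeholder, not FaceWorks’ actual math.

```cpp
// Minimal sketch: higher curvature (smaller surface radius, e.g. a nose)
// lets diffuse lighting "wrap" further past the terminator, which is
// what softens shadow edges on curved skin.
#include <algorithm>

float scatteredDiffuse(float nDotL, float curvature) {
    // curvature in [0, 1]: 0 = flat surface, 1 = tightly curved feature.
    float wrap = 0.5f * curvature;               // hypothetical mapping
    float lit = (nDotL + wrap) / (1.0f + wrap);  // wrapped lighting
    return std::max(lit, 0.0f);
}
```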

If you’re curious as to how much load is placed on the GPU when games run SSS and other FaceWorks tech, nVidia supplied this normalized benchmark chart:

[Image: nv-sdk-sss3]

“No FaceWorks” would be a game not using SSS or deep scattering. Lower is better here: SSS causes a 1.29x GPU time increase, and SSS + deep scattering bumps us to 1.37x, which isn’t that bad if we’re running on high-end cards.

The Future of Subsurface Scattering, FaceWorks, & Character Rendering Tech

Looking forward, nVidia emphasized that the future of FaceWorks will look toward these upgrades:

  • Ambient-light deep scatter.
  • Specular model for skin (including occlusion).
  • Eye rendering (turns out eyes aren’t so easy).
  • Customizable diffusion profiles.
  • More support for APIs & platforms (OpenGL, mobile, console).

NVidia FlameWorks – Volumetric Fire & Smoke FX

Another component of the revamped SDK is the FlameWorks system. FlameWorks brings volumetric fire and smoke effects to games by using a grid-based fluid simulator (fire and gases are fluids, as far as GPUs are concerned) and volume rendering. Particles are not used for this process – it is entirely volumetric and voxel-based.

FlameWorks uses ‘Emitters’ to add density, temperature, fuel, and velocity to the simulation. Interestingly, FlameWorks is capable of moving density through the simulation (called ‘Advection’), so temperature and fuel are stored as data by emitters within the simulation; fuel can then be depleted within FlameWorks, which serves as a bit of a showcase for why devs would want to track this type of data. Further, shapes will interact with fluids (again, fire/gas/air), creating deformable volumetric effects in real-time, without particles. If you were to drop a box in front of the fire stream, the stream would diverge around the box and then converge again behind it, forming a bit of a back-drafting wake of flame. This has great implications for games that’d like to approach realism when using obstacles as live cover systems.
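To illustrate what advection means in a grid-based simulator, here’s a minimal 2D CPU sketch of semi-Lagrangian advection (FlameWorks itself is 3D and GPU-accelerated; all names and structures here are assumptions): each cell traces backward along the velocity field and samples whatever quantity, density, temperature, or fuel, would have flowed into it.

```cpp
// Minimal sketch: semi-Lagrangian advection of one scalar field (e.g.
// density) through a velocity field on a 2D grid.
#include <algorithm>
#include <cmath>
#include <vector>

struct Grid {
    int w, h;
    std::vector<float> v;  // w * h cells, row-major
    float at(int x, int y) const {  // clamped sampling at the borders
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return v[y * w + x];
    }
};

// Bilinear interpolation at a fractional grid position.
float bilerp(const Grid& g, float x, float y) {
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;
    float a = g.at(x0, y0) * (1 - fx) + g.at(x0 + 1, y0) * fx;
    float b = g.at(x0, y0 + 1) * (1 - fx) + g.at(x0 + 1, y0 + 1) * fx;
    return a * (1 - fy) + b * fy;
}

void advect(const Grid& src, Grid& dst,
            const Grid& velX, const Grid& velY, float dt) {
    for (int y = 0; y < src.h; ++y)
        for (int x = 0; x < src.w; ++x) {
            // Trace backward: where did this cell's contents come from?
            float px = x - dt * velX.at(x, y);
            float py = y - dt * velY.at(x, y);
            dst.v[y * src.w + x] = bilerp(src, px, py);
        }
}
```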

[Images: nv-sdk-fire1, nv-sdk-fire2]

[Images: nv-sdk-volume1 through nv-sdk-volume5]

FlameWorks contains a live combustion model that actively checks for fuel and for a temperature threshold that must be surpassed to achieve ignition. Once the temperature threshold at the ignition source is breached, combustion occurs and expansion is generated in a manner appropriate for the fuel source and environment. Heat haze, temperature cooling, thermal dissipation, and color-mapping can all be modeled within FlameWorks.
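A minimal per-voxel sketch of a combustion rule matching that description; every threshold and rate below is a hypothetical placeholder, not a FlameWorks value.

```cpp
// Minimal sketch: fuel only ignites past a temperature threshold,
// burning depletes it, the released heat drives expansion (divergence
// fed back into the fluid solver), and everything slowly cools.
#include <algorithm>

struct Voxel { float fuel, temperature, divergence; };

void combustStep(Voxel& v, float dt) {
    const float ignitionTemp = 400.0f;  // hypothetical threshold
    const float burnRate     = 2.0f;    // fuel consumed per second
    const float heatRelease  = 300.0f;  // heat added per unit fuel burned
    const float expansion    = 0.5f;    // divergence per unit fuel burned
    const float coolingRate  = 0.2f;    // fraction of heat lost per second

    if (v.temperature > ignitionTemp && v.fuel > 0.0f) {
        float burned = std::min(burnRate * dt, v.fuel);
        v.fuel        -= burned;
        v.temperature += heatRelease * burned;
        v.divergence  += expansion * burned;  // pushes the gas outward
    }
    v.temperature -= v.temperature * coolingRate * dt;  // dissipation
}
```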

Rays are cast from fire and light sources to produce flickering and other realistic lighting effects. Developers can cheat with light sources to reduce their own workload and the GPU’s, but that becomes more a question of methodology.
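For a sense of how a voxel volume like this gets drawn, here’s a minimal sketch of volume ray-marching (an illustration of the general technique, not FlameWorks’ renderer): a ray accumulates light emitted by hot voxels while being attenuated by the smoke density it passes through.

```cpp
// Minimal sketch: march a ray through the volume, accumulating emission
// (mapped from temperature) and absorbing along the way.
#include <cmath>

// Hypothetical stand-ins for trilinear samples of the simulation grid;
// here, a simple spherical "fireball" centered at the origin.
static float sampleDensity(float x, float y, float z) {
    float r = std::sqrt(x * x + y * y + z * z);
    return r < 1.0f ? 1.0f - r : 0.0f;
}
static float sampleTemperature(float x, float y, float z) {
    return 800.0f * sampleDensity(x, y, z);
}

float marchFireRay(float ox, float oy, float oz,   // ray origin
                   float dx, float dy, float dz,   // unit direction
                   float maxDist, float step) {
    float radiance = 0.0f, transmittance = 1.0f;
    for (float t = 0.0f; t < maxDist && transmittance > 0.01f; t += step) {
        float x = ox + dx * t, y = oy + dy * t, z = oz + dz * t;
        float emitted = sampleTemperature(x, y, z) * 0.001f;  // color-map stub
        radiance      += transmittance * emitted * step;
        transmittance *= std::exp(-sampleDensity(x, y, z) * step);
    }
    return radiance;
}
```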

Different grid densities can be defined depending upon how resource-intensive the developer wants the application to be.

[Image: nv-sdk-density]

A Quick Look at WaveWorks

I’m not going to spend as much time on this section, since WaveWorks has previously been covered in great detail by nVidia.

WaveWorks’ most interesting points are its real-world integrations. The tool has been built as an MMO solution, interestingly, so all clients connected to the server will see the same wave and weather effects. Planetside 2 and Hawken have both used WaveWorks in creative ways, for things like teleporter pads that feature swirling, dynamic particles.

FLEX and the Future

FLEX is a new physics calculation utility from nVidia that provides developers with, quite simply, a very cool subset of real-time physics options. As an example, FLEX enables bouncing water balloons that can burst (with real-time, procedural physics effects), piles of particles with appropriate friction, deformable cloth, rigid-to-fluid state changes, soft collision with clothing, and more. FLEX is perhaps one of the most interesting of these SDKs to talk about, so we’ll revisit it in great, focused depth in the near future.

What do you think about all these new utilities? Let us know below!

- Steve "Lelldorianx" Burke.

Steve Burke

Steve started GamersNexus back when it was just a cool name, and now it's grown into an expansive website with an overwhelming amount of features. He recalls his first difficult decision with GN's direction: "I didn't know whether or not I wanted 'Gamers' to have a possessive apostrophe -- I mean, grammatically it should, but I didn't like it in the name. It was ugly. I also had people who were typing apostrophes into the address bar - sigh. It made sense to just leave it as 'Gamers.'"

First world problems, Steve. First world problems.

