Primitive Discarding in Vega: Mike Mantor Interview

By Steve Burke | Published August 05, 2017 at 2:44 pm

We took time aside at AMD’s Threadripper & Vega event to speak with leading architects and engineers at the company, including Corporate Fellow Mike Mantor. The conversation eventually became one that we figured we’d film, as we delved deeper into discussion on small primitive discarding and methods to cull unnecessary triangles from the pipeline. Some of the discussion is generic – rules and concepts applied to rendering overall – while some gets more specific to Vega’s architecture.

The interview was sparked from talk about Vega’s primitive shader (or “prim shader”), draw-stream binning rasterization (DSBR), and small primitive discarding. We’ve transcribed large portions of the first half below, leaving the rest in video format. GN’s Andrew Coleman used Unreal Engine and Blender to demonstrate key concepts as Mantor explained them, so we’d encourage watching the video to better conceptualize the more abstract elements of the conversation.

 

Talking Basics of Primitive Culling (Partial Transcript)

“Let’s just talk about the basics at the high level. When an application sends or renders objects, every object has characteristics – a model is built for an object that’s then rendered into a 3D scene. An object is usually modeled as a complete object, so no matter how you’re viewing it and where it is in your field of view, the representation is there. […] The graphics processor processes the triangles or the objects in the field of view and decides whether or not they’re visible. One of the first things that happens when we draw an object is that we run a shader, sooner or later, that processes these vertices and creates vertex positions in a common view space. Part of the state data that defines the common view space is a view frustum. That’s used to decide whether or not the triangles of an object – the primitives of an object – are actually within the view or outside of the view.
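As an editorial illustration (not part of Mantor’s remarks), the frustum test he describes can be sketched in a few lines: once a shader has produced a clip-space position, checking a vertex against the view frustum reduces to comparisons against the w component. The type and function names below are ours, and the sketch assumes a Direct3D-style depth range of [0, w].

```cpp
// Clip-space frustum test for one vertex -- a minimal sketch, not AMD's
// hardware logic. Assumes a D3D-style depth range where visible z lies in [0, w].
struct ClipVertex { float x, y, z, w; };

bool insideFrustum(const ClipVertex& v) {
    // After projection, the view frustum is the region
    // -w <= x <= w, -w <= y <= w, 0 <= z <= w.
    return v.x >= -v.w && v.x <= v.w &&
           v.y >= -v.w && v.y <= v.w &&
           v.z >= 0.0f && v.z <= v.w;
}
// If any one of a triangle's three vertices passes this test, the triangle
// may produce pixels; rejecting it outright requires all three vertices to
// fail against the same frustum plane.
```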

“If objects are completely outside of the view, there’s no reason to send them down the graphics pipeline. Many times, applications do the first level of culling before the object is ever even sent to the GPU, or they may send a coarse representation of the geometry (just a few triangles) to query the GPU whether or not the object is even in the view. If it’s not in the view, there’s no reason to send it to the rasterizer. If it’s a very complex object, instead of taking the time to render it, you can send out a bunch of bounding volumes to find out if the object is even visible.
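To make the application-side culling Mantor mentions concrete, here is a rough sketch of testing an object’s axis-aligned bounding box against the six frustum planes before a draw call is ever issued. The Plane and AABB types, and the inward-facing-normal convention, are assumptions for illustration and are not taken from any particular engine.

```cpp
#include <array>

// Coarse, application-side culling: skip the draw call entirely if the
// object's bounding box is fully outside the view frustum. Illustrative
// sketch only; plane normals are assumed to point into the frustum.
struct Plane { float nx, ny, nz, d; };                 // n . p + d >= 0 means "inside"
struct AABB  { float minX, minY, minZ, maxX, maxY, maxZ; };

bool aabbOutsideFrustum(const AABB& box, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum) {
        // Take the box corner furthest along the plane normal; if even that
        // corner is on the outside, the whole box is outside this plane.
        float px = (p.nx >= 0.0f) ? box.maxX : box.minX;
        float py = (p.ny >= 0.0f) ? box.maxY : box.minY;
        float pz = (p.nz >= 0.0f) ? box.maxZ : box.minZ;
        if (p.nx * px + p.ny * py + p.nz * pz + p.d < 0.0f)
            return true;                               // cull: don't send this object to the GPU
    }
    return false;                                      // possibly visible: submit the draw
}
```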

“When the object comes to the graphics pipe and we do the position processing, you can think of it as different kinds of culling – part of the object might be outside of my FOV, and those triangles are gone. The back side of the object is made up of triangles that are not viewable or visible. And then, obviously, you can have an object positioned in space such that part of it is far away, and when it projects into screen space, those parts become very small triangles. I really described three different scenarios there: the triangle might be outside of the view frustum, the triangle might be back-facing, or the triangle in screen space might become very small. For most content that comes to the GPU, we see on average more than 50% of the triangles that come down the pipeline end up in a place where they don’t need to be rendered. You don’t need to scan-convert and process them. The earlier we can determine that a triangle is out of the view frustum, is back-face culled, or is too small to hit any samples when you render it, the quicker we can remove any effect of that triangle from the rendering process.
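The “too small to hit any samples” case can also be sketched as a conservative screen-space test. This is our construction, not Vega’s exact discard logic, and it assumes one sample per pixel located at pixel centers (i + 0.5, j + 0.5): if a triangle’s bounding box lies entirely between sample positions on either axis, it cannot cover a sample and can be dropped before rasterization.

```cpp
#include <algorithm>
#include <cmath>

// Conservative small-primitive discard test -- an illustrative sketch only.
// Assumes one sample per pixel, located at pixel centers (i + 0.5, j + 0.5).
struct ScreenVertex { float x, y; };   // post-projection position in pixel coordinates

bool coversNoSamples(const ScreenVertex& a, const ScreenVertex& b, const ScreenVertex& c) {
    float minX = std::min({a.x, b.x, c.x}), maxX = std::max({a.x, b.x, c.x});
    float minY = std::min({a.y, b.y, c.y}), maxY = std::max({a.y, b.y, c.y});

    // The bounding box spans a sample column only if some value i + 0.5 lies
    // in [minX, maxX]; likewise for rows. If either axis spans no sample,
    // the triangle cannot hit any pixel center and can be discarded.
    bool spansColumn = std::floor(maxX - 0.5f) >= std::ceil(minX - 0.5f);
    bool spansRow    = std::floor(maxY - 0.5f) >= std::ceil(minY - 0.5f);
    return !(spansColumn && spansRow);
}
// The test is conservative: a sliver whose box does span a sample may still
// miss it, but no triangle that could hit a sample is ever discarded here.
```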

“In a chipset we’ve been building for a while now, we have a vertex process that runs, and it could be a domain shader or a vertex shader, or it could be a vertex process on the output of a geometry shader that’s doing amplification or decimation. The point where you finally have the final position of the vertices of the triangle is one point where we can always find out whether or not the triangle is inside of the frustum, back-facing, or too small to hit. For frustum testing, there’s a mathematical way to figure out whether or not a vertex is inside of the view frustum. If any one of the vertices is inside of the view frustum, then we’ll know that the triangle can potentially create pixels. From a back-face culling perspective, you can find two edges – with three vertices you can find one edge and a second edge – and then you can take a cross-product of those and determine the facedness of the triangle. You can then dot-product that with the eye ray, and if it’s a positive result, it’s facing the direction of the view, and if it’s negative it’s a back-facing triangle and you don’t need to do it. […] State data goes into whether or not you can opportunistically throw a triangle away. You can be rendering something where you can actually fly inside of an object, see the interior of it, and then when you come outside, you can see it outside-in – in those instances, you can’t do back-face culling.”
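As a final editorial sketch (illustrative names, not AMD’s hardware path), the back-face test described above boils down to a cross product of two triangle edges followed by a dot product with a ray toward the eye. The sign convention here assumes counter-clockwise front-facing winding and a toEye vector pointing from the triangle toward the camera, which matches the “positive means facing the view” description.

```cpp
// Back-face determination from three vertices -- a minimal sketch.
struct Vec3 { float x, y, z; };

static Vec3  sub(const Vec3& a, const Vec3& b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(const Vec3& a, const Vec3& b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// toEye points from the triangle toward the camera (e.g. cameraPos - v0).
// twoSided covers the case Mantor ends on: geometry viewable from both the
// inside and the outside cannot be back-face culled.
bool isBackFacing(const Vec3& v0, const Vec3& v1, const Vec3& v2,
                  const Vec3& toEye, bool twoSided) {
    if (twoSided) return false;
    // Cross two edges to get the facet normal (CCW front winding assumed).
    Vec3 normal = cross(sub(v1, v0), sub(v2, v0));
    // Positive dot with the eye ray: facing the viewer; negative: back-facing.
    return dot(normal, toEye) < 0.0f;
}
```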

For the rest, check the video above. This transcript covers the first ~3-5 minutes.

Editorial: Steve Burke
Video: Andrew Coleman


