Continuing the Discussion: Character Technology
Star Citizen's character technology has undergone an overhaul since the Morrow Tour demonstration, with the team primarily focused on improving the meshes around the eyes (the 'soft skin') and eye movement. The latter is handled by an “eye posing” system that CIG has built into the engine. Eye posing works with the surrounding skin of the face to ensure that the eyes don't move alone – other facial movement corresponds to different expressions and posed directions, making for a more cohesive face altogether.
Of working with the new head technology and perspective from other games, Sean Tracy told us:
"So, it's not really new character tech – well, it is a little bit new. Mostly, this is the stuff that we've been working on for years. Last year at CitizenCon we showed something called the Morrow Tour. We had only just started to receive the head rigs, head assets from 3Lateral [for] about a month, and then we went ahead and just put 50 characters in. They [showed] varying degrees of art progress. A lot of the game – we're happy to show WIP stuff, but the public can react pretty negatively to it. They don't get it. What we really want to show is the progress of that over the years, so all of the tech that we expected is online, polished, working.
“We had the entire year to get about 120 scanned heads in, and we have, even just in the demo that we're gonna show, we have 53 unique characters. Unique face, unique facial rigs. It's a ton of work. Just to contrast it again, [in] Crysis we're talking less than 20 characters the whole game. Ryse, we're talking maybe 30 – and probably 15 of those are barbarian variations. For this, because we've got all these actors, we've paid for the actors, so we're going to have a really nice scan of their head and we want them to look awesome in it, and they do now. All the tech is finally online for the faces to animate correctly, for it to get triggered with the dialogue.”
Tracy goes on to detail the team's new runtime rig logic system, which allows the moods and facial expressions to be universally applied to characters. Rather than creating, for instance, an expression structure of /happy/male/happy_male01, or similar, the team can just cut to “happy01,” in this case, then apply that globally and allow the runtime to work out the rest.
“Finally, there's one really big piece of tech that we're bringing online now. We call it "runtime rig logic" for the faces. We have varying facial skeletons, everybody's got a bit of a different face. A lot of games [...] unify the skull shape and unify the neck shape, and everybody's the same. That doesn't work for actors – especially not for really recognizable ones, like Gary Oldman, or Gillian Anderson, or Mark Hamill. You're going to know that that doesn't really look like him. So, what we do is we do still have all these unique rigs, but what we have is a system within the engine that is actually consuming unified animation data and it applies all the offsets to that animation data so we can drive that rig. What it means is that we can share animation across anybody, which is super cool. A smile on Gillian Anderson is actually the same data for a smile on Mark Hamill. We can share all this data. This is a pretty big deal.”
This system allows for procedurally driven character reactions, greatly expanding the potential range of animations and expressions within the game. The rig logic skeleton has 183 inputs that drive the entirety of the face, and that's true for every character – at least, every human character. It's likely true for other humanoids, but we didn't ask about other potential species. Those 183 inputs work with another ~220 skin joints, creating a highly detailed rig.
“The other thing with it is [that] we can procedurally drive things on the character, and have the rig react as if we were exporting animation. A big example of this is eyes. What we have is [a] 'look posing' system – I'll get into that. We'll drive where the eyes are looking. The problem with this is that the eyes are all connected to blend shapes; usually, if any other game would do this, you see the eyes move around but none of the blend shapes would do anything because it just doesn't know that you're moving those eyes around. [...] With Rig Logic online, we can say, 'move those eyes,' and rig logic knows 'OK, I need to move this blend shape here, do this wrinkle here.' So we're getting really awesome performance even from procedurally generated data.
“[...] We apply this to every single head. There's kind of a workflow reason for that, and it's that I don't want to deal with two different pipelines. [...] I'd rather just, 'everybody's rig logic,' perfect. There's an implementation reason, too. Say in Star Marine I want a guy [to look angry] when he fires. If everybody has unique faces, I'd have to have, 'OK this guy is in his stance, he's got his weapon, and he's firing.' Here's angryface_male01, angryface_male02, they'd all be different animations for the same thing. Now what I do is I just say 'angryface,' and it'll figure out what face it's playing on and it'll do it. This is our whole mentality of content creation: Let's do it intelligently so we're not stuck here making thousands of things so that it takes us 10 years to make this game.”
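The shared-expression idea Tracy describes can be sketched roughly as follows – a hypothetical illustration in Python, where one named expression drives every character's unique rig through per-character offsets. All names and numbers here are ours for illustration, not CIG's actual data or API:

```python
# Hypothetical sketch of "runtime rig logic": one unified set of expression
# inputs is shared across all characters, and each character's unique rig
# applies its own per-channel offsets at runtime. Illustrative only.

SHARED_EXPRESSIONS = {
    # expression name -> unified input values (normalized 0..1),
    # identical data for every character
    "angryface": {"brow_lower": 0.8, "jaw_clench": 0.6, "eye_narrow": 0.5},
    "happy01":   {"mouth_smile": 0.9, "cheek_raise": 0.4},
}

class CharacterRig:
    """A unique facial rig that consumes unified animation data."""
    def __init__(self, name, offsets):
        self.name = name
        self.offsets = offsets  # per-character (scale, bias) for each channel

    def apply(self, expression):
        inputs = SHARED_EXPRESSIONS[expression]
        pose = {}
        for channel, value in inputs.items():
            scale, bias = self.offsets.get(channel, (1.0, 0.0))
            pose[channel] = value * scale + bias  # rig-specific result
        return pose

# Two "actors" with different facial proportions share one expression:
hamill = CharacterRig("Mark Hamill", {"brow_lower": (1.1, 0.0)})
anderson = CharacterRig("Gillian Anderson", {"brow_lower": (0.9, 0.05)})

print(hamill.apply("angryface"))
print(anderson.apply("angryface"))
```

The point of the design is visible in the call site: content is authored once ("angryface"), and the per-character differences live entirely in data consumed at runtime.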
Regarding Resource Management with Face Complexity
Hearing all of this about the face rigging and animation, the next obvious question was about system resource consumption. Shuffling primitives around is a lot of work on a hardware level, especially for geometrically complex faces, but Tracy reports that CIG has thought of that:
"Usually, if you were to move around verts and mesh and stuff, [resources] would be a problem. What we have is the rig logic skeleton is actually only 183 inputs, so based on those 183 inputs, we can drive everything on the face. To give you an idea of the complexity on the face, it's not 183 joints -- it's more like 500. Well, it's actually more like 220 skin joints, and a bunch of the blend shape controls. So we use 183 joints to drive a way more complicated rig, but that logic exists in the engine, so we save that performance.”
We then asked about headbob, which has also received recent overhauls to more accurately reflect real-world head movement when running. Star Citizen is implementing what is effectively a clone of the vestibulo-ocular reflex, required because of the first- and third-person unification. Without some form of image stabilization, the vertical movement of the head as characters move would thrust the camera around enough to sicken or disorient the player. At minimum, it makes for poor gameplay. The team has worked to eliminate this issue in first-person. The solution can be thought of as placing a camera on a gimbal, like a GoPro gimbal stabilizer, and moving the camera more fluidly to create a more fixed viewport.
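The gimbal analogy maps naturally onto a low-pass filter. A minimal sketch, assuming a simple exponential smoothing of the head's vertical position – our illustration of the concept, not CIG's implementation:

```python
# Gimbal-style camera stabilization sketch: an exponential low-pass filter
# on the head's vertical position, so the camera damps out high-frequency
# bob the way a physical gimbal would. Numbers are illustrative.

def stabilize(head_heights, smoothing=0.85):
    """Return smoothed camera heights, one per frame: the camera lags the
    head's bob instead of copying it exactly."""
    camera = head_heights[0]
    out = []
    for h in head_heights:
        # keep most of the previous camera position, blend in a little of
        # the new head position each frame
        camera = smoothing * camera + (1.0 - smoothing) * h
        out.append(camera)
    return out

# Head bobbing +/- 5 cm around 1.70 m while running:
bob = [1.70, 1.75, 1.70, 1.65, 1.70, 1.75, 1.70, 1.65]
smooth = stabilize(bob)
# The stabilized camera deviates from 1.70 m far less than the raw head does.
```

A real implementation would filter the full camera transform per frame (and keep latency low enough not to feel floaty), but the principle is the same: the viewport follows a smoothed version of the head's motion.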
"The same guy is working on the two systems [face rigging + stabilization]. He's our animation RD engineer, he's also come over from Crytek with a bunch of us – Ivo Herzeg. Chris loves to talk about it in terms of the unified stuff, and there's good reasons for it and again, I'd like to say this has more to do with workflow than anything else. If you don't have two systems to work with, you're in a good spot. A fire animation in third is the same as first, and there's a system dealing with it rather than me having to go grab the arms asset and do the same thing on the body. So there's reasons for it.
The biological term for the eye stabilization is the 'vestibular-ocular reflex.' We've talked about this so much that we've gone through medical papers on it. The easiest way to ever explain it -- [...] maybe you've seen this commercial of a chicken being held and he keeps his head stable as the thing is moving around. [...] For me, it's more of a workflow thing.”
CPU Threading & Jobs System
We've talked with Chris Roberts about the CryEngine jobs system since 2012, but the implementation has changed over time to become more comprehensive in its resource management. CIG is allowing an in-engine jobs system to manage system resource allocation, including CPU thread assignment to various tasks. This is in opposition to what game engines have traditionally done – including CryEngine, which not long ago bragged about support for 8-core CPUs. What might be unknown about that support, though, is that CryEngine manually assigned and locked various tasks to those different threads. Game logic sat on one thread, AI on another, the render thread did the bulk of the work, and so on.
With Star Citizen, the jobs system assigns tasks (“jobs”) to resources on-the-fly, making more intelligent utilization of threads. If the system sees that core 1 is presently overloaded – and it likely will still do the bulk of the work – the system might push its impending task down the pipe to another core. Maybe core 4 doesn't have anything going on right now other than tracking some simple AI, and cores 1-3 are busy with render, game logic, and physics. As new jobs come in, they might get pushed to core 4 for processing, to ensure cores 1-3 don't become inundated with tasks that cannot be completed prior to frame delivery.
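The scheduling idea can be sketched with a toy job queue – workers pull whatever task is next rather than owning a fixed subsystem. This is an illustration using Python's standard library, not engine code:

```python
# Toy jobs system sketch: instead of pinning subsystems to fixed threads,
# worker threads pull "jobs" from a shared queue, so whichever worker is
# free takes the next task. Illustrative only.

import queue
import threading

def run_jobs(jobs, worker_count=4):
    """Execute callables from a shared queue across worker threads."""
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    for job in jobs:
        q.put(job)

    def worker():
        while True:
            try:
                job = q.get_nowait()  # no work left: the worker exits
            except queue.Empty:
                return
            result = job()
            with lock:
                results.append(result)

    threads = [threading.Thread(target=worker) for _ in range(worker_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Mixed "frame" work gets scheduled dynamically across the workers:
jobs = [lambda i=i: i * i for i in range(8)]
print(sorted(run_jobs(jobs)))  # completion order varies; the results don't
```

The contrast with the old fixed-thread model is in the worker loop: nothing says "this thread is physics" – an idle worker simply takes the next job, which is what lets the design scale with core count.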
Theoretically, this system scales to consume as many threads as are reasonably made available. There are always diminishing returns, but the idea is that heavy tasks can be more efficiently juggled across multiple cores. We're not sure how this will interface with new APIs, if those are ever integrated into Star Citizen, but there wasn't any news to discuss on the front of Dx12 and Vulkan at our latest tour. Our previous interview contains everything that's current on APIs.
Tracy said of the thread management:
“We've got a really nice multi-threading jobs system. We can dedicate CPUs per module, we tend not to – we let the job system handle where it needs to go. What happens with job systems, especially if you dedicate CPUs to it, [is] they can sit there idle a lot of times. They sit there waiting for work to do, so they only get given a very small portion of that frame's work, and the rest stays on main or whatnot. We try to move that stuff around with the job system itself. It's all threaded.
“We definitely operate better on more CPUs, there's no question about that – especially in terms of physics as well. [The jobs system went in with CryEngine 3], and we had hard-coded, basically, render was on the main thread, separate sound thread, physics thread, oh, and game logic was on its own thread. That was a big deal [then], now we scale dynamically to all the cores you've got. So the more cores you've got, the better – I mean, if you've got 32 cores, I don't think we have enough that it would dispatch.”
Authoring Tools & AI
During the interview, we were also presented with the opportunity to talk about the game's creation toolset and authoring tools. Tracy isn't in the design department, but was able to cover the tools used on a technical front. Regarding AI, mission authoring, and in-game content creation, Tracy said:
"[The authoring tool] is actually not even embedded within sandbox, which honestly in my opinion is a bad idea. I hate not putting stuff in sandbox because it's just immediately available to you, but it is a hell of a lot easier to write [this way]. Sandbox is written in old MFC; it's an old system from Microsoft. There's some plugins in it to write toolsets in QT, which is a lot easier to write for and it's a little more visually pleasing than MFC [where] you've just got no real control. A lot of times, what happens when somebody comes along to write a tool for the engine, they're fighting with sandbox more than with the tool they're writing, so they'll tend to write the tool outside of it and eventually we'll integrate it in.
“We have a system that we call subsumption. It's for AI and it'll be used for missions as well as some economy stuff. It's kind of a visual scripting system, so you've got [nodes] within it, AI has certain activities or behaviors that we can form. They can be interrupted out of them, they can go into different behaviors, we can trigger off cinematic stuff, we can do pretty much everything that the flowgraph would have done, but within a little bit more systemic of a system, rather than a bespoke 'this entity does this function does this function.' It's more of a behavior to the AI. This same idea is going to be applied to the mission system, where it's a systemic system where you're setting up – I'm not a designer, so I hope I don't misrepresent their particular system – but when they're setting up missions, it's more of small portions of it that are kind of like little activities. Within that, we'll create the bigger missions. It's actually the same idea with the AI, and it's [done within the same toolset]. It's external to the sandbox right now which, again, I hate, but it's easier for us to write it. From a speed standpoint, I totally understand it.
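The interruptible-activity idea can be sketched as a priority-based behavior selector. This is a hypothetical illustration – Tracy doesn't detail subsumption's internals, so every name below is ours:

```python
# Hypothetical sketch of interruptible AI activities: an NPC follows a
# schedule, but a higher-priority behavior (an alarm, combat) can interrupt
# whatever activity is currently running. Illustrative only.

class Behavior:
    def __init__(self, name, priority, active):
        self.name = name
        self.priority = priority
        self.active = active  # predicate: should this behavior run now?

def select_behavior(behaviors, world):
    """Pick the highest-priority behavior whose condition currently holds.
    Assumes at least one behavior always applies."""
    runnable = [b for b in behaviors if b.active(world)]
    return max(runnable, key=lambda b: b.priority).name

# A crew member living their day, as in the Squadron 42 example:
crew_member = [
    Behavior("work_at_desk", 1, lambda w: 9 <= w["hour"] < 18),
    Behavior("eat_in_cafeteria", 1, lambda w: w["hour"] >= 18),
    Behavior("react_to_alarm", 10, lambda w: w["alarm"]),  # interrupts anything
]

print(select_behavior(crew_member, {"hour": 10, "alarm": False}))  # work_at_desk
print(select_behavior(crew_member, {"hour": 10, "alarm": True}))   # react_to_alarm
```

The appeal for content creation is the same one Tracy describes: designers author small, reusable activities and priorities rather than scripting every entity's full day by hand.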
“This is a direction from all the tech directors in the company: 'Get it 90% of the way there with systems, then let the artist polish that.' You want to be able to bring as much content online as laissez-faire as possible, and then let designers go in and make it a little more fun. I think it was the same mentality at Crytek, so I really hope that it lends itself to not only tons of content, but content that doesn't feel generic – it doesn't feel copy-pasted. It feels very living and immersive.”
We next asked Tracy to detail some particularly critical challenges that the team has recently overcome, and to put those challenges into perspective for onlookers of development. Tracy added some information about the upcoming CitizenCon Squadron 42 demo while detailing those challenges:
“I think the implication of V2 planets and what this brings [is important]. I came from the modding side, so I didn't just jump into this as a programmer, I've always been a gamer first. I think we've gotten used to the fact that we're being lied to [by games]. We know we're in a skybox. We know that sun is just a little sprite up there. We know we're being lied to, and we're OK with it because it's fun and it looks good or whatever. But to wrap your mind around the sheer volume and scale that is there in Star Citizen, around you... it's something different and it feels different when you're in it and experiencing it. If you could go into Witcher and just run forward for – I don't even know how long it would take to get around the planet, but run forward for 24 hours and it actually be different and you wrap yourself around. It's a different mentality.
“[...] There's so much that you can explore and do, and without it being empty is an important thing. Lastly, really important to understand as well, would be the AI implications of what we're going to show for the Squadron 42 preview. The AI that's going on within there is fairly complex because it's similar to an Elder Scrolls or something, where they live their lives. These are crew mates, they're at this desk from 9-6, then they go to the cafeteria, and they go do this. What people I don't think will see immediately – and again, the demo can only be so long – I don't think people will necessarily grasp that that's happening for those characters. It will feel like, well he walks by you, great, but what you don't know is he's going down to his crew, he's got stuff to do. Those two are important to understand.”
Stars & The Sun
In most planet-side games – which is most games – the sun is either an unmoving source of light or a faked source of light. This plugs into the “we're used to being lied to” statement that Tracy made, but also reinforces the point that, normally, it doesn't really matter. Just like in movies where stunts might be done by a double, the small tricks can be ignored in favor of the greater story and immersion.
With games where the sun “sets,” like Skyrim, to borrow the Elder Scrolls example, that star is actually just a source of light that's traced across the sky. The planet isn't actually moving. Intuitively, we all know this; it'd be silly to rotate Nirn just to create a day-night cycle. Rotating the planet would demand an awful lot of logic to keep all the objects and clipping fixed while spinning them about an axis, and the player never leaves the ground (at least, without mods) anyway. The team cheats, then, and instead keeps the planet fixed while moving a source of light across the sky, pursuant to some timing. Of course, there's not really a planet, either – in this particular example, Skyrim is just a collection of cells that exist flatly and independently of one another. They don't coalesce on some sphere to form the planet of Nirn.
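The "traced light" trick amounts to making the sun's angle a pure function of the in-game clock. A minimal sketch, with illustrative numbers of our own:

```python
# Day-night cycle sketch: the world never moves; a directional light's
# elevation is just a function of the in-game clock. Illustrative only.

import math

def sun_elevation(hour):
    """Map a 24-hour clock to a sun elevation in degrees:
    -90 at midnight, 0 at 6:00 and 18:00, +90 at noon."""
    return 90.0 * math.sin(math.pi * (hour - 6.0) / 12.0)

print(round(sun_elevation(12)))  # 90: sun directly overhead at noon
print(round(sun_elevation(6)))   # 0: sunrise at the horizon
```

Every frame, the engine evaluates the clock, points the light accordingly, and the player reads it as a sunset – no geometry moves at all.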
But again, we all intuitively know this.
With Star Citizen, it's more difficult:
“[The sun] is just in a skybox, it's just a position of a light. Really, all the sun is is a light that doesn't have a convergence point. It's a parallel thing so it just keeps going on forever. The tricky part about decoupling that is less about producing light from some point, but more about how the shadows react. In CryEngine, this is why it was complicated to pull the sun from the sky, so to speak. [...] We have a system called 'cascaded shadow maps,' it's been around for a while and I know a lot of people do use the same sort of system, where you have a really high res one, then a little lower res around it, and you can see it as a bunch of concentric squares that would just be, 'here's your first cascade, second cascade, third cascade,' and each one gets a little less granular, and the biases change.
“So what we had to implement, basically, is cascaded shadow maps on point lights, and then we could pull them out. What is kind of cool is we actually have support for both systems right now. If I drag sunlight out, it turns off the one in the sky, so I can just delete that and it turns the sun back on, so we still have all the CryEngine time of day stuff, but we also have the ability to put our lights in.
“The other big bonus out of decoupling it is we can have multiple suns. Binary systems, or even large events within there that you might want to produce light – something gigantic, big enough that you might see light from it. I can think binary stars is the more obvious usage of it. That's an interesting thing from being able to decouple it.
“[For exploding ships]: How far do you see it? What kind of scales are we talking about? What LOD is that thing at? Do you produce light from it? Do you cast shadows? There's a lot of question marks, and a lot of the time, there's no one person to ask. Very few things [in this game] have people done before. I think that's why a lot of people really like working here, though. It's challenging, but it's also why a lot of times we hit delays, because it just hasn't been done like this before. I explain my job to people as, 'I screw up all day until I don't.'”
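The cascade scheme Tracy describes – concentric regions of decreasing shadow resolution – is typically parameterized by split distances. A minimal sketch using the widely known blend of uniform and logarithmic spacing (our assumption; the article doesn't give CIG's formula):

```python
# Cascaded shadow map split sketch: choose far-plane distances for each
# cascade by blending uniform and logarithmic spacing, so near cascades
# stay small (high detail) and far ones cover more ground. Illustrative.

def cascade_splits(near, far, count, blend=0.75):
    """Return one far-plane distance per cascade, increasing outward."""
    splits = []
    for i in range(1, count + 1):
        f = i / count
        log_split = near * (far / near) ** f   # logarithmic spacing
        uni_split = near + (far - near) * f    # uniform spacing
        splits.append(blend * log_split + (1 - blend) * uni_split)
    return splits

# Four concentric cascades from 0.1 m out to 1000 m:
print([round(s, 1) for s in cascade_splits(0.1, 1000.0, 4)])
```

Each successive cascade in the output is larger and less granular than the last, which matches the "concentric squares" picture in the quote; doing the same thing for point lights is what let the team pull the sun out of the skybox.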
Additional Coverage and a Reminder on Pre-Release Games
We will be flying out to attend CitizenCon on October 9 for continued coverage of Star Citizen. Our coverage will go live the night of the event, including video coverage and (hopefully) additional interviews post-show.
As a reminder, this is a crowd-funded, incomplete game. GamersNexus takes the same stance with all pre-release games and pre-release hardware, including items which we preview ahead of launch: We recommend waiting to make purchases, especially larger purchases, until after a product has shipped and reviews or user reports are online. GamersNexus encourages that readers particularly interested in supporting a crowd-funded effort take the time to research the project beforehand, and that readers make an informed decision on any purchases. Remember that early access purchasing is a support system for developers to further fund a game, not a sale of a complete, finished product.
(Footnote: We know that the Star Citizen community is very eager to share content. We kindly remind readers that copying and pasting entire articles is damaging to a publication's ability to measure the success of content and remain in business, and thus damaging to the ability to fund future flights to make developer tours.)
Editorial: Steve “Lelldorianx” Burke
Video: Andrew “ColossalCake” Coleman