
System Specs for Demo PCs at CitizenCon (& Optimization Explanation)

Posted on October 11, 2016

Covering the Star Citizen technology demonstrations and planetary procedural generation v2.0, we noted that the live framerate, although variable, seemed to stick around the ~96-100 FPS average range. Even the hardest dips only fell to about 75 FPS, mostly when the Constellation starship entered the camera frustum, but overall frame throughput was consistent and fairly fluid. (Note: Some folks reported dips as low as ~36-43 FPS at times; we did not watch the FPS counter for the entire demo.) Frametimes were also on point, sitting at an average of roughly 8-10ms between frames, which is effectively fluid on a 60-120Hz display. Z-fighting and artifacting occurred in the demo, but that is known to the team and is mostly a result of LOD scaling and pop-in. Runt frames, however, were not much of an issue during the gameplay demonstration.
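For quick reference, framerate and frametime are just reciprocals of one another, so the numbers above can be cross-checked in a few lines. This is a minimal Python sketch using the approximate figures we eyeballed from the demo feed, not logged benchmark data:

```python
# Reciprocal relationship between frametime (ms) and framerate (FPS).
# Values below are the approximate figures observed during the demo, not logged data.

def frametime_to_fps(ms: float) -> float:
    """Convert a frame-to-frame delta in milliseconds to frames per second."""
    return 1000.0 / ms

def fps_to_frametime(fps: float) -> float:
    """Convert frames per second to the per-frame time budget in milliseconds."""
    return 1000.0 / fps

for ms in (8.0, 10.0):
    print(f"{ms:4.1f} ms/frame -> {frametime_to_fps(ms):6.1f} FPS")

for fps in (75.0, 96.0, 100.0):
    print(f"{fps:5.1f} FPS -> {fps_to_frametime(fps):5.2f} ms/frame")

# Refresh intervals for comparison: an 8-10ms average delta keeps pace with
# both a 60Hz (~16.7ms) and a 120Hz (~8.3ms) display.
print(f"60Hz: {fps_to_frametime(60):.2f} ms  |  120Hz: {fps_to_frametime(120):.2f} ms")
```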

That comes down to hardware. As we detailed heavily in our Pascal architecture and Polaris architecture deep dives, this generation of hardware has focused efforts on stabilizing frame throughput for greater consistency. Frame-to-frame delivery variance exceeding the monitor's refresh interval, e.g. ~8.3ms for a 120Hz display or ~16.7ms for a 60Hz display, will create more runt frames and screen tearing at time of playback. This is because the monitor, without adaptive sync tech (which the projector almost certainly did not have), slaves to the GPU and either waits on refresh (V-Sync) for completed frames or immediately “publishes” the frames to the screen (V-Sync off). The latter creates tearing by producing runt frames which don't fully “paint” to the display; the former produces stuttering when framerate falls below the V-Sync threshold, triggering what is effectively a reprojection of the previous frame.
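To make the V-Sync-on case concrete, here is a toy sketch, not a model of any real driver or display pipeline, and the frame completion times are invented for illustration. Each refresh shows the newest completed frame; when no new frame has finished in time, the previous frame is shown again, which is the stutter described above:

```python
# Toy model: a 60Hz display refreshes every ~16.7ms. With V-Sync on, each refresh
# shows the newest frame that has finished rendering; if nothing new is ready,
# the previous frame is shown again (perceived as a stutter/hitch).
# Frame completion timestamps below are hypothetical example values.

REFRESH_MS = 1000.0 / 60.0                  # ~16.67ms per refresh at 60Hz
frame_done_ms = [10.0, 26.0, 55.0, 70.0]    # hypothetical frame completion times

def vsync_on_scanout(frame_done, refresh):
    shown, last = [], None
    t = refresh
    while t <= max(frame_done) + refresh:
        ready = [i for i, done in enumerate(frame_done) if done <= t]
        newest = ready[-1] if ready else None
        if newest != last:
            shown.append(f"frame {newest}")
            last = newest
        else:
            shown.append(f"repeat frame {last}")   # frame missed the refresh window
        t += refresh
    return shown

print(vsync_on_scanout(frame_done_ms, REFRESH_MS))
# -> ['frame 0', 'frame 1', 'repeat frame 1', 'frame 2', 'frame 3']
# With V-Sync off, the display would instead scan out whatever is newest mid-refresh,
# trading the repeats for tearing and runt frames.
```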

The CitizenCon demonstration, unsurprisingly, was powered by one of the two major GPU architecture releases this year. CIG assembled a system using an ASUS ROG GTX 1080 Strix, built on the GP104-400 Pascal GPU with 10Gbps GDDR5X memory from Micron (for more on this, see our GTX 1080 architecture deep dive here, or our liquid-cooled 1080 round-up here). Short of products which we consider non-consumer, e.g. Titan-class or FirePro-class cards, the GTX 1080 remains the top single-GPU performer in our current game benchmarks. CIG coupled this with an i7-5820K CPU at its stock 3.3GHz clock-rate (no overclock), a part released on the Haswell-E architecture in 2014. 64GB of Corsair's Vengeance DDR4 memory (speed unspecified) was socketed into an ASUS X99-A motherboard (model unspecified), with a Corsair Vengeance C70 used for the case.

Update: The demonstration was run at 1080p.

If a table is easier for you, here it is:

Star Citizen Demo PC Build

(Note: Specs retrieved by GamersNexus directly from CIG.)

GN Parts List | Component                                       | Retail Price
CPU           | Intel i7-5820K 3.3GHz Stock                     | $389
GPU           | ASUS ROG GTX 1080 Strix                         | $710
Memory        | 64GB Corsair Vengeance DDR4 (speed unspecified) | ~$230-$400
Motherboard   | ASUS X99-A (model unspecified)                  | No specific model
Case          | Corsair Vengeance C70                           | $101
Total         | (w/o PSU, SSD, etc.)                            | ~$1700 +/- $300 for board & RAM

Let's Talk Optimization

Before this gets sensationalized as “Star Citizen Requires $2000 PC to Run,” let's clear up a few things about how game development and driver development work.

Hardware (GPUs especially) doesn't just natively “work” with any application. Silicon manufacturers and game developers must work together to ensure the drivers understand the application and optimize performance for it. Datapath organization, for instance, is something that's out of CIG's hands on the driver side; we'd look to AMD and nVidia for that, as they build their display drivers to optimize datapaths and leverage their vastly different architectures for each game. Frametime variance alone is a big deal in games, and can often be resolved with driver updates.

Star Citizen isn't done. Drivers don't really exist for it yet. That's not news to anyone, but it's important to point out. Let's assume, just for a second, that Star Citizen shipped today. There'd probably be day-one (ish) drivers from each major GPU vendor, and those drivers would almost certainly include optimizations which improve FPS, at least low-percentile performance, to a marked degree. We see this all the time when benchmarking games pre-release. On numerous occasions, our team has thrown out hours of test work when drivers updated prior to a game's launch, because we know the difference can be that critical to the measurements.

And then there's the game software optimization, which is almost exclusively in CIG's hands. This is stuff that CIG's Sean Tracy has told us the team is working on, including CPU thread allocation and the jobs system, framebuffer optimizations and limiting VRAM consumption, and more. Because there's no public, finished game, it doesn't make much sense to optimize yet. That's mostly done at the end, when assets are more finalized, level structures and art budgets are locked, and optimizations can be applied at a lower level to the hardware. As an example specific to Star Citizen, the conversion to 64-bit world space coordinates theoretically imposes a marginal (but measurable) performance hit on CPUs, since more time is required to compute 64-bit values, obviously. With the technical team's optimizations, Star Citizen has actually managed to post a slight performance improvement on newer CPUs (Haswell and onward, e.g. Skylake). This is all done with programming, datapath management, jobs management and spawning, and other process improvements; ergo, “optimizations.”
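As a purely illustrative aside, and not CIG's actual implementation, the usual motivation for 64-bit world coordinates is precision far from the origin: a 32-bit float cannot resolve small movements at very large distances, while a 64-bit double still can. A minimal Python sketch with arbitrary example distances:

```python
# Hypothetical sketch of why large world spaces push engines toward 64-bit positions:
# far from the origin, a 32-bit float can no longer resolve small movements, while a
# 64-bit double still can. Distances are arbitrary example values.
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python double through a 32-bit float to emulate single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

distance_from_origin_m = 1.0e8   # example: an object ~100,000 km from the world origin
step_m = 0.01                    # a 1 cm movement

# 32-bit: the step is smaller than float precision at this magnitude and vanishes.
pos32 = to_float32(distance_from_origin_m)
print("32-bit delta:", to_float32(pos32 + step_m) - pos32)   # -> 0.0

# 64-bit: Python floats are doubles, so the same step survives.
pos64 = distance_from_origin_m
print("64-bit delta:", (pos64 + step_m) - pos64)             # -> ~0.01
```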

And none of that is final. So, the point of this side discussion is to hopefully put an early halt to the “you need a $700 graphics card to play this game” narrative, because we just don't know if that's true yet. It certainly could be, but that's not likely, and we'd recommend waiting a little longer to see how things shake out once release candidates are being pushed.

One more note: The 64GB memory capacity is there because this thing is a dev box. Don't read too far into it. Development tools eat memory far beyond what a consumer software package should.

The cool part, though, is that we have some preliminary performance metrics from the Homestead demo, and we know what hardware was used to power those boxes.

Note: We initially awaited correspondence on the demo's resolution; CIG has since confirmed 1080p, and the article above has been updated accordingly.

- Steve “Lelldorianx” Burke.