Ubisoft's newest dystopian effort starts strong with allusions to modern-day challenges pertaining to privacy and "cyber warfare," working to build up our character as a counter-culture hacker. And, as with Ubisoft's other AAA titles, the game builds this world with high-resolution textures, geometrically complex and dense objects, taxing shadow/lighting systems, and an emphasis on graphics quality.
Watch Dogs 2 is a demanding title to run on modern hardware. We spent the first 1-2 hours of our time in Watch Dogs 2 simply studying the impact of various settings on performance, then studying locales and their performance hits. Areas with grass and foliage, we found, most heavily hit framerate. Nightfall and dark rain play a role in FPS hits, too, particularly when running high reflection qualities and headlight shadows.
We look at performance of 11 GPUs in this Watch Dogs 2 video card benchmark, including the RX 480 vs. GTX 1060, GTX 1070, GTX 1080, RX 470, R9 Fury X, and more.
We've been through Battlefield 1 a few times now. First were the GPU benchmarks, then the HBAO vs. SSAO benchmark, then the CPU benchmark. This time it's RAM, and the methodology remains mostly the same. Note that these results are not comparable to previous results because (1) the game has received updates, (2) memory spec has changed for this test, and (3) we have updated our graphics drivers. The test platforms and memory used are dynamic for this test, the rest remaining similar to what we've done in the past. That'll be defined in the methodology below.
Our CPU benchmark had us changing frequencies between test platforms as we tried to determine our test patterns and methodology / bench specs for the endeavor. During that exploratory process, we noticed that memory speeds of 3200MHz were measurably faster in heuristic testing than speeds of, say, 2400MHz. That was just done by eye, though; it wasn't an official benchmark, and we wanted to dedicate a separate piece to that.
This content benchmarks memory performance in Battlefield 1, focusing on RAM speed (e.g. 1600MHz, 1866, 2133, 2400, and so forth) and capacity. We hope to answer whether 8GB is "enough" and find a sweet spot for price-performance in memory selection.
This benchmark took a while to complete. We first started benchmarking CPUs with Battlefield 1 just after our GPU content was published, but ran into questions that took some back-and-forth with AMD to sort out. Some of that conversation will be recapped here.
Our Battlefield 1 CPU benchmark is finally complete. We tested most of the active Skylake suite (i7-6700K down to i3-6300), the FX-8370, -8320E, and some Athlon CPUs. Unfortunately, we ran out of activations before getting to Haswell or last-gen CPUs, but those may be visited at some point in the future. Our next goal is to look into the impact of memory speed on BF1 performance, or determine if there is any at all.
Back on track, though: Today's feature piece is to determine at what point a CPU will begin bottlenecking performance elsewhere in the system when playing Battlefield 1. Our previous two content pieces related to Battlefield 1 are linked below:
The goal of this content is to show that choosing between HBAO and SSAO has a negligible impact on Battlefield 1 performance. This benchmark arose following our Battlefield 1 GPU performance analysis, which demonstrated consistent frametimes and frame delivery on both AMD and nVidia devices when using DirectX 11. Two of our YouTube commenters asked if HBAO would create a performance swing that would favor nVidia over AMD and, although we've discussed this topic with several games in the past, we decided to revisit it for Battlefield 1. This time, we'll also spend a bit of time defining what ambient occlusion actually is, how screen-space occlusion relies on information strictly within the z-buffer, and then look at the performance cost of HBAO in BF1.
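To illustrate the idea that screen-space occlusion works strictly from the z-buffer, here's a minimal sketch in Python. It is not BF1's (or HBAO's) actual implementation — real SSAO runs as a GPU shader and typically uses reconstructed normals and randomized sample kernels — but it shows the core principle: a pixel is considered occluded in proportion to how many nearby depth samples sit closer to the camera than the pixel itself.

```python
def ssao(depth, x, y, radius=1, bias=0.01):
    """Return an occlusion factor in [0, 1] for pixel (x, y).

    `depth` is a 2D list of depth values (smaller = closer to camera).
    A neighboring sample occludes the center if it is closer to the
    camera than the center depth minus a small bias (the bias avoids
    self-occlusion from depth precision noise).
    """
    h, w = len(depth), len(depth[0])
    center = depth[y][x]
    occluded = total = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue  # skip the center pixel itself
            sx, sy = x + dx, y + dy
            if 0 <= sx < w and 0 <= sy < h:
                total += 1
                if depth[sy][sx] < center - bias:
                    occluded += 1
    return occluded / total if total else 0.0
```

On a flat surface (all depths equal) this returns 0.0 — nothing shadows the pixel — while a pixel at the bottom of a crevice, deeper than all of its neighbors, returns 1.0 and would be darkened. The key takeaway is that no scene geometry is consulted at all, only the depth values already rasterized into the z-buffer, which is why the technique is cheap and why its cost is largely independent of scene complexity.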
We'd also recommend our previous graphics technology deep-dive for folks who want a more technical explanation of what's going on in various AO technologies. Portions of this new article are drawn from that deep-dive.
Battlefield 1 marks the arrival of another title with DirectX 12 support – sort of. The game still supports DirectX 11, and thus Windows 7 and 8, but makes efforts to shift DICE and EA toward the new world of low-level APIs. This move comes at a bit of a cost, though; our testing of Battlefield 1 has uncovered some frametime variance issues on both nVidia and AMD devices, resolvable by reverting to DirectX 11. We'll explore that in this content.
In today's Battlefield 1 benchmark, we're strictly looking at GPU performance using DirectX 12 and DirectX 11, including the recent RX 400 series, GTX 10 series, GTX 9 series, and RX 300 series GPUs. Video cards tested include the RX 480, RX 470, RX 460, 390X, and Fury X from AMD and the GTX 1080, 1070, 1060, 970, and 960 from nVidia. We've got a couple others in there, too. We may separately look at CPU performance, but not today.
This BF1 benchmark carries our usual extensive testing methodology, fully detailed within the methodology section below. Please check this section for any questions as to drivers, test tools, measurement methodology, or GPU choices. Note also that, as with all Origin titles, we were limited to five device changes per game code per day (24 hours). We've got three codes, so that allowed us up to 15 total device tests within our test period.
This is the last of our CitizenCon coverage from Sunday. Following this interview with Erin Roberts, we flew up to San Jose to tour a few of the hardware manufacturers located in the area. We'll likely have some coverage of those visits online within the next 2-3 days, for folks looking for a return to hardware industry discussion and architecture dives. A few reviews are also pending publication, likely going live next week.
We try to focus on technology at GN, as always, and so spent our previous interview talking about parallax occlusion mapping and 64-bit engine technology. We think this is interesting and useful information for better understanding GPU interactions with game engines, hopefully painting a clearer picture of what's going on behind the GPU shroud. With Erin Roberts, Studio Head for the Foundry 42 UK branch, we discussed procedural generation edge blending for planets v2 (also defined here), development timelines to build the demo shown, and workload distribution between studios.
Immediately following our already-published interview with Star Citizen's Chris Roberts, we encountered Technical Director Sean Tracy, recently responsible for educating us on the game's 64-bit engine. The Technical Director took a few moments after CitizenCon to share details about the lower level technology driving the night's demonstration, like real-time physics, poly per pixel budgets, occlusion mapping, and more.
Tracy's role for the CitizenCon presentation primarily had him demonstrating the production tools utilized by CIG for planet development. This includes Planet Ed, already somewhat detailed here and here, which is the team's creation kit for planet customization and design. The toolkit takes the approach of getting artists “90% of the way there,” allowing them to fine-tune the final 10% for faster production pipelines and a hands-on polish. The tech demo showed a walk-through of CIG's team using large brushes to paint the surface with biomes, hand-placing bodies of water and buildings, and blending everything together.
CitizenCon 2016 included the biggest technological demonstrations that the game has publicly shown to date, including fully functional procedural generation 2.0 for planets, real-time spring physics, and authoring tools. The technology suite was detailed in two of our recent interviews with Sean Tracy and Chris Roberts, but was overshadowed in some ways by statements given regarding a potential Squadron 42 demonstration for CitizenCon 2016.
Viewers of the stream (or our content) will know that Squadron 42 didn't make it into the live demonstration, already packed with hours of discussion on Spectrum (comms systems), procedural generation, authoring tools, and roadmaps. The demonstration was technologically and graphically impressive, but Squadron 42 was left out for polish and refinement reasons. Roberts indirectly referenced our interview on stage, stating that, “I gave an interview about 3 weeks ago and probably spoke too soon,” but continued that he hoped the planet and tools demonstrations would make up for this.
In our post-event interview, we spoke with CIG CEO Chris Roberts on his thoughts regarding the event, the roadmap for Star Citizen, and Squadron 42's absence. Learn more in the video, or find a transcript below.
Leading into Star Citizen's annual “CitizenCon” event, held today, we received preliminary details from CIG CEO Chris Roberts and Technical Director Sean Tracy, both of whom heavily focused on an unveil of new procedural planets technology. The first interview (with Roberts) covered the top-level overview of procedural planet generation technology, with the third interview (with Tracy) focusing on the driving tools behind the planets.
But CitizenCon 2016 marks the first time we've seen those tools in action, unveiled on stage in front of a live audience of more than 600 people, with more tuned-in to the stream. Our earlier interviews suggested that the first major, complete Squadron 42 mission would be unveiled at CitizenCon alongside this “Planets V2” tech and character technology updates, but the plans changed in the weeks since that discussion. Planets V2 took the spotlight with its Homestead demo, while Squadron 42 was delayed to allow for quality improvement on the single-player demo.
Note: This was written live during the event for immediate publication. We are continuing to live update.
(7:50PM PT 10/9 - We are done live updating. We've added 4K screenshots to this article, all new from tonight. We've got interviews with Chris Roberts, Sean Tracy, and Erin Roberts going live shortly.)
Before getting to further discussion, a recap of the last two weeks: