Destiny 2 Texture Quality Comparison

Published August 31, 2017 at 12:01 am

As we’ve done in the past for GTA V and Watch_Dogs 2, we’re now taking a look at Destiny 2’s texture resolution settings. Our other recent Destiny 2 content includes our GPU benchmark and CPU benchmark.

All settings other than texture resolution were loaded from the highest preset and left untouched for these screenshots. There are five degrees of quality, but only highest, medium, and lowest are shown here to make differences more obvious. The blanks between can easily be filled in.

Our Destiny 2 GPU benchmark was conducted alongside our CPU benchmark, and the CPU tests drew on many of the learnings from that GPU research. For GPU testing, we found Destiny 2 to be remarkably consistent between multiplayer and campaign performance, scaling all the way down to a 1050 Ti. This remained true across the campaign, which performed largely identically across all levels, aside from a single level with high geometric complexity and heavy combat. We’ll recap some of that below.

For CPU benchmarking, GN’s Patrick Lathan used this research (starting one hour after the GPU bench began) to begin CPU tests. We ultimately found more test variance between CPUs – particularly at the low-end – when switching between campaign and multiplayer, and so much of this content piece will be dedicated to the research portion behind our Destiny 2 CPU testing. We cannot yet publish this as a definitive “X vs. Y CPU” benchmark, as we don’t have full confidence in the comparative data given Destiny 2’s sometimes nebulous behaviors.
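
To make “test variance” concrete, here’s a minimal sketch of how run-to-run spread can be quantified across repeated benchmark passes – the FPS figures in it are placeholders, not our actual Destiny 2 results:

```python
# Sketch of quantifying run-to-run variance: mean and spread of average FPS
# across repeated benchmark passes. The numbers below are placeholders, not
# actual Destiny 2 results.
from statistics import mean, stdev

campaign_passes = [142.1, 140.8, 141.5, 142.0]      # hypothetical avg FPS per pass
multiplayer_passes = [138.2, 131.6, 140.9, 127.4]   # hypothetical avg FPS per pass

for label, passes in (("campaign", campaign_passes), ("multiplayer", multiplayer_passes)):
    spread_pct = 100 * stdev(passes) / mean(passes)
    print(f"{label}: mean {mean(passes):.1f} FPS, stdev {stdev(passes):.1f} ({spread_pct:.1f}%)")
```

When the multiplayer spread is several times larger than the campaign spread, as in the placeholder numbers above, a single-pass “X vs. Y CPU” comparison isn’t trustworthy – which is the situation we’re describing.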

For one, Destiny 2 doesn’t utilize SMT on Ryzen, producing utilization charts like this:
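
For readers who want to reproduce that kind of chart themselves, here’s a minimal sketch of logging per-logical-processor utilization – assuming Python 3 with psutil installed; the interval and CSV layout are illustrative choices, not the tooling behind our charts:

```python
# Minimal sketch: log per-logical-processor utilization to CSV while a game runs.
# Assumes Python 3 with psutil installed; the 1-second interval and file name
# are illustrative, not the exact tooling used for these charts.
import csv
import time
import psutil

DURATION_S = 120          # how long to sample
INTERVAL_S = 1.0          # sampling interval

with open("cpu_utilization.csv", "w", newline="") as f:
    writer = csv.writer(f)
    ncpu = psutil.cpu_count(logical=True)
    writer.writerow(["elapsed_s"] + [f"cpu{i}" for i in range(ncpu)])

    start = time.time()
    while time.time() - start < DURATION_S:
        # percpu=True returns one utilization figure per logical processor,
        # which is what exposes SMT usage (or the lack of it) in a chart.
        per_cpu = psutil.cpu_percent(interval=INTERVAL_S, percpu=True)
        writer.writerow([round(time.time() - start, 1)] + per_cpu)
```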

The Destiny 2 beta’s arrival on PC provides a new benchmarking opportunity for GPUs and CPUs, and will allow us to plot performance uplift once the final game ships. Aside from the beta’s popularity, we also want to know whether Bungie, AMD, and nVidia further improve performance in the final stretch before the official October 24 launch date. For now, we’re conducting an exploratory benchmark of multiplayer versus campaign test patterns for Destiny 2, quality settings, and multiple resolutions.

A few notes before beginning: This is a beta, first off, and everything is subject to change. We’re ultimately testing this as it pertains to the beta, but using that experience to learn more about how Destiny 2 behaves so that we’re not surprised at its release. Some of this testing is to learn about settings’ impact on performance (including some unique behavior between “High” and “Highest”), multiplayer vs. campaign performance, and level performance. Note also that drivers will iterate and will likely change for the final release, although nVidia and AMD both recommended their respective drivers for this test (385.41 and 17.8.2). AMD in particular needs a more Destiny-specific driver, based on our testing, so keep in mind that performance metrics are in flux ahead of the final launch.

Note also: Our Destiny 2 CPU benchmark will be up not long after this content piece. Keep an eye out for that one.

We’re revisiting an old topic. A few years ago, we posted an article entitled “How Many Watts Does a Gaming PC Really Need,” which focused on testing multiple configurations for power consumption. We started working on this revisit last week, using a soon-to-be-released Bronze 450W PSU as a baseline, seeing as we’ve recently advocated for more 400-450W PSUs in PC builds. We'll be able to share more about this PSU (and its creator and name) soon. This content piece shows how far we can get on lower wattage PSUs with modern hardware.

Although we’ll be showing an overclocked 7700K + GTX 1080 FTW as the high-end configuration, we’d recommend going higher than 450W for that particular setup. It is possible to run on 450W, but we begin pushing the continuous load on the PSU to a point of driving up noise levels (from the PSU fan) and abusing the power supply. There’s also insufficient headroom for 100% GPU / 100% CPU workloads, though that scenario should be uncommon for most of our audience. Most of the forum builds we see host PSUs in the 700-800W+ range, which is overkill for most modern gaming PCs. You’d want the higher capacity for something like Threadripper or X299, but those are HEDT platforms. For gaming platforms, power requirements largely stop around 600W, sans serious overclocking, and most systems can get by on less than that.
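
As a rough sketch of that headroom math – the wattage figures below are ballpark estimates for this class of hardware, not measurements from this article:

```python
# Rough headroom math for a 450W PSU. All wattages are ballpark estimates for
# an overclocked 7700K + GTX 1080 FTW class system, not measured values.
psu_capacity_w = 450

estimated_draw_w = {
    "CPU (7700K, overclocked)": 120,
    "GPU (GTX 1080 FTW, gaming load)": 215,
    "Motherboard, RAM, drives, fans": 50,
}

total_w = sum(estimated_draw_w.values())
headroom_w = psu_capacity_w - total_w
load_pct = 100 * total_w / psu_capacity_w

print(f"Estimated gaming draw: ~{total_w} W")
print(f"Headroom on a {psu_capacity_w} W unit: ~{headroom_w} W ({load_pct:.0f}% load)")
# ~385 W of ~450 W (~86% load) leaves little margin for a 100% CPU + 100% GPU
# torture workload, which is why we'd recommend stepping up for this config.
```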

Since AMD’s high-core-count Ryzen lineup has entered the market, there seems to be an argument in every comment thread about multitasking and which CPUs handle it better. Our clean, controlled benchmarks don’t account for the demands of eighty browser tabs and Spotify running, and so we get constant requests to do in-depth testing on the subject. The general belief is that more threads are better able to handle more processes, a hypothesis that would increasingly favor AMD.

There are a couple reasons we haven’t included tests like these all along: first, “multitasking” means something completely different to every individual, and second, adding uncontrolled variables (like bloatware and network-attached software) makes tests less scientific. Originally, we hoped this article would reveal any hidden advantages that might emerge between CPUs when adding “multitasking” to the mix, but it’s ended up as a thorough explanation of why we don’t do benchmarks like this. We’re primarily using the R3 1200 and G4560 to run these trials.

This is the kind of testing we do behind the scenes to build a new test plan, but often don’t publish. This time, however, we’re publishing the trials of finding a multitasking benchmark that works. The point of publishing the trials is to demonstrate why it’s hard to trust “multitasking” tests, and why it’s hard to conduct them in a manner that’s representative of actual differences.

In listening to our community, we’ve learned that a lot of people seem to think Discord is multitasking, or that a Skype window is multitasking. Here’s the thing: If you’re running Discord and a game and you’re seeing an impact to “smoothness,” there’s something seriously wrong with the environment. That’s not even remotely close to enough of a workload to trouble even a G4560. We’re not looking at such a lightweight workload here, and we’re also not looking at the “I keep 100 tabs of Chrome open” scenarios, as that’s wholly unreliable given Chrome’s unpredictable caching and behaviors. What we are looking at is 4K video playback while gaming and bloatware while gaming.

In this piece, the word “multitasking” will be used to describe “running background software while gaming.” The term "bloatware" is being used loosely to easily describe an unclean operating system with several user applications running in the background.
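
To keep that background load repeatable between passes, it can be scripted. Here’s a minimal sketch along those lines – the application paths are hypothetical placeholders, not our exact test suite:

```python
# Sketch of standing up a repeatable "background software while gaming" load.
# The application paths are hypothetical placeholders; the point is that the
# background set launches the same way every pass, then overall CPU load is
# sampled to confirm it's doing meaningful work before the game test starts.
import subprocess
import time
import psutil

BACKGROUND_APPS = [
    [r"C:\Program Files\VideoLAN\VLC\vlc.exe", r"C:\testing\sample_4k.mkv"],  # 4K video playback
    [r"C:\testing\bloatware\vendor_updater.exe"],                             # stand-in "bloatware"
    [r"C:\testing\bloatware\rgb_control_suite.exe"],
]

# Keep the handles around so the background apps can be terminated after the game pass.
procs = [subprocess.Popen(cmd) for cmd in BACKGROUND_APPS]

time.sleep(10)  # let the background load settle
baseline = psutil.cpu_percent(interval=5)  # system-wide average over 5 seconds
print(f"Background-only CPU load: {baseline:.0f}%")
# If this number is in the single digits, the "multitasking" load isn't heavy
# enough to plausibly affect an in-game benchmark on any modern CPU.
```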

Ask GN serves as a consistent format in our video series, now 57 episodes strong. We are still filming at a pace of roughly one Ask GN episode every 1-2 weeks, so if you’ve got questions, be sure to submit them in the YT comments section.

This week’s episode gives a brief break from the deeper overclocking, undervolting, and benchmarking topics of late. We briefly visit Pascal temperature response observations and a user’s chipped GPU (and our own tech battle scars), and talk about monitor overclocking. The article referenced during the monitor OC section can be found here.

Creative Assembly has been busy with the Total War: Warhammer franchise lately. The second game of the planned trilogy is coming on September 28th, and in preparation a host of updates and bug fixes have been added to the original game, along with the new Norsca DLC faction.

One part of these updates was quietly replacing the default benchmark packaged with the game, which we’ve regularly included in our current cycle of CPU reviews. It was a short snippet of a battle between greenskin and Imperial armies, shot mostly from above, with some missile trails and artillery thrown in. Its advantages were that it was fairly CPU-intensive, came from a modern game that people are still interested in, and was extremely easy to run (as it is automated).

Variations of “HBM2 is expensive” have floated around the web since well before Vega’s launch – since Fiji, really, with the first wave of HBM – without many concrete numbers behind the expression. AMD isn’t using HBM2 just because it’s “shiny” and sounds good in marketing, but because the Vega architecture is bandwidth-starved to the point that HBM is necessary. That’s an expensive necessity, unfortunately, and it chews away at margins, but AMD really had no choice in the matter. The company’s standalone MSRP structure for Vega 56 positions it competitively with the GTX 1070, carrying comparable performance, memory capacity, and target retail price, assuming things calm down for the entire GPU market at some point. Given HBM2’s higher cost and Vega 56’s bigger die, that leaves little room for AMD to profit compared to GDDR5 solutions. That’s what we’re exploring today, alongside why AMD had to use HBM2.

There are reasons that AMD went with HBM2, of course – we’ll talk about those later in the content. A lot of folks have asked why AMD can’t “just” use GDDR5 with Vega instead of HBM2, thinking that you just swap modules, but there are complications that make this impossible without a redesign of the memory controller. Vega is also bandwidth-starved to a point of complication, which we’ll walk through momentarily.
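
As a back-of-the-envelope illustration of the bandwidth side – the per-pin data rates below are approximate and vary by SKU:

```python
# Back-of-the-envelope memory bandwidth math. Per-pin data rates are approximate
# and vary by SKU; the point is the bus width required, not exact GB/s figures.
def bandwidth_gbps(bus_width_bits, data_rate_gbps_per_pin):
    """Peak bandwidth in GB/s = bus width (bits) x per-pin rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# Vega 56: two HBM2 stacks = 2048-bit bus at roughly 1.6 Gbps per pin.
hbm2_vega56 = bandwidth_gbps(2048, 1.6)        # ~410 GB/s

# GTX 1070-class GDDR5: 256-bit bus at 8 Gbps per pin.
gddr5_256bit = bandwidth_gbps(256, 8.0)        # ~256 GB/s

# Bus width GDDR5 at 8 Gbps would need to match the HBM2 figure:
required_bits = hbm2_vega56 * 8 / 8.0          # ~410 bits -> a 512-bit controller in practice

print(f"Vega 56 HBM2: ~{hbm2_vega56:.0f} GB/s")
print(f"256-bit GDDR5 @ 8 Gbps: ~{gddr5_256bit:.0f} GB/s")
print(f"GDDR5 bus width needed to match: ~{required_bits:.0f} bits")
```

Hitting HBM2-class bandwidth with GDDR5 would mean a roughly 512-bit controller, with the die area, PCB routing, and power cost that implies – part of why a straight module swap isn’t on the table.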

Let’s start with prices, then talk architectural requirements.

Jon Peddie Research reports that the add-in board GPU market has increased 30.9% over last quarter and 34.9% year-to-year, largely thanks to the recent cryptocurrency mining craze.

Regardless of the exact numbers, it’s obvious to anyone that’s checked graphics card prices recently that something unusual is happening. JPR states that Q2 usually sees a “significant drop” in the market (average -9.8%), with the most action happening around the holiday season. This Q2, the market has increased for the first time in nine years. This is despite a general decline in the PC market, as demand for the industry’s bread-and-butter general purpose (non-gaming) PCs has dropped.

Hardware Sales: EVGA 450W PSU $10, HTC Vive $599

Published August 23, 2017 at 10:12 pm in Sales

This week, some decent hardware and peripheral sales are ongoing and may help if you’re looking to upgrade your current system or gaming setup. We found discounted prices on a 450W PSU from EVGA, the H100i V2 CLC from Corsair, a GTX 1080 from Gigabyte, the HTC Vive at its new lower price, and a 120GB SSD from ADATA.
