Internet cafes and gaming centers probably aren’t a market segment most readers in the US would recognize, but they’re popular in other parts of the world -- particularly Asia -- and ASUS appears to target that segment with the purpose-built Expedition A320M Gaming motherboard.
The entry-level AM4 board uses the low-end A320 chipset and offers features geared toward the rigors of crowded public places, such as iCafes and libraries. One such feature is a moisture-resistant coating on the motherboard, intended to protect against high-humidity environments. This is particularly useful in places like Taiwan, where humidity is high enough to cause corrosion on some components (corrosion we’ve seen in person, no less). Additionally, the board has anti-theft features to help curb theft of memory modules and GPUs.
The AM5 Silent is a new case from manufacturer Sharkoon, with noise-damping material in place of the original AM5’s acrylic side window -- but it’s far from a new chassis.
After our Antec P8 review back in September, readers were quick to point out that the chassis (meaning the steel core of the case) was curiously similar to the Silverstone Redline 05; in fact, the two appear to be completely identical outside of the P8’s tempered glass and the RL05’s generously ventilated front panel.
Our Destiny 2 GPU benchmark highlighted massive performance uplift versus the beta on some devices, upwards of 50% on Vega, but was conducted in largely GPU-constrained scenarios. For this content piece, we’ll be exploring the opposite: CPU-constrained scenarios to benchmark Destiny 2 performance on AMD Ryzen and Intel Kaby/Coffee Lake parts, including the R7 1700, R5 1600X, R3 1200, and i7-7700K, i5-7600K, i3-8350K, and G4560.
Most of our test notes have already been recapped in the GPU benchmark, and won’t be fully repeated. Again, we ran a wide spread of tests during the beta, which will be informing our analysis for the Destiny 2 launch benchmarks. Find the previous content below:
As stated in the video intro, this benchmark contains some cool data that was exciting to work with. We don’t normally accumulate enough data to run historical trend plots across various driver or game revisions, but our initial Destiny 2 pre-launch benchmarks enabled us to compare that data against the game’s official launch. Bridging our pre-launch beta benchmarks with similar testing methods for the Destiny 2 PC launch, including driver changes, makes it easier to analyze the deviation between CPU, driver, and game code optimizations.
Recapping the previous tests, we already ran a wide suite of Destiny 2 benchmarks that included performance scaling tests in PvP multiplayer, campaign/co-op multiplayer, and various levels/worlds in the game. Find some of that content below:
- Destiny 2 Beta GPU Benchmark (+ graphics optimization guide, PvP scalability)
- Destiny 2 Beta CPU Benchmark (soon replaced by our Destiny 2 launch CPU bench)
- Destiny 2 texture comparison
NOTE: Our Destiny 2 CPU benchmark is now live.
Some of our original graphics optimization work also carried forward, allowing us to better pinpoint Depth of Field on Highest as one of the major culprits behind AMD’s reduced performance. This has changed somewhat with launch, as you’ll find below.
We’re sticking with FXAA for testing. Bungie ended up removing MSAA entirely, as the technique has been buggy since the beta, and left only SMAA and FXAA in its place.
AMD’s High-Bandwidth Cache Controller protocol is one of the keystones to the Vega architecture, marked by RTG lead Raja Koduri as a personal favorite feature of Vega, and highlighted in previous marketing materials as offering a potential 50% uplift in average FPS when in VRAM-constrained scenarios. With a few driver revisions now behind us, we’re revisiting our Vega 56 hybrid card to benchmark HBCC in A/B fashion, testing in memory-constrained scenarios to determine efficacy in real gaming workloads.
Wolfenstein II: The New Colossus is launching this Friday, and Bethesda has now published the final minimum and recommended specs. Bethesda is touting some PC-focused features like uncapped framerates (as we saw in the Destiny 2 beta, this can also mean “capped above 144”), choice of aspect ratio (4:3, 16:9, 16:10, or 21:9 ultrawide), an FOV slider (70-120), and 4K support.
The New Colossus will use the Vulkan API, following in the footsteps of the notoriously well-optimized DOOM reboot. In our DOOM testing more than a year ago, AMD’s RX 480 benefited strongly from using Vulkan rather than OpenGL, as did NVIDIA’s 1080 to a lesser degree. Vega is specifically mentioned in this release, and Bethesda claims that with Vulkan they’ve been able to “utilize the power of AMD's Vega graphics chips in ways that were not possible before.” We’ll be publishing GPU tests as soon as possible.
From Bethesda’s site:
The latest report out of TrendForce and DRAMeXchange indicates that already-high DRAM prices will continue to climb through 2018. The original shortages were attributed to fallout from this year’s major Samsung and iPhone launches, but new information points toward a slow-down in production from the big three memory manufacturers (Samsung, Micron, SK Hynix). The three companies claim to be running R&D efforts for future technologies, but the fact that all three slow-downs coincide conveniently means that each can continue to enjoy exceptionally high margins into the future.
This Ask GN episode discusses tube orientation on radiators & coolers (top vs. bottom orientation) and why it matters, AIO headers on motherboards (like the Crosshair Hero VI), case testing methods, and streaming PC builds.
The last question is an interesting one, and one we've pondered for a bit: As we've shown in our streaming + gaming tests on a single system, it might make more sense to build two separate PCs, both of lower total cost, and run one of them as a standalone capture box. This takes more room (and probably more power), but would resolve concerns about frametime variability on the player side and could potentially cost less than 8700K/R7 builds. We'll look into adding this to our test methods, but for now, we tackle the question in the video:
The Windows 10 Fall Creators Update (FCU) has reportedly provided performance uplift under specific usage scenarios, most of which center around GPU-bound scenarios with Vega 56 or similar GPUs. We know with relative certainty that FCU has improved performance stability and frametime consistency with adaptive synchronization technologies – G-Sync and FreeSync, mostly – and that there may be general GPU-bound performance uplift. Some of this could come down to driver hooks and implementation in Windows, some of it could be GPU- or architecture-specific. What we haven’t seen much of is CPU-bound tests, attempting to isolate the CPU as the device under test (DUT) for benchmarking.
These tests look at AMD Ryzen R7 1700 (stock) performance in Windows 10 Creators Update (build 1703, ending in 608) versus Windows 10 Fall Creators Update. Our testing can only speak for our testing, as always, and we cannot reasonably draw conclusions across the hardware stack with these benchmarks. The tests are representative of the R7 1700 in CPU-bound scenarios, created by using a GTX 1080 Ti FTW3. Because this is a 1080 Ti FTW3, we have two additional considerations for possible performance uplift (neither of which will be represented herein):
- As an NVIDIA GPU, it is possible that driver/OS behavior will differ from that of an AMD GPU
- As a 1080 Ti FTW3, it is possible and likely that GPU-bound performance – which we aren’t testing – would exhibit uplift where this testing does not
Our results are not conclusive of the entirety of FCU, and cannot be used to draw wide-reaching conclusions about multiple hardware configurations. Our objective is to start pinpointing performance uplift, and from what combination of components that uplift can be derived. Most reports we have seen have spotted uplift with 1070 or Vega 56 GPUs, which would indicate GPU-bound performance increases (particularly because said reports show bigger gains at higher resolutions). We also cannot yet speak to performance change on Intel CPUs.
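For readers curious how frametime-derived metrics like average FPS and 1% lows are computed when comparing builds like these, here is a minimal sketch. The function name and the sample frametime values are our own illustrations, not GN's actual analysis pipeline.

```python
def fps_metrics(frametimes_ms):
    """Return (average FPS, 1% low FPS) from per-frame render times in ms.

    Illustrative only: real benchmark pipelines differ in how they bin
    and average the slowest frames.
    """
    n = len(frametimes_ms)
    # Average FPS: total frames divided by total elapsed time in seconds.
    avg_fps = n / (sum(frametimes_ms) / 1000.0)
    # 1% lows: average FPS across the slowest 1% of frames (longest frametimes),
    # which captures stutter that a plain average hides.
    worst = sorted(frametimes_ms, reverse=True)[:max(1, n // 100)]
    low_1pct_fps = len(worst) / (sum(worst) / 1000.0)
    return avg_fps, low_1pct_fps

# Hypothetical capture: 99 frames at 10 ms plus one 50 ms spike.
avg, low = fps_metrics([10.0] * 99 + [50.0])
# The spike barely moves the average (~96 FPS) but drags the 1% low to 20 FPS.
```

A run with a single long frame illustrates why lows matter: the average stays near 96 FPS while the 1% low drops to 20 FPS, which is the kind of frametime inconsistency an OS or driver change can affect without shifting averages.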
Our newest video leverages years of data to make a point about the case industry: Thermal testing isn't just a source of nitpicking or discussion -- it has actual ramifications for frequency response, power consumption/leakage, and even gaming performance. The current direction of case design has spiraled into trends that actively worsen system performance. This is a regular cycle, to some extent, where the industry experiments with new design elements and trends -- like tempered glass and RGB lights -- and then culls the worst of the implementations. It's time for the industry to make its scheduled, pendulous swing back toward performance, though, and better accommodate thermals that prevent frequency decay on modern GPUs (which are sensitive to temperature swings).
This is a video-only format, for today. Although the content starts with a joke, the video makes use of charts from the past year or two of case testing that we've done, highlighting the most egregious instances of a case impacting performance of the entire system. We hope that the case manufacturers consider thermals with greater importance moving forward. The video makes the point, but also highlights that resolving poor case design with faster fans will negate any "silent" advantage that a case claims to offer. Find all of that below: