Steve Burke

Steve started GamersNexus back when it was just a cool name, and now it's grown into an expansive website with an overwhelming number of features. He recalls his first difficult decision with GN's direction: "I didn't know whether or not I wanted 'Gamers' to have a possessive apostrophe -- I mean, grammatically it should, but I didn't like it in the name. It was ugly. I also had people who were typing apostrophes into the address bar - sigh. It made sense to just leave it as 'Gamers.'"

First world problems, Steve. First world problems.

This newest episode of Ask GN furthers our latest video discussion, where we demonstrated how to kill a motherboard VRM with zealous overclocking, found here. The continuation of this discussion (in Ask GN) starts with questions positing whether it's possible to damage GPU or CPU components from overdone undervolting, and then follows up by asking whether overclocking just the memory (on a GPU) can somehow "hurt" the memory.

In addition to this, we take questions on GPU heatsink damage, liquid metal ongoing maintenance and nail polish/electrical tape application for SMD protection, personal triumphs/failures in hardware, and case airflow. Floatplane is also briefly mentioned.

More below.

EK Waterblocks makes some of our favorite quick release valves, but their previous attempt at a semi-open loop liquid cooler – the EK Predator – was terminated after an overwhelming number of leakage issues. It was a shame, too, because the Predator was one of the best-performing coolers we’d tested for noise-normalized performance. Ultimately, if it can’t hold water, it’s all irrelevant.

EK is attempting to redeem itself with the modular, semi-open approach set forth with the new EK-MLC Phoenix series. A viewer recently loaned us the EK-MLC Phoenix 360mm cooler and Intel CPU block ($200 for the former, $80 for the latter), which we immediately put to work on the bench. This review looks at the EK-MLC Phoenix 360mm radiator and CPU cooling block, primarily contending against closed-loop liquid coolers (like the H150i Pro and X62) and EK's own Fluid Gaming line.

CPUs with integrated graphics always make memory interesting. Memory’s commoditization, ignoring recent price trends, has made it an item where you sort of pick what’s cheap and just buy it. With something like AMD’s Raven Ridge APUs, that memory choice could have a lot more impact than it would in a budget gaming PC with a discrete GPU. We’ll be testing a handful of memory kits with the R5 2400G in today’s content, including single- versus dual-channel testing where all timings have been equalized. We’re also testing a few different motherboards with the same kit of memory, useful for determining how timings change between boards.

We’re splitting these benchmarks into two sections: First, we’ll show the impact of various memory kits on performance when tested on a Gigabyte Gaming K5 motherboard, and we’ll then move over to demonstrate how a few popular motherboards affect results when left to auto XMP timings. We are focusing on memory scalability performance today, with a baseline provided by the G4560 and R3 GT1030 tests we ran a week ago. We’ll get to APU overclocking in a future content piece. For single-channel testing, we’re benchmarking the best kit – the Trident Z CL14 3200MHz option – with one channel in operation.

Keep in mind that this is not a straight frequency comparison, e.g. not a 2400MHz vs. 3200MHz comparison. That’s because we’re changing timings along with the kits; basically, we’re looking at the whole picture, not just frequency scalability. The idea is to see how XMP with stock motherboard timings (where relevant) can impact performance, not just straight frequency with controls, as that is likely how users would be installing their systems.

We’ll show some of the memory/motherboard auto settings toward the end of the content.

The latest Ask GN brings us to episode #70. We’ve been running this series for a few years now, but the questions remain top-notch. For this past week, viewers asked about nVidia’s “Ampere” and “Turing” architectures – or the rumored ones, anyway – and what we know of the naming. For other core component questions, Raven Ridge received a quick note on out-of-box motherboard support and BIOS flashing.

Non-core questions pertained to cooling, like the “best” CLCs when normalizing for fans, or hybrid-cooled graphics VRM and VRAM temperatures. Mousepad engineering got something of an interesting offshoot, for which we recruited engineers at Logitech for insight on mouse sensor interaction with surfaces.

More at the video below, or find our Patreon special here.

As part of our new and ongoing “Bench Theory” series, we are publishing a year’s worth of internal-only data that we’ve used to drive our 2018 GPU test methodology. We haven’t yet implemented the 2018 test suite, but will be soon. The goal of this series is to help viewers and readers understand what goes into test design, and we aim to underscore the level of accuracy that GN demands for its publication. Our first information dump focused on benchmark duration, addressing when it’s appropriate to use 30-second runs, 60-second runs, and more. As we stated in the first piece, we ask that any content creators leveraging this research in their own testing properly credit GamersNexus for its findings.

Today, we’re looking at standard deviation and run-to-run variance in tested games. Games on bench cycle regularly, so the purpose is less for game-specific standard deviation (something we’re now addressing at game launch) and more for an overall understanding of how games deviate run-to-run. This is why conducting multiple, shorter test passes (see: benchmark duration) is often preferable to conducting fewer, longer passes; after all, we are all bound by the laws of time.

Looking at statistical dispersion can help understand whether a game itself is accurate enough for hardware benchmarks. If a game is inaccurate or varies wildly from one run to the next, we have to look at whether that variance is driver-, hardware-, or software-related. If it’s just the game, we must then ask the philosophical question of whether it’s the game we’re testing, or if it’s the hardware we’re testing. Sometimes, testing a game that has highly variable performance can still be valuable – primarily if it’s a game people want to play, like PUBG, despite having questionable performance. Other times, the game should be tossed. If the goal is a hardware benchmark and a game is behaving in outlier fashion, and also largely unplayed, then it becomes suspect as a test platform.
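The dispersion check described above can be sketched with a few lines of code. This is a minimal illustration, not GN's actual tooling: the FPS numbers below are hypothetical, and the coefficient of variation (standard deviation as a percentage of the mean) is used so that games with different framerates can be compared on the same scale.

```python
import statistics

def run_variability(fps_runs):
    """Summarize run-to-run variance for a set of benchmark passes.

    Returns the mean FPS, the sample standard deviation, and the
    coefficient of variation (stdev as a percentage of the mean).
    """
    mean = statistics.mean(fps_runs)
    stdev = statistics.stdev(fps_runs)  # sample standard deviation (n-1)
    cv_percent = stdev / mean * 100
    return mean, stdev, cv_percent

# Hypothetical averaged FPS from four 30-second passes of two games:
stable_game = [142.1, 141.8, 142.5, 141.9]   # tight run-to-run spread
variable_game = [96.0, 104.5, 88.7, 101.2]   # PUBG-like variability

for name, runs in [("stable", stable_game), ("variable", variable_game)]:
    mean, stdev, cv = run_variability(runs)
    print(f"{name}: mean={mean:.1f} FPS, stdev={stdev:.2f}, CV={cv:.2f}%")
```

A game whose CV stays well under a percent or two across repeated passes is a trustworthy hardware benchmark; one that swings several percent run-to-run forces the question of whether the game or the hardware is being measured.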

