Singularity Computers, best known for its exorbitantly priced, high-end custom PC builds, has been steadily expanding its brand across the landscape of PC hardware, with cases as its latest foray. Under a new subsidiary, Singularity Cases, the company is set to release its inaugural case: the Spectre.
EK Waterblocks makes some of our favorite quick-release valves, but its previous attempt at a semi-open loop liquid cooler – the EK Predator – was terminated after an overwhelming number of leakage issues. It was a shame, too, because the Predator was one of the best coolers we’d tested for noise-normalized performance. Ultimately, though, if it can’t hold water, the rest is irrelevant.
EK is attempting to redeem itself with the modular, semi-open approach set forth in the new EK-MLC Phoenix series. A viewer recently loaned us the EK-MLC Phoenix 360mm cooler and Intel CPU block ($200 for the former, $80 for the latter), which we immediately put to work on the bench. This review looks at the EK-MLC Phoenix 360mm radiator and CPU cooling block, primarily contending against closed-loop liquid coolers (like the H150i Pro and X62) and EK's own Fluid Gaming line.
CPUs with integrated graphics always make memory interesting. Memory’s commoditization, ignoring recent price trends, has made it an item where you sort of pick what’s cheap and just buy it. With something like AMD’s Raven Ridge APUs, that memory choice can have a lot more impact than it would in a budget gaming PC with a discrete GPU. We’ll be testing a handful of memory kits with the R5 2400G in today’s content, including single- versus dual-channel testing where all timings have been equalized. We’re also testing a few different motherboards with the same kit of memory, useful for determining how timings change between boards.
We’re splitting these benchmarks into two sections: First, we’ll show the impact of various memory kits on performance when tested on a Gigabyte Gaming K5 motherboard, and we’ll then move over to demonstrate how a few popular motherboards affect results when left to auto XMP timings. We are focusing on memory scalability performance today, with a baseline provided by the G4560 and R3 GT1030 tests we ran a week ago. We’ll get to APU overclocking in a future content piece. For single-channel testing, we’re benchmarking the best kit – the Trident Z CL14 3200MHz option – with one channel in operation.
Keep in mind that this is not a straight frequency comparison – e.g., not simply 2400MHz vs. 3200MHz – because timings change along with the kits; basically, we’re looking at the whole picture, not just frequency scalability. The idea is to see how XMP with stock motherboard timings (where relevant) impacts performance, not just straight frequency with controls, as that is likely how users would be configuring their systems.
We’ll show some of the memory/motherboard auto settings toward the end of the content.
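For context on why channel count matters so much for an APU (where the iGPU shares system memory bandwidth), a minimal sketch of the standard theoretical DDR4 bandwidth math – these are textbook peak figures, not measured results from this testing:

```python
# Peak theoretical DDR4 bandwidth = channels * bus width (8 bytes per
# 64-bit channel) * transfer rate. Figures are theoretical peaks only.

def ddr4_peak_bandwidth_gbs(channels, mt_per_s):
    """Theoretical peak bandwidth in GB/s for a 64-bit-per-channel DDR bus."""
    return channels * 8 * mt_per_s / 1000.0  # 8 bytes per transfer per channel

single_3200 = ddr4_peak_bandwidth_gbs(1, 3200)  # 25.6 GB/s
dual_3200 = ddr4_peak_bandwidth_gbs(2, 3200)    # 51.2 GB/s
```

Halving the channels halves the theoretical ceiling, which is why single-channel configurations tend to punish an iGPU far more than they punish a CPU paired with a discrete card.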
The past week has been abnormally packed with hardware news, with several heavy-hitter items from Intel and AMD partners alike. The headlining story highlights Intel's prototype dGPU unveil – something that we won't see more of for years, if at all – and talks Intel's initial plans for its dGPU component. This comes shortly after Intel's very public hiring of former RTG Chief Raja Koduri, who recently set to work on Intel's new dGPU division. It is likely that the prototype discussed has been in the works for a while, but Koduri's work will no doubt be visible in the coming years.
Other news items include the accidental publication of Intel Celeron CPUs by Newegg, including a new G49X0 series (G4920, G4900), and the non-K i5-8500 and i5-8600 CPUs. For AMD, we saw news reports about an upcoming EKWB Threadripper Monoblock for MSI motherboards, which should be useful in full loop scenarios where the VRM thermals must be controlled. Several other news items are also present in this round-up. Find the show notes below.
PlayerUnknown’s Battlegrounds was officially released on PC this past December, but it’s been playable via Steam Early Access for nearly a year now. In all that time, none of us have played the game, despite many requests for benchmarks. Games that are in active development don’t make for easy testing, and neither do exclusively multiplayer games with tons of variance. Even Overwatch has the ability to play against bots.
Now that PUBG is 1.0 on PC and sort-of-released on Xbox, though, we have extra motivation to buckle down and start testing. We chose to start with the Xbox One X version, since the lack of graphics options makes things simpler. It’s listed as both 4K HDR ready and Xbox One X Enhanced, so our primary testing was done at 4K, with additional Xbox One X benchmarking at 1080p for PUBG. Technically, it’s a “Game Preview,” but the list of other titles in this category makes it look like something that was created expressly for PUBG. It also costs full PC price, $30.
This deep-dive looks at PUBG framerate and frametime performance (which is shockingly bad, even for a console), along with graphics analysis of the game’s visuals. Although the article covers testing and benchmarking in slightly more depth, we’d also strongly recommend watching the video, as it contains a visual representation of what’s happening in-game.
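For readers unfamiliar with how frametime data turns into the metrics quoted in reviews like this one, here’s a minimal sketch of the math behind average FPS and 1% / 0.1% low figures. The sample data below is invented purely for illustration (a hitchy ~30FPS capture), not taken from our PUBG logs:

```python
# Convert a log of frametimes (milliseconds per frame) into average FPS
# and 1% / 0.1% low FPS (average FPS of the slowest slices of frames).
# All figures below are made-up illustration data, not test results.

def frametime_metrics(frametimes_ms):
    """Return (avg_fps, 1%-low FPS, 0.1%-low FPS) from frametimes in ms."""
    ordered = sorted(frametimes_ms, reverse=True)  # slowest frames first
    avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
    n1 = max(1, len(ordered) // 100)    # slowest 1% of frames
    n01 = max(1, len(ordered) // 1000)  # slowest 0.1% of frames
    low1 = 1000.0 * n1 / sum(ordered[:n1])
    low01 = 1000.0 * n01 / sum(ordered[:n01])
    return avg_fps, low1, low01

# Mostly ~33.3ms frames (~30FPS) with ten 100ms hitches mixed in
sample = [33.3] * 990 + [100.0] * 10
avg, low1, low01 = frametime_metrics(sample)  # avg ≈ 29.4, lows = 10.0
```

The point of the low metrics is visible immediately: an average near 30FPS can hide hitches that drop the worst frames to the equivalent of 10FPS, which is exactly the sort of behavior frametime plots expose and bar-chart averages hide.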
The latest Ask GN brings us to episode #70. We’ve been running this series for a few years now, but the questions remain top-notch. For this past week, viewers asked about nVidia’s “Ampere” and “Turing” architectures – or the rumored ones, anyway – and what we know of the naming. For other core component questions, Raven Ridge received a quick note on out-of-box motherboard support and BIOS flashing.
Non-core questions pertained to cooling, like the “best” CLCs when normalizing for fans, or hybrid-cooled graphics VRM and VRAM temperatures. Mousepad engineering got an interesting offshoot discussion, for which we recruited engineers at Logitech for insight on how mouse sensors interact with surfaces.
More at the video below, or find our Patreon special here.
As part of our new and ongoing “Bench Theory” series, we are publishing a year’s worth of internal-only data that we’ve used to drive our 2018 GPU test methodology. We haven’t yet implemented the 2018 test suite, but will do so soon. The goal of this series is to help viewers and readers understand what goes into test design, and we aim to underscore the level of accuracy that GN demands for its publication. Our first information dump focused on benchmark duration, addressing when it’s appropriate to use 30-second runs, 60-second runs, and more. As we stated in the first piece, we ask that any content creators leveraging this research in their own testing properly credit GamersNexus for its findings.
Today, we’re looking at standard deviation and run-to-run variance in tested games. Games on bench cycle regularly, so the purpose is less for game-specific standard deviation (something we’re now addressing at game launch) and more for an overall understanding of how games deviate run-to-run. This is why conducting multiple, shorter test passes (see: benchmark duration) is often preferable to conducting fewer, longer passes; after all, we are all bound by the laws of time.
Looking at statistical dispersion can help us understand whether a game itself is accurate enough for hardware benchmarks. If a game is inaccurate or varies wildly from one run to the next, we have to look at whether that variance is driver-, hardware-, or software-related. If it’s just the game, we must then ask the philosophical question of whether it’s the game we’re testing, or the hardware. Sometimes, testing a game with highly variable performance can still be valuable – primarily if it’s a game people want to play, like PUBG, despite its questionable performance. Other times, the game should be tossed. If the goal is a hardware benchmark and a game is behaving in outlier fashion, and is also largely unplayed, then it becomes suspect as a test platform.
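The dispersion check described above can be sketched in a few lines. This is a minimal illustration of the statistics involved (sample standard deviation and coefficient of variation across repeated passes), with invented FPS figures rather than our actual benchmark data:

```python
import statistics

# Given repeated benchmark passes of the same game on the same hardware,
# compute the mean, sample standard deviation, and coefficient of
# variation (CV) to judge run-to-run consistency. Data is illustrative.

def run_variance(fps_runs):
    """Return (mean, stdev, CV%) for a list of per-run average FPS values."""
    mean = statistics.mean(fps_runs)
    stdev = statistics.stdev(fps_runs)  # sample (n-1) standard deviation
    return mean, stdev, 100.0 * stdev / mean

stable_game = [120.1, 119.8, 120.4, 119.9]  # tight spread: usable platform
variable_game = [95.0, 108.2, 88.6, 112.4]  # wild spread: suspect platform

stable = run_variance(stable_game)      # CV well under 1%
variable = run_variance(variable_game)  # CV over 10%
```

A CV under roughly 1% means a few-FPS delta between two GPUs is a real signal; a CV of 10%+ means that same delta is indistinguishable from noise, and either more passes or a different test game is required.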
We already have a dozen or so content pieces showing that delidding can improve thermal performance of Intel CPUs significantly, but we’ve always put the stock Intel IHS back in place. Today, we’re trying a $20 accessory – it’s a CNC-machined copper IHS from Rockit Cool, which purportedly increases surface area by 15% and smooths out points of contact. Intel’s stock IHS is a nickel-plated copper block, but is smaller in exposed surface area than the Rockit Cool alternative. The Intel IHS is also a non-flat surface – some coldplates are made concave to match the convex curvature of the Intel IHS (depending on your perspective of the heat spreader, granted), whereas the Rockit Cool solution is nearly perfectly flat. Most coolers have some slight conformity to mounting tension, flattening out coldplates atop a non-flat CPU IHS. For this reason and the increased surface (and contact) area, it was worth trying Rockit Cool’s solution.
At $14 to $20, the cost of entry is low. Today, we’re looking at whether there’s any meaningful thermal improvement from a custom copper IHS for Intel CPUs, using an i7-8700K and Rockit Cool’s LGA115X heat spreader.
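As a rough first-principles framing of why the extra area might matter – this is a back-of-envelope sketch, not GN’s test methodology, and every number in it is an assumed placeholder – interface thermal resistance scales inversely with contact area, so a 15% larger area would cut that resistance term by about 13%, all else equal:

```python
# Back-of-envelope only: model one thermal interface as R = 1/(h*A).
# h (effective conductance) and the stock area are assumed placeholder
# values; only the *relative* change from +15% area is meaningful here.

def contact_resistance(h, area):
    """Thermal resistance of a single interface, R = 1/(h*A), in K/W."""
    return 1.0 / (h * area)

h = 10000.0          # assumed effective interface conductance, W/(m^2*K)
stock_area = 1.0e-3  # assumed stock IHS contact area, m^2 (illustrative)

r_stock = contact_resistance(h, stock_area)
r_rockit = contact_resistance(h, stock_area * 1.15)  # +15% surface area
reduction = 100.0 * (1 - r_rockit / r_stock)         # ~13% lower resistance
```

In practice the IHS-to-coldplate interface is only one resistance in the stack (die, TIM, coldplate, fins all add their own), so the real-world delta should be smaller than 13% – which is exactly what the thermal testing is for.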
Our colleagues at Hardware Unboxed have posted a set of Intel thermal tests on the new 8th Generation CPUs, looking into thermal throttling and boost behavior under thermal load. This is similar to much of our work in case and cooler reviews, where we often demonstrate over-time plots of long burn-ins (30-120 minutes), useful for determining where a CPU may taper-off in clock speed. In Hardware Unboxed's testing, additional benchmarks were performed on the Intel stock cooler and two aftermarket coolers, used in a multitude of CPU benchmarks.
For the portion of our audience that has liked our thermal discussion pieces (most of you, these days), we think HUB's work is worth seeing. It's a good demonstration of where and when thermal throttling might occur on a CPU, and helps to address the question of less-than-ideal thermal conditions for CPU benchmarking.
Delidding the AMD R3 2200G wasn’t as clean as using pre-built tools for Intel CPUs, but we have a separate video that’ll show the delid process to expose the APU die. The new APUs use thermal paste, rather than AMD’s usual solder, which is likely a cost-saving measure for the low-end parts. We ran stock thermal tests on our 2200G using the included cooler and a 280mm X62 liquid cooler, then delidded it, applied Thermal Grizzly Conductonaut liquid metal, and ran the tests again. Today, we’re looking at that thermal test data to determine what sort of headroom we gain from the process.
Delidding the AMD R3 2200G is the same process as for the 2400G, and liquid metal application follows the same guidelines as for Intel CPUs. This isn’t something we recommend for the average user. As far as we’re aware, one of Der8auer’s delid kits does work for Raven Ridge, but we went the vise-and-razor route. This approach, as you might expect, is a bit riskier to the health of the APU. It wouldn’t be difficult to slide the blade too far and destroy a row of SMDs (surface-mount devices), so we’d advise against following our example unless you’re willing to risk the investment.