Enermax's Liqtech TR4 liquid cooler took us by surprise in our 240mm unit review, and again in our Liqtech 360 TR4 review. The cooler is the first noteworthy closed-loop liquid cooler to accommodate Threadripper, and testing proved that it's not just smoke and mirrors: The larger coldplate enables the Liqtech to outperform current-market Asetek CLCs, whose smaller coldplates are better suited to Ryzen or Intel CPUs.
Early reports surrounding Vega GPU packaging indicated at least two different packaging processes; later reports revealed a potential third. The two primary forms of Vega GPU packaging show clear, obvious differences in assembly: The silicon (GPU + HBM) is either encased in an epoxy resin (“molded”) or is not encased at all (“resinless”). Another type of resinless package has been shown online, but we haven’t yet encountered this third type.
The initial concern was that the packaging process could impact HBM2 contact with cooler coldplates – something for which, after working on this content, we later discovered new data – and we wanted to test that mounting pressure. Just last night, days after we finalized this piece, we found another data point that deserves a separate article, so be sure to check back for the follow-up.
In the meantime, we’re using a chemically reactive contact paper to test various Vega GPUs and vapor chambers or coolers, then swapping coolers between those GPUs to try to understand if and when differences emerge. Some brief thermal testing also helps us validate whether those differences, which would theoretically be spurred on by packaging variance, are actually relevant to thermal performance. Today, we’re testing the mounting pressure and thermal impact of AMD’s various Vega 56 & 64 GPU packages, with a brief resurrection of the Frontier Edition.
Note: We used torque drivers for the assembly, so that process was controlled for.
We’ve reviewed a lot of cases this year and have tested more than 100 configurations across our benchmark suite. We’ve seen some brilliant cases that have been marred by needless grasps at buzzwords, excellently designed enclosures that few talk about, and poorly designed cases that everyone talks about. Cases as a whole have gone through a lot of transformations this year, which may seem somewhat surprising, given that you’d think there are only so many ways to make a box. Today, we’re giving out awards for the best cases in categories of thermals, silence, design, overall quality, and more.
This awards show will primarily focus on the best cases that we’ve actually reviewed in the past year. If some case you like isn’t featured, it’s either because (A) we didn’t review it, or (B) we thought something else was better. It is impossible to review every single enclosure that is released annually; at least, it is impossible to do so without focusing all of our efforts on cases.
Here’s the shortlist:
Buildzoid of Actually Hardcore Overclocking recently joined us to explain what Load-Line Calibration is, and how LLC can be a useful tool for overclocking. LLC can also be dangerous to the life of the CPU if used carelessly, particularly when using the Extreme LLC setting without fully understanding how it works.
For anyone working on CPU overclocking and facing challenges with voltage stability, or anyone asking about Vdroop, LLC is a good place to start. Tuning LLC settings should help stabilize voltage and prevent blasting the CPU with deadly Vcore. Learn more below:
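To illustrate why Vdroop happens and how LLC counteracts it, here is a minimal load-line model. All voltages, currents, and load-line resistances below are hypothetical illustrations, not values for any specific board or LLC preset:

```python
# Illustrative load-line (Vdroop) model. All numbers are hypothetical,
# not taken from any particular motherboard's LLC implementation.
# Vout = Vset - I_load * R_LL; LLC effectively reduces R_LL.

def vcore_under_load(vset_v, i_load_a, r_ll_mohm):
    """Effective Vcore after droop across the load-line resistance."""
    return vset_v - i_load_a * (r_ll_mohm / 1000.0)

VSET = 1.35      # V, BIOS-requested Vcore (hypothetical)
I_LOAD = 120.0   # A, heavy all-core load (hypothetical)

# Hypothetical effective load-line resistance per LLC level:
llc_levels = {
    "LLC off (full Vdroop)": 1.6,  # mOhm
    "LLC medium":            0.8,
    "LLC extreme (~flat)":   0.0,
}

for name, r_ll in llc_levels.items():
    print(f"{name}: {vcore_under_load(VSET, I_LOAD, r_ll):.3f} V under load")
```

Flattening the load-line (Extreme LLC) removes the droop headroom that normally absorbs transient overshoot when load is released, which is part of why careless use can push Vcore past safe levels.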
Testing the Xbox One X for frametime and framerate performance marks an exciting step for GamersNexus. This is the first time we’ve been able to benchmark console frame pacing, and we’re doing so by deploying new, in-house software for analysis of lossless gameplay captures. At a top level, we’re analyzing the pixels temporally, aiming to determine whether there’s a change between frames. We then run checks to validate those numbers, followed by additional computation to derive framerates and frametimes. That’s the simplest, most condensed version of what we’re doing. Our Xbox One X tear-down set the stage for this.
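The core idea – detecting unique rendered frames by temporal pixel difference within a fixed-rate capture – can be sketched roughly as follows. This is an illustrative reconstruction, not our in-house software; the zero threshold, noise handling, and the validation passes mentioned above are assumptions:

```python
import numpy as np

def frametimes_from_capture(frames, capture_fps=60.0, threshold=0):
    """Given a lossless capture as a sequence of frames (H x W x 3 uint8
    arrays), detect unique rendered frames by temporal pixel difference and
    return per-frame render times in milliseconds. Sketch only: the final
    frame's duration is dropped, and real captures need noise thresholds."""
    capture_interval_ms = 1000.0 / capture_fps
    frametimes = []
    duplicates = 1  # captured frames showing the current rendered frame
    for prev, curr in zip(frames, frames[1:]):
        # Cast to int16 before subtracting to avoid uint8 wraparound.
        if np.abs(curr.astype(np.int16) - prev.astype(np.int16)).max() > threshold:
            frametimes.append(duplicates * capture_interval_ms)
            duplicates = 1
        else:
            duplicates += 1
    return frametimes

# Synthetic example: a 60FPS capture where every rendered frame is shown
# for two captured frames -> 30FPS content.
rng = np.random.default_rng(0)
unique = [rng.integers(0, 255, (4, 4, 3), dtype=np.uint8) for _ in range(5)]
capture = [f for f in unique for _ in (0, 1)]  # duplicate each frame once
ft = frametimes_from_capture(capture)
avg_fps = 1000.0 / (sum(ft) / len(ft))
print(ft, avg_fps)
```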
Outside of this, additional testing includes K-type thermocouple measurements from behind the APU (rear-side of the PCB), with more measurements from a logging plugload meter. The end result is an amalgamation of three charts, combining to provide a somewhat full picture of the Xbox One X’s gaming performance. As an aside, note that we discovered an effective Tcase Max of ~85C on the silicon surface, at which point the console shuts down. We were unable to force a shutdown during typical gameplay, but could achieve a shutdown with intentional torture of the APU thermals.
The Xbox One X uses an AMD Jaguar APU, which combines 40 CUs (4 more than an RX 480/580) at 1172MHz (~168MHz slower than an RX 580 Gaming X). The CPU component is an 8C processor (no SMT), the same as on previous Xbox One devices, just at a higher frequency of 2.3GHz. As for memory, the device uses 12GB of GDDR5, all shared between the CPU and GPU. The memory operates at an actual speed of 1700MHz, with memory bandwidth of 326GB/s. For point of comparison, an RX 580 offers about 256GB/s of bandwidth. The Xbox One X, by all accounts, is an impressive combination of hardware that functionally equates to a mid-range gaming PC. The PSU is another indication of this, with a 245W supply, at least a few watts of which feed the aggressive cooling solution (a ~112mm radial fan).
Microsoft has, rather surprisingly, made it easy to get into and maintain the Xbox One X. The refreshed console uses just two screws to secure the chassis – two opposing, plastic jackets for the inner frame – and then uses serial numbering to identify the order of parts removal. For a console, we think the Xbox One X’s modularity of design is brilliant and, even if it’s just for Microsoft’s internal RMA purposes, it makes things easier for the enthusiast audience to maintain. We pulled apart the new Xbox One X in our disassembly process, walking through the VRM, APU, cooling solution, and overall construction of the unit.
Before diving in, a note on the specs: The Xbox One X uses an AMD Jaguar APU with an integrated AMD Polaris GPU carrying 40 CUs. This CU count is greater than the RX 580’s 36 CUs (and so yields 2560 SPs vs. 2304 SPs), but runs at a lower clock speed. Enter our errata from the video: The clock speed of the integrated Polaris GPU in the Xbox One X is purportedly 1172MHz (some early claims indicated 1720MHz, but that proved to be the memory speed); at 1172MHz, the integrated Polaris GPU is about 100MHz slower than the original reference Boost of the RX 480, or about 168MHz slower than some of the RX 580 partner models. Consider this a correction of those numbers – we ended up citing the 1700MHz figure in the video, but that is incorrect; the correct figures are 1172MHz core and 1700MHz memory (6800MHz effective). The memory operates at 326GB/s bandwidth on its 384-bit bus. As for the rest, 40 CUs means 160 TMUs, giving a texture fill-rate of 188GT/s.
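The bandwidth and fill-rate figures above can be sanity-checked directly from the bus width and clocks:

```python
# Sanity-checking the Xbox One X memory and texture figures cited above.

bus_width_bits = 384
effective_mem_clock_mhz = 6800   # 1700MHz GDDR5, 6800MHz effective

# Bandwidth = effective transfer rate * bus width in bytes.
bandwidth_gbps = effective_mem_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9
print(f"Memory bandwidth: {bandwidth_gbps:.1f} GB/s")   # ~326.4 GB/s

tmus = 160             # 40 CUs x 4 TMUs per CU
core_clock_mhz = 1172

# Texture fill-rate = TMUs * core clock.
fillrate_gtps = tmus * core_clock_mhz * 1e6 / 1e9
print(f"Texture fill-rate: {fillrate_gtps:.1f} GT/s")   # ~187.5, rounded to 188
```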
Since our delid collaboration with Bitwit, we’ve been considering expanding VRM temperature testing on the ASUS Rampage VI Extreme to determine at what point the VRM needs direct cooling. This expanded into determining when it’s even reasonable to expect the stock heatsink to handle the 7980XE’s overclocked heat load: We want to find the point at which the CPU becomes too power-hungry to reasonably operate without a fan directly over the heatsink.
This VRM thermal benchmark specifically looks at the ASUS Rampage VI Extreme motherboard, which uses one of the better X299 heatsinks for its IR3555 60A power stages. The IR3555 has an internal temperature sensor, which ASUS taps into for a safety throttle in EFI. As we understand it, the stock configuration sets a VRM throttle temperature of 120C – we believe this is internal temperature, though the diode could also be placed between the FETs, in which case the internal temperatures would be higher.
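For a rough sense of why an overclocked 7980XE can push even 60A power stages toward their limits, here is a back-of-envelope load estimate. The package power, load voltage, conversion efficiency, and phase count below are hypothetical illustrations, not measurements from this board:

```python
# Back-of-envelope VRM load estimate. All numbers are hypothetical
# illustrations, not measurements from the Rampage VI Extreme.

cpu_power_w = 500.0     # hypothetical overclocked 7980XE package power
vcore = 1.10            # hypothetical load voltage
vrm_efficiency = 0.90   # hypothetical conversion efficiency
n_phases = 8            # hypothetical phase count feeding Vcore

i_out_total = cpu_power_w / vcore                      # current delivered to the CPU
i_per_stage = i_out_total / n_phases                   # shared across power stages
p_loss_total = cpu_power_w * (1 / vrm_efficiency - 1)  # heat dissipated in the VRM

print(f"Total output current:  {i_out_total:.0f} A")
print(f"Per-stage current:     {i_per_stage:.1f} A of a 60 A rating")
print(f"VRM heat to dissipate: {p_loss_total:.0f} W")
```

At these hypothetical numbers, each stage sits not far below its 60A rating and the VRM itself dissipates tens of watts as heat – enough to matter without airflow over the heatsink.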
Tripping VRM overtemperature isn’t something we do too often, but it happened when working on Bitwit Kyle’s 7980XE. We’re working on a “collab” with Kyle, as the cool kids call it, and delidded an i9-7980XE for Kyle’s upcoming $10,000 PC build. The delidded CPU underwent myriad thermal and power tests, including similar testing to our previous i9-7980XE delid & 7900X “thermal issues” content pieces. We also benchmarked sealant vs. no sealant (silicone adhesive vs. nothing), as all of our previous tests have been conducted without resealing the delidded CPUs – we just rest the IHS atop the CPU, then clamp it under the socket. For Kyle’s CPU, we’re going to be shipping it across the States, so that means it needs to not leak liquid metal everywhere. Part of this is resolved with nail polish on the SMDs, but the sealant – supposing no major thermal detriment – should also help.
Tripping overtemperature is probably the most unexpected part of our journey on this project. We figured we’d publish some data to demonstrate an overtemperature trip, and what happens when the VRMs exceed safe thermals while the CPU technically remains under TjMax.
Let’s start with the VRM stuff first: This is a complete offshoot discussion. We might expand it into a separate content piece with more testing, but we wanted to talk through some of the basics first. At this point, this is primarily observational data, though it was logged.
AMD’s High-Bandwidth Cache Controller protocol is one of the keystones of the Vega architecture, marked by RTG lead Raja Koduri as a personal favorite feature of Vega, and highlighted in previous marketing materials as offering a potential 50% uplift in average FPS in VRAM-constrained scenarios. With a few driver revisions now behind us, we’re revisiting our Vega 56 hybrid card to benchmark HBCC in A/B fashion, testing in memory-constrained scenarios to determine efficacy in real gaming workloads.
The Windows 10 Fall Creators Update (FCU) has reportedly provided performance uplift under specific usage scenarios, most of which center around GPU-bound scenarios with Vega 56 or similar GPUs. We know with relative certainty that FCU has improved performance stability and frametime consistency with adaptive synchronization technologies – Gsync and FreeSync, mostly – and that there may be general GPU-bound performance uplift. Some of this could come down to driver hooks and implementation in Windows; some of it could be GPU- or arch-specific. What we haven’t seen much of is CPU-bound testing, attempting to isolate the CPU as the DUT for benchmarking.
These tests look at AMD Ryzen R7 1700 (stock) performance in Windows 10 Creators Update (build 1703, ending in 608) versus Windows 10 Fall Creators Update. Our testing can only speak for our testing, as always, and we cannot reasonably draw conclusions across the hardware stack with these benchmarks. The tests are representative of the R7 1700 in CPU-bound scenarios, created by using a GTX 1080 Ti FTW3. Because this is a 1080 Ti FTW3, we have two additional considerations for possible performance uplift (neither of which will be represented herein):
- As an nVidia GPU, it is possible that driver/OS behavior will differ from that with an AMD GPU
- As a 1080 Ti FTW3, it is possible and likely that GPU-bound performance – which we aren’t testing – would exhibit uplift where this testing does not
Our results are not conclusive for FCU as a whole, and cannot be used to draw wide-reaching conclusions about other hardware configurations. Our objective is to start pinpointing where performance uplift exists, and from what combination of components it can be derived. Most reports we have seen spotted uplift with 1070 or Vega 56 GPUs, which would indicate GPU-bound performance increases (particularly because said reports show bigger gains at higher resolutions). We also cannot yet speak to performance change on Intel CPUs.
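To compare two such runs, a common approach – assumed here for illustration, not necessarily our exact pipeline – is to reduce each run’s frametimes to average FPS and 1% low FPS. The A/B data below is synthetic:

```python
import statistics

def fps_metrics(frametimes_ms):
    """Average FPS and 1% low FPS from a list of frametimes (ms).
    '1% low' here means FPS at the 99th-percentile frametime -- a
    common convention, assumed rather than taken from the article."""
    frametimes_ms = sorted(frametimes_ms)
    avg_fps = 1000.0 / statistics.mean(frametimes_ms)
    p99 = frametimes_ms[int(0.99 * (len(frametimes_ms) - 1))]
    return avg_fps, 1000.0 / p99

# Hypothetical A/B runs (ms per frame): build 1703 vs. FCU.
run_1703 = [16.7] * 95 + [33.3] * 5   # occasional dropped-to-30FPS frames
run_fcu  = [16.7] * 98 + [25.0] * 2   # fewer, milder spikes

for name, run in (("1703", run_1703), ("FCU", run_fcu)):
    avg, low = fps_metrics(run)
    print(f"{name}: avg {avg:.1f} FPS, 1% low {low:.1f} FPS")
```

Averages can mask stutter; the 1% low captures the slow tail of the frametime distribution, which is where uplift from an OS update would be most visible in CPU-bound play.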