Hardware Guides

Lapped AMD Ryzen IHS Thermal Results

Published May 07, 2018 at 3:26 pm

In case you find it boring to watch an IHS get sanded for ten minutes, we’ve written up this recap of our newest video. The content features a lapped AMD Ryzen APU IHS for the R3 2200G, which we previously delidded and later topped with a custom copper Rockit Cool IHS. For this next thermal benchmark, we sanded down the AMD Ryzen APU IHS with 600 grit, 1200 grit, 1500 grit, 2000 grit, and then 3000 grit (wet) to smooth out the IHS surface. After this, we used a polishing rag and compound to further buff the IHS (not shown in the video, because it is exceptionally boring to watch), then cleaned it and ran the new heatspreader through our standardized thermal benchmark.

For our 2700/2700X review, we wanted to see how Ryzen 2’s volt-frequency performance compared to Ryzen 1. We took our Ryzen 7 2700X and an R7 1700 and clocked them both to 4GHz, and then found the lowest possible voltage that would allow them to survive stress tests in Blender and Prime95. Full results are included in that review, but the most important point was this: the 1700 needed at least 1.425v to maintain stability, while the 2700X required only 1.162v (value reported by HWiNFO, not what was set in BIOS).

This drew our attention, because we already knew that our 2700X could barely manage 4.2GHz at >1.425v. In other words, a 5% increase in frequency, from 4.0GHz to 4.2GHz, required a 22.6% increase in reported voltage.
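For a quick sanity check on those numbers, here is a minimal sketch using the HWiNFO-reported values quoted above:

```python
# Rough sanity check on the frequency-vs-voltage scaling described above.
# Voltage values are the HWiNFO-reported figures quoted in the text.
base_clock_ghz, oc_clock_ghz = 4.0, 4.2
base_volts, oc_volts = 1.162, 1.425

freq_gain_pct = (oc_clock_ghz / base_clock_ghz - 1) * 100   # ~5.0%
volt_gain_pct = (oc_volts / base_volts - 1) * 100           # ~22.6%

print(f"Frequency increase: {freq_gain_pct:.1f}%")
print(f"Reported voltage increase: {volt_gain_pct:.1f}%")
```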

Frequency in Ryzen 2 has started to behave like GPU Boost 3.0, where temperature, power consumption, and voltage heavily impact boosting behavior when left unmanaged. Our initial experience with Ryzen 2 led us to believe that a volt-frequency curve would look almost exponential, like the one shown in the video. That was our hypothesis. To be clear, we can push frequency higher with reference clock increases to 102 or 103MHz, and can then sustain 4.2GHz at lower voltages, or even 4.25GHz and up, but that’s not our goal. Our goal is to plot a volt-frequency curve with just multiplier and voltage modifications. We typically run out of thermal headroom before we run out of safe voltage headroom, but if voltage increases exponentially, that will quickly become a problem.
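To illustrate the kind of curve we expect, here is a minimal sketch of the per-step voltage cost across a few multiplier steps; the voltage values are hypothetical placeholders, not our measurements:

```python
# Hypothetical illustration of how the per-step voltage cost grows with frequency.
# Voltages below are placeholder values, not measured results.
points = [(3.8, 1.05), (3.9, 1.09), (4.0, 1.16), (4.1, 1.28), (4.2, 1.43)]

for (f0, v0), (f1, v1) in zip(points, points[1:]):
    step_mv = (v1 - v0) * 1000
    print(f"{f0:.1f} -> {f1:.1f} GHz: +{step_mv:.0f} mV for +100 MHz")

# The +100MHz cost climbs from ~40mV to ~150mV in this example -- the
# near-exponential behavior we expect to see when plotting the full curve.
```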

There’s a new trend in the industry: Heatsinks. Hopefully, anyway.

Gigabyte has listened to our never-ending complaints about VRM heatsinks and VRM thermals, and outfitted its X470 Gaming 7 motherboard with a full, proper fin stack and heatpipe. We’re happy to see it, and we hope that this trend continues, but it’s also not entirely necessary on this board. That doesn’t make us less excited to see an actual heatsink on a motherboard; however, we believe it does potentially point toward a future with higher core-count Ryzen CPUs. This is something Buildzoid speculated about in our recent Gaming 7 X470 VRM & PCB analysis. The amount of “overkill” power delivery capability on high-end X470 boards suggests plans to support higher-power components from AMD.

Take the Gigabyte Gaming 7: it’s a 10+2-phase VRM, with the VCore VRM using IR3553 40A power stages. That alone is enough to run passively, but a heatsink drags temperature so far below operating spec requirements that there’s room to spare. Cooler is always better in this instance (insofar as ambient cooling goes, anyway), so we can’t complain, but we can speculate about why it’s been done this way. ASUS’ Crosshair VII Hero has the same VRM layout, but with 60A power stages. That board, like Gigabyte’s, could run with no heatsink and be fine.
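As a rough back-of-the-envelope on why that counts as overkill, here is a sketch that assumes an overclocked package power figure (our assumption, not a measurement) and ignores VRM efficiency losses and per-phase current balance:

```python
# Back-of-the-envelope on VCore VRM headroom.
# The CPU power figure is an assumed overclocked value for illustration only.
power_stages = 10
stage_rating_a = 40                              # IR3553 power stage rating
vrm_capacity_a = power_stages * stage_rating_a   # 400A combined

cpu_power_w = 200                                # assumed overclocked package power
vcore = 1.4
cpu_current_a = cpu_power_w / vcore              # ~143A

print(f"VRM capacity: {vrm_capacity_a}A, estimated CPU draw: {cpu_current_a:.0f}A")
print(f"Utilization: {cpu_current_a / vrm_capacity_a * 100:.0f}%")  # ~36%
```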

We tested with thermocouples placed on one top-side MOSFET, located adjacent to the SOC VRM MOSFETs (1.2V SOC), and on one centrally positioned left-side MOSFET. Testing included stock and overclocked configurations (4.2GHz/1.41VCore at Extreme LLC), and we then tested again with the heatsink removed entirely. By design, this test had no active airflow over the VRM components. Ambient was controlled during the test and was logged every second.
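For reference, a minimal sketch of how a per-second log like that can be normalized against ambient; the column names and file layout here are hypothetical, not our actual logging format:

```python
# Minimal sketch: normalize logged VRM thermocouple readings against ambient.
# Column names and CSV layout are hypothetical, not our actual log format.
import csv

def delta_t_over_ambient(path):
    """Return (elapsed seconds, MOSFET temp minus ambient) pairs from a log."""
    samples = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            delta = float(row["mosfet_c"]) - float(row["ambient_c"])
            samples.append((int(row["elapsed_s"]), delta))
    return samples

# Usage, assuming a hypothetical per-second log file exists:
log = delta_t_over_ambient("vrm_log.csv")
tail = [d for _, d in log[-60:]]   # final minute of samples
print(f"Steady-state delta T over ambient: {sum(tail) / len(tail):.1f}C")
```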

Real-Time Ray Tracing Explained

Published April 06, 2018 at 3:54 pm

Recent advancements in graphics processing technology have permitted software and hardware vendors to collaborate on real-time ray tracing, a long-standing “holy grail” of computer graphics. Ray tracing has been around for a couple of decades now, but has always been confined to pre-rendered graphics – often in movies or other video playback that doesn’t require on-the-fly processing. The difference with going real-time is that we’re dealing with sparse data, and making fewer rays look good (better than standard rasterization, especially) is difficult.
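For a sense of the primitive operation involved, here is a minimal ray-sphere intersection test (illustrative only, not any vendor’s implementation); real renderers fire millions of rays like this per frame:

```python
# Minimal sketch of the primitive operation behind ray tracing:
# testing one ray against one sphere.
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest intersection, or None on a miss."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    a = sum(d * d for d in direction)
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # hits at t = 4.0
```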

nVidia has been beating this drum for a few years now. We covered nVidia’s ray-tracing keynote at ECGC a few years ago, when the company’s Tony Tamasi projected 2015 as the year for real-time ray tracing. That prediction didn’t fully materialize, but the company wasn’t too far off. Volta ended up providing some additional leverage to make 60FPS, real-time ray tracing a reality. Even still, we’re not quite there with consumer hardware. Epic Games and nVidia have lately been demonstrating real-time ray-traced rendering with four Titan V GPUs – functionally $12,000 worth of Titan Vs – and that’s to achieve a playable real-time framerate with the ubiquitous “Star Wars” demo.

As we remarked back when we reviewed the i5-8400 – launched on its lonesome, without low-end motherboard support – the CPU makes the most sense when paired with B360 or H370 motherboards. Intel launched the i5-8400 and other non-K CPUs without that low-end chipset support, though, leaving only enthusiast-class Z370 boards on the frontlines with the locked CPUs.

When it comes to Intel chipset differences, the main point of comparison between B, H, and Z chipsets is HSIO lanes – high-speed I/O lanes. Intel assigns each chipset a different count of HSIO lanes. Motherboard manufacturers can allocate these high-speed I/O lanes somewhat freely, and they are isolated from the graphics PCIe lanes that each CPU independently possesses. The HSIO lanes are as detailed below for the new 8th Generation Coffee Lake chipsets:

Multi-core enhancement is an important topic that we’ve discussed before – most recently, right after the launch of the 8700K. It’ll become even more important over the next few weeks, and that’s for a few reasons: For one, Intel is launching its new B and H chipsets soon, and that’ll require some performance testing. For two, AMD is launching its Ryzen 2000-series chips on April 19th, and those will include XFR2. Some X470 motherboards, just like some X370 motherboards, have MCE-equivalent options. For Intel and AMD both, enabling MCE means running outside of power specification – and therefore outside the thermal spec of low-end coolers – while also running higher clocks than the stock configuration. The question is whether any motherboard vendors enable MCE by default, or silently, because that’s where results can become muddy for buyers.

As noted, this topic is most immediately relevant for impending B & H series chipset testing – if recent leaks are to be believed, anyway. This is also relevant for upcoming Ryzen 2 CPUs, like the 2700X and kin, for their inclusion of XFR2 and similar boosting features. In today’s content, we’re revisiting MCE and Core Performance Boost on AMD CPUs, demonstrating the differences between them (and an issue with BIOS revision F2 on the Ultra Gaming).
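To make concrete what “higher clocks than the stock configuration” means under MCE, here is a minimal sketch using approximate 8700K turbo bins; the clock figures are illustrative, not a vendor specification:

```python
# Illustration of what MCE-style settings do to all-core clocks.
# Clock figures are approximate 8700K turbo bins, used for illustration only.
single_core_turbo_ghz = 4.7   # ~8700K single-core turbo bin
all_core_turbo_ghz = 4.3      # ~8700K stock all-core turbo bin

def effective_all_core_clock(mce_enabled: bool) -> float:
    # MCE forces the single-core turbo bin onto all cores, exceeding the
    # stock power and thermal assumptions in heavy all-core workloads.
    return single_core_turbo_ghz if mce_enabled else all_core_turbo_ghz

print(f"Stock all-core: {effective_all_core_clock(False)}GHz")
print(f"MCE all-core:   {effective_all_core_clock(True)}GHz")
```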

Consoles don’t offer many upgrade paths, but HDDs, like the one that ships in the Xbox One X, are one of the few items that can be exchanged for a standard part with higher performance. Since 2013, there have been quite a few SSD vs. HDD benchmarks done with various SKUs of the Xbox One, but not so many with the Xbox One X – so we’re doing our own. We’ve seen some abysmal load times in Final Fantasy and some nasty texture loading in PUBG, so there’s definitely room for improvement somewhere.

The 1TB drive that shipped in our Xbox One X is a Seagate 2.5” hard drive, model ST1000LM035. This is absolutely, positively a 5400RPM drive, as we said in our teardown, and not a 7200RPM drive (as some suggest online). Even taking the 140MB/s peak transfer rate listed in the drive’s data sheet completely at face value, it’s nowhere near bottlenecking on the internal SATA III interface. The SSD is up against SATA III (or USB 3.0 Gen1) limitations, but will still give us a theoretical sequential performance uplift of 4-5x – and that’s assuming peak burst speeds on the hard drive.
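A quick back-of-the-envelope on that uplift figure, assuming a usable SATA III ceiling for the SSD (our assumption) and bracketing the HDD between an assumed sustained average and its data-sheet peak:

```python
# Back-of-the-envelope sequential throughput comparison.
# The SSD figure is an assumed usable SATA III / USB 3.0 Gen1 ceiling; the
# HDD sustained figure is an assumption, while 140MB/s is the data-sheet peak.
ssd_mbps = 560
hdd_mbps = {"sustained (assumed)": 110, "data-sheet peak": 140}

for label, rate in hdd_mbps.items():
    print(f"vs. HDD {label} ({rate}MB/s): ~{ssd_mbps / rate:.1f}x uplift")

# Prints roughly 5.1x against the sustained figure and 4.0x against the burst
# peak, which is where the 4-5x theoretical range comes from.
```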

This benchmark tests game load times on an external SSD for the Xbox One X, versus internal HDD load times for Final Fantasy XV (FFXV), Monster Hunter World, PUBG (incl. texture pop-in), Assassin's Creed: Origins, and more.

Even when using supposedly “safe” voltages as a maximum input limit for overclocking via BIOS, it’s possible that the motherboard is feeding a significantly different voltage to the CPU. We’ve demonstrated this before, like when we talked about the Ultra Gaming’s Vdroop issues. The opposite of Vdroop is overvoltage, of course, and it’s also quite common. Inputting a value of 1.3V SOC, for instance, could yield a socket-side voltage measurement of ~1.4V. That difference is significant enough that you may exit the territory of “reasonably usable” and enter “will definitely degrade the IMC over time.”

But software measurements won’t help much in this regard. HWiNFO is good and AIDA64 also does well, but both rely on the CPU’s sensors to deliver that information. The pin/pad resistances alone can cause that number to underreport in software, whereas measuring the back of the socket with a digital multimeter (DMM) can tell a very different story.
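As a simple illustration of why we measure at the socket, here is a sketch comparing a BIOS-set value against software-reported and DMM-measured readings; all numbers below are placeholders, not measured results:

```python
# Sketch: compare BIOS-set, software-reported, and DMM-measured SOC voltages.
# All values are illustrative placeholders, not measured results.
bios_set_v = 1.300
readings = {"HWiNFO (CPU sensor)": 1.325, "DMM at socket": 1.400}

for source, measured in readings.items():
    overshoot_pct = (measured / bios_set_v - 1) * 100
    print(f"{source}: {measured:.3f}V ({overshoot_pct:+.1f}% vs. BIOS setting)")

# The DMM can reveal an overshoot that the on-die sensor underreports, which
# is why socket-side measurement matters for SOC voltage work.
```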

CPUs with integrated graphics always make memory interesting. Memory’s commoditization, ignoring recent price trends, has made it an item where you sort of pick what’s cheap and just buy it. With something like AMD’s Raven Ridge APUs, that memory choice can have a lot more impact than it would in a budget gaming PC with a discrete GPU. We’ll be testing a handful of memory kits with the R5 2400G in today’s content, including single- versus dual-channel testing where all timings have been equalized. We’re also testing a few different motherboards with the same kit of memory, which is useful for determining how timings change between boards.
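Part of why channel count matters so much for an APU that shares memory with its IGP comes down to theoretical peak bandwidth, sketched below with the standard DDR4 bus-width math:

```python
# Theoretical peak memory bandwidth for single- vs. dual-channel DDR4.
def peak_bandwidth_gbs(transfer_rate_mt_s, channels, bus_width_bytes=8):
    # DDR4 uses a 64-bit (8-byte) bus per channel.
    return transfer_rate_mt_s * bus_width_bytes * channels / 1000

for channels in (1, 2):
    print(f"DDR4-3200, {channels} channel(s): {peak_bandwidth_gbs(3200, channels):.1f}GB/s")

# Single channel: 25.6GB/s; dual channel: 51.2GB/s of theoretical peak bandwidth.
```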

We’re splitting these benchmarks into two sections: First, we’ll show the impact of various memory kits on performance when tested on a Gigabyte Gaming K5 motherboard, and we’ll then move over to demonstrate how a few popular motherboards affect results when left to auto XMP timings. We are focusing on memory scalability performance today, with a baseline provided by the G4560 and R3 GT1030 tests we ran a week ago. We’ll get to APU overclocking in a future content piece. For single-channel testing, we’re benchmarking the best kit – the Trident Z CL14 3200MHz option – with one channel in operation.

Keep in mind that this is not a straight frequency comparison – that is, not simply a 2400MHz vs. 3200MHz comparison – because we’re changing timings along with the kits; basically, we’re looking at the whole picture, not just frequency scalability. The idea is to see how XMP with stock motherboard timings (where relevant) can impact performance, rather than strictly controlled frequency scaling, as that is how most users will set up their systems.

We’ll show some of the memory/motherboard auto settings toward the end of the content.

As part of our new and ongoing “Bench Theory” series, we are publishing a year’s worth of internal-only data that we’ve used to drive our 2018 GPU test methodology. We haven’t yet implemented the 2018 test suite, but will be doing so soon. The goal of this series is to help viewers and readers understand what goes into test design, and we aim to underscore the level of accuracy that GN demands of its published data. Our first information dump focused on benchmark duration, addressing when it’s appropriate to use 30-second runs, 60-second runs, and longer. As we stated in the first piece, we ask that any content creators leveraging this research in their own testing properly credit GamersNexus for its findings.

Today, we’re looking at standard deviation and run-to-run variance in tested games. Games on our bench cycle regularly, so the purpose is less about game-specific standard deviation (something we’re now addressing at game launch) and more about an overall understanding of how games deviate run-to-run. This is why conducting multiple, shorter test passes (see: benchmark duration) is often preferable to conducting fewer, longer passes; after all, we are all bound by the laws of time.

Looking at statistical dispersion can help us understand whether a game itself is consistent enough for hardware benchmarks. If a game is inaccurate or varies wildly from one run to the next, we have to look at whether that variance is driver-, hardware-, or software-related. If it’s just the game, we must then ask the philosophical question of whether we’re testing the game or testing the hardware. Sometimes, testing a game with highly variable performance can still be valuable – primarily if it’s a game people want to play, like PUBG, despite its questionable performance. Other times, the game should be tossed. If the goal is a hardware benchmark and a game behaves like an outlier – while also being largely unplayed – then it becomes suspect as a test platform.
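As a minimal illustration of the dispersion math involved (the FPS values are hypothetical placeholders, not data from this piece):

```python
# Minimal sketch of run-to-run dispersion for a benchmarked game.
# FPS values are hypothetical placeholders, not data from this article.
import statistics

runs_avg_fps = [112.4, 113.1, 111.8, 112.9, 110.6, 113.3]   # repeated test passes

mean = statistics.mean(runs_avg_fps)
stdev = statistics.stdev(runs_avg_fps)   # sample standard deviation
cov_pct = stdev / mean * 100             # coefficient of variation

print(f"Mean: {mean:.1f}FPS, stdev: {stdev:.2f}FPS, CoV: {cov_pct:.2f}%")

# A low coefficient of variation across passes suggests the game is stable
# enough to resolve small hardware differences; a high one flags an outlier.
```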
