Hardware Guides

Frequency is the most advertised spec of RAM, but as anyone who’s dug a little deeper knows, memory performance depends on timings as well, and not just the primary ones. We found this out the hard way while doing comparative testing for an article on extremely high-frequency memory that refused to stabilize. We shelved that article indefinitely, but due to reader interest (thanks, John), we decided to explore memory subtimings in greater depth.

This content hopes to define memory timings and demystify the primary timings, including CAS (CL), tRCD, tRP, and tRAS. As we define primary memory timings, we’ll also demonstrate how some memory ratios work (and how they can sometimes operate out of ratio), and how much secondary and tertiary timings (like tRFC) can impact performance. Our goal is to revisit this topic later with a deep-dive on secondary and tertiary timings, similar in depth to this one.
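As a quick illustration of why frequency alone can mislead, CAS latency in clock cycles converts to absolute time based on the memory clock. Here’s a minimal sketch in Python, assuming standard DDR behavior (two transfers per clock):

```python
# Convert a CAS latency (in clock cycles) to absolute latency in
# nanoseconds. DDR transfers twice per clock, so the memory clock is
# half of the advertised MT/s rating.
def cas_latency_ns(cl_cycles: int, data_rate_mts: int) -> float:
    cycle_time_ns = 2000.0 / data_rate_mts  # ns per memory clock cycle
    return cl_cycles * cycle_time_ns

# DDR4-3200 CL16 and DDR4-3600 CL18 land at the same 10ns first-word latency.
print(cas_latency_ns(16, 3200))  # 10.0
print(cas_latency_ns(18, 3600))  # 10.0
```

The takeaway: a higher-frequency kit with loose timings can have the same (or worse) absolute latency than a slower kit with tight timings, which is part of why subtimings deserve more attention than they get.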

We got information and advice from several memory and motherboard manufacturers in the course of our research, and we were warned multiple times about the difficulty of tackling this subject. On the one hand, it’s easy to get lost in minutiae, and on the other it’s easy to summarize things incorrectly. As ASUS told us, “you need to take your time on this one.” This is a general introduction, to be followed by another article with more detail on secondary and tertiary timings.

One of our Computex advertisers was CableMod, which is making a new vertical GPU mount that positions the video card farther back in the case, theoretically lowering thermals. We wanted to test this claim properly: it makes logical sense that a card positioned farther from the glass would operate cooler, but we wanted to measure to what degree that’s true. Most vertical GPU mounts do fine for open loop cooling, but suffocate air-cooled cards by limiting the gap between the card and the glass side panel to less than an inch or two. The CableMod mount should push cards closer to the motherboard, which has other interesting thermal characteristics that we’ll get into today.

We saw several cases at Computex that aim to move to rotating PCIe expansion slots, meaning that some future cases will accommodate GPUs positioned farther toward the motherboard. Not all cases are doing this, leaving room for CableMod to compete, but it looks like Thermaltake and Cooler Master are moving in this direction.

Designing and developing a single case can cost hundreds of thousands of dollars, but the machinery used to make those cases costs millions. In a recent tour of Lian Li’s case manufacturing facility in Taiwan, we got to see first-hand the advanced and largely autonomous hydraulic presses, laser cutters, automatic shaping machines, and other equipment used to make a case. Some of these tools apply hundreds of thousands of pounds of force to case paneling, upwards of 1 million newtons, and others use high voltage to spot-weld pieces to aluminum paneling. Today, we’re walking through the start-to-finish process of how a case is made.

The first step of case manufacturing at the Lian Li facility is to design the product. Once this process is done, CAD files go to Lian Li’s factory across the street to be turned into a case. In a simplified, canonical view of the manufacturing process, the first step is design, then raw materials and their preparation, followed by either a laser cutter for basic shapes or a press for tooled punch-outs, then washing, grinding, flattening, welding, and anodizing.

After seeing dozens of cases at Computex 2018, we’ve rounded up what we think are the best cases from the show, with the most interesting design elements, price points, or innovations. As always, wait until we can review these cases before getting too hyped and pre-ordering, but we wanted to at least point out the top cases to pay attention to for the next year.

We’re calling this piece the “Most Room for Improvement at Computex 2018” round-up. A lot of products this year are still prototypes, and so still have plenty of time to improve and change. Many of the manufacturers have asked for feedback from media and will hopefully be making changes prior to launch, but we wanted to share some of our hopes for improvement with all of you.

Separately, Linus of LinusTechTips joined us for the intro of this video, if that is of interest.

With B350, B360, Z370, Z390, X370, and Z490 all floating around, we think it’s time to revisit an old topic and answer the question of what a chipset is. This is primarily to establish why we need clarity on what each of these provides: there are a lot of chipsets with similar names, different socket types, and similar features. We’re here to define a chipset today in TL;DR fashion, with a later piece to explain the actual chipset differences.

As for what a chipset actually is, this calls back to a GN article from 2012, though we can do a better job now. The modern chipset is a glorified I/O controller, and can be thought of as the spinal cord of the computer, while the CPU is the disembodied brain. Intel calls its chipset a PCH, or Platform Controller Hub, while AMD just goes with the generic and appropriate term “chipset.” The chipset is the center of I/O for the rest of the motherboard, assigning I/O lanes to devices like SATA, gigabit Ethernet, and USB ports.
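If you want to see this division of labor on your own machine, a Linux box with pciutils installed will show chipset-attached controllers via lspci. A minimal sketch, with the keyword filter being purely illustrative:

```python
# List PCI devices and pick out the SATA, USB, and Ethernet controllers
# that typically hang off the chipset rather than the CPU's direct lanes.
import subprocess

out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
for line in out.splitlines():
    if any(kw in line for kw in ("SATA", "USB", "Ethernet")):
        print(line)
```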

Our colleagues at Hardware Canucks got a whole lot of hate for their video about switching back to Intel, to the point that the backlash showed the profound ignorance that comes with being a blind fanboy of any product. We decided to run more in-depth tests of the same featureset as Dmitry, though primarily for selfish reasons, as we’ve also been considering a new render machine build. If HWC’s findings held true, our plans of using an old 6900K would be meaningless in the face of a much cheaper CPU with an IGP.

For this testing, we’re using 32GB of RAM for all configurations (dual-channel for Z/X platforms and quad-channel for X399/X299). We’re also using an EVGA GTX 1080 Ti FTW3 for CUDA acceleration – because rendering without CUDA is torturously slow and we’re testing for real-world conditions.

Adobe recently added IGP-enabled acceleration to its Premiere video editing and creation software, which leverages a component that is often irrelevant in our line of work – the on-die graphics processor. This move could potentially invalidate the rendering leverage provided by the likes of a 7980XE or 1950X, saving money for anyone who doesn’t need the additional threads for other types of work (like synchronous rendering or non-Premiere workstation tasks, e.g. Blender). Today, we’re benchmarking Adobe Premiere’s rendering speed on an Intel i7-8700K, AMD R7 2700X, Intel i9-7980XE, and AMD Threadripper 1950X.
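For repeatable render timing outside of Premiere, Blender at least exposes a scriptable CLI. This is not our actual benchmark harness; it’s a minimal sketch of how a timed render pass can be automated, assuming Blender is on PATH and with “scene.blend” as a hypothetical project file:

```python
# Time a single-frame background render through Blender's CLI.
import subprocess
import time

def time_render(blend_file: str, frame: int = 1) -> float:
    start = time.perf_counter()
    subprocess.run(
        ["blender", "-b", blend_file, "-f", str(frame)],  # background render of one frame
        check=True, capture_output=True,
    )
    return time.perf_counter() - start

print(f"Render took {time_render('scene.blend'):.1f}s")
```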

Lapped AMD Ryzen IHS Thermal Results


In case you find it boring to watch an IHS get sanded for ten minutes, we’ve written up this recap of our newest video. The content features a lapped AMD Ryzen APU IHS for the R3 2200G, which we previously delidded and later topped with a custom copper Rockit Cool IHS. For this next thermal benchmark, we sanded down the AMD Ryzen APU IHS with 600-, 1200-, 1500-, 2000-, and then 3000-grit (wet) sandpaper to smooth out the IHS surface. After this, we used a polishing rag and compound to further buff the IHS (not shown in the video, because it is exceptionally boring to watch), then cleaned it and ran the new heatspreader through our standardized thermal benchmark.

For our 2700/2700X review, we wanted to see how Ryzen 2’s volt-frequency performance compared to Ryzen 1. We took our Ryzen 7 2700X and an R7 1700 and clocked them both to 4GHz, and then found the lowest possible voltage that would allow them to survive stress tests in Blender and Prime95. Full results are included in that review, but the most important point was this: the 1700 needed at least 1.425v to maintain stability, while the 2700X required only 1.162v (value reported by HWiNFO, not what was set in BIOS).
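The “lowest stable voltage” search described above is effectively a bisection. Here’s a sketch of that procedure; stress_test() is a hypothetical stand-in for the manual Blender and Prime95 passes we actually ran by hand:

```python
# Bisect toward the lowest voltage that survives a stress test.
# stress_test(voltage=...) is hypothetical; real stability testing
# at each step is a hands-on process, not a function call.
def find_min_stable_voltage(stress_test, lo=1.050, hi=1.450, step=0.006):
    # Invariant: hi is known-stable, lo is known-unstable (or the floor).
    while hi - lo > step:
        mid = round((lo + hi) / 2, 3)
        if stress_test(voltage=mid):
            hi = mid  # survived: try lower
        else:
            lo = mid  # crashed: raise the floor
    return hi
```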

This drew our attention, because we already knew that our 2700X could barely manage 4.2GHz at >1.425v. In other words, a 5% increase in frequency from 4 to 4.2GHz required a 22.6% increase in reported voltage.
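For anyone following along, the percentages work out directly from the reported numbers:

```python
# Figures quoted above: 4.0GHz at 1.162v vs. 4.2GHz at 1.425v.
print((4.2 - 4.0) / 4.0 * 100)        # 5.0  (% frequency increase)
print((1.425 - 1.162) / 1.162 * 100)  # ~22.6 (% voltage increase)
```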

Frequency in Ryzen 2 has started to behave like GPU Boost 3.0, where temperature, power consumption, and voltage heavily impact boosting behavior when left unmanaged. Our initial experience with Ryzen 2 led us to believe that a volt-frequency curve would look almost exponential, like the one shown here. That was our hypothesis. To be clear, we can push frequency higher with reference clock increases to 102 or 103MHz and can then sustain 4.2GHz at lower voltages, or even 4.25GHz and up, but that’s not our goal. Our goal is to plot a volt-frequency curve with just multiplier and voltage modifications. We typically run out of thermal headroom before we run out of safe voltage headroom, but if voltage increases exponentially, that will quickly become a problem.
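To make the hypothesis concrete: if the volt-frequency relation really is near-exponential, it should look linear when voltage is plotted on a log axis. A sketch with hypothetical points (these are not measurements; only the 4.0GHz and 4.2GHz values loosely track the numbers reported above):

```python
# Fit log(V) = a*f + b; an exponential V-f relation is linear in log space.
import numpy as np

freq_ghz = np.array([3.8, 3.9, 4.0, 4.1, 4.2])
volts = np.array([1.05, 1.09, 1.16, 1.28, 1.43])  # hypothetical points

a, b = np.polyfit(freq_ghz, np.log(volts), 1)
print(f"V(f) ~ exp({a:.2f}*f + {b:.2f})")
print(f"Extrapolated voltage at 4.25GHz: {np.exp(a * 4.25 + b):.3f}v")
```

If measured data fits a line in log space, the exponential hypothesis holds, and the extrapolation shows how quickly voltage requirements would outrun safe limits.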

There’s a new trend in the industry: Heatsinks. Hopefully, anyway.

Gigabyte has listened to our never-ending complaints about VRM heatsinks and VRM thermals, and outfitted its X470 Gaming 7 motherboard with a full, proper fin stack and heatpipe. We’re happy to see it, and we hope this trend continues, even though it isn’t entirely necessary on this board. That doesn’t make us less excited to see an actual heatsink on a motherboard, but we believe it potentially points toward a future of higher core-count Ryzen CPUs. This is something Buildzoid speculated about in our recent Gaming 7 X470 VRM & PCB analysis. The amount of “overkill” power delivery capability on high-end X470 boards would suggest plans to support higher power consumption components from AMD.

Take the Gigabyte Gaming 7: it’s a 10+2-phase VRM, with the VCore VRM using IR3553 40A power stages. That alone is enough to run passive, but a heatsink drags temperature so far below the requirements of operating spec that there’s room to spare. Cooler is always better in this instance (insofar as ambient cooling goes, anyway), so we can’t complain, but we can speculate about why it’s been done this way. ASUS’ Crosshair VII Hero has the same VRM layout, but with 60A power stages. That board, like Gigabyte’s, could run with no heatsink and be fine.
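Rough back-of-the-envelope math shows why this counts as overkill. The phase count and per-stage rating come from the board itself; the CPU load figure below is an assumption for a heavily overclocked 8-core Ryzen, not a measurement:

```python
# VRM capacity vs. an assumed overclocked CPU load.
vcore_phases = 10
amps_per_stage = 40                               # IR3553 rated current
vrm_capacity_a = vcore_phases * amps_per_stage    # 400A total

cpu_load_w = 200                                  # assumed package power (not measured)
vcore_v = 1.4
cpu_load_a = cpu_load_w / vcore_v                 # ~143A drawn

print(f"VRM capacity: {vrm_capacity_a}A, CPU draw: {cpu_load_a:.0f}A")
print(f"Headroom: {vrm_capacity_a / cpu_load_a:.1f}x")
```

Even under those pessimistic load assumptions, the VRM has nearly 3x headroom, which is what fuels the speculation about higher core-count parts.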

We tested with thermocouples placed on one top-side MOSFET, located adjacent to the SOC VRM MOSFETs (1.2V SOC), and on one centrally positioned left-side MOSFET. We tested stock and overclocked (4.2GHz/1.41 VCore at Extreme LLC) configurations, then tested again with the heatsink removed entirely. By design, this test had no active airflow over the VRM components. Ambient was controlled during the test and was logged every second.
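For reference, the once-per-second logging loop is conceptually simple. A minimal sketch, where read_thermocouple() and read_ambient() are hypothetical stand-ins for whatever interface the logging hardware actually exposes:

```python
# Log MOSFET and ambient temperatures once per second to CSV.
import csv
import time

def log_vrm_temps(read_thermocouple, read_ambient, path="vrm_log.csv", seconds=1800):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t_s", "mosfet_c", "ambient_c", "delta_t_c"])
        for t in range(seconds):
            mosfet, ambient = read_thermocouple(), read_ambient()
            # Reporting delta-T over ambient normalizes out room drift.
            writer.writerow([t, mosfet, ambient, mosfet - ambient])
            time.sleep(1)
```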

