Hardware Guides

Designing and developing a single case can cost hundreds of thousands of dollars, but the machinery used to make those cases costs millions. In a recent tour of Lian Li’s case manufacturing facility in Taiwan, we got to see first-hand the advanced, largely autonomous hydraulic presses, laser cutters, automatic shaping machines, and other equipment used to make a case. Some of these tools apply hundreds of thousands of pounds of force (upwards of 1 million Newtons) to case paneling, and others use high voltage to spot-weld pieces to aluminum paneling. Today, we’re walking through the start-to-finish process of how a case is made.
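As a quick sanity check on the force figures above, a back-of-envelope unit conversion (ours, not Lian Li's spec) shows that "hundreds of thousands of pounds" does indeed cross the 1-million-Newton mark:

```python
# 1 pound-force is approximately 4.44822 Newtons.
LBF_TO_N = 4.44822

def lbf_to_newtons(lbf: float) -> float:
    """Convert pounds-force to Newtons."""
    return lbf * LBF_TO_N

# A press applying ~225,000 lbf exceeds 1 million Newtons:
print(f"{lbf_to_newtons(225_000):,.0f} N")  # 1,000,850 N, roughly 1 MN
```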

The first step of case manufacturing at the Lian Li facility is to design the product. Once design is done, CAD files go to Lian Li’s factory across the street to be turned into a case. In a simplified, canonical view of the manufacturing process, the steps are: design, then raw materials and their preparation, followed by either a laser cutter for basic shapes or a press for tooled punch-outs, then washing, grinding, flattening, welding, and anodizing.
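The simplified flow above can be sketched as an ordered pipeline (step names are our paraphrase of the process, not Lian Li's internal terminology):

```python
# Simplified case-manufacturing pipeline as described in the tour.
# The cut step branches: laser cutting for basic shapes, or a press
# for tooled punch-outs.
PIPELINE = [
    "design (CAD)",
    "raw materials + preparation",
    "laser cutting OR tooled press punch-out",  # branch point
    "washing",
    "grinding",
    "flattening",
    "welding",
    "anodizing",
]

for number, step in enumerate(PIPELINE, start=1):
    print(f"{number}. {step}")
```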

After seeing dozens of cases at Computex 2018, we’ve now rounded up what we think are the best cases from the show, with the most interesting design elements, price points, or innovations. As always, wait until we can review these cases before getting too hyped and pre-ordering, but we wanted to at least point out the top cases to pay attention to for the next year.

We’re calling this piece “Most Room for Improvement at Computex 2018.” A lot of products this year are still prototypes, and so still have plenty of time to improve and change before launch. Many of the manufacturers have asked for feedback from media and will hopefully be making changes prior to launch, but we wanted to share some of our hopes for improvement with all of you.

Separately, Linus of LinusTechTips joined us for the intro of this video, if that is of interest.

With B350, B360, Z370, Z390, X370, and X470, we think it’s time to revisit an old topic: what a chipset is. This is primarily to establish why we need clarity on what each of these provides – there are a lot of chipsets with similar names, different socket types, and similar features. We’re here to define a chipset today in TLDR fashion, with a later piece explaining the actual chipset differences.

As for what a chipset actually is, this calls back to a GN article from 2012 – though we can do a better job now. The modern chipset is a glorified I/O controller, and can be thought of as the spinal cord of the computer, while the CPU is the disembodied brain. Intel calls its chipset a PCH, or Platform Controller Hub, while AMD just goes with the generic and appropriate term “chipset.” The chipset is the center of I/O for the rest of the motherboard, assigning I/O lanes to devices like SATA, gigabit ethernet, and USB ports.

Our colleagues at Hardware Canucks got a whole lot of hate for their video about switching back to Intel, hate which mostly demonstrated the profound ignorance of blind fanboyism toward any product. We decided to run more in-depth tests of the same featureset as Dmitry, though primarily for selfish reasons, as we’ve also been considering a new render machine build. If HWC’s findings were true, our plans of using an old 6900K would be meaningless in the face of a much cheaper CPU with an IGP.

For this testing, we’re using 32GB of RAM for all configurations (dual-channel for Z/X platforms and quad-channel for X399/X299). We’re also using an EVGA GTX 1080 Ti FTW3 for CUDA acceleration – because rendering without CUDA is torturously slow and we’re testing for real-world conditions.

Adobe recently added IGP-enabled acceleration to its Premiere video editing and creation software, which leverages a component that is often irrelevant in our line of work – the on-die graphics processor. This move could potentially invalidate the rendering leverage provided by the likes of a 7980XE or 1950X, saving money for anyone who doesn’t need the additional threads for other types of work (like synchronous rendering or non-Premiere workstation tasks, e.g. Blender). Today, we’re benchmarking Adobe Premiere’s rendering speed on an Intel i7-8700K, AMD R7 2700X, Intel i9-7980XE, and AMD Threadripper 1950X.

Lapped AMD Ryzen IHS Thermal Results

Published May 07, 2018 at 3:26 pm

In case you find it boring to watch an IHS get sanded for ten minutes, we’ve written up this recap of our newest video. The content features a lapped AMD Ryzen APU IHS for the R3 2200G, which we previously delidded and later topped with a custom copper Rockit Cool IHS. For this next thermal benchmark, we sanded down the AMD Ryzen APU IHS with 600 grit, 1200 grit, 1500 grit, 2000 grit, and then 3000 grit (wet) to smooth out the IHS surface. After this, we used a polishing rag and compound to further buff the IHS (not shown in the video, because it is exceptionally boring to watch), then we cleaned it and ran the new heatspreader through our standardized thermal benchmark.

For our 2700/2700X review, we wanted to see how Ryzen 2’s volt-frequency performance compared to Ryzen 1. We took our Ryzen 7 2700X and an R7 1700 and clocked them both to 4GHz, and then found the lowest possible voltage that would allow them to survive stress tests in Blender and Prime95. Full results are included in that review, but the most important point was this: the 1700 needed at least 1.425v to maintain stability, while the 2700X required only 1.162v (value reported by HWiNFO, not what was set in BIOS).

This drew our attention, because we already knew that our 2700X could barely manage 4.2GHz at >1.425v. In other words, a 5% increase in frequency from 4 to 4.2GHz required a 22.6% increase in reported voltage.
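The percentages above are straightforward to reproduce from the two reported operating points (voltages as read by HWiNFO, not BIOS-set values):

```python
# Reproduce the frequency and voltage deltas quoted above.
v_4000 = 1.162   # reported voltage sustaining 4.0 GHz on the 2700X
v_4200 = 1.425   # voltage our 2700X needed (and barely exceeded) for 4.2 GHz

freq_increase = (4.2 - 4.0) / 4.0           # 5% more frequency
volt_increase = (v_4200 - v_4000) / v_4000  # ~22.6% more voltage

print(f"frequency: +{freq_increase:.1%}, voltage: +{volt_increase:.1%}")
# frequency: +5.0%, voltage: +22.6%
```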

Frequency in Ryzen 2 has started to behave like GPU Boost 3.0, where temperature, power consumption, and voltage heavily impact boosting behavior when left unmanaged. Our initial experience with Ryzen 2 led us to believe that a volt-frequency curve would look almost exponential. That was our hypothesis. To be clear, we can push frequency higher with reference clock increases to 102 or 103MHz and can then sustain 4.2GHz at lower voltages, or even 4.25GHz and up, but that’s not our goal. Our goal is to plot a volt-frequency curve with just multiplier and voltage modifications. We typically run out of thermal headroom before we run out of safe voltage headroom, but if voltage increases exponentially, that will quickly become a problem.
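For illustration only, the hypothesized near-exponential curve can be sketched by fitting an exponential through the two measured points; the constants below are solved from our 4.0GHz and 4.2GHz data, not from any broader silicon characterization:

```python
import math

# Toy model of the hypothesized volt-frequency curve: V(f) = a * exp(b * f),
# with a and b solved from the two data points in the text
# (4.0 GHz @ 1.162 V, 4.2 GHz @ ~1.425 V). Purely illustrative.
B = math.log(1.425 / 1.162) / (4.2 - 4.0)  # ~1.02 per GHz
A = 1.162 / math.exp(B * 4.0)

def required_voltage(freq_ghz: float) -> float:
    """Voltage predicted by the toy exponential fit at a given frequency."""
    return A * math.exp(B * freq_ghz)

for f in (4.0, 4.1, 4.2, 4.3):
    print(f"{f:.1f} GHz -> {required_voltage(f):.3f} V")
```

The takeaway of such a fit is the steepness: each additional 100MHz costs roughly 10% more voltage in this model, which is why thermal headroom evaporates quickly near the top of the curve.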

There’s a new trend in the industry: Heatsinks. Hopefully, anyway.

Gigabyte has listened to our never-ending complaints about VRM heatsinks and VRM thermals, and outfitted their X470 Gaming 7 motherboard with a full, proper fin stack and heatpipe. We’re happy to see it, and we hope that this trend continues, but it’s also not entirely necessary on this board. That doesn’t make us less excited to see an actual heatsink on a motherboard; however, we believe it does potentially point toward a future of higher core-count Ryzen CPUs. This is something that Buildzoid speculated in our recent Gaming 7 X470 VRM & PCB analysis. The amount of “overkill” power delivery capability on high-end X470 boards would suggest plans to support higher power consumption components from AMD.

Take the Gigabyte Gaming 7: It’s a 10+2-phase VRM, with the VCore VRM using IR3553s for 40A power stages. That alone is enough to run passive, but a heatsink drags temperature so far below the requirements of operating spec that there’s room to spare. Cooler is always better in this instance (insofar as ambient cooling goes, anyway), so we can’t complain, but we can speculate about why it’s been done this way. ASUS’ Crosshair VII Hero has the same VRM, but with 60A power stages. That board, like Gigabyte’s, could run with no heatsink and be fine.
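A back-of-envelope headroom calculation shows why these VRMs are "overkill." The load figures below are our own assumptions for illustration (a heavy overclocked load), and the math deliberately ignores derating, thermals, and real current-sharing behavior:

```python
# Rough VRM headroom estimate for the Gaming 7's VCore rail.
phases = 10            # VCore phases on the Gaming 7
stage_rating_a = 40    # IR3553 power stage rating, in amps

total_capacity_a = phases * stage_rating_a   # 400 A combined rating

# Assumed overclocked load (illustrative; VCore matches the 1.41 V OC
# used in our testing, package power is a guessed heavy-load figure):
vcore = 1.41
cpu_power_w = 200
cpu_current_a = cpu_power_w / vcore          # ~142 A

utilization = cpu_current_a / total_capacity_a
print(f"capacity: {total_capacity_a} A, draw: {cpu_current_a:.0f} A "
      f"({utilization:.0%} utilization)")
```

Even under these pessimistic assumptions, the rail sits around a third of its combined rating, which is why the board could plausibly run passive.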

We tested with thermocouples placed on one top-side MOSFET, located adjacent to the SOC VRM MOSFETs (1.2V SOC), and one left-side MOSFET that’s centrally positioned. We ran stock and overclocked configurations (4.2GHz/1.41VCore at Extreme LLC), then retested with the heatsink removed entirely. By design, this test had no active airflow over the VRM components. Ambient was controlled during the test and was logged every second.
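Logging ambient every second allows results to be normalized as a delta over ambient, which removes room-temperature drift from the comparison. A minimal sketch of that normalization (readings below are made-up illustrative numbers, not our measured data):

```python
# Normalize logged MOSFET temperatures against per-second ambient readings.
mosfet_log  = [62.1, 63.0, 63.8]   # °C, one reading per second (illustrative)
ambient_log = [21.0, 21.1, 21.2]   # °C, logged alongside each reading

delta_over_ambient = [m - a for m, a in zip(mosfet_log, ambient_log)]
print([round(d, 1) for d in delta_over_ambient])  # [41.1, 41.9, 42.6]
```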

Real-Time Ray Tracing Explained

Published April 06, 2018 at 3:54 pm

Recent advancements in graphics processing technology have permitted software and hardware vendors to collaborate on real-time ray tracing, a long-standing “holy grail” of computer graphics. Ray tracing has been around for a couple of decades now, but has always been confined to pre-rendered graphics – often in movies or other video playback that doesn’t require on-the-fly processing. The difference with going real-time is that we’re dealing with sparse data, and making fewer rays look good (better than standard rasterization, especially) is difficult.
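At its core, ray tracing reduces to firing rays and testing them against scene geometry. A minimal sketch of the textbook primitive, a ray/sphere intersection test, gives a sense of the per-ray math that real-time renderers must perform millions of times per frame (accelerated by BVH traversal and dedicated hardware; this is only the underlying geometry):

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return distance to the nearest hit along the ray, or None on a miss.

    Solves |o + t*d - c|^2 = r^2 for t, assuming d is normalized (so the
    quadratic's leading coefficient a = 1).
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                        # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0       # nearer of the two roots
    return t if t >= 0 else None           # hits behind the origin don't count

# A ray from the origin along +z toward a unit sphere at z=5 hits at t=4:
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```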

NVIDIA has been beating this drum for a few years now. We covered nVidia’s ray-tracing keynote at ECGC a few years ago, when the company’s Tony Tamasi projected 2015 as the year for real-time ray-tracing. That prediction obviously didn’t fully materialize, but the company wasn’t too far off. Volta ended up providing some additional leverage to make 60FPS, real-time ray-tracing a reality. Even still, we’re not quite there with consumer hardware. Epic Games and nVidia have been demonstrating real-time ray-tracing with four Titan V GPUs lately – functionally $12,000 worth of Titan Vs – and that’s to achieve a playable real-time framerate with the ubiquitous “Star Wars” demo.

As we remarked back in our review of the i5-8400, which launched on its lonesome and without low-end motherboard support, the Intel i5-8400 makes the most sense when paired with B360 or H370 motherboards. Intel launched the i5-8400 and other non-K CPUs without that low-end chipset support, though, leaving only the Z370 enthusiast boards on the frontlines with the locked CPUs.

When it comes to Intel chipset differences, the main point of comparison between B, H, and Z chipsets is HSIO lanes – high-speed I/O lanes. HSIO lanes are Intel-assigned per chipset, with each chipset receiving a different count. High-speed I/O lanes can be assigned somewhat freely by the motherboard manufacturer, and are isolated from the graphics PCIe lanes that each CPU independently possesses. The HSIO lanes are as detailed below for the new 8th Generation Coffee Lake chipsets:
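The budgeting a board vendor does against that lane count can be sketched as a simple capacity check. The lane counts below reflect Intel's 300-series chipset briefs as we understand them, and the example board layout is entirely hypothetical; defer to Intel's datasheets for exact figures:

```python
# HSIO lane budgets per chipset (treat as illustrative; verify against
# Intel's published chipset documentation).
HSIO_LANES = {"Z370": 30, "H370": 30, "B360": 24, "H310": 14}

def fits(chipset: str, allocations: dict) -> bool:
    """Check whether a vendor's I/O allocation fits the chipset's HSIO budget."""
    used = sum(allocations.values())
    budget = HSIO_LANES[chipset]
    print(f"{chipset}: {used}/{budget} HSIO lanes used")
    return used <= budget

# Hypothetical B360 board: lane assignments are the vendor's choice.
layout = {"USB 3.0": 6, "SATA": 6, "GbE": 1, "M.2 x4": 4, "PCIe slots": 6}
print(fits("B360", layout))
```

This is why a feature present on one B360 board may be absent on another: the budget is fixed per chipset, but how it is spent is up to the vendor.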
