The “correct” method for applying thermal paste is still the subject of arguments, despite plenty of articles with testing and hard numbers to back them up. As we mentioned in our Threadripper paste application comparison, the general consensus for smaller desktop CPUs is that, as long as enough paste is applied to cover the IHS, every method is basically the same. The “blob” method works just fine. We have formally tested this for Threadripper (which cares greatly about IHS coverage) and X99 CPUs, but not for smaller desktop SKUs. Today’s content debuts our formal thermal paste quantity testing -- not just method of application, but amount of paste -- and looks specifically at the more common desktop CPUs. We are finally addressing the YouTube-wide comments of “too much” or “too little” paste, likely so prevalent because everyone building a PC has personal exposure to this one specific step.
Again, this isn’t really about whether an “X” or “dot” or thin spread is better (none is superior, assuming all cover the IHS equally -- it’s just about how easily they achieve that goal). This is about how much the quantity of paste matters. See the example below:
For today, we’re talking about volt-frequency scalability on our 8086K one more time. This time, coverage includes manual binning of our chip, as we already illustrated the limitations of the IMC in our overclocking stream. We’ve also already tested the CPU for thermal and acoustic performance when considering liquid metal applications.
The Intel i7-8086K is a binned i7-8700K, so we thought we’d see what bin we got. This testing exhibits simple volt-frequency curves as plotted against Blender and Firestrike stability testing. Note that our stability tests were limited to 30 minutes in an intensive Blender workload. Realistically, this is the longest test achievable for publication purposes, and 99% of CPUs that pass this test will remain stable. If we were selling these CPUs, like Silicon Lottery does, it’d obviously be preferable to test for many hours.
Frequency is the most advertised spec of RAM. As anyone who’s dug a little deeper knows, memory performance depends on timings as well -- and not just the primary ones. We found this out the hard way while doing comparative testing for an article on extremely high-frequency memory that refused to stabilize. We shelved that article indefinitely, but due to reader interest (thanks, John), we decided to explore memory subtimings in greater depth.
This content hopes to define memory timings and demystify the primary timings, including CAS (CL), tRCD, tRP, and tRAS. As we define primary memory timings, we’ll also demonstrate how some memory ratios work (and how they can sometimes operate out of ratio), and how much secondary and tertiary timings (like tRFC) can impact performance. Our goal is to later revisit this topic with a deep-dive on secondary and tertiary timings, building on this introduction.
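To ground these definitions, a quick sketch (our own illustration, not from the article) of why CAS latency in clock cycles doesn’t tell the whole story: the real-time latency depends on frequency too, since DDR memory transfers twice per I/O clock, so one cycle lasts 2000 / (data rate in MT/s) nanoseconds.

```python
def cas_latency_ns(cl_cycles: int, data_rate_mts: int) -> float:
    """True CAS latency in nanoseconds.

    DDR transfers twice per clock, so the I/O clock in MHz is
    data_rate / 2, meaning one cycle lasts 2000 / data_rate ns.
    """
    return cl_cycles * 2000 / data_rate_mts

# A higher CL at a higher frequency can match a lower CL at a lower one:
print(cas_latency_ns(16, 3200))  # DDR4-3200 CL16 -> 10.0 ns
print(cas_latency_ns(14, 2666))  # DDR4-2666 CL14 -> ~10.5 ns
```

This is why comparing kits by CL alone (or by frequency alone) is misleading; both numbers matter together, before subtimings even enter the picture.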
We got information and advice from several memory and motherboard manufacturers in the course of our research, and we were warned multiple times about the difficulty of tackling this subject. On the one hand, it’s easy to get lost in minutiae, and on the other it’s easy to summarize things incorrectly. As ASUS told us, “you need to take your time on this one.” This is a general introduction, to be followed by another article with more detail on secondary and tertiary timings.
One of our Computex advertisers was CableMod, which is making a new vertical GPU mount that positions the video card farther back in the case, theoretically lowering thermals. We wanted to test this claim properly. It makes logical sense that a card positioned farther from the glass would operate cooler, but we wanted to test to what degree that’s true. Most vertical GPU mounts do fine for open loop cooling, but suffocate air-cooled cards by limiting the gap between the card and the glass to less than an inch or two. The CableMod mount should push cards closer to the motherboard, which has other interesting thermal characteristics that we’ll get into today.
We saw several cases at Computex that aim to move to rotating PCIe expansion slots, meaning that some future cases will accommodate GPUs positioned further toward the motherboard. Not all cases are doing this, leaving room for CableMod to compete, but it looks like Thermaltake and Cooler Master are moving in this direction.
Manufacturing a single case can cost hundreds of thousands of dollars to design and develop, but the machinery used to make those cases costs millions of dollars. In a recent tour of Lian Li’s case manufacturing facility in Taiwan, we got to see first-hand the advanced and largely autonomous hydraulic presses, laser cutters, automatic shaping machines, and other equipment used to make a case. Some of these tools apply hundreds of thousands of pounds of force to case paneling, upwards of 1 million Newtons, and others will use high voltage to spot-weld pieces to aluminum paneling. Today, we’re walking through the start-to-finish process of how a case is made.
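As a quick sanity check (our own arithmetic, not from the tour), the two force figures quoted above are consistent with each other:

```python
# Convert the quoted press force between Newtons and pounds-force.
LBF_TO_N = 4.44822  # one pound-force in Newtons (standard conversion)

newtons = 1_000_000  # "upwards of 1 million Newtons"
pounds = newtons / LBF_TO_N

print(round(pounds))  # ~224,809 lbf -- "hundreds of thousands of pounds"
```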
The first step of case manufacturing at the Lian Li facility is to design the product. Once this process is done, CAD files go to Lian Li’s factory across the street to be turned into a case. In a simplified, canonical view of the manufacturing process, the first step is design, followed by sourcing and preparing raw materials, then either a laser cutter for basic shapes or a press for tooled punch-outs, then washing, grinding, flattening, welding, and anodizing.
After seeing dozens of cases at Computex 2018, we’ve now rounded up what we think are the best cases from the show, with the most interesting design elements, price points, or innovations. As always, wait until we can review these cases before getting too hyped and pre-ordering, but we wanted to at least point out the top cases to pay attention to over the next year.
We’re calling this content the “Most Room for Improvement at Computex 2018” content piece. A lot of products this year are still prototypes, and so still have lots of time to improve and change. Many of the manufacturers have asked for feedback from media and will hopefully be making changes prior to launch, but we wanted to share some of our hopes for improvement with all of you.
Separately, Linus of LinusTechTips joined us for the intro of this video, if that is of interest.
With B350, B360, Z370, Z390, X370, and Z490, we think it’s time to revisit an old topic: what a chipset actually is. This is primarily to establish why we need clarity on what each of these provides – there are a lot of chipsets with similar names, different socket types, and similar features. We’re here to define a chipset today in TLDR fashion, with a later piece to explain the actual chipset differences.
As for what a chipset actually is, this calls back to a GN article from 2012 – though we can do a better job now. The modern chipset is a glorified I/O controller, and can be thought of as the spinal cord of the computer, while the CPU is the disembodied brain. Intel calls its chipset a PCH, or Platform Controller Hub, while AMD just goes with the generic and appropriate term “chipset.” The chipset is the center of I/O for the rest of the motherboard, assigning I/O lanes to devices like SATA, gigabit ethernet, and USB ports.
Our colleagues at Hardware Canucks got a whole lot of hate for their video about switching back to Intel, to the point that the response demonstrated the profound ignorance of blind fanboyism toward any product. We decided to run more in-depth tests of the same featureset as Dmitry, though primarily for selfish reasons, as we’ve also been considering a new render machine build. If HWC’s findings held true, our plans of using an old 6900K would be meaningless in the face of a much cheaper CPU with an IGP.
For this testing, we’re using 32GB of RAM for all configurations (dual-channel for Z/X platforms and quad-channel for X399/X299). We’re also using an EVGA GTX 1080 Ti FTW3 for CUDA acceleration – because rendering without CUDA is torturously slow and we’re testing for real-world conditions.
Adobe recently added IGP-enabled acceleration to its Premiere video editing and creation software, which seems to leverage a component that is often irrelevant in our line of work – the on-die graphics processor. This move could potentially invalidate the rendering leverage provided by the likes of a 7980XE or 1950X, saving money for anyone who doesn’t need the additional threads for other types of work (like synchronous rendering or non-Premiere workstation tasks, e.g. Blender). Today, we’re benchmarking Adobe Premiere’s rendering speed on an Intel i7-8700K, AMD R7 2700X, Intel i9-7980XE, and AMD Threadripper 1950X.
In case you find it boring to watch an IHS get sanded for ten minutes, we’ve written up this recap of our newest video. The content features a lapped AMD Ryzen APU IHS for the R3 2200G, which we previously delidded and later topped with a custom copper Rockit Cool IHS. For this next thermal benchmark, we sanded down the AMD Ryzen APU IHS with 600-, 1200-, 1500-, 2000-, and then 3000-grit (wet) paper to smooth out the IHS surface. After this, we used a polishing rag and compound to further buff the IHS (not shown in the video, because it is exceptionally boring to watch), then cleaned it and ran the new heatspreader through our standardized thermal benchmark.
We moderate comments on a roughly 24-48 hour cycle, so there will be some delay after submitting a comment.