We’re at PAX West 2018 for just one day this year (primarily for a discussion panel), but stopped by the Gigabyte booth for a hands-on with the new RTX cards. As with most other manufacturers, these cards aren’t 100% finalized yet, although they do have near-final cooling designs. The models shown today appear to use the reference PCB design with hand-made elements on the coolers, as partners had limited time to prepare. Gigabyte expects to have custom PCB solutions at a later date.
“How frequently should I replace liquid metal?” is one of the most common questions we get. Liquid metal is applied between the CPU die and IHS to improve thermal conductivity from the silicon, but there hasn’t been much long-term testing on liquid metal endurance versus age. Cracking and drying are some of the most common concerns, leading users to wonder whether liquid metal performance will fall off a cliff at some point. One of our test benches has been running thermal endurance cycling tests for the last year now, since September of 2017, just to see if the liquid metal has aged at all.
This is a case study. We are testing with a sample size of one, so consider it an experiment and case study rather than an all-encompassing test. It is difficult to conduct long-term endurance tests with multiple samples, and doing so would require dozens (or more) of identical systems to really build out a large database. From that angle, again, please keep in mind that this is a case study of one test bench, with one brand of liquid metal.
The “correct” method for applying thermal paste is still the subject of arguments, despite plenty of articles with testing and hard numbers to back them up. As we mentioned in our Threadripper paste application comparison, the general consensus for smaller desktop CPUs is that, as long as enough paste is applied to cover the IHS, every method is basically the same. The “blob” method works just fine. We have formally tested this for Threadripper (which cares greatly about IHS coverage) and X99 CPUs, but not for smaller desktop SKUs. Today’s content debuts our formal thermal paste quantity testing -- not just method of application, but amount -- and looks specifically at the more common desktop CPUs. We are finally addressing the YouTube-wide comment of “too much” or “too little” paste, likely so prevalent as a result of everyone’s personal exposure to this one specific aspect of PC building.
Again, this isn’t really about whether an “X” or “dot” or thin spread is better (and none is superior, assuming all cover the IHS equally -- it’s just about how easily they achieve that goal). This is about how much quantity matters. See the example below:
For today, we’re talking about volt-frequency scalability on our 8086K one more time. This time, coverage includes manual binning of our core, as we already illustrated limitations of the IMC in the overclocking stream. We’ve also already tested the CPU for thermal and acoustic performance when considering liquid metal applications.
The Intel i7-8086K is a binned i7-8700K, so we thought we’d see what bin we got. This testing exhibits simple volt-frequency curves as plotted against Blender and Firestrike stability testing. Note that our stability tests were limited to 30 minutes in an intensive Blender workload. Realistically, this is the most achievable for publication purposes, and 99% of CPUs that pass this test will remain stable. If we were selling these CPUs, maybe like Silicon Lottery, it’d obviously be preferable to test for many hours.
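The binning process described above can be sketched as a simple search: for each frequency step, walk the voltage up until the stability test passes, then record the minimum stable voltage. The sketch below is a hypothetical illustration in Python -- the `run_stability_test` stub stands in for a real 30-minute Blender run, and the voltage-per-MHz figures are invented for demonstration, not measurements from our sample.

```python
# Sketch of a manual binning loop. Voltages are in integer millivolts to
# avoid floating-point drift. run_stability_test is a stub standing in
# for a real stress test (e.g., a 30-minute Blender render); the numbers
# inside it are assumptions for illustration only.

def run_stability_test(freq_mhz: int, vcore_mv: int) -> bool:
    # Stub: pretend this sample needs ~0.25 mV per MHz above a
    # 4.3 GHz / 1200 mV baseline. A real test runs a workload instead.
    required_mv = 1200 + max(0, freq_mhz - 4300) // 4
    return vcore_mv >= required_mv

def bin_cpu(freqs, v_start_mv=1200, v_max_mv=1450, v_step_mv=10):
    """Return the minimum stable voltage (mV) found for each frequency."""
    curve = {}
    for f in freqs:
        v = v_start_mv
        while v <= v_max_mv and not run_stability_test(f, v):
            v += v_step_mv
        curve[f] = v if v <= v_max_mv else None  # None = unstable at v_max
    return curve

print(bin_cpu([4700, 4900, 5000, 5100]))
# {4700: 1300, 4900: 1350, 5000: 1380, 5100: 1400}
```

The resulting dictionary is the volt-frequency curve: a chip that reaches a given frequency at a lower voltage than another sample is, by definition, the better bin.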
Frequency is the most advertised spec of RAM. As anyone who’s dug a little deeper knows, memory performance depends on timings as well--and not just the primary ones. We found this out the hard way while doing comparative testing for an article on extremely high frequency memory which refused to stabilize. We shelved that article indefinitely, but due to reader interest (thanks, John), we decided to explore memory subtimings in greater depth.
This content hopes to define memory timings and demystify the primary timings, including CAS (CL), tRCD, tRP, and tRAS. As we define primary memory timings, we’ll also demonstrate how some memory ratios work (and how they sometimes can operate out of ratio), and how much secondary and tertiary timings (like tRFC) can impact performance. Our goal is to follow this piece with a deeper dive into secondary and tertiary timings.
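One concrete way to see why frequency alone is misleading: CAS latency is counted in clock cycles, so the same CL value means less absolute time at a higher data rate. The sketch below (our own illustration, with example kit numbers) converts CL to nanoseconds.

```python
# Sketch: converting CAS latency (in clock cycles) to absolute time.
# DDR memory transfers twice per clock, so the clock period in ns is
# 2000 / data_rate_MTs. First-word latency is then CL * period.

def cas_latency_ns(cl: int, data_rate_mts: int) -> float:
    """Approximate first-word latency in nanoseconds for a DDR kit."""
    clock_period_ns = 2000 / data_rate_mts  # two transfers per clock
    return cl * clock_period_ns

# Example kits: despite the frequency gap, absolute latency is similar.
print(cas_latency_ns(16, 3200))  # DDR4-3200 CL16 -> 10.0 ns
print(cas_latency_ns(13, 2666))  # DDR4-2666 CL13 -> ~9.75 ns
```

This is why a “faster” kit with loose timings can trade blows with a slower kit running tight timings -- and why the secondary and tertiary timings, which this formula ignores entirely, matter so much in practice.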
We got information and advice from several memory and motherboard manufacturers in the course of our research, and we were warned multiple times about the difficulty of tackling this subject. On the one hand, it’s easy to get lost in minutiae, and on the other it’s easy to summarize things incorrectly. As ASUS told us, “you need to take your time on this one.” This is a general introduction, to be followed by another article with more detail on secondary and tertiary timings.
One of our Computex advertisers was CableMod, which is making a new vertical GPU mount that positions the video card farther back in the case, theoretically lowering thermals. We wanted to test this claim properly. It makes logical sense that a card positioned farther from the glass would operate cooler, but we wanted to test to what degree that’s true. Most vertical GPU mounts do fine for open loop cooling, but suffocate air-cooled cards by limiting the gap between the card and the glass to less than an inch or two. The CableMod mount should push cards close to the motherboard, which has other interesting thermal characteristics that we’ll get into today.
We saw several cases at Computex that aim to move to rotating PCIe expansion slots, meaning that some future cases will accommodate GPUs positioned further toward the motherboard. Not all cases are doing this, leaving room for CableMod to compete, but it looks like Thermaltake and Cooler Master are moving this direction.
Designing and developing a single case can cost hundreds of thousands of dollars, but the machinery used to manufacture those cases costs millions. In a recent tour of Lian Li’s case manufacturing facility in Taiwan, we got to see first-hand the advanced and largely autonomous hydraulic presses, laser cutters, automatic shaping machines, and other equipment used to make a case. Some of these tools apply hundreds of thousands of pounds of force to case paneling, upwards of 1 million newtons, and others use high voltage to spot-weld pieces to aluminum paneling. Today, we’re walking through the start-to-finish process of how a case is made.
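For readers who want to sanity-check the units above, the pounds-force and newton figures are two expressions of the same magnitude; the press rating below is an illustrative number, not a Lian Li spec.

```python
# Sanity check: pounds-force vs. newtons for a hydraulic press rating.
# The 225,000 lbf figure here is illustrative, chosen to show the
# rough equivalence to 1 million newtons.

LBF_TO_N = 4.44822  # newtons per pound-force

press_lbf = 225_000
press_newtons = press_lbf * LBF_TO_N
print(f"{press_lbf:,} lbf = {press_newtons:,.0f} N")  # ~1.0 million N
```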
The first step of case manufacturing at the Lian Li facility is to design the product. Once this process is done, CAD files go to Lian Li’s factory across the street to be turned into a case. In a simplified, canonical view of the manufacturing process, the first step is design, then sourcing and preparation of raw materials, followed by either a laser cutter for basic shapes or a press for tooled punch-outs, then washing, grinding, flattening, welding, and anodizing.
After seeing dozens of cases at Computex 2018, we’ve now rounded up what we think are the best cases from the show, with the most interesting design elements, price points, or innovations. As always, wait until we can review these cases before getting too hyped and pre-ordering, but we wanted to at least point out the top cases to pay attention to for the next year.
We’re calling this one the “Most Room for Improvement at Computex 2018” content piece. A lot of products this year are still prototypes, and so still have lots of time to improve and change. Many of the manufacturers have asked for feedback from media and will hopefully be making changes prior to launch, but we wanted to share some of our hopes for improvement with all of you.
Separately, Linus of LinusTechTips joined us for the intro of this video, if that is of interest.
With B350, B360, Z370, Z390, X370, and Z490, we think it’s time to revisit an old topic: what a chipset actually is. This is primarily to establish why we need clarity on what each of these provides – there are a lot of chipsets with similar names, different socket types, and similar features. We’re here to define a chipset today in TLDR fashion, with a later piece to explain the actual chipset differences.
As for what a chipset actually is, this calls back to a GN article from 2012 – though we can do a better job now. The modern chipset is a glorified I/O controller, and can be thought of as the spinal cord of the computer, while the CPU is the disembodied brain. Intel calls its chipset a PCH, or Platform Controller Hub, while AMD just goes with the generic and appropriate term “chipset.” The chipset is the center of I/O for the rest of the motherboard, assigning I/O lanes to devices like SATA, Gigabit Ethernet, and USB ports.
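One practical consequence of the “spinal cord” model: everything attached to the chipset shares a single uplink to the CPU (Intel’s DMI 3.0 is roughly a PCIe 3.0 x4 link, about 3.94 GB/s usable). The sketch below tallies illustrative peak rates -- the device figures are theoretical interface maximums, not measurements -- to show how easily that uplink can be oversubscribed.

```python
# Sketch: chipset-attached devices share one uplink to the CPU.
# DMI 3.0 is approximately a PCIe 3.0 x4 link (~3.94 GB/s usable).
# Device numbers below are illustrative theoretical peaks.

DMI3_GBPS = 3.94  # approximate usable DMI 3.0 bandwidth, GB/s

devices = {
    "NVMe SSD (PCIe 3.0 x4)": 3.5,
    "Gigabit Ethernet": 0.125,
    "USB 3.1 Gen2 port": 1.25,
}

total = sum(devices.values())
print(f"Aggregate peak demand: {total:.2f} GB/s vs. {DMI3_GBPS} GB/s uplink")
print("Oversubscribed" if total > DMI3_GBPS else "Within DMI budget")
```

In practice these devices rarely peak simultaneously, which is why the arrangement works -- but it’s also why CPU-attached PCIe lanes remain the premium resource on any platform.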