We're finalizing our validation of the recent EVGA VRM concerns, which the company addressed by introducing a new VBIOS and an optional thermal pad solution. We tested each of these updates in our previous content piece and saw a marked improvement from the more aggressive fan speed curve.

Now, that stated, we still wanted to dig deeper. Our initial testing did apply one thermocouple to the VRM area of the video card, but we weren't satisfied with the application of that probe. It was enough to support our imaging results, which were built around validating Tom's Hardware DE's findings, but we needed to isolate a few variables to learn more about EVGA's VRM.

We toured Corsair's new offices about a year ago, where we briefly posted about some of the validation facilities and the then-new logo. Now, with the offices fully populated, we're revisiting to talk wind tunnels, thermal chambers, and test vehicles for CPU coolers and fans. Corsair Thermal Engineer Bobby Kinstle walks us through the test processes for determining on-box specs, explaining hundreds of thousands of dollars worth of validation equipment along the way.

This relates to some of our previous content, where we got time with a local thermal chamber to validate our own methodology. You might also be interested to learn about when and why we use delta values for cooler efficacy measurements, and why we sometimes go with straight diode temperatures (like thermal limits on GPUs).

Video here (posting remotely -- can't embed): https://www.youtube.com/watch?v=Mf1uI2-I05o

We're still on the road – but it's almost over. For now. For the last “Ask GN” update, we were posting from the Orange County / LA area following some hardware vendor visits. This episode was filmed at the usual set, but we're posting it from the San Jose area. It worked out to be: LA > Home > LA (CitizenCon) > San Jose, all in a span of about 3 weeks.

But we're here for another day before returning to hardware reviews. For this episode, we discuss using a FreeSync display with a higher-end nVidia card versus a lower-performing AMD card, whether VRM blower fans actually do anything, the 6700K vs. 6600K, and revisiting old GPUs. The last question is one we've already begun working on.

“Ye-- ye cain't take pictures h-- here,” a Porky Pig-like voice meekly spoke up from behind the acrylic windshield of a golf cart that'd rolled up behind us, “y-ye cain't be takin' pictures! I'm bein' nice right now!”

Most folks in media production, YouTube or otherwise, have probably run into this. We do regularly. We wanted to shoot an Ask GN episode while in California, and decided to opt for one of the fountains in Fountain Valley as the backdrop. That's not allowed, apparently, because that's just how rare water is in the region – don't look at it the wrong way. It might evaporate. Or something.

But no big deal – we grab the bags and march off wordlessly, as always, because this sort of thing just happens that frequently while on the road.

Regardless, because Andrew was not imprisoned for sneaking a shot of the fountain into our video or taking two pretzel snacks on the plane, Ask GN 29 has now been published to the web. The questions from viewers and readers this week include a focus on “why reviewers re-use GPU benchmark results” (we don't – explained in the video), the scalers in monitors and what “handles stretching” for resolutions, pump lifespan and optimal voltage for AIOs, and theoretical impact from HBM on IGPs.

This year has been the most travel-intensive year in the history of GN. We've made a few international trips for the company – Taiwan, China, Hong Kong, Macau, and London, mostly – and have had a merciless bombardment of domestic flights for coverage opportunities. One of those was PAX West, now behind us, and the next will bring us back to California for some company tours. We haven't done a full-on tour of major manufacturers on the west coast since about 2013, and the site has since made major improvements in skill (especially from both of our video staffers) and equipment.

The next steps for GN will be to push through another week of GPU and laptop coverage. That'll include the GE62VR, a final (for now) round of liquid cooled GPU reviews, and some special coverage that will soon be posted. Once that's past, we're taking a step back to cases and cooling, including coverage of Phononic's HEX 2.0 cooler, and then making plans for the next trip.

We've got a new thermal paste applicator tool that'll help ensure consistent, equal spread of TIM across cooler surfaces for future tests. As we continue to iterate on "Hybrid" DIY builds, or even just re-use coolers for testing, we're also working to control for all reasonable variables in the test process. Our active ambient monitoring with thermocouple readers was the first step of that, and ensures that even minute (resolution 0.1C) fluctuations in ambient are accounted for in the results. Today, we're adding a new tool to the arsenal. This is a production tool used in Asetek's factory, deployed to apply that perfect circle of TIM that comes pre-applied to all the liquid cooler coldplates. By using the same application method on our end (rather than hand-applying from a tube of compound), we eliminate variation in how the TIM is applied and the chance of applying too much or too little compound. These tools ensure exactly the same TIM spread each time, and mean that we can further eliminate variables in testing. That's especially important for regression testing.

This isn't a tool for home use; it's meant for production and test environments. When cooling manufacturers fight over half a degree of temperature advantage, it would be unfair to the products not to account for TIM application, which could easily create a 0.5C temperature swing. For consumers, that's irrelevant -- but we're showing a stack of products in direct head-to-head comparisons, and that stack needs to be accurate.

Following several YouTube commenter questions about our testing methodology and presentation, we decided to put together a short guide to FPS and temperature measurements. Speaking specifically to framerate and frametime testing, we've spent a few years refining our collection and presentation of “1% low” and “0.1% low” framerates, which convert frametime data into a framerate view of the slowest frames. These help us identify cases where a product produces high averages, but exhibits stuttering and jarring gameplay that negatively impacts the “fluidity” of the experience.

An example, as we note in the “What are 1% & 0.1% lows?” video, would be the G3258 in GTA V versus something like the X4 760K. In this particular case, we saw the G3258 sustaining a higher average FPS than the X4 760K, but it was actually a worse product for the setup – that's because of the low values. The G3258 was getting hammered by such big, sudden dips in performance (a result of its limited thread count) that the 0.1% low output would sometimes hit 4FPS. In the real world, this means that players see stuttering and jarring gameplay.
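To make the conversion concrete, here's a minimal sketch of the idea – not our production tooling, and the function name is hypothetical – assuming frametimes logged in milliseconds and using one common interpretation of the metric: average the slowest 1% (or 0.1%) of frames, then convert that frametime back to FPS.

```python
# Hypothetical sketch: deriving 1% and 0.1% low framerates from frametime data.
# Assumes frametimes are logged in milliseconds (e.g. by FRAPS/PresentMon-style tools).

def low_fps(frametimes_ms, percent):
    """Average the slowest `percent` of frames, then convert back to FPS."""
    slowest = sorted(frametimes_ms, reverse=True)         # longest frametimes first
    count = max(1, int(len(slowest) * percent / 100.0))   # e.g. the slowest 1% of frames
    avg_ms = sum(slowest[:count]) / count
    return 1000.0 / avg_ms                                # ms per frame -> frames per second

# Mostly smooth gameplay with a few severe hitches:
frametimes = [16.7] * 990 + [250.0] * 10
print("Average FPS:", round(1000.0 / (sum(frametimes) / len(frametimes)), 1))
print("1% low FPS: ", round(low_fps(frametimes, 1.0), 1))
print("0.1% low FPS:", round(low_fps(frametimes, 0.1), 1))
```

In this toy data set, the average FPS still looks healthy (~52 FPS), but the 1% and 0.1% lows collapse to ~4 FPS – which is exactly the kind of stutter the averages hide.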

Here's the video explaining all of this in greater depth:

We welcomed AMD's Scott Wasson on-camera at the company's Capsaicin event, where we also spoke to Roy Taylor about driver criticism and covered roadmap updates. Wasson was eager to discuss new display technology demonstrated at the event and highlighted a critical shift toward greater color depth and vibrancy. We saw early samples of HDR screens at CES, but the Capsaicin display was far more advanced.

But that's not all we spoke about. As a site which prides itself on testing frame delivery consistency (we call them “low frametimes” – 1% and 0.1% lows), it made perfect sense to speak with frametime testing pioneer Scott Wasson about the importance of this metric.

For the few unaware, Wasson founded the Tech Report and worked as the site's Editor-in-Chief up until January, at which time he departed as EIC and made a move to AMD. Wasson helped pioneer “frametime testing,” detailed in his “Inside the Second” article, and we'd strongly recommend a read.

Thermal testing for cases, coolers, CPUs, and GPUs requires very careful attention to methodology and test execution. Without proper controls for ambient or other variables within a lab/room environment, it's exceedingly easy for tests to vary to a degree that effectively invalidates the results. Cases and coolers are often fighting over one degree (Celsius) or less of separation, so having strict tolerances for ambient and active measurements of diodes and air at intake/exhaust helps ensure accurate data.

We recently put our methodology to the test by borrowing time on a local thermal chamber – a controlled environment – and checking our delta measurements against it. GN's thermal testing is conducted in a lab on an open-loop HVAC system; we measure ambient constantly (second-to-second) with thermocouples, then subtract those readings from diode readings to create a delta value. For the thermal chamber, we performed identical methodology within a more tightly controlled environment. The goal was to determine whether the delta value within the chamber paralleled the delta value achieved in our own (open air) lab, within reasonable margin of error; if so, we'd know our testing properly accounts for ambient and other variables.
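As a simplified illustration of the delta math – a hypothetical sketch, not our logging software, with made-up sample values – each diode sample is paired with the ambient thermocouple reading taken at the same moment, and the reported result is the difference:

```python
# Hypothetical sketch of delta-T calculation: pair each diode sample with the
# ambient thermocouple reading from the same second, then report the difference.

diode_c   = [52.1, 52.4, 52.3, 52.6, 52.5]   # e.g. CPU/GPU diode readings (C), illustrative values
ambient_c = [20.0, 20.1, 20.0, 20.2, 20.1]   # ambient logged second-to-second (C)

deltas = [d - a for d, a in zip(diode_c, ambient_c)]
avg_delta = sum(deltas) / len(deltas)

print("Per-sample delta T:", [round(x, 1) for x in deltas])
print("Average delta T over ambient: %.1f C" % avg_delta)
```

Because each sample subtracts the ambient reading from the same moment, small drifts in room temperature during a test pass don't skew the reported delta.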

The chamber used has climate control functions that include temperature settings. We set the chamber to match our normal lab temps (20C), then carefully checked where the intake and exhaust are set up within the chamber. This particular unit has slow, steady intake from the top that helps circulate air by pushing it down to an exhaust vent at the bottom. It'd just turn into an oven otherwise, as the system's rising temps would increase ambient. This still happens to some degree, but a control module on the thermal chamber helps adjust and regulate the 20C target as the internal temperature demands. The control module is the most expensive part, too; our chaperone told us that the units cost upwards of $10,000 – and that's for a 'budget-friendly' approach.


SSD benchmarks generally include two fundamental file I/O tests: sequential and 4K random R/W. At a top level, sequential tests consist of large, individual file transfers (think: media files), which are more indicative of media consumption and large-file rendering / compilation. 4K random tests employ thousands of files approximating 4KB in size each, generally producing results that are more indicative of what a user might experience in a Windows or application-heavy environment.
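For a rough illustration of the difference between the two patterns – a hypothetical sketch, not a substitute for a proper benchmark like fio or Iometer – sequential reads walk a file front-to-back in large blocks, while 4K random reads jump to random offsets in small blocks:

```python
# Hypothetical sketch contrasting the two access patterns against a pre-created test file.
# Real benchmarks (fio, Iometer, CrystalDiskMark) control queue depth, OS caching, and
# alignment far more carefully; this only illustrates the shape of the access pattern.
import os, random, time

PATH = "testfile.bin"                      # assumed pre-created test file, e.g. 1 GiB
FILE_SIZE = os.path.getsize(PATH)

def sequential_read(block=1024 * 1024):
    """Read the whole file in order with large blocks (media-style I/O)."""
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        while f.read(block):
            pass
    return FILE_SIZE / (time.perf_counter() - start) / 1e6   # MB/s

def random_4k_read(requests=10000, block=4096):
    """Read 4KB blocks from random offsets (OS/application-style I/O)."""
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        for _ in range(requests):
            f.seek(random.randrange(0, FILE_SIZE - block))
            f.read(block)
    return requests / (time.perf_counter() - start)          # IOPS

print("Sequential: %.0f MB/s" % sequential_read())
print("4K random:  %.0f IOPS" % random_4k_read())
```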


Theoretically, this would also be the test to which gamers should pay the most attention. A "pure gaming" environment (not using professional work applications) will be almost entirely exposed to small, random I/O requests generated within the host OS, games, and core applications. A particularly piratical gamer -- or just someone consuming large movie and audio files with great regularity -- would also find use in monitoring sequential I/O in benchmarks.

This article looks at a few things: What types of I/O requests do games spawn most heavily, and what makes for the best gaming SSD with this in mind? There are a few caveats here that we'll go through in a moment -- namely, exactly how "noticeable" various SSDs will be in games when it comes to performance. We used tracing software to analyze input / output operations while playing five recent AAA titles and ended up with surprisingly varied results.
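To show the spirit of that analysis – a hypothetical sketch only; our actual tooling and trace format differ – traced I/O operations can be bucketed by request size to see how much of a game's I/O is small and random-leaning versus large and sequential-leaning:

```python
# Hypothetical sketch: bucket traced I/O requests by size to characterize a workload.
# Assumes a trace exported as CSV rows with a "size_in_bytes" column; real trace
# exports (e.g. from Process Monitor or dedicated I/O monitors) use different formats.
import csv
from collections import Counter

def bucket(size_bytes):
    if size_bytes <= 4096:
        return "<=4KB (small/random-leaning)"
    if size_bytes <= 65536:
        return "4KB-64KB"
    return ">64KB (large/sequential-leaning)"

counts, total_bytes = Counter(), Counter()
with open("game_io_trace.csv", newline="") as f:        # hypothetical trace export
    for row in csv.DictReader(f):
        size = int(row["size_in_bytes"])
        counts[bucket(size)] += 1
        total_bytes[bucket(size)] += size

for b in counts:
    print("%-35s %8d requests  %8.1f MB" % (b, counts[b], total_bytes[b] / 1e6))
```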

UPDATE: Clarified several instances of "file" vs. "I/O" usage.


