Subscribers to our YouTube channel will know that we’ve been hastily assembling a gaming HTPC for the last few days, built as a gift for Andie (my sister, and occasional tester for the site). We started on the 21st, with limited time to order any missing parts, and finished just today (the 24th). The goal was to replace her current HTPC, catalogued many years ago on GN, which uses an A10-5800K and an upgraded MSI GTX 960 Gaming X, and struggles to sustain high framerates.
The A10-5800K was an excellent CPU for the original build (which had no GPU, and later added a 750 Ti), but it’s not so powerful four years later. We wanted to pull parts for this build that could be readily found in GN’s lab, without shipping requirements (where avoidable), and without taking parts that are in active use for testing or regression benchmarks.
Ramping up video production for 2016 led to some obvious problems – namely, burning through tons of storage. We’ve fully consumed 4TB of video storage this year with what we’re producing, and although that might be a small amount for large video enterprises, it is not insignificant for our operation. We needed a way to handle that data without potentially losing anything that could be important later, and ultimately decided to write a custom PowerShell script that runs automated HandBrake CLI compression routines monthly.
Well, will execute monthly. For now, it’s still catching up and is crunching away on 4000+ video files for 2016.
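Our actual script is PowerShell, but the monthly pass boils down to something like this Python sketch. The HandBrakeCLI flags shown are real, though the RF value, directory layout, and helper names here are illustrative assumptions, not our production settings:

```python
import subprocess
from pathlib import Path

# Assumed to be on PATH; this is HandBrake's command-line encoder.
HANDBRAKE = "HandBrakeCLI"

def build_command(src: Path, dst: Path) -> list:
    """Build a HandBrakeCLI invocation: H.264 at an illustrative RF of 22."""
    return [
        HANDBRAKE,
        "-i", str(src),   # input file
        "-o", str(dst),   # output file
        "-e", "x264",     # H.264 software encoder
        "-q", "22",       # constant-quality RF value (assumed, not GN's setting)
    ]

def compress_dir(root: Path, run=subprocess.run) -> int:
    """Compress every .mov under root to .mp4, skipping finished files."""
    done = 0
    for src in sorted(root.rglob("*.mov")):
        dst = src.with_suffix(".mp4")
        if dst.exists():  # already compressed on a previous pass
            continue
        run(build_command(src, dst), check=True)
        done += 1
    return done
```

Checking for an existing output file makes the job resumable, which matters when a single pass can churn through thousands of files.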
Thermal cameras have proliferated to the point that people are buying them as tech toys, made possible by new prices nearer $200 than the multi-thousand-dollar imaging cameras that have long been the norm. A thermal camera that connects to a mobile phone eliminates much of the cost, relying on the phone’s hardware for the post-processing and image cleanup that make the cameras semi-useful. They’re not the most accurate and should never be trusted over a dedicated, proper thermal imaging device, but they’re accurate enough for spot-checking and for rapidly prototyping test procedures.
Unfortunately, we’ve seen them used lately as hard data for thermal performance of PC hardware. For all kinds of reasons, this needs to be done with caution. We urged in our EVGA VRM coverage that thermal imaging was not perfect for the task, and later stuck thermal probes directly to the card for more accurate measurements. Even ignoring the factors of emissivity, transmission, and reflection (today’s topics), using thermal imaging to measure core component temperatures is methodologically flawed. Measuring the case temperature of a laptop or chassis tells us nothing more than that – the temperature of the surface materials, and even then only assuming an ideal black body with an emissivity close to 1.0. We’ll talk about that contingency momentarily.
But even so: pointing a thermal imager at a perfectly black surface and measuring its temperature tells us just that – the temperature of the surface. Sure, that’s useful for a few things; in laptops, it could determine whether case temperature exceeds a particular manufacturer’s skin temp specification. That’s good for validating whether a device is safe to touch, or for proving that a device is too hot for actual on-lap use. We could also use this information as a troubleshooting aid to locate hotspots under the hood, potentially useful in very specific cases.
That doesn’t, however, tell us the efficacy of the cooling solution within the computer. For that, we need software to measure the CPU core temperatures, the GPU diode, and potentially other components (PCH and HDD/SSD are less popular, but occasionally important). Further analysis would require direct thermocouple probes mounted to the SMDs of interest, like VRM components or VRAM – neither of which is equipped with internal sensors that software, or even the host GPU, can read.
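To see why the emissivity contingency matters, here’s a minimal Python sketch under a simplified total-radiation (Stefan-Boltzmann) model. It ignores reflected background radiation and the camera’s actual LWIR band response, so treat the numbers as illustrative only:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 * K^4)

def apparent_temp_k(true_temp_k: float, emissivity: float) -> float:
    """Temperature a camera assuming emissivity = 1.0 would report for a
    surface at the given true temperature, under a simplified
    total-radiation model that ignores reflected background radiation."""
    radiated = emissivity * SIGMA * true_temp_k ** 4  # W/m^2 actually emitted
    # Invert Stefan-Boltzmann as if the surface were a perfect black body:
    return (radiated / SIGMA) ** 0.25

# A bare-metal heatsink (emissivity ~0.2) at 350 K (77 C) reads far low
# under this model -- roughly 234 K.
print(round(apparent_temp_k(350.0, 0.2), 1))
```

This is why shiny, low-emissivity surfaces like bare aluminum can read wildly wrong, and why imaging vendors recommend applying matte tape or paint of known emissivity before measuring.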
The second card in our “revisit” series – sort of semi-re-reviews – is the GTX 780 Ti from November of 2013, which originally shipped for $700. This was the flagship of the Kepler architecture, followed later by Maxwell architecture on GTX 900 series GPUs, and then the modern Pascal. The 780 Ti was in competition with AMD’s R9 200 series and (a bit later) R9 300 series cards, and was accompanied by the expected 780, 770, and 760 video cards.
Our last revisit looked at the GTX 770 2GB card, and our next will look at an AMD R9 200-series card. For today, we’re revisiting the GTX 780 Ti 3GB for an analysis of its performance in 2016, pitted against the modern GTX 1080, 1070, 1060, and 1050 Ti, and the RX 480, 470, and others.
GN reader ‘Eric’ reached out to loan us his Alphacool Eiswolf GPX Pro cooling block, which we’ve now applied to a GTX 1080 Founders Edition card. The Eiswolf build process isn’t too difficult – certainly easier than the tear-down of the average FE card. The Eiswolf GPX Pro has an on-card pump with designated in/out tubes, each terminating in threaded quick-release valves that hook into a semi-open loop. We later purchased an Alphacool Eisbaer for our radiator and CPU cooler, then connected everything together.
The review of the Eiswolf will be posted tomorrow, followed shortly by a look at EK WB’s Predator XLC. For today, we’re just posting the build log that our Patreon backers have helped produce.
Our full OCAT content piece is still pending publication, as we ran into some blocking issues when working with AMD’s OCAT benchmarking utility. In speaking with the AMD team, those are being worked out behind the scenes for this pre-release software, and are still being actively documented. For now, we decided to push a quick overview of OCAT, what it does, and how the tool will theoretically make it easier for all users to perform DX12 & Vulkan benchmarks going forward. We’ll revisit with a performance and overhead analysis once the tool works out some of its bugs.
The basics, then: AMD has built only the interface and overlay here, and uses the existing, open-source Intel+Microsoft amalgam PresentMon to perform the hooking and performance interception. We’ve been detailing PresentMon in our benchmarking methods for a few months now, using it to monitor low-level API performance and using Python and Perl scripts built by GN for data analysis. That’s the thing, though – PresentMon isn’t necessarily easy to understand, and our usage model revolves entirely around the command line. We use the preset commands established by the tool’s developers, then crunch data with spreadsheets and scripts. That’s not user-friendly for a casual audience.
Just deploying the tool involves Visual Studio package requirements and a rudimentary understanding of CMD – not hard to figure out, but not exactly fit for offering easy benchmarking to users. And even for technical media, an out-of-box PresentMon isn’t exactly the fastest tool to work with.
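As an example of the scripting involved, here’s a minimal Python sketch that reduces a PresentMon log to average FPS and 1% low FPS. It assumes the log’s ‘MsBetweenPresents’ frame-time column, and is a simplified stand-in for the sort of analysis we run, not our actual tooling:

```python
import csv
import io
from statistics import mean

def summarize(presentmon_csv: str) -> dict:
    """Compute average FPS and 1% low FPS from a PresentMon log.

    Assumes the log carries an 'MsBetweenPresents' column: the
    frame-to-frame present interval in milliseconds.
    """
    frametimes = [
        float(row["MsBetweenPresents"])
        for row in csv.DictReader(io.StringIO(presentmon_csv))
    ]
    # 1% lows: average of the slowest 1% of frames, expressed as FPS.
    worst = sorted(frametimes, reverse=True)
    one_pct = worst[: max(1, len(worst) // 100)]
    return {
        "avg_fps": 1000.0 / mean(frametimes),
        "1%_low_fps": 1000.0 / mean(one_pct),
    }
```

Even this toy version shows the workflow gap OCAT aims to close: capturing is a command-line invocation, and every number a reader actually sees requires a post-processing step like this.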
There’s inherent FPS loss when using capture software, GPU-accelerated or otherwise. The best software vendors can do is reduce that loss as much as possible, ideally without sacrificing too much video quality or compression efficiency.
A few months back, AMD finally axed its partnership with Raptr for the cumbersome Gaming Evolved suite. This move to greener – or ‘redder,’ perhaps – pastures immediately left AMD with a hole in its tools suite, namely a competitor to nVidia’s somewhat prolific ShadowPlay software capture tool.
Today, with the AMD ReLive update to the Crimson-brand drivers, AMD’s implemented its own solution to software capture for gameplay. The tool includes manually toggled capture, broadcast/streaming capture, and retroactive capture. This is a direct competitor to the ShadowPlay software from nVidia’s GeForce Experience suite, and performs many of the same functions with the same end objective.
We previously did this comparison with ShadowPlay versus FRAPS and AMD’s GVR, a solution that was ultimately subsumed by Gaming Evolved. It’s taken AMD a while to get back to this point, but ReLive is a fresh recording suite. In GN’s embedded video, we’ve got side-by-side capture comparisons between the two utilities, the impact on framerate when each is active, and a quick analysis of the compression’s efficacy. Much of this will also be contained below, though the quality comparison requires that you watch the video.
There are two ends to a power supply cable: the device-side and the PSU-side. The device-side of all PC cables is standardized – ATX 24-pin, EPS12V, PCI-e to the GPU, SATA – the wiring is known, and it doesn't change. What isn't standardized is the layout of the PSU-side modular cable headers. Some vendors use 6-pin connectors for their PSU-side peripheral headers (identical to those on PCI-e cables, because it saves cost), others opt for a wide-format pin-out for the same purpose, and still others use a bulky 9-pin block for universal connectivity, like some of EVGA's power supplies.
What can't be done, though, is mixing cables between all these units. Or at least, it shouldn't be done. Mixing cables between power supplies can kill them or kill attached components. Not always, but it can – and when the wiring crosses in exactly the wrong way, the failure will be spectacular. Like ESD, just because you've gotten away with mixing cables doesn't mean you always will. Electricity is not a mystery; we know well how it works, and crossing the wrong wires will damage components.
Following suit with the rest of our Black Friday coverage, including best SSDs and power supplies, we’ve next rounded up a few honorable mentions in the motherboard department. We're specifically looking at Intel boards today, as deals on AMD boards seemed scarce this year. With the looming obsolescence of the AM3/AM3+ socket, we elected not to include those boards. You’ll notice that, save for a common socket type (all supporting Intel’s latest Skylake processors), these picks vary quite a bit. Be assured, though: these boards all have a place. Whether it’s a minimalist, no-frills gaming machine for medium-to-high settings or a high-end, performance-minded overclocker, there’s a board here for it.
This list comprises the best gaming Intel motherboards for Cyber Monday (and onward), including Z170, B150, H110, and other motherboards.
The Z170 boards in this list are of proven quality and come recommended; however, it is worth mentioning that Z170 is not tantamount to "better." A poorly designed Z170 board is not inherently superior to a well-constructed B150 or H1xx, even at a comparable price. There's more to it than the chipset. If you're curious about the differences between Intel's Skylake chipsets, view this H110, H170, & Z170 guide.
Some PC parts garner a lot more attention than others: CPUs, GPUs, and SSDs have clear, exciting advancements and benefits that can be directly felt by the user. Some components, like PSUs, don’t get the same amount of coverage or excitement.
Nonetheless, power supplies are a vital part of a PC: a good PSU can last through multiple builds, whereas a bad one can cause strange issues and even break other components. With the holiday season approaching, we’ve once again compiled a list of ranked PSUs at different price points.
This is GN’s list of the best power supplies for gaming PCs in 2016, ranging from $45 to $300. Note that some of these power supplies will be on sale during Black Friday and Cyber Monday, so keep an eye on anything that looks appealing for your PC build.