Hardware Guides

To use any processing product for six years is a remarkable feat. GPUs struggle to hang on for that amount of time. You’d be reducing graphics settings heavily after the second or third year, and likely considering an upgrade around the same time. Intel’s CPUs are different – they don’t change much, and we almost always recommend skipping at least one generation between upgrades (for the gaming audience, anyway). The 7700K increased temperatures substantially and didn’t increase performance in step, making it a pointless upgrade for any owners of the i7-6700K or i7-4790K.

We did remark in the review that owners of the 2500K and 2600K may want to consider finally moving up to Kaby Lake, but if we think about that for a second, it almost seems ridiculous: Sandy Bridge is an architecture from 2011. The i5-2500K came out in 1Q11, making it about six years old as of 2017. That is some serious staying power. Intel’s generational gains are almost always below 10%. We see double-digit jumps in Blender performance and some production workloads, but those don’t happen with every architecture launch. With gaming, based on the 6700K to 7700K jump, you’re lucky to get more than 1.5-3% extra performance. That’s counting frametime performance, too.

AMD’s architectural jumps should be a different story, in theory, but that’s mostly because Zen lands five years after the launch of the FX-8350. AMD did have subsequent clock-rate increases and various rebadges or power efficiency improvements (FX-8370, FX-8320E), but those weren’t really that exciting for existing owners of 8000-series CPUs. In that regard, it’s the same story as Intel. AMD’s Ryzen will certainly post large gains over AMD’s last architecture given the sizeable temporal gap between launches, but we still have no idea how the next iteration will scale. It could well be just as slow as Intel’s scaling, depending on what architectural and process walls AMD may run into.

That’s not really the point of this article, though; today, we’re looking at whether it’s finally time to upgrade the i5-2500K CPU. Owners of the i5-2500K did well to buy one, it turns out, because the only major desire to upgrade would likely stem from a want of more I/O options (like M.2, NVMe, and USB3.1 Gen2 support). Hard performance is finally becoming a reason to upgrade, as we’ll show, but we’d still rank changes to HSIO (high-speed I/O) as the biggest driver in upgrade demand. In the time since 2011, PCIe Gen3 has proliferated across all relevant platforms, USB3.x ports have increased to double digits on some boards, M.2 and NVMe have entered the field of SSDs, and SATA III is on its way out as a storage interface.

After receiving a number of emails asking how to flash a motherboard BIOS, we decided to revive an old series of ours and revisit each motherboard vendor’s flashing process as quickly as possible. This is particularly useful for users still on the Z170 platform who may want to flash to support Kaby Lake CPUs. The process is the same for all modern MSI motherboards, and will work across all SKUs (with some caveats and disclaimers).

This tutorial shows how to flash firmware and update BIOS for MSI motherboards, including the new Z270 Pro Carbon / Tomahawk boards and ‘old’ Gaming M7 Z170 motherboards. For this guide, we’re primarily showing the MSI Z270 Gaming Pro Carbon, but we do briefly have some shots of the Tomahawk Z270 board. This guide applies retroactively to Z170 motherboards, and even most Z97 motherboards.

Article continues below the video, if written format is preferred.
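The flashing itself happens in the board’s M-Flash utility, as shown in the video, but the one piece of preparation is getting the extracted BIOS file onto a FAT32 USB stick. Below is a minimal PowerShell sketch of that step, using placeholder file and drive names rather than MSI’s actual package names:

# Placeholders: adjust the download path and USB drive letter for your system.
$package = "$env:USERPROFILE\Downloads\msi_bios_package.zip"   # BIOS zip downloaded from the MSI product page
$usb     = "E:\"                                               # FAT32-formatted USB drive

Get-FileHash -Path $package -Algorithm SHA256                  # optional: sanity-check the download before flashing
Expand-Archive -Path $package -DestinationPath $usb -Force     # M-Flash will look for the extracted BIOS file here

From there, reboot into the UEFI, open M-Flash, and point it at the file on the USB drive; the on-screen steps are covered in the video above.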

Best Gaming PC Cases of 2017 | CES Round-Up

Published January 13, 2017 at 4:00 pm

This year, case manufacturers will primarily be focused on shifting to USB Type-C – you heard it here first – as the upcoming trend for case design. Last year, it was a craze to adopt tempered glass and RGB LEDs, and that’s plainly not stopped with this year’s CES. That trend will carry through the first half of 2017, and will likely give way to Type-C-heavy cases at Computex in May-June.

For today, we’re looking at the best PC cases of 2017 thus far, as shown at the annual Consumer Electronics Show (CES). Our case round-ups are run every year and help to determine upcoming trends in the PC cases arena. This year’s collection of the top computer cases (from $60 to $2000) covers the major budget ranges for PC building.

AMD’s Ryzen platform is on its march to the launch window – likely February of 2017 – and will be pushing non-stop information until its time of delivery. For today, we’re looking at the CPU and chipset architectures in greater depth, following up on yesterday’s motherboard reveal.

First, let’s clear up nomenclature confusion: “Zen” is still the name of AMD’s next-generation CPU architecture. “Ryzen” is the family of CPUs, comparable to Intel’s “Core” family in some loose ways. Each Ryzen CPU will exist on the Zen architecture, and each Ryzen CPU will have its own individual alphanumeric identifier (just like always).

Optane is Intel’s latest memory technology. The long-term goal for Optane is for it to be used as supplemental system memory, caching storage, and primary storage inside PCs. Intel claims that Optane is faster than NAND flash, only slightly slower than DRAM, has higher endurance than NAND, and, due to its density, will be about half the cost of DRAM. The catch with all of these claims is that Intel has yet to release any concrete data on the product.

What we do know is that Lenovo announced that they will be using a 16GB M.2 Optane drive for caching in a couple of their new laptops during Q1 2017. Intel also announced that another 32GB caching drive should be available later in the year, something we’ve been looking into following CES 2017. This article will look into what Intel Optane actually is, how we think it works, and whether it's actually a viable device for the enthusiast market.

AMD has released cursory details on its Vega GPU architecture, pertaining to high-bandwidth caching, an iterative step from CUs to NCUs, and a unified-but-not-unified memory configuration.

Going into this, note that we’re still not 100% briefed on Vega. We’ve worked with AMD to try and better understand the architecture, but the details aren’t fully organized for press just yet; we’re also not privy to product details at this time, which would be those more closely associated with shader counts, memory capacity, and individual SKUs. Instead, we have some high-level architecture discussion. It’s enough for a start.

We recently prolonged the life of GN Andrew’s Lenovo laptop, a task accomplished by tearing the thing down and cleaning out the dust, then re-applying thermal compound. This brought temperatures down well below 80C on the silicon components, where the unit was previously reaching 100C (at or near TjMax, and thereby throttling). The laptop has lived to work many more long render sessions since that time, and has been in good shape since.

That’s gotten us a bit of a reputation, it seems, as we just recently spent a few hours fixing a Dell Studio XPS 1640 and its noise issues.

The 1640 had a few problems at its core: The first, loud noise during idle (desktop); the second, slowing boot times with age; and the third, less-than-snappy responsiveness upon launching applications.

Subscribers of our YouTube channel will know that we’ve been hastily assembling a gaming HTPC for the last few days, intended as a gift for Andie (my sister, and also occasional tester for the site). We started on the 21st, with limited time to order any missing parts, and finished just today (24th). The goal was to replace her current HTPC, catalogued many years ago on GN, which uses an A10-5800K and an upgraded MSI GTX 960 Gaming X, and is struggling to sustain high framerates.

No surprise.

The A10-5800K was an excellent CPU for the original build (which had no GPU, and later added a 750 Ti), but it’s not so powerful 4 years later. We wanted to pull parts for this build that could be readily found in GN’s lab, without shipping requirements (where avoidable), and without pulling parts that are in active or regression testing use.

Ramping up video production for 2016 led to some obvious problems – namely, burning through tons of storage. We’ve fully consumed 4TB of video storage this year with what we’re producing, and although that might be a small amount to large video enterprises, it is not insignificant for our operation. We needed a way to handle that data without potentially losing anything that could be important later, and ultimately decided to write a custom PowerShell script for automated HandBrake CLI compression routines that execute monthly.

Well, will execute monthly. For now, it’s still catching up and is crunching away on 4000+ video files for 2016.
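We’re not publishing the full script here, but the core idea is simple: walk the video archive, hand each file to HandBrakeCLI, and skip anything that has already been compressed. A minimal PowerShell sketch of that loop, with placeholder paths and quality settings rather than our production values:

# Placeholder paths and settings; -i, -o, -e, and -q are standard HandBrake CLI options.
$source = "D:\Video\2016"
$output = "D:\Video\2016-compressed"
New-Item -ItemType Directory -Force -Path $output | Out-Null

Get-ChildItem -Path $source -Recurse -Include *.mp4, *.mov | ForEach-Object {
    $dest = Join-Path $output ($_.BaseName + ".mp4")
    if (-not (Test-Path $dest)) {                              # skip files already compressed on a previous run
        & "C:\Program Files\HandBrake\HandBrakeCLI.exe" -i $_.FullName -o $dest -e x264 -q 22
    }
}

Getting the monthly execution is then just a matter of registering the script with Windows Task Scheduler.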

Thermal cameras have proliferated to a point that people are buying them for use as tech toys, made possible thanks to new prices nearer $200 than the multi-thousand-dollar thermal imaging cameras that have long been the norm. Using a thermal camera that connects to a mobile phone eliminates a lot of the cost for such a device, relying on the mobile device’s hardware for the post-processing and image cleanup that make the cameras semi-useful. They’re not the most accurate and should never be trusted over a dedicated, proper thermal imaging device, but they’re accurate enough for spot-checking and rapid concepting of testing procedures.

Unfortunately, we’ve seen them used lately as hard data for thermal performance of PC hardware. For all kinds of reasons, this needs to be done with caution. We urged in our EVGA VRM coverage that thermal imaging was not perfect for the task, and later stuck thermal probes directly to the card for more accurate measurements. Even ignoring the factors of emission, transmission, and reflection (today’s topics), using thermal imaging to take temperature measurements of core component temperatures is methodologically flawed. Measuring the case temperature of a laptop or chassis tells us nothing more than that – the temperature of the surface materials, assuming an ideal black body with an emissivity close to 1.0. We’ll talk about that contingency momentarily.
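To preview that contingency with a simplified graybody model (ignoring atmospheric transmission, which a camera’s own radiometric math also accounts for), the radiation reaching the sensor is approximately:

W ≈ ε·σ·T_surface⁴ + (1 − ε)·σ·T_reflected⁴

With emissivity (ε) near 1.0, the reading tracks the surface temperature; with a low-emissivity surface like bare aluminum or nickel-plated heatsinks, a large share of what the camera “sees” is reflected ambient radiation, and the reported temperature can be badly wrong.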

But even so: Pointing a thermal imager at a perfectly black surface and measuring its temperature tells us the temperature of the surface. Sure, that’s useful for a few things; in laptops, that could be determining if case temperature exceeds a particular manufacturer’s skin temp specification. This is good for validating whether a device might be safe to touch, or for proving that a device is too hot for actual on-lap use. We could also use this information for troubleshooting, to help determine where hotspots are under the hood, which is potentially useful in very specific cases.

That doesn’t, however, tell us the efficacy of the cooling solution within the computer. For that, we need software to measure the CPU core temperatures, the GPU diode, and potentially other components (PCH and HDD/SSD are less popular, but occasionally important). Further analysis would require direct thermocouple probes mounted to the SMDs of interest, like VRM components or VRAM. Neither of those two examples is equipped with internal sensors that software – or even the host GPU – is capable of reading.

