Our review of the NVIDIA GTX 1080 Ti Founders Edition card went live earlier this morning, largely receiving praise for its performance gains while remaining the subject of criticism from a thermal standpoint. As we've often done, we decided to fix it. Modding the GTX 1080 Ti will bring our card up to higher sustained clock-rates by eliminating the thermal limitation, and can be done with the help of an EVGA Hybrid kit and a reference design. We've got both, and started the project prior to departing for PAX East this weekend.
This is part 1, the tear-down. As this content is published, we are already on-site in Boston for the event, so part 2 will not see the light of day until early next week. We hope to finalize our data on VRM/FET and GPU temperatures (related to clock speed) immediately following PAX East. These projects are always exciting, as they help us learn more about how a GPU behaves. We did similar projects for the RX 480 and GTX 1080 at launch last year.
Here's part 1:
The finer distinctions between DDR and GDDR can easily be masked by the impressive on-paper specs of the newer GDDR5 standards, often inviting an obvious question with a not-so-obvious answer: Why can’t GDDR5 serve as system memory?
In a simple response, it’s analogous to why a GPU cannot suffice as a CPU. More incisively, CPUs are composed of a few complex cores using complex instruction sets, in addition to on-die cache and integrated graphics. This makes the CPU suitable for the multitude of latency-sensitive tasks often beset upon it; however, that aptness comes at a cost, and the cost is paid in silicon. Conversely, GPUs can apportion more chip space to cores by using simpler, reduced-instruction-set cores. As such, GPUs can feature hundreds, if not thousands, of cores designed to process huge amounts of data in parallel. Whereas CPUs are optimized to process tasks serially, with as little latency as possible, GPUs have a parallel architecture and are optimized for raw throughput.
While the above doesn’t exactly explain the differences between DDR and GDDR, the analogy is fitting. CPUs and GPUs both have access to temporary pools of memory, and just as both processors are highly specialized in how they handle data and workloads, so too is their associated memory.
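The latency-versus-throughput trade described above can be sketched with a toy model. All of the numbers below are purely illustrative, not real hardware specs:

```python
# Toy model (illustrative numbers only) contrasting a latency-optimized
# serial core with a throughput-optimized array of simple cores.

def serial_time(tasks, per_task_cost):
    """CPU-like core: low per-task latency, one task at a time."""
    return tasks * per_task_cost

def parallel_time(tasks, per_task_cost, cores):
    """GPU-like array: higher per-task latency, many tasks at once."""
    waves = -(-tasks // cores)  # ceiling division: full "waves" of work
    return waves * per_task_cost

# Hypothetical costs: the fast core finishes a single task in 1 time unit;
# each simple core needs 4 units, but there are 1024 of them.
latency_cpu = serial_time(1, 1)            # 1 unit for one task
latency_gpu = parallel_time(1, 4, 1024)    # 4 units: worse single-task latency

bulk_cpu = serial_time(100_000, 1)         # 100,000 units serially
bulk_gpu = parallel_time(100_000, 4, 1024) # 98 waves * 4 = 392 units
```

Under this sketch, the simple-core array is four times slower on any one task but orders of magnitude faster on a large batch, which is the essence of the serial/latency versus parallel/throughput split.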
While we work on our R7 1700 review, we’ve also been tearing down the remainder of the new Nintendo Switch console ($300). The first part of our tear-down series featured the Switch itself – a tablet, basically, somewhat similar to a Shield – and showed the modified Tegra X1 SoC, what we think is 4GB of RAM, and a Samsung eMMC module. Today, we’re tearing down the Switch’s right Joy-Con (with the IR sensor) and docking station, hoping to see what’s going on under the hood of two parts largely undocumented by Nintendo.
The Nintendo Switch dock sells for $90 from Nintendo directly, and so you’d hope it’s a little more complex than a simple docking station. The article carries on after the embedded video:
Ryzen, Vega, and 1080 Ti news has flanked another major launch in the hardware world, though this one is outside of the PC space: Nintendo’s Switch, formerly known as the “NX.”
We purchased a Nintendo Switch ($300) specifically for teardown, hoping to document the process for any future users wishing to exercise their right to repair. Thermal compound replacement, as we learned from this teardown, is actually not too difficult. We work with small form factor boxes all the time, normally laptops, and replace compound every few years on our personal machines. There have certainly been consoles in the past that benefited from eventual thermal compound replacements, so perhaps this teardown will help in the event someone’s Switch encounters a similar scenario.
Not long ago, we opened discussion about AMD’s new OCAT tool, a software overhaul of PresentMon that we had beta tested for AMD pre-launch. In the interim, and for the past five or so months, we’ve also been silently testing a new version of FCAT that adds functionality for VR benchmarking. This benchmark suite tackles the significant challenges of intercepting VR performance data, further offering new means of analyzing warp misses and dropped frames. Finally, after several months of testing, we can talk about the new FCAT VR hardware and software capture utilities.
This tool functions in two pieces: Software and hardware capture.
Revisiting an article from GN days of yore, GamersNexus endeavored to explain the differences between Western Digital’s WD Blue, Black, Red, and Purple hard drives. In this content, we also explain the specs and differences between WD Green vs. Blue & Black SSDs. In recent years, Western Digital’s product stack has changed considerably, as has the HDD market in general. We’ve found it fitting to resurrect this WD Blue, Black, Green, Red, and Purple drive naming scheme explanation. We’ll talk about the best drives for each purpose (e.g. WD Blue vs. Black for gaming), then dig into the new SSDs.
Unchanged over the years is Western Digital’s affinity for using colors to identify products, where other HDD vendors prefer fantastic creature names (BarraCuda, IronWolf, SkyHawk, etc.). As stated above, Western Digital has seriously changed its lineup. The WD Green drives have been painted blue, as they’ve been folded into the WD Blue umbrella. Furthermore, the WD Blue brand has seen the addition of an SSHD offering and SSDs in both 2.5” and M.2 form factors. This is in no small part thanks to Western Digital’s acquisition of SanDisk, another notable development since our last article. With that, the WD Blue brand has expanded to become Western Digital’s most comprehensive mainstream product line-up.
Other changes to the Western Digital rainbow include the expansion of the WD Black and, confusingly enough, WD Green brands. Starting with the latter: Western Digital rebranded all WD Green HDDs as WD Blue (selling WD Blue HDDs at two different RPMs), but recently reentered the SSD market under both the Blue and Green names. However, the WD Green SSDs are currently unavailable, perhaps due to the global NAND shortage. Likewise, the WD Black series has spilled over into the realm of NVMe/PCIe-based storage, and WD Black HDDs have expanded capacities up to 6TB; that’s quite a change from the 4TB flagship model we covered back in 2014. Lastly, there is WD Purple, which we will retroactively cover here.
We made Gigabyte aware of an unnecessarily high auto vCore table back in December, prior to the launch and NDA lift of Kaby Lake processors. By the time of review, that still hadn’t been resolved, and we noted in our Gigabyte Aorus Z270X Gaming 7 review that we’d revisit thermals if the company issued an update. Today, we’re doing just that. Gigabyte passed relevant information along to engineering teams and worked quickly to resolve the high auto vCore (and thus high CPU temperatures) on the Gaming 7 motherboard.
We’ve been impressed with Gigabyte’s responses overall. The representatives have been exceptionally helpful in troubleshooting the issue, and were receptive when we presented our initial concerns. The quick turn-around time on a BIOS update and subsequent auto vCore reduction shows that they’re listening, which is more than we can say for a lot of companies in this business. In an industry where it’s easier to jam fingers in ears and ignore a problem, Gigabyte’s fixed this one.
Every now and then, a new marketing gimmick comes along that feels a little untested. MSI’s latest M.2 heat shield always struck us as high on the list of potentially untested marketing claims. The idea that the “shield” can perform two opposing functions – shielding an SSD from external heat while somehow simultaneously sinking heat from within – seems like it’s written by marketing, not by engineering.
From a “shielding” standpoint, it might make sense: if you’ve got a second video card socketed above the M.2 SSD and dumping heat onto it, a shield could in fact help keep heat from reaching SMT components, including Flash modules and controllers that may otherwise sit in a direct heat path. From a heat-sinking standpoint, a separate M.2 heatsink would also make sense. M.2 SSDs are notoriously hot as a result of their low surface area and general lack of housing (ignoring the M8Pe and similar devices), and running high temperatures in a case with unfavorable ambient will result in throttled performance. MSI thought that adding this “shield” to the M.2 slot would solve the issue of hot M.2 SSDs, but it’s got a few problems that don’t even require testing to understand: (1) the “shield” (or sink, whatever) doesn’t enshroud the underside of the M.2 device, where SMDs will likely be present; (2) the cover is designed more like a shield than a sink (despite MSI’s marketing language – see below), which means we’ve got limited surface area with zero dissipation potential.
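To see why surface area matters so much for a passive cooler, a back-of-the-envelope convection estimate (Q = h × A × ΔT) is enough. Every number below is an illustrative assumption, not a measurement of MSI's part:

```python
# Rough convective dissipation estimate: Q = h * A * dT.
# All figures are illustrative assumptions, not measured values.

def dissipated_watts(h, area_m2, delta_t):
    """h: convection coefficient (W/m^2*K); area in m^2; delta_t in K."""
    return h * area_m2 * delta_t

H = 10.0        # ballpark coefficient for passive convection in still air
DELTA_T = 30.0  # drive surface assumed 30 K above ambient

bare_drive  = dissipated_watts(H, 0.0018, DELTA_T)  # ~22x80 mm card, both sides
flat_shield = dissipated_watts(H, 0.0020, DELTA_T)  # flat plate, no fins
finned_sink = dissipated_watts(H, 0.0080, DELTA_T)  # finned sink, ~4x the area
```

Under these assumptions, a flat plate buys almost nothing over the bare drive (roughly 0.6 W vs. 0.54 W shed), while a finned sink with several times the area moves several times the heat. That is the crux of the shield-versus-sink distinction: without added surface area, there is little added dissipation.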
In the latest feature from overclocker Buildzoid, we follow up on our full review of the Gigabyte Z270X Gaming 7 motherboard with a VRM analysis. The Gigabyte Gaming 7 of the Z270X family, ready for Kaby Lake, is one of the pricier boards at $240 and attempts to justify its cost in two ways: overclocking features and RGB LEDs (naturally).
To use any processing product for six years is a remarkable feat. GPUs struggle to hang on for that long: you’d be reducing graphics settings heavily after the second or third year, and likely considering an upgrade around the same time. Intel’s CPUs are different – they don’t change much, and we almost always recommend skipping at least one generation between upgrades (for the gaming audience, anyway). The 7700K increased temperatures substantially without increasing performance in step, making it a pointless upgrade for owners of the i7-6700K or i7-4790K.
We did remark in the review that owners of the 2500K and 2600K may want to consider finally moving up to Kaby Lake, but if we think about that for a second, it almost seems ridiculous: Sandy Bridge is an architecture from 2011. The i5-2500K came out in 1Q11, making it about six years old as of 2017. That is some serious staying power. Intel’s generational gains are almost invariably under 10%. We see double-digit jumps in Blender performance and some production workloads, but even that doesn’t happen with every architecture launch. With gaming, based on the 6700K-to-7700K jump, you’re lucky to get more than 1.5-3% extra performance. That’s counting frametime performance, too.
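As a quick arithmetic aside, this is why skipping generations is the only way small per-generation gains add up: uplifts compound multiplicatively across launches. A minimal sketch, using the 3% gaming figure above:

```python
# Small per-generation uplifts compound multiplicatively across launches.

def compounded_gain(per_gen_gain, generations):
    """Total fractional uplift after N generations of per_gen_gain (e.g. 0.03)."""
    return (1 + per_gen_gain) ** generations - 1

# At 3% per generation, two generations compound to ~6.1% total uplift,
# and six generations (roughly Sandy Bridge to Kaby Lake) to ~19.4%.
two_gens = compounded_gain(0.03, 2)
six_gens = compounded_gain(0.03, 6)
```

A ~19% cumulative gaming uplift over six years is consistent with why a 2500K owner is only now running out of reasons to wait.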
AMD’s architectural jumps should be a different story, in theory, but that’s mostly because Zen lands five years after the launch of the FX-8350. AMD did have subsequent clock-rate increases and various rebadges or power-efficiency improvements (FX-8370, FX-8320E), but those weren’t really that exciting for existing owners of 8000-series CPUs. In that regard, it’s the same story as Intel. AMD’s Ryzen will certainly post large gains over AMD’s last architecture, given the sizeable temporal gap between launches, but we still have no idea how the next iteration will scale. It could well be just as slow as Intel’s scaling, depending on what architectural and process walls AMD may run into.
That’s not really the point of this article, though; today, we’re looking at whether it’s finally time to upgrade the i5-2500K CPU. Owners of the i5-2500K did well to buy one, it turns out, because the only major desire to upgrade would likely stem from a want of more I/O options (like M.2, NVMe, and USB3.1 Gen2 support). Hard performance is finally becoming a reason to upgrade, as we’ll show, but we’d still rank changes to HSIO as the biggest driver in upgrade demand. In the time since 2011, PCIe Gen3 has proliferated across all relevant platforms, USB3.x ports have increased to double-digits on some boards, M.2 and NVMe have entered the field of SSDs, and SATA III is on its way out as a storage interface.