Hardware Guides

As with any new technology, the early days of Ryzen have been filled with a number of quirks as manufacturers and developers scramble to support AMD’s new architecture.

For optimal performance, AMD has asked reviewers to update to the latest BIOS version and to set Windows to “high performance” mode, which raises the minimum processor state to the CPU's base frequency (normally, the CPU would downclock when idle). Both are reasonable allowances to make for new hardware, although high-performance mode should only be a temporary fix, a point we covered in more depth in our R7 1700 review.
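For reference, the power plan swap can be done from the command line with Windows' built-in powercfg utility; this is a quick sketch, with `scheme_min` being the built-in alias for the High performance plan:

```shell
:: Switch to the High performance power plan (scheme_min is the
:: built-in alias), so the CPU holds its base frequency at idle.
powercfg /setactive scheme_min

:: Confirm which plan is now active
powercfg /getactivescheme
```

Remember to switch back to Balanced (`powercfg /setactive scheme_balanced`) once testing concludes, per the "temporary fix" caveat above.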

This is quick-and-dirty testing: the kind of information we normally keep internal for research as we build a test platform, since it's never polished enough to publish and primarily informs our reviewing efforts. Given how young Ryzen is, we're publishing our findings just to add data to a growing pool. More data points should help other reviewers and manufacturers research performance “anomalies” or differences.

The below consists of early numbers we ran on performance vs. balanced mode, Gigabyte BIOS revisions, ASUS' board, and clock behavior under various boost states. Methodology won't be discussed here, as it's no different from our 1700 and 1800X reviews, aside from toggling the A/B test states defined in the headers below.

Our review of the nVidia GTX 1080 Ti Founders Edition card went live earlier this morning, largely receiving praise for its performance gains while remaining the subject of criticism from a thermal standpoint. As we've often done, we decided to fix it. Modding the GTX 1080 Ti will bring our card up to higher native clock-rates by eliminating the thermal limitation, and can be done with the help of an EVGA Hybrid kit and a reference-design card. We've got both, and started the project prior to departing for PAX East this weekend.

This is part 1, the tear-down. As the content is being published, we are already on-site in Boston for the event, so part 2 will not see the light of day until early next week. We hope to finalize our data on VRM/FET and GPU temperatures (as they relate to clock speed) immediately following PAX East. These projects are always exciting, as they help us learn more about how a GPU behaves. We did similar projects for the RX 480 and GTX 1080 at launch last year.

Here's part 1:

Differences Between DDR4 & GDDR5 Memory

Published March 05, 2017 at 9:30 pm

The finer distinctions between DDR and GDDR can easily be masked by the impressive on-paper specs of the newer GDDR5 standards, often inviting an obvious question with a not-so-obvious answer: Why can’t GDDR5 serve as system memory?

In a simple response, it’s analogous to why a GPU cannot suffice as a CPU. To be more incisive: CPUs are built from complex cores using complex instruction sets, alongside on-die cache and, often, integrated graphics. This makes the CPU suitable for the multitude of latency-sensitive tasks set upon it; however, that aptness comes at a cost, and that cost is paid in silicon. Conversely, GPUs can apportion more chip space to execution units by using simpler cores with reduced instruction sets. As such, GPUs can feature hundreds, if not thousands, of cores designed to process huge amounts of data in parallel. Whereas CPUs are optimized to process tasks in a serial/sequential manner with as little latency as possible, GPUs have a parallel architecture and are optimized for raw throughput.
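As a loose illustration (a toy sketch, not benchmarking code), the contrast can be shown in Python: a serial loop handles one element at a time with minimal per-item latency, while a pool of simple workers trades per-item latency for aggregate throughput over the whole batch. The `brighten` function here is hypothetical stand-in work:

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel):
    # A simple, uniform operation -- the kind of work GPUs parallelize well
    return min(pixel + 40, 255)

pixels = list(range(256))

# CPU-style: process items one after another (serial, latency-oriented)
serial = [brighten(p) for p in pixels]

# GPU-style analogy: many simple workers chew through the batch at once
# (parallel, throughput-oriented)
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(brighten, pixels))

assert serial == parallel  # same results; only the execution model differs
```

A real GPU runs thousands of such lanes in lockstep rather than eight OS threads, but the latency-vs-throughput trade-off is the same idea.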

While the above doesn’t exactly explicate any differences between DDR and GDDR, the analogy is fitting. CPUs and GPUs both have access to temporary pools of memory, and just like both processors are highly specialized in how they handle data and workloads, so too is their associated memory.

Nintendo Switch Dock & Joycon Tear-Down

Published March 04, 2017 at 7:02 pm

While we work on our R7 1700 review, we’ve also been tearing down the remainder of the new Nintendo Switch console ($300). The first part of our tear-down series featured the Switch itself – a tablet, basically, that is somewhat similar to a Shield – and showed the modified Tegra X1 SoC, what we think is 4GB of RAM, and a Samsung eMMC module. Today, we’re tearing down the Switch’s right Joycon (with the IR sensor) and docking station, hoping to see what’s going on under the hood of two parts largely undocumented by Nintendo.

The Nintendo Switch dock sells for $90 from Nintendo directly, and so you’d hope it’s a little more complex than a simple docking station. The article carries on after the embedded video:

Ryzen, Vega, and 1080 Ti news has flanked another major launch in the hardware world, though this one is outside of the PC space: Nintendo’s Switch, formerly known as the “NX.”

We purchased a Nintendo Switch ($300) specifically for teardown, hoping to document the process for any future users wishing to exercise their right to repair. Thermal compound replacement, as we learned from this teardown, is actually not too difficult. We work with small form factor boxes all the time, normally laptops, and replace compound every few years on our personal machines. There have certainly been consoles in the past that benefited from eventual thermal compound replacements, so perhaps this teardown will help in the event someone’s Switch encounters a similar scenario.

Not long ago, we opened discussion about AMD’s new OCAT tool, a software overhaul of PresentMon that we had beta tested for AMD pre-launch. In the interim, and for the past five or so months, we’ve also been silently testing a new version of FCAT that adds functionality for VR benchmarking. This benchmark suite tackles the significant challenges of intercepting VR performance data, further offering new means of analyzing warp misses and drop frames. Finally, after several months of testing, we can talk about the new FCAT VR hardware and software capture utilities.

This tool functions in two pieces: Software and hardware capture.

Revisiting an article from GN days of yore, GamersNexus endeavored to explain the differences between Western Digital’s WD Blue, Black, Red, and Purple hard drives. In this content, we also explain the specs and differences between WD Green vs. Blue & Black SSDs. In recent years, Western Digital’s product stack has changed considerably, as has the HDD market in general. We’ve found it fitting to resurrect this WD Blue, Black, Green, Red, and Purple drive naming scheme explanation. We’ll talk about the best drives for each purpose (e.g. WD Blue vs. Black for gaming), then dig into the new SSDs.

Unchanged over the years is Western Digital’s affinity for using colors to identify products, where other HDD vendors prefer fantastical creature names (BarraCuda, IronWolf, SkyHawk, etc.). As stated above, Western Digital has seriously changed its lineup. The WD Green drives have been painted blue, as they’ve been folded into the WD Blue umbrella. Furthermore, the WD Blue brand has seen the addition of an SSHD offering and SSDs in both 2.5” and M.2 form factors. This is in no small part thanks to Western Digital’s acquisition of SanDisk—another notable development since our last article. With that, the WD Blue brand has expanded to become Western Digital’s most comprehensive mainstream product line-up.

Other changes to the Western Digital rainbow include the expansion of the WD Black and, confusingly enough, WD Green brands. Starting with the latter: Western Digital rebranded all WD Green HDDs as WD Blue (selling WD Blue drives at two different RPMs), but recently reentered the SSD market under both the Blue and Green names. However, the WD Green SSDs are currently unavailable, perhaps due to the global NAND shortage. Likewise, the WD Black series has spilled over into the realm of NVMe/PCIe-based storage, and WD Black HDDs have expanded in capacity up to 6TB; that’s quite a change from the 4TB flagship model we covered back in 2014. Lastly, there is WD Purple, which we will retroactively cover here.

We made Gigabyte aware of an unnecessarily high auto vCore table back in December, prior to the launch and NDA lift of Kaby Lake processors. By the time of review, that still hadn’t been resolved, and we noted in our Gigabyte Aorus Z270X Gaming 7 review that we’d revisit thermals if the company issued an update. Today, we’re doing just that. Gigabyte passed relevant information along to engineering teams and worked quickly to resolve the high auto vCore (and thus high CPU temperatures) on the Gaming 7 motherboard.

We’ve been impressed with Gigabyte’s responses overall. The representatives have been exceptionally helpful in troubleshooting the issue, and were receptive when we presented our initial concerns. The quick turn-around time on a BIOS update and subsequent auto vCore reduction shows that they’re listening, which is more than we can say for a lot of companies in this business. In an industry where it’s easier to jam fingers in ears and ignore a problem, Gigabyte’s fixed this one.

Here’s the original board review with the temperature criticisms, something we also talked about in our 7700K review.

Every now and then, a new marketing gimmick comes along that feels a little untested. MSI’s latest M.2 heat shield always struck us as high on the list of potentially untested marketing claims. The idea that the “shield” can perform two opposing functions – shielding an SSD from external heat while somehow simultaneously sinking heat from within – seems like it’s written by marketing, not by engineering.

From a “shielding” standpoint, it might make sense; if you’ve got a second video card socketed above the M.2 SSD and dumping heat onto it, a shield could in fact help keep heat from touching SMT components. This would include Flash modules and controllers that may otherwise be in a direct heat path. From a heat-sinking standpoint, a separate M.2 heatsink would also make sense. M.2 SSDs are notoriously hot as a result of their low surface area and general lack of housing (ignoring the M8Pe and similar devices), and running high temperatures in a case with unfavorable ambient temperatures will result in throttled performance. MSI thought that adding this “shield” to the M.2 slot would solve the issue of hot M.2 SSDs, but it’s got a few problems that don’t even require testing to understand: (1) the “shield” (or sink, whatever) doesn’t enshroud the underside of the M.2 device, where SMDs will likely be present; (2) the cover is designed more like a shield than a sink (despite MSI’s marketing language – see below), which means we’ve got limited surface area with zero dissipation potential.
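The surface-area point can be put in rough numbers with Newton's law of cooling, Q = h·A·ΔT: dissipation scales linearly with exposed area. All figures below are hypothetical round numbers for illustration, not measurements of MSI's part:

```python
def dissipated_watts(h, area_m2, delta_t):
    # Newton's law of cooling: Q = h * A * dT
    return h * area_m2 * delta_t

H = 15.0        # W/(m^2*K): rough free-convection coefficient (assumed)
DELTA_T = 30.0  # K above ambient at the controller (assumed)

# Top surface of an M.2 2280 drive: 22 mm x 80 mm
bare = dissipated_watts(H, 0.022 * 0.080, DELTA_T)

# A flat cover adds almost no area; a finned sink might triple it (assumed)
finned = dissipated_watts(H, 3 * 0.022 * 0.080, DELTA_T)

print(f"bare: {bare:.2f} W, finned: {finned:.2f} W")
# prints: bare: 0.79 W, finned: 2.38 W
```

Under these assumed numbers, a flat plate that adds no meaningful area buys essentially nothing, which is the core of the objection in point (2).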

In the latest feature from overclocker Buildzoid, we follow up on our full review of the Gigabyte Z270X Gaming 7 motherboard with a VRM analysis of the board. The Gigabyte Gaming 7 of the Z270X family, ready for Kaby Lake, is one of the pricier boards at $240 and attempts to justify its cost in two ways: Overclocking features and RGB LEDs (naturally).
