Hardware Guides

The playful Nintendo noises emitted from our Switch came as something of a surprise following an extensive tear-down and re-assembly process. Alas, the console does still work, and we left behind breadcrumbs of our dissection within the body of the Switch: a pair of thermocouples mounted to the top-center of the SOC package and to one memory package. We can’t get software-level diode readings of the SOC’s internal sensors, particularly given the locked-down nature of a console like Nintendo’s, so thermal probes give us the best insight into the console’s temperature performance. As a general rule, thermal performance is hard to keep in perspective without a comparative metric, so we need something else. That’ll be noise, for this one; we’re testing dBA output of the fan versus an effective tCase on the SOC to determine how the fan ramps.

There’s no good way to measure the Switch’s GPU frequency without hooking up equipment we don’t have, so we won’t be able to plot a frequency versus temperature/time chart. Instead, we’re looking at temperature versus noise, then using ad-hoc testing to observationally determine framerate response at various logged temperatures. Until we’ve developed tools for monitoring console FPS externally, this is the best combination of procedures we can muster.
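To show how those two data sets get lined up, here’s a minimal sketch of pairing logged thermocouple readings against sound-meter output by timestamp. The file names, CSV layout, and one-second sample rate are our own assumptions for illustration, not the actual logging tooling used in testing.

```python
# Illustrative sketch only: pair thermocouple and dBA logs by timestamp.
import csv

def load_log(path):
    """Read a simple 'seconds,value' CSV log into a dict keyed by whole seconds."""
    with open(path, newline="") as f:
        return {int(float(t)): float(v) for t, v in csv.reader(f)}

soc_temps = load_log("soc_thermocouple.csv")  # effective tCase on the SOC, deg C (hypothetical file)
fan_noise = load_log("dba_meter.csv")         # dBA at a fixed mic distance (hypothetical file)

# Pair readings taken in the same second; the temperature-to-noise relationship
# is what shows when, and how aggressively, the fan ramps.
for second in sorted(soc_temps.keys() & fan_noise.keys()):
    print(f"{second:>5}s  {soc_temps[second]:5.1f} C  {fan_noise[second]:5.1f} dBA")
```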

We’ve fixed the GTX 1080 Ti Founders Edition ($700) card. As stated in the initial review, the card performed reasonably close to nVidia’s “35% > 1080” claim at 4K resolutions, but generally fell closer to 25-30% faster. That’s really not bad – but it could be better, even with the reference PCB. It’s the cooler that’s holding nVidia’s card back, as seems to be the trend given GPU Boost 3.0 + FE cooler designs. A reference card is more versatile for deployment to SIs and the wider channel, but for our audience, we can rebuild it. We have the technology.

“Technology,” here, mostly meaning “propylene glycol.”

When we first began benchmarking Ryzen CPUs, we already had a suspicion that disabling simultaneous multithreading might give a performance boost in games, mirroring the effects of disabling hyperthreading (Intel’s specific twist on SMT) when it was first introduced. Although hyperthreading now has a generally positive effect in our benchmarks, there was a time when developers didn’t account for it, which is presumably part of what’s happening to AMD now.

In fact, turning SMT off offered relatively minor gaming performance increases outside of Total War: Warhammer – but any increase at all is notable when turning off a feature that’s designed to be positive. Throughout our testing, the most dramatic change in results came from overclocking, specifically on the low-stock-frequency R7 1700 ($330). This led many readers to ask questions like “why didn’t you test with an overclock and SMT disabled at the same time?” and “how much is Intel paying you?” to which the answers are “time constraints” and “not enough, apparently,” since we’ve now performed those tests. Testing a CPU takes a lot of time. Now, with more time to work on Ryzen, we’ve finally begun revisiting some EFI, clock behavior, and SMT tests.

As with any new technology, the early days of Ryzen have been filled with a number of quirks as manufacturers and developers scramble to support AMD’s new architecture.

For optimal performance, AMD has asked reviewers to update to the latest BIOS version and to set Windows to “high performance” mode, which raises the minimum processor state to its base frequency (normally, the CPU would downclock when idle). These are both reasonable allowances to make for new hardware, although high-performance mode should only be a temporary fix. More on that later, though we’ve already explained it in the R7 1700 review.
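For reference, the high-performance toggle can be scripted rather than clicked through the Control Panel. Below is a minimal Windows-only sketch using the stock powercfg plan alias (SCHEME_MIN maps to the built-in High Performance plan, which pins the minimum processor state at 100%); it needs an elevated prompt, and switching back to SCHEME_BALANCED later is the assumed way to undo the temporary fix.

```python
# Minimal sketch: activate Windows' built-in High Performance power plan,
# which raises the minimum processor state to 100% so the CPU stops
# downclocking at idle. Windows-only; run from an elevated prompt.
import subprocess

# SCHEME_MIN is the stock powercfg alias for the High Performance plan.
subprocess.run(["powercfg", "/setactive", "SCHEME_MIN"], check=True)

# To revert once the workaround is no longer needed:
#   subprocess.run(["powercfg", "/setactive", "SCHEME_BALANCED"], check=True)
```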

This is quick-and-dirty testing. This is the kind of information we normally keep internal for research as we build a test platform, as it's never polished enough to publish and primarily informs our reviewing efforts. Given the young age of Ryzen, we're publishing our findings just to add data to a growing pool. More data points should hopefully assist other reviewers and manufacturers in researching performance “anomalies” or differences.

The below comprises early numbers we ran on performance vs. balanced mode, Gigabyte BIOS revisions, ASUS' board, and clock behavior under various boost states. Methodology won't be discussed here, as it's really not any different from our 1700 and 1800X review, other than toggling the various A/B test states defined in headers below.

Our review of the nVidia GTX 1080 Ti Founders Edition card went live earlier this morning, largely receiving praise for its gains in performance while remaining the subject of criticism from a thermal standpoint. As we've often done, we decided to fix it. Modding the GTX 1080 Ti will bring our card up to higher native clock-rates by eliminating the thermal limitation, and can be done with the help of an EVGA Hybrid kit and a reference design. We've got both, and started the project prior to departing for PAX East this weekend.

This is part 1, the tear-down. As this content is being published, we are already on-site in Boston for the event, so part 2 won't see the light of day until early next week. We hope to finalize our data on VRM/FET and GPU temperatures (as related to clock speed) immediately following PAX East. These projects are always exciting, as they help us learn more about how a GPU behaves. We did similar projects for the RX 480 and GTX 1080 at launch last year.

Here's part 1:

Differences Between DDR4 & GDDR5 Memory

Published March 05, 2017 at 9:30 pm

The finer distinctions between DDR and GDDR can easily be masked by the impressive on-paper specs of the newer GDDR5 standards, often inviting an obvious question with a not-so-obvious answer: Why can’t GDDR5 serve as system memory?

The simple response is that it’s analogous to why a GPU cannot suffice as a CPU. To be more incisive: CPUs are built from relatively few complex cores using complex instruction sets, in addition to on-die cache and integrated graphics. This makes the CPU suitable for the multitude of latency-sensitive tasks often placed upon it; however, that aptness comes at a cost, and it’s a cost paid in silicon. Conversely, GPUs can apportion more of their chip space to cores by keeping those cores simpler and based on reduced instruction sets. As such, GPUs can feature hundreds, if not thousands, of cores designed to process huge amounts of data in parallel. Whereas CPUs are optimized to process tasks in a serial/sequential manner with as little latency as possible, GPUs have a parallel architecture and are optimized for raw throughput.

While the above doesn’t exactly explicate any differences between DDR and GDDR, the analogy is fitting. CPUs and GPUs both have access to temporary pools of memory, and just like both processors are highly specialized in how they handle data and workloads, so too is their associated memory.
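As a rough illustration of that serial-versus-parallel distinction (our own example, not part of the original comparison), the snippet below contrasts a latency-bound dependency chain, where every access must wait on the previous result, with a throughput-bound bulk pass over the same data. Python obviously isn't how a CPU or GPU executes work, but the shape of the two access patterns is the point.

```python
# Conceptual sketch: latency-bound serial access vs. throughput-bound bulk access.
import random
import time

N = 1_000_000
data = list(range(N))
random.shuffle(data)  # data is a shuffled permutation of 0..N-1

# Pointer chase: each load depends on the previous one, so total time is
# governed by per-access latency (the workload CPUs and DDR optimize for).
start = time.perf_counter()
i = 0
for _ in range(N):
    i = data[i]
serial_s = time.perf_counter() - start

# Bulk reduction: independent accesses that can be streamed, so total time is
# governed by raw throughput (the workload GPUs and GDDR optimize for).
start = time.perf_counter()
total = sum(data)
bulk_s = time.perf_counter() - start

print(f"dependent chain: {serial_s:.3f}s  bulk sum: {bulk_s:.3f}s (sum={total})")
```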

Nintendo Switch Dock & Joycon Tear-Down

Published March 04, 2017 at 7:02 pm

While we work on our R7 1700 review, we’ve also been tearing down the remainder of the new Nintendo Switch console ($300). The first part of our tear-down series featured the Switch itself – a tablet, basically, that is somewhat similar to a Shield – and showed the modified Tegra X1 SOC, what we think is 4GB of RAM, and a Samsung eMMC module. Today, we’re tearing down the Switch’s right Joycon (with the IR sensor) and docking station, hoping to see what’s going on under the hood of two parts largely undocumented by Nintendo.

The Nintendo Switch dock sells for $90 from Nintendo directly, and so you’d hope it’s a little more complex than a simple docking station. The article carries on after the embedded video:

Ryzen, Vega, and 1080 Ti news has flanked another major launch in the hardware world, though this one is outside of the PC space: Nintendo’s Switch, formerly known as the “NX.”

We purchased a Nintendo Switch ($300) specifically for teardown, hoping to document the process for any future users wishing to exercise their right to repair. Thermal compound replacement, as we learned from this teardown, is actually not too difficult. We work with small form factor boxes all the time, normally laptops, and replace compound every few years on our personal machines. There have certainly been consoles in the past that benefited from eventual thermal compound replacements, so perhaps this teardown will help in the event someone’s Switch encounters a similar scenario.

Not long ago, we opened discussion about AMD’s new OCAT tool, a software overhaul of PresentMon that we had beta tested for AMD pre-launch. In the interim, and for the past five or so months, we’ve also been silently testing a new version of FCAT that adds functionality for VR benchmarking. This benchmark suite tackles the significant challenges of intercepting VR performance data, further offering new means of analyzing warp misses and dropped frames. Finally, after several months of testing, we can talk about the new FCAT VR hardware and software capture utilities.

This tool functions in two pieces: Software and hardware capture.

Revisiting an article from GN days of yore, we’re once again endeavoring to explain the differences between Western Digital’s WD Blue, Black, Red, and Purple hard drives. In this content, we also explain the specs and differences of the WD Green vs. Blue & Black SSDs. In recent years, Western Digital’s product stack has changed considerably, as has the HDD market in general. We’ve found it fitting to resurrect this WD Blue, Black, Green, Red, and Purple drive naming scheme explanation. We’ll talk about the best drives for each purpose (e.g. WD Blue vs. Black for gaming), then dig into the new SSDs.

Unchanged over the years is Western Digital’s affinity for using colors to identify products, where other HDD vendors prefer fanciful creature names (BarraCuda, IronWolf, SkyHawk, etc.). As stated above, Western Digital has seriously changed its lineup. The WD Green drives have been painted blue, as they’ve been folded into the WD Blue umbrella. Furthermore, the WD Blue brand has seen the addition of an SSHD offering and SSDs in both 2.5” and M.2 form factors, in no small part thanks to Western Digital’s acquisition of SanDisk – another notable development since our last article. With that, the WD Blue brand has expanded to become Western Digital’s most comprehensive mainstream product line-up.

Other changes to the Western Digital rainbow include the expansion of the WD Black and, confusingly enough, WD Green brands. Starting with the latter, Western Digital rebranded all WD Green HDDs as WD Blue, selling WD Blues at two different RPMs, but recently reentered the SSD market under both the Blue and Green names. However, the WD Green SSDs are currently unavailable, perhaps due to the global NAND shortage. Likewise, the WD Black series has spilled over into the realm of NVMe/PCIe-based storage, and WD Black HDDs have expanded capacities up to 6TB; that’s quite a change from the 4TB flagship model we covered back in 2014. Lastly, there is WD Purple, which we will retroactively cover here.
