Hardware Guides

This content marks the beginning of our in-depth VR testing efforts, part of an ongoing test series meant to determine distinct advantages and disadvantages of today’s hardware. VR hasn’t been a major topic in our performance content, but we believe it’s an important one for this release of Kaby Lake & Ryzen CPUs: both brands have boasted high VR performance, “VR Ready” tags, and other marketing that hasn’t been validated, mostly because it’s hard to do so. We’re leveraging a hardware capture rig to intercept frames sent to the headsets, FCAT VR, and a suite of five games across the Oculus Rift & HTC Vive to benchmark the R7 1700 vs. the i7-7700K. This testing includes benchmarks at stock and overclocked configurations, totaling four devices under test (DUTs) across two headsets and five games. Although this is “just” 20 total tests (with multiple passes each), the process takes significantly longer than testing our entire suite of GPUs: executing 20 of these VR benchmarks, ignoring parity tests, takes several days, whereas we could run the same count for a GPU suite in a day.

VR benchmarking is hard, as it turns out, and there are a number of imperfections in any existing test methodology for VR. We’ve got a test solution that has proven reliable, but in no way do we claim that it’s perfect. Fortunately, by combining hardware and software capture, we’re able to validate numbers for each test pass. Using multiple test passes over the past five months of working with FCAT VR, we’ve also been able to build up a database that gives us a clear margin of error; to this end, we’ve added error bars to the bar graphs to help illustrate when results are within usual variance.
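As a rough illustration of how repeated passes become error bars, here's a minimal Python sketch, assuming hypothetical per-pass frametime averages (none of these figures are real results): it computes a per-configuration mean and uses one standard deviation as the error-bar span.

```python
# Minimal sketch: deriving error bars from repeated benchmark passes.
# The frametime averages below are placeholders, not measured results.
from statistics import mean, stdev

# Hypothetical average delivered frametimes (ms) from repeated FCAT VR passes
passes_ms = {
    "R7 1700 (stock)":  [11.9, 12.1, 12.0, 12.2],
    "i7-7700K (stock)": [10.8, 10.9, 11.1, 10.7],
}

for config, samples in passes_ms.items():
    avg = mean(samples)
    err = stdev(samples)  # one standard deviation as the error-bar span
    print(f"{config}: {avg:.2f} ms +/- {err:.2f} ms")
```

The error bars on our actual charts draw on a much larger pool of passes than the four shown here; the point is only that the bar spans come from observed run-to-run variance rather than a guessed tolerance.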

We’ve received a ton of positive feedback on our i5-2500K revisit, and a similar number of questions about including overclocked i7-2600K numbers in our benchmark charts. The solution is obvious: a full 2600K revisit using our modern benchmark course. As demonstrated with the 2500K, old K-SKU Sandy Bridge CPUs had impressive overclocking headroom, partly thanks to a better thermal solution than what Intel offers today, but the stock i7-2600K regularly outperformed our 4.5GHz 2500K in several tests. Synthetic benchmarks and games like Watch Dogs 2, both of which take advantage of high thread counts, are among the tests that favored the 2600K.

Although we ended the 2500K review with the conclusion that now is a good time to start thinking about an upgrade, i7 CPUs are often considered more future-proof. Today, we’re testing that notion to see how it holds up against 2017’s test suite. With Ryzen 7 now fully released, and with 2600K owners likely looking (price-wise) at a 7700K ($345) or an R7 1700 ($330), it makes sense to revisit Sandy Bridge one more time.

Note: For anyone who saw our recent Ryzen Revisit coverage, you know that there are some fairly important changes to Total War: Warhammer and Battlefield 1 that impacted Ryzen, and could also impact Intel. We have not fully retested our suite with these changes yet, and this content was written prior to the Ryzen revisit. Still, we’re including some updated numbers here, but they’re not the focus of this content; we’re more interested in seeing how the i7-2600K performs in today’s games, especially with an overclock.

The playful Nintendo noises emitted from our Switch came as somewhat of a surprise following an extensive tear-down and re-assembly process. Alas, the console does still work, and we left behind breadcrumbs of our dissection within the body of the Switch: a pair of thermocouples, one mounted to the top-center of the SOC package and one to a memory package. We can’t get software-level diode readings of the SOC’s internal sensors, particularly given the locked-down nature of a console like Nintendo’s, so thermal probes give us the best insight into the console’s temperature performance. As a general rule, thermal performance is hard to keep in perspective without a comparative metric, so we need something else to pair it with. That’ll be noise, for this one: we’re testing dBA output of the fan against an effective tCase on the SOC to determine how the fan ramps.
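For a sense of what pairing the two logs looks like, here's a minimal sketch in Python; the file names, column headers, and time-aligned sampling are assumptions for illustration, not our actual logging format.

```python
# Minimal sketch: pairing logged SOC-probe temperatures with dBA readings
# to chart how the fan ramps. File names and columns are hypothetical.
import csv

def load_column(path, column):
    """Read one numeric column from a CSV log keyed by a shared 'time_s' field."""
    with open(path, newline="") as f:
        return {float(row["time_s"]): float(row[column]) for row in csv.DictReader(f)}

temps = load_column("soc_thermocouple.csv", "temp_c")  # deg C at the SOC package
noise = load_column("spl_meter.csv", "dba")            # dBA at a fixed distance

# Join the logs on timestamps present in both, then sort by temperature so
# the fan's ramp steps (dBA jumps at temperature thresholds) are easy to spot.
curve = sorted((temps[t], noise[t]) for t in temps.keys() & noise.keys())
for temp_c, dba in curve:
    print(f"{temp_c:5.1f} C  ->  {dba:4.1f} dBA")
```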

There’s no good way to measure the Switch’s GPU frequency without hooking up equipment we don’t have, so we won’t be able to plot a frequency versus temperature/time chart. Instead, we’re looking at temperature versus noise, then using ad-hoc testing to observationally determine framerate response at various logged temperatures. Until we’ve developed tools for monitoring console FPS externally, this is the best combination of procedures we can muster.

We’ve fixed the GTX 1080 Ti Founders Edition ($700) card. As stated in the initial review, the card performed reasonably close to nVidia’s “35% > 1080” claim at 4K resolutions, but generally fell closer to 25-30% faster. That’s really not bad, but it could be better, even with the reference PCB. It’s the cooler that’s holding nVidia’s card back, as seems to be the trend given GPU Boost 3.0 and the FE cooler design. A reference card is more versatile for deployment to SIs and the wider channel, but for our audience, we can rebuild it. We have the technology.

“Technology,” here, mostly meaning “propylene glycol.”

When we first began benchmarking Ryzen CPUs, we already had a suspicion that disabling simultaneous multithreading might give a performance boost in games, mirroring the effects of disabling hyperthreading (Intel’s specific twist on SMT) when it was first introduced. Although hyperthreading now has a generally positive effect in our benchmarks, there was a time when it wasn’t accounted for by developers—presumably partly related to what’s happening to AMD now.

In fact, turning SMT off offered relatively minor gaming performance increases outside of Total War: Warhammer—but any increase at all is notable when it comes from turning off a feature that’s designed to help. Throughout our testing, the most dramatic changes in results came from overclocking, specifically on the low stock frequency R7 1700 ($330). This led many readers to ask questions like “why didn’t you test with an overclock and SMT disabled at the same time?” and “how much is Intel paying you?” to which the answers are “time constraints” and “not enough, apparently,” since we’ve now performed those tests. Testing a CPU takes a lot of time. Now, with more time to work on Ryzen, we’ve finally begun revisiting some EFI, clock behavior, and SMT tests.

As with any new technology, the early days of Ryzen have been filled with a number of quirks as manufacturers and developers scramble to support AMD’s new architecture.

For optimal performance, AMD has asked reviewers to update to the latest BIOS version and to set Windows to “high performance” mode, which raises the minimum processor state to its base frequency (normally, the CPU would downclock when idle). These are both reasonable allowances to make for new hardware, although high-performance mode should only be a temporary fix. More on that later, though we’ve already explained it in the R7 1700 review.
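As a point of reference, the high-performance plan can also be applied from a script rather than the Control Panel. The sketch below is a hedged example in Python that shells out to Windows' built-in powercfg tool; the GUID shown is the stock High Performance scheme on most installs, but verify it on your own system with powercfg /list.

```python
# Minimal sketch: activate Windows' stock "High performance" power plan, which
# holds the minimum processor state at 100% so the CPU doesn't downclock at idle.
# The GUID is the default High performance scheme; confirm yours with `powercfg /list`.
import subprocess

HIGH_PERFORMANCE_GUID = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

subprocess.run(
    ["powercfg", "/setactive", HIGH_PERFORMANCE_GUID],
    check=True,  # raise if powercfg reports an error (e.g., scheme not present)
)
```

Switching back to the Balanced plan afterward restores idle downclocking, which is why we treat this as a temporary, review-time setting.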

This is quick-and-dirty testing. This is the kind of information we normally keep internal for research as we build a test platform, as it's never polished enough to publish and primarily informs our reviewing efforts. Given the young age of Ryzen, we're publishing our findings just to add data to a growing pool. More data points should hopefully assist other reviewers and manufacturers in researching performance “anomalies” or differences.

The below consists of early numbers we ran on performance vs. balanced mode, Gigabyte BIOS revisions, ASUS' board, and clock behavior under various boost states. Methodology won't be discussed here, as it's not meaningfully different from our 1700 and 1800X review, other than the toggling of the various A/B test states defined in the headers below.

Our review of the nVidia GTX 1080 Ti Founders Edition card went live earlier this morning, largely receiving praise for its gains in performance while remaining the subject of criticism from a thermal standpoint. As we've often done, we decided to fix it. Modding the GTX 1080 Ti will bring our card up to higher sustained clock-rates by eliminating the thermal limitation, and can be done with the help of an EVGA Hybrid kit and a reference design. We've got both, and we started the project prior to departing for PAX East this weekend.

This is part 1, the tear-down. As the content is being published, we are already on-site in Boston for the event, so part 2 will not go live until early next week. We hope to finalize our data on VRM/FET and GPU temperatures (related to clock speed) immediately following PAX East. These projects are always exciting, as they help us learn more about how a GPU behaves. We did similar projects for the RX 480 and GTX 1080 at launch last year.

Differences Between DDR4 & GDDR5 Memory

The finer distinctions between DDR and GDDR can easily be masked by the impressive on-paper specs of the newer GDDR5 standards, often inviting an obvious question with a not-so-obvious answer: Why can’t GDDR5 serve as system memory?

The simple answer is that it's analogous to why a GPU cannot suffice as a CPU. More precisely, CPUs are composed of a few complex cores using complex instruction sets, in addition to on-die cache and integrated graphics. This makes the CPU suitable for the multitude of latency-sensitive tasks often beset upon it; however, that aptness comes at a cost, and the cost is paid in silicon. Conversely, GPUs can apportion more chip space to cores by using simpler, reduced-instruction-set designs. As such, GPUs can feature hundreds, if not thousands, of cores built to process huge amounts of data in parallel. Whereas CPUs are optimized to process tasks serially, with as little latency as possible, GPUs have a parallel architecture and are optimized for raw throughput.

While the above doesn’t exactly explicate any differences between DDR and GDDR, the analogy is fitting. CPUs and GPUs both have access to temporary pools of memory, and just like both processors are highly specialized in how they handle data and workloads, so too is their associated memory.
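To put the throughput specialization in rough numbers, here's a back-of-the-envelope peak-bandwidth comparison in Python. The parts chosen (dual-channel DDR4-2400 against 8Gbps GDDR5 on a 256-bit bus) are example configurations rather than any specific review sample, and the results are theoretical peaks, not measurements.

```python
# Back-of-the-envelope theoretical peak bandwidth:
# effective transfer rate (MT/s) * bus width (bits) / 8 bits-per-byte.
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits):
    """Theoretical peak in GB/s for a given effective rate and bus width."""
    return transfer_rate_mts * bus_width_bits / 8 / 1000

# Example configurations (illustrative, not any specific review sample):
ddr4  = peak_bandwidth_gbs(2400, 128)  # dual-channel DDR4-2400 (2 x 64-bit)
gddr5 = peak_bandwidth_gbs(8000, 256)  # 8Gbps GDDR5 on a 256-bit bus

print(f"DDR4-2400, dual channel: {ddr4:.1f} GB/s")   # ~38.4 GB/s
print(f"GDDR5 8Gbps, 256-bit:    {gddr5:.1f} GB/s")  # ~256.0 GB/s
```

What the raw bandwidth figures don't show is the other half of the trade: GDDR5 runs at relaxed timings and is built around wide, streaming transfers, while DDR4 is tuned for the shorter, less predictable accesses a CPU issues, which is exactly the latency-versus-throughput split described above.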

Nintendo Switch Dock & Joycon Tear-Down

While we work on our R7 1700 review, we’ve also been tearing down the remainder of the new Nintendo Switch console ($300). The first part of our tear-down series featured the Switch itself – a tablet, basically, somewhat similar to a Shield – and showed the modified Tegra X1 SOC, what we think is 4GB of RAM, and a Samsung eMMC module. Today, we’re tearing down the Switch’s right Joycon (with the IR sensor) and the docking station, hoping to see what’s going on under the hood of two parts largely undocumented by Nintendo.

The Nintendo Switch dock sells for $90 from Nintendo directly, and so you’d hope it’s a little more complex than a simple docking station.

Ryzen, Vega, and 1080 Ti news has flanked another major launch in the hardware world, though this one is outside of the PC space: Nintendo’s Switch, formerly known as the “NX.”

We purchased a Nintendo Switch ($300) specifically for teardown, hoping to document the process for any future users wishing to exercise their right to repair. Thermal compound replacement, as we learned from this teardown, is actually not too difficult. We work with small form factor boxes all the time, normally laptops, and replace compound every few years on our personal machines. There have certainly been consoles in the past that benefited from eventual thermal compound replacements, so perhaps this teardown will help in the event someone’s Switch encounters a similar scenario.
