As with any new technology, the early days of Ryzen have been filled with a number of quirks as manufacturers and developers scramble to support AMD’s new architecture.
For optimal performance, AMD has asked reviewers to update to the latest BIOS version and to set Windows to “high performance” mode, which raises the minimum processor state to its base frequency (normally, the CPU would downclock when idle). These are both reasonable allowances to make for new hardware, although high-performance mode should only be a temporary fix. More on that later, though we’ve already explained it in the R7 1700 review.
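For reference, the "high performance" change can be made from an elevated Windows command prompt with powercfg. This is a minimal sketch of the two approaches: activating the stock High Performance scheme (the GUID below is Windows' built-in identifier for that plan), or raising only the minimum processor state on the current plan, which is closer to a temporary fix.

```shell
REM List available power schemes; the active one is marked with an asterisk
powercfg /list

REM Activate the built-in High Performance plan by its stock GUID
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

REM Or: raise only the minimum processor state to 100% on the current
REM plan (AC power), leaving the rest of the Balanced plan untouched
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100
powercfg /setactive SCHEME_CURRENT
```

Either way, this prevents the CPU from downclocking at idle, which is why it should be treated as a stopgap rather than a permanent setting.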
This is quick-and-dirty testing. This is the kind of information we normally keep internal for research as we build a test platform, as it's never polished enough to publish and primarily informs our reviewing efforts. Given the young age of Ryzen, we're publishing our findings just to add data to a growing pool. More data points should hopefully assist other reviewers and manufacturers in researching performance “anomalies” or differences.
Below are early numbers we ran on performance vs. balanced mode, Gigabyte BIOS revisions, ASUS' board, and clock behavior under various boost states. Methodology won't be discussed here, as it's no different from our 1700 and 1800X reviews, other than toggling the various A/B test states defined in the headers below.
Our review of the nVidia GTX 1080 Ti Founders Edition card went live earlier this morning, largely receiving praise for its gains in performance while remaining the subject of criticism from a thermal standpoint. As we've often done, we decided to fix it. Modding the GTX 1080 Ti will bring our card up to higher native clock-rates by eliminating the thermal limitation, and can be done with the help of an EVGA Hybrid kit and a reference-design card. We've got both, and started the project prior to departing for PAX East this weekend.
This is part 1, the tear-down. As the content is being published, we are already on-site in Boston for the event, so part 2 will not see light until early next week. We hope to finalize our data on VRM/FET and GPU temperatures (related to clock speed) immediately following PAX East. These projects are always exciting, as they help us learn more about how a GPU behaves. We did similar projects for the RX 480 and GTX 1080 at launch last year.
Here's part 1:
The GTX 1080 Ti posed a fun opportunity to roll out our new GPU test bench, something we’ve been working on since the end of last year. The updated bench puts a new emphasis on thermal testing, borrowing methodology from our EVGA ICX review, and now analyzes cooler efficacy as it pertains to non-GPU components (read: MOSFETs, backplate, VRAM).
In addition to this, of course, we’ll be conducting a new suite of game FPS benchmarks, running synthetics, and preparing for overclocking and noise testing. The last two items won’t make it into today’s content given that PAX is hours away, but they’re coming. We will also be starting our Hybrid series today, for fans of that project; check back shortly.
If it’s not obvious, we’re reviewing nVidia’s GTX 1080 Ti Founders Edition card today, follow-up to the GTX 1080 and gen-old 980 Ti. Included on the benches are the 1080, 1080 Ti, 1070, 980 Ti, and in some, an RX 480 to represent the $250 market. We’re still adding cards to this brand new bench, but that’s where we’re starting. Please exercise patience as we continue to iterate on this platform and build a new dataset. Last year’s was built up over an entire launch cycle.
Mass Effect: Andromeda is set to release in North America on March 21st and in Europe on March 23rd. With fewer than three weeks before release, BioWare/EA and nVidia have released more information about the graphics settings and options for PC, 4K screenshots using Ansel, and HDR support in Mass Effect: Andromeda.
BioWare/EA recently put out the minimum and recommended system requirements for the PC version of Mass Effect: Andromeda, and nVidia followed up with a preview of the graphical options menu. Users will be able to change and customize 16 graphical settings, including:
With nVidia’s recent GTX 1080 Ti announcement and GTX 1080 price cut, graphics cards have seen reductions in cost this week. As stated in our last sales post, hardware sales are hard to come by right now, but we have still found some deals worth noting. We found an RX 480 8GB for $200 and a GTX 1080 for $500. DDR4 prices are still high, but some savings can be had on a couple of DDR4 kits by G.SKILL.
AMD’s R7 1700 CPU ($330) immediately positions itself in a more advantaged segment than its $500 1800X companion, which proved poor value for pure gaming machines in our tests. Of course, as we said previously (pages 5 and 8), the 1800X makes more sense for our tested production tasks than the $1000 6900K when considering price:performance. For gaming, both are poor choices; the 1800X performs on par with i5 CPUs in game benchmarks, and the 6900K is $1000. It’s about value, not raw performance: multiplicative increments in price to achieve gaming-performance equivalence with cheaper chips are not good value. Before venturing into the 1440p/4K argument, we’d encourage you to read this review. The R7 1700 – by nature of that very argument, but also by nature of a trivial overclock – effectively invalidates the 1800X for gaming machines, finally granting AMD its champion for Ryzen.
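That value argument reduces to a simple performance-per-dollar calculation. The sketch below uses the launch prices quoted above, but the FPS figures are hypothetical placeholders chosen only to illustrate the math (gaming parity between the three chips), not our measured results.

```python
# Price-to-performance sketch. Prices are the launch MSRPs referenced in
# the article; avg_fps values are HYPOTHETICAL placeholders to show the
# math, not benchmark data.
cpus = {
    "R7 1700":  {"price": 330,  "avg_fps": 100},   # hypothetical FPS
    "R7 1800X": {"price": 500,  "avg_fps": 105},   # hypothetical FPS
    "i7-6900K": {"price": 1000, "avg_fps": 105},   # hypothetical FPS
}

# Rank by FPS per dollar: near-equal gaming performance at 1.5-3x the
# price collapses the value proposition of the pricier chips.
ranked = sorted(cpus.items(),
                key=lambda kv: kv[1]["avg_fps"] / kv[1]["price"],
                reverse=True)
for name, d in ranked:
    print(f"{name:>9}: {d['avg_fps'] / d['price']:.3f} FPS/$")
```

Under these placeholder numbers, the R7 1700 lands on top by a wide margin, which is the shape of the argument the review makes.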
We are also restricting this review to one page, as a significant portion of readers had unfortunately skipped straight to the gaming results page without context. It’s not as good for formatting or page load times, but it’ll hopefully ensure the other content is at least scrolled past, even if still ignored altogether.
Enough of that.
In this AMD R7 1700 review, we look at the price-to-performance of AMD’s new $330 CPU, which was explicitly marketed as an i7-7700K counter in price/performance when presented at AMD’s tech day. We’re benchmarking the R7 1700 in our usual suite of gaming, synthetic, and render tasks, quickly validating average auto voltages and temperatures along the way. Overclocks and SMT toggling further complicate testing, but provide a look at how the R7 1700 can close the gap between AMD’s own flagship and its more affordable SKU.
The finer distinctions between DDR and GDDR can easily be masked by the impressive on-paper specs of the newer GDDR5 standards, often inviting an obvious question with a not-so-obvious answer: Why can’t GDDR5 serve as system memory?
In a simple response, it’s analogous to why a GPU cannot suffice as a CPU. More precisely, CPUs are built from a small number of complex cores using complex instruction sets, alongside on-die cache and, often, integrated graphics. This makes the CPU suitable for the multitude of latency-sensitive tasks often beset upon it; that aptness, however, comes at a cost, and that cost is paid in silicon. Conversely, GPUs can apportion more chip space to cores by using simpler, reduced-instruction-set designs. As such, GPUs can feature hundreds, if not thousands, of cores built to process huge amounts of data in parallel. Whereas CPUs are optimized to process tasks serially with as little latency as possible, GPUs have a parallel architecture optimized for raw throughput.
While the above doesn’t exactly explicate any differences between DDR and GDDR, the analogy is fitting. CPUs and GPUs both have access to temporary pools of memory, and just like both processors are highly specialized in how they handle data and workloads, so too is their associated memory.
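That throughput specialization shows up directly in peak theoretical bandwidth, which is just data rate times bus width. The sketch below runs the arithmetic for two common configurations of the era: dual-channel DDR4-2400 on a 64-bit-per-channel bus, and 8 Gbps GDDR5 on a 256-bit bus (the RX 480 8GB's configuration).

```python
def peak_bandwidth_gbs(data_rate_mtps, bus_width_bits, channels=1):
    """Peak theoretical bandwidth in GB/s: transfers per second
    multiplied by bytes moved per transfer, times channel count."""
    return data_rate_mtps * 1e6 * (bus_width_bits / 8) * channels / 1e9

# Dual-channel DDR4-2400: 2400 MT/s across two 64-bit channels
ddr4 = peak_bandwidth_gbs(2400, 64, channels=2)

# GDDR5 at 8 Gbps effective on a 256-bit bus
gddr5 = peak_bandwidth_gbs(8000, 256)

print(f"DDR4-2400 dual channel: {ddr4:.1f} GB/s")   # 38.4 GB/s
print(f"GDDR5 8Gbps @ 256-bit:  {gddr5:.1f} GB/s")  # 256.0 GB/s
```

The roughly 6-7x bandwidth advantage is what the wide-bus, high-data-rate GDDR design buys, at the expense of the tight latencies system memory is tuned for.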
While we work on our R7 1700 review, we’ve also been tearing down the remainder of the new Nintendo Switch console ($300). The first part of our tear-down series featured the Switch itself – a tablet, basically, somewhat similar to a Shield – and showed the modified Tegra X1 SoC, what we think is 4GB of RAM, and a Samsung eMMC module. Today, we’re tearing down the Switch’s right Joycon (with the IR sensor) and docking station, hoping to see what’s going on under the hood of two parts largely undocumented by Nintendo.
The Nintendo Switch dock sells for $90 from Nintendo directly, so you’d hope it’s a little more complex than a simple docking station. The article carries on after the embedded video:
Ryzen, Vega, and 1080 Ti news has flanked another major launch in the hardware world, though this one is outside of the PC space: Nintendo’s Switch, formerly known as the “NX.”
We purchased a Nintendo Switch ($300) specifically for teardown, hoping to document the process for any future users wishing to exercise their right to repair. Thermal compound replacement, as we learned from this teardown, is actually not too difficult. We work with small form factor boxes all the time, normally laptops, and replace compound every few years on our personal machines. There have certainly been consoles in the past that benefited from eventual thermal compound replacements, so perhaps this teardown will help in the event someone’s Switch encounters a similar scenario.
We already explained this amply in our AMD Ryzen R7 1800X review, primarily on pages 2 and 3 (but also throughout the article), but it's worth highlighting in video form for folks who prefer not to read articles. It's unfortunate that the test methodology and logistical pages were largely overlooked in the review -- most folks just jumped straight to the conclusion or gaming results, sadly -- so we are highlighting again, in video format, some of the things discussed on those pages.
As stated several times in this new video, we strongly encourage checking out the article. We are delaying our R7 1700 review by a day because of the addition of this video to our release schedule. There's not much more to say here, so we'll just embed that below: