The GTX 1080 Ti posed a fun opportunity to roll out our new GPU test bench, something we’ve been working on since the end of last year. The updated bench puts a new emphasis on thermal testing, borrowing methodology from our EVGA ICX review, and now analyzes cooler efficacy as it pertains to non-GPU components (read: MOSFETs, backplate, VRAM).
In addition, of course, we’ll be conducting a new suite of game FPS benchmarks, running synthetics, and preparing for overclocking and noise testing. The last two items won’t make it into today’s content, with PAX just hours away, but they’re coming. We will also be starting our Hybrid series today, for fans of that; check back here shortly.
If it’s not obvious, we’re reviewing nVidia’s GTX 1080 Ti Founders Edition card today, follow-up to the GTX 1080 and gen-old 980 Ti. Included on the benches are the 1080, 1080 Ti, 1070, 980 Ti, and in some, an RX 480 to represent the $250 market. We’re still adding cards to this brand new bench, but that’s where we’re starting. Please exercise patience as we continue to iterate on this platform and build a new dataset. Last year’s was built up over an entire launch cycle.
Mass Effect: Andromeda is set to release in North America on March 21, with Europe following on March 23. With fewer than three weeks before release, BioWare/EA and nVidia have released more information about the graphics settings and options for PC, 4K screenshots captured using Ansel, and HDR support in Mass Effect: Andromeda.
BioWare/EA recently published the minimum and recommended system requirements for the PC version of Mass Effect: Andromeda, and nVidia followed up with a preview of the graphics options menu. Users will be able to change and customize 16 graphics settings, including:
With nVidia’s recent GTX 1080 Ti announcement and GTX 1080 price cut, graphics cards have seen reductions in cost this week. As stated in our last sales post, hardware sales are hard to come by right now, but we have still found some deals worth noting. We found an RX 480 8GB for $200, and a GTX 1080 for $500. DDR4 prices are still high, but some savings can be had on a couple of DDR4 kits by G.SKILL.
AMD’s R7 1700 CPU ($330) immediately positions itself in a more advantageous segment than its $500 1800X companion, which proved poor value for pure gaming machines in our tests. Of course, as we said previously (pages 5 and 8), the 1800X makes more sense for our tested production tasks than the $1000 6900K when considering price-to-performance. For gaming, both are poor choices: the 1800X performs on par with i5 CPUs in game benchmarks, and the 6900K costs $1000. It’s about value, not raw performance: multiplicative increases in price to achieve gaming performance equivalent to cheaper chips are not good value. Before venturing into the 1440p/4K argument, we’d encourage you to read this review. The R7 1700 – by nature of that very argument, but also by nature of a trivial overclock – effectively invalidates the 1800X for gaming machines, finally granting AMD its champion for Ryzen.
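To make the value argument concrete, here’s a trivial perf-per-dollar sketch in Python. The FPS figures below are hypothetical placeholders for illustration only (they are not our benchmark data); only the MSRPs come from the text above:

```python
def perf_per_dollar(avg_fps: float, price_usd: float) -> float:
    """FPS bought per dollar spent -- higher means better gaming value."""
    return avg_fps / price_usd

# Hypothetical, roughly-equal FPS values purely for illustration;
# prices are the MSRPs discussed in the review.
cpus = {
    "R7 1700":  (120, 330),
    "R7 1800X": (122, 500),
    "i7-6900K": (125, 1000),
}

for name, (fps, price) in cpus.items():
    print(f"{name}: {perf_per_dollar(fps, price):.3f} FPS/$")
```

When gaming performance is near-identical across chips, value scales almost purely with price, which is the core of the argument against the 1800X and 6900K for pure gaming builds.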
We are also restricting this review to one page, as a significant portion of readers had unfortunately skipped straight to the gaming results page without context. It’s not as good for formatting or page load times, but it’ll hopefully ensure the other content is at least scrolled past, even if still ignored altogether.
Enough of that.
In this AMD R7 1700 review, we look at the price-to-performance of AMD’s new $330 CPU, which was explicitly marketed as an i7-7700K counter in price/performance when presented at AMD’s tech day. We’re benchmarking the R7 1700 in our usual suite of gaming, synthetic, and render tasks, quickly validating average auto voltages and temperatures along the way. Overclocks and SMT toggling further complicate testing, but provide a look at how the R7 1700 is capable of eliminating the gap between AMD’s own flagship and its more affordable SKU.
The finer distinctions between DDR and GDDR can easily be masked by the impressive on-paper specs of the newer GDDR5 standards, often inviting an obvious question with a not-so-obvious answer: Why can’t GDDR5 serve as system memory?
In a simple response: it’s analogous to why a GPU cannot suffice as a CPU. To be more incisive, CPUs are composed of complex cores using complex instruction sets, alongside on-die cache and, often, integrated graphics. This makes the CPU suitable for the multitude of latency-sensitive tasks often set upon it; however, that aptness comes at a cost—a cost paid in silicon. Conversely, GPUs can apportion more chip space by using simpler cores built on reduced instruction sets. As such, GPUs can feature hundreds, if not thousands, of cores designed to process huge amounts of data in parallel. Whereas CPUs are optimized to process tasks serially with as little latency as possible, GPUs have a parallel architecture and are optimized for raw throughput.
While the above doesn’t exactly explicate any differences between DDR and GDDR, the analogy is fitting. CPUs and GPUs both have access to temporary pools of memory, and just like both processors are highly specialized in how they handle data and workloads, so too is their associated memory.
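The throughput side of that specialization is easy to quantify. As a back-of-the-envelope sketch in Python (the function name is ours; the figures come from public spec sheets), peak theoretical memory bandwidth is simply per-pin data rate times bus width, divided by eight bits per byte:

```python
def memory_bandwidth_gbps(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s: per-pin data rate (Gbps)
    times bus width (bits), divided by 8 bits per byte."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

# GTX 1080 Ti: 11 Gbps GDDR5X on a 352-bit bus
print(memory_bandwidth_gbps(11, 352))   # 484.0 GB/s
# Typical dual-channel DDR4-2400 system memory: 2.4 GT/s on a 128-bit bus
print(memory_bandwidth_gbps(2.4, 128))  # 38.4 GB/s
```

That order-of-magnitude gap in raw throughput is one reason GDDR5/GDDR5X pairs well with throughput-oriented GPUs, while system DDR prioritizes the latency and capacity profile that CPUs want.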
While we work on our R7 1700 review, we’ve also been tearing down the remainder of the new Nintendo Switch console ($300). The first part of our teardown series featured the Switch itself – a tablet, basically, somewhat similar to a Shield – and showed the modified Tegra X1 SoC, what we think is 4GB of RAM, and a Samsung eMMC module. Today, we’re tearing down the Switch’s right Joy-Con (with the IR sensor) and docking station, hoping to see what’s going on under the hood of two parts largely undocumented by Nintendo.
The Nintendo Switch dock sells for $90 from Nintendo directly, and so you’d hope it’s a little more complex than a simple docking station. The article carries on after the embedded video:
Ryzen, Vega, and 1080 Ti news has flanked another major launch in the hardware world, though this one is outside of the PC space: Nintendo’s Switch, formerly known as the “NX.”
We purchased a Nintendo Switch ($300) specifically for teardown, hoping to document the process for any future users wishing to exercise their right to repair. Thermal compound replacement, as we learned from this teardown, is actually not too difficult. We work with small form factor boxes all the time, normally laptops, and replace compound every few years on our personal machines. There have certainly been consoles in the past that benefited from eventual thermal compound replacements, so perhaps this teardown will help in the event someone’s Switch encounters a similar scenario.
We already explained this amply in our AMD Ryzen R7 1800X review, primarily on pages 2 and 3 (but also throughout the article), but it's worth highlighting in video form for folks who prefer not to read articles. It's unfortunate that the test methodology and logistical pages were largely overlooked in the review -- most folks just jumped straight to the conclusion or gaming results, sadly -- so we are highlighting again, in video format, some of the things discussed on those pages.
As stated several times in this new video, we strongly encourage checking out the article. We are delaying our R7 1700 review by a day because of the addition of this video to our release schedule. There's not much more to say here, so we'll just embed that below:
Intel has enjoyed relatively unchallenged occupancy of the enthusiast CPU market for several years now. If you count the FX-8350 as the last major play prior to subsequent refreshes (like the FX-8370), AMD’s last major CPU launch dates to 2012. Of course, later launches in the FX-9000 series and FX-8000 series updates have been made, but there has not been an architectural push since the Bulldozer/Piledriver/Steamroller series.
AMD Ryzen, then, has understandably generated an impregnable wall of excitement from the enthusiast community. This is AMD’s chance to recover a market it once dominated, back in the Athlon 64 days, and reestablish itself in a position that at minimum targets price-to-performance parity. That’s all AMD needs: parity. Or close to it, anyway, while maintaining comparable pricing to Intel. With Intel’s stranglehold lasting as long as it has, builders are ready to support an alternative in the market. It’s nice to claim “best” on some charts, like AMD has done with Cinebench, but AMD doesn’t have to win: they have to tie. The momentum to shift is there.
Even RTG competitor nVidia will benefit from this upgrade cycle. That’s not something you hear a lot – nVidia wanting AMD to do well with a launch – but here, it makes sense. A dump of new systems into the ecosystem means everyone experiences revenue growth. People need to buy new GPUs, new cases, new coolers, and new RAM to accompany any moves to Ryzen. Staggering Vega and Ryzen makes sense as a way to avoid smothering one announcement with the other, but it does mean that AMD is now rapidly moving toward Vega’s launch. Those R7 CPUs don’t necessarily fit best with an RX 480; it’s a fine card, just not something you stick with a $400-$500 CPU. Two major launches in short order, then, one of which potentially drives system refreshes.
AMD must feel the weight borne by Atlas at this moment.
In this ~11,000 word review of AMD’s Ryzen R7 1800X, we’ll look at FPS benchmarking, Premiere & Blender workloads, thermals and voltage, and logistical challenges. (Update: 1700 review here).
Not long ago, we opened discussion about AMD’s new OCAT tool, a software overhaul of PresentMon that we had beta tested for AMD pre-launch. In the interim, and for the past five or so months, we’ve also been silently testing a new version of FCAT that adds functionality for VR benchmarking. This benchmark suite tackles the significant challenges of intercepting VR performance data, further offering new means of analyzing warp misses and dropped frames. Finally, after several months of testing, we can talk about the new FCAT VR hardware and software capture utilities.
This tool functions in two pieces: software capture and hardware capture.
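We can’t reproduce FCAT VR’s internals here, but the core idea of detecting dropped frames from presentation timestamps can be sketched generically. The snippet below is our own simplified model (not FCAT’s actual algorithm), assuming a 90 Hz headset and clean timestamps:

```python
REFRESH_HZ = 90                   # typical VR headset refresh rate
VSYNC_MS = 1000.0 / REFRESH_HZ    # ~11.1 ms per refresh interval

def count_dropped_frames(present_times_ms):
    """Count refresh intervals in which no new frame arrived, given frame
    presentation timestamps in milliseconds (a simplified model)."""
    dropped = 0
    for prev, cur in zip(present_times_ms, present_times_ms[1:]):
        gap = cur - prev
        # each full refresh interval beyond the first implies a missed vsync
        dropped += max(0, round(gap / VSYNC_MS) - 1)
    return dropped

# Four frames: the third arrives ~22 ms after the second, missing one refresh
print(count_dropped_frames([0.0, 11.1, 33.3, 44.4]))  # 1
```

Real VR capture is harder than this: reprojection (warp) can synthesize a frame from the previous one, so a missed render doesn’t always mean a visible drop, which is exactly the distinction FCAT VR is built to surface.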