NVidia’s Titan Xp 2017 video card was announced without any pre-briefing for us, making it the second recent Titan X-class card to take us by surprise on launch day. The Titan Xp, as it turns out, isn’t necessarily targeted at gaming – though it does still bear the GeForce GTX mark. The Titan Xp follows the previous Titan X (which we called “Titan XP” to reduce confusion with the Maxwell-based Titan X before it), and knocks the 2016 Titan X out of its $1200 price bracket.
With the Titan Xp 2017 now firmly socketed into the $1200 category, we’ve got a $450-$500 gap between the GTX 1080 Ti ($700 MSRP, $750 common price) and the Titan Xp. Even with that big of a gap, though, diminishing returns in gaming and consumer workloads are to be expected. Today, we’re benchmarking and reviewing the nVidia Titan Xp for gaming specifically, with additional thermal, power, and noise tests included. This card may be better deployed for neural net and deep learning applications, but that won’t stop enthusiasts from buying it simply to have “the best.” For them, we’d like to have some benchmarks online.
EVGA’s GTX 1080 Ti SC2 ($720) card uses the same ICX cooler that we reviewed back in February, where we intensely detailed how the new solution works (including information on the negative temperature coefficient (NTC) thermistors and accuracy validation of those sensors). To get caught up on ICX, we’d strongly recommend reading the first page of that review, and then maybe checking the thermal analysis for A/B testing versus ACX in an identical environment. As a fun add, we’re also A/B testing the faceplate – it’s got all those holes in it, so we thought we’d close them off and see if they actually help with cooling.
The fast version is basically this: EVGA, responding to concerns about ACX last year, decided to fully reinvent its flagship cooler to better monitor and cool power components in addition to the GPU component. The company did this by introducing NTC thermistors to its PCB, used for measuring GPU backside temperature (rather useless in a vacuum, but more of a validation thing when considering last year’s backplate testing), memory temperature, and power component temperature. There are thermistors placed adjacent to 5 MOSFETs, 3 memory modules, and the GPU backside. The thermistors are not embedded in the package, but placed close enough to get an accurate reading for thermals in each potential hotspot. We previously validated these thermistors versus our own thermocouples, finding that EVGA’s readings were accurate to reality.
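For readers unfamiliar with how an NTC thermistor reading becomes a temperature number, the common approach is the simplified beta equation. This is a generic sketch of that conversion – the reference resistance and beta value below are hypothetical datasheet figures, not EVGA’s actual part parameters:

```python
import math

def ntc_temp_c(r_ohms, r0=10_000.0, t0_c=25.0, beta=3950.0):
    """Convert an NTC thermistor resistance reading to degrees Celsius
    using the simplified beta equation. r0 (resistance at t0_c) and beta
    are placeholder datasheet values, not EVGA's actual parts."""
    t0_k = t0_c + 273.15
    inv_t = (1.0 / t0_k) + (1.0 / beta) * math.log(r_ohms / r0)
    return (1.0 / inv_t) - 273.15

# At the reference resistance, the reading equals the reference temperature;
# as resistance falls (NTC = resistance drops when heated), temperature rises.
print(round(ntc_temp_c(10_000.0), 1))  # 25.0
```

The same principle applies whether the sensor sits next to a MOSFET, a memory module, or the GPU backside – only the placement and the datasheet constants change.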
Although this is absolutely a unique, innovative approach to GPU cooling – no one else does it, after all – we found its usefulness to primarily be relegated to noise output. After all, a dual-fan ACX cooler was already enough to keep the GPU cool (and FETs, with the help of some thermal pads), and ICX is still a dual-fan cooler. The ICX sensors primarily add a toy for enthusiasts to play with, as they won’t improve gaming performance in any meaningful way, though those enthusiasts could benefit from fine-tuning the fan curve to reduce VRM fan speeds. The benefit would show in noise levels, as the VRM fan doesn’t need to spin all that fast (FETs can take ~125C before they start losing efficiency in any meaningful way), so the GPU and VRM fans can spin asynchronously to improve the noise profile. Out of box, EVGA’s fan curve is a bit aggressive, we think – but we’ll talk about that later.
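To illustrate why an asynchronous VRM fan curve can stay quiet, here’s a minimal piecewise-linear sketch. The breakpoints and duty cycles are our own hypothetical numbers chosen around the ~125C FET tolerance mentioned above – this is not EVGA’s shipping curve:

```python
def vrm_fan_pwm(fet_temp_c):
    """Hypothetical piecewise-linear VRM fan curve: stay near the idle
    floor until the FETs approach their ~125C efficiency limit, then
    ramp hard. Returns fan duty cycle in percent."""
    if fet_temp_c < 60:
        return 20  # idle floor: FETs are nowhere near their limit
    if fet_temp_c < 100:
        # gentle ramp, 20% -> 60%, between 60C and 100C
        return 20 + (fet_temp_c - 60) * (60 - 20) / (100 - 60)
    if fet_temp_c < 120:
        # steep ramp, 60% -> 100%, as we near the ~125C limit
        return 60 + (fet_temp_c - 100) * (100 - 60) / (120 - 100)
    return 100
```

Because the FETs tolerate far higher temperatures than the GPU die, a curve like this can hold the VRM fan at low RPM through most gaming loads while the GPU fan does the audible work.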
AMD’s Polaris refresh primarily features a BIOS overhaul, which assists in power management during idle or low-load workloads, but also ships with natively higher clocks and additional overvoltage headroom. Technically, an RX 400-series card could be flashed to its 500-series counterpart, though we haven’t begun investigating that just yet. The reasoning is that the change between the two series is so small; this is not meant to be an upgrade for existing 400-series users, but an option for buyers in the market for a completely new system.
We’ve already reviewed the RX 580 line by opening up with our MSI RX 580 Gaming X review, a $245 card that competes closely with the EVGA GTX 1060 SSC ($250) alternative from nVidia. Performance was close enough that the two cards trade blows depending on the game, with power draw boosted over the 400 series under load and lowered at idle. This review of the Gigabyte RX 570 4GB Aorus card benchmarks performance versus the RX 470, 480, 580, and GTX 1050 Ti and 1060 cards. We're looking at power consumption, thermals, and FPS.
There’s no new architecture to speak of here. Our RX 480 initial review from last year covers all relevant aspects of architecture for the RX 500 series; if you’re behind on Polaris (or it’s been a while) and need a refresher on what’s happening at a silicon level, check our initial RX 480 review.
AMD’s got a new strategy: Don’t give anyone time to blink between product launches. The company’s been firing off round after round of products for the past month, starting with Ryzen 7, then Ryzen 5, and now Polaris Refresh. The product cannon will eventually be reloaded with Vega, but that’s not for today.
The RX 500 series officially arrives to market today, primarily carried in on the backs of the RX 580 and RX 570 Polaris 10 GPUs. From an architectural perspective, there’s nothing new – if you know Polaris and the RX 400 series, you know the RX 500 series. This is not an exciting, bombastic launch that requires delving into some unexplored arch; in fact, our original RX 480 review heavily detailed Polaris architecture, and that’s all relevant information to today’s RX 580 launch. If you’re not up to speed on Polaris, our review from last year is a good place to start (though the numbers are now out of date, the information is still accurate).
Both the RX 580 and RX 570 will be available as of this article’s publication. The RX 580 we’re reviewing should be listed here once retailer embargo lifts, with our RX 570 model posting here. Our RX 570 review goes live tomorrow. We’re spacing them out to allow for better per-card depth, having just come off of a series of 1080 Ti reviews (Xtreme, Gaming X).
Our Gigabyte GTX 1080 Ti Aorus Xtreme ($750) review brings us to one of the largest video cards in the 1080 Ti family, matched here against the MSI 1080 Ti Gaming X. Our tests today will look at the Aorus Xtreme GPU in thermals (most heavily), noise levels, gaming performance, and overclocking, with particular interest in the efficacy of Gigabyte’s copper insert in the backplate. The Gigabyte Aorus Xtreme is a heavyweight in all departments – size being one of them – and its $750 price matches the MSI Gaming X directly. A major point of differentiation is the bigger focus on RGB LEDs with Gigabyte’s model, though the three-fan design is also interesting from a thermal and noise perspective. We’ll look at that more on page 3.
We’ve already posted a tear-down of this card (and friend of the site ‘Buildzoid’ has posted his PCB analysis), but we’ll recap some of the PCB and cooler basics on this first page. The card uses a 3-fan cooler (with smaller fans than the Gaming X-type cards, but more of them) and large aluminum heatsink, ultimately taking up nearly 3 PCI-e slots. It’s the same GPU and memory underneath as all other GTX 1080 Ti cards, with differences primarily in the cooling and power management departments. Clock, of course, does have some pre-OC applied to help boost over the reference model. Gigabyte is shipping the Xtreme variant of the 1080 Ti at 1632/1746MHz (OC mode) or 1607/1721 (gaming mode), toggleable through software if not manually overclocking.
We’re reviewing the new MSI GTX 1080 Ti Gaming X card today, priced at $750 and positioned as one of the highest-performing gaming cards on the market. These tests will extensively look at thermals, given that that’s the primary differentiator between same-GPU video cards, and then look at gaming performance (in FPS) versus the Reference card and our Hybrid mod FE card. Part of our thermal testing will include performance analysis with and without a backplate. Noise levels are going to be the same as the last Twin Frozr card we tested, which can be found here.
This generation of GTX 1080 Ti cards has gone big. MSI’s Gaming X is already large, but the Gigabyte unit that we’re reviewing next is similarly big in the multi-slot department. The Gaming X uses MSI’s known Twin Frozr cooler, with modifications to the underlying aluminum heatsink to increase surface area and fin density. Noise output is therefore identical to the noise output of previous Twin Frozr coolers we reviewed for the 10-series, including the GTX 1080 non-Ti Gaming X.
MSI ships the 1080 Ti Gaming X at three different frequencies, configurable through software: OC mode runs at 1683MHz boost and 1569MHz base, Gaming mode runs at 1657MHz boost and 1544MHz base, and Silent mode runs at 1582MHz boost and 1480MHz base.
Following our in-depth Ryzen VR benchmark (R7 1700 vs. i7-7700K with the Rift + Vive), we immediately began compiling results for the concurrent R5 test efforts by GN Sr. Editor Patrick Lathan. Working together, we were able to knock out the VR benchmarks (check those out here – some cool data), the Ryzen Revisit piece, and today’s R5 reviews.
Both the R5 1600X ($250) and R5 1500X ($190) CPUs are in for review today, primarily matched against the Intel i5-7500 and i5-7600K. For comparison reasons, we have still included other CPUs on the bench – notably the i7-7700K and R7 1700 – just to give an understanding of what the extra ~$70-$130 gets you.
For anyone who hasn’t checked in on our content since the initial Ryzen reviews, we’d strongly encourage checking the Ryzen Revisit piece for a better understanding of how the scene has changed since launch. That revisit looks at Windows updates (and debunks some myths), EFI updates, and memory overclocking impact on Ryzen performance.
Although we have rerun the R7 gaming benchmarks with higher memory frequency (thanks to GSkill and Geil for providing B-die kits), we have not yet rerun them in synthetic tests. The 2933MHz frequency, as a reminder, was a hard limitation on our test platforms in the initial round of R7 reviews.
We will be including that data (albeit truncated) in our new tests, alongside Intel retests for the same games. For now, though, we’re reviewing the R5 1600X and R5 1500X CPUs in the Ryzen family, priced at $250 and $190, respectively.
This first revisit to Ryzen’s performance comes earlier than most, given the tempestuous environment surrounding AMD’s latest uarch. In the past weeks, we’ve seen claims that Windows updates promise a significant boon to Ryzen performance, as has also been said of memory overclocking, and we were previously instructed that EFI updates alone should bolster performance. Perhaps not unrelated, game updates to major titles could have potentially impacted performance, amounting to a significant number of variables for a revisit.
Today’s content piece aims to isolate each of these items as much as reasonable – not all can be isolated, like game updates – to better determine the performance impact from the individual changes and updates. We’ll then progress cumulatively through charts as updates are applied. Our final set of charts will contain Windows version bxxx.970, version 1002 EFI on the CH6, and memory overclocking efforts.
In direct competition with the Be Quiet! Pure Base 600 ($90) we reviewed recently is the Fractal Define C, a compact ATX mid tower with an emphasis on noise suppression. The Fractal Define C is a relatively new launch from Fractal Design, sticking to the highly competitive ~$90 mid-tower market. Fractal’s Define C ships in micro-ATX (“Define Mini C”) and ATX form factor versions, the latter of which is on the bench today.
Our Fractal Define C review looks at the ATX-sized enclosure, taking thermals to task and testing for noise emissions in the company’s newest box. Fractal’s immediate competition at this price-point comes from the Be Quiet! Pure Base 600, NZXT S340 non-Elite, and the Corsair 400Q and 400C.
We’ve praised the R7 1700 ($330) for its mixed workload performance and overclocking capabilities at $330, and we’ve criticized the 1800X for its insignificant performance improvements (over the 1700) at $500. That leaves the R7 1700X ($400), positioned precariously between the two with a base clock of 3.4GHz, but the full 95W TDP of its 1800X sibling.
The 1700X performs as expected, given its flanks, landing between the R7 1700 and R7 1800X. All three are 8C/16T chips with the same CCX layout; refer back to our 1800X review for a more thorough description of the R7 CPU & Ryzen architecture. A quick comparison of basic stats reveals that the major advantage of the 1700X is a moderate increase in frequency, with additional XFR headroom as demarcated by the ‘X’ suffix. That said, our R7 1700 easily overclocked to a higher frequency than the native 1700X frequency, with no manual adjustment to voltage or EFI beyond the multiplier. The 1700X has a base clock of 3.4GHz and a boost clock of 3.8GHz, which theoretically means it could come close to the performance of our 3.9GHz 1700 straight out of the box while retaining the benefits of XFR (circumvented by overclocking).