Our GTX 1080 Ti SC2 review was met with several comments (on YouTube, at least) asking where the FTW3 coverage was. Turns out, EVGA didn’t even have those cards until two days ago, and we had ours overnighted the same day. We’ve got initial testing underway, but wanted to share the tear-down process early to spoil some of the board. This tear-down of the EVGA GTX 1080 Ti FTW3 ($780) exposes the PCB and VRM, the fan header placement, and the cooler design. We’re working with GN resident overclocker ‘Buildzoid’ on a full PCB + VRM analysis in the coming days, but have preliminary information at the ready.

EVGA’s 1080 Ti FTW3 is one of the most overbuilt PCBs we’ve seen in recent history. As stated in our SC2 review, the EVGA team has gone absolutely mental with thermal pad placement (following last year’s incident), and that’s carried over to the FTW3. But it’s more than just thermal pads (on literally every component, even those that have no business being cooled); it’s also the VRM design. This is a 10+2 phase card with doubling and dual FETs all across the board, using Alpha & Omega Semiconductor E6930s for all the FETs. We’ll save the rest of the PCB + VRM discussion (including amperage and thermal capabilities) for Buildzoid’s deep-dive, which we highly encourage watching. That’ll go live within a few days.

NVidia’s Titan Xp 2017 video card was announced without any pre-briefing for us, marking the second recent Titan X-class card to take us by surprise on launch day. The Titan Xp, as it turns out, isn’t necessarily targeted at gaming – though it does still bear the GeForce GTX mark. NVidia’s Titan Xp follows the previous Titan X (which we called “Titan XP” to reduce confusion with the Maxwell-based Titan X before it), and knocks the Titan X 2016 out of its $1200 price bracket.

With the Titan Xp 2017 now firmly socketed into the $1200 category, we’ve got a $450-$500 gap between the GTX 1080 Ti ($700 MSRP, $750 common price) and the Titan Xp. Even with that big a gap, though, diminishing returns are to be expected in gaming and consumer workloads. Today, we’re benchmarking and reviewing the nVidia Titan Xp for gaming specifically, with additional thermal, power, and noise tests included. This card may be better deployed for neural net and deep learning applications, but that won’t stop enthusiasts from buying it simply to have “the best.” For them, we’d like to have some benchmarks online.

EVGA’s GTX 1080 Ti SC2 ($720) card uses the same ICX cooler that we reviewed back in February, where we detailed at length how the new solution works (including information on the negative temperature coefficient (NTC) thermistors and accuracy validation of those sensors). To get caught up on ICX, we’d strongly recommend reading the first page of that review, and then maybe checking the thermal analysis for A/B testing versus ACX in an identical environment. As a fun add, we’re also A/B testing the faceplate – it’s got all those holes in it, so we thought we’d close them off and see if they actually help with cooling.

The fast version is basically this: EVGA, responding to concerns about ACX last year, decided to fully reinvent its flagship cooler to better monitor and cool power components in addition to the GPU component. The company did this by introducing NTC thermistors to its PCB, used for measuring GPU backside temperature (rather useless in a vacuum, but more of a validation thing when considering last year’s backplate testing), memory temperature, and power component temperature. There are thermistors placed adjacent to 5 MOSFETs, 3 memory modules, and the GPU backside. The thermistors are not embedded in the package, but placed close enough to get an accurate reading for thermals in each potential hotspot. We previously validated these thermistors versus our own thermocouples, finding that EVGA’s readings were accurate to reality.
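As a quick illustration of how such a sensor works: an NTC thermistor’s resistance falls as temperature rises, and firmware converts the measured resistance to a temperature, commonly via the simplified Beta equation. The sketch below assumes hypothetical part values (10kΩ nominal, Beta of 3950), since EVGA doesn’t publish its thermistors’ specs; it is not EVGA’s actual code.

```python
# Illustrative sketch: converting an NTC thermistor reading to a temperature
# using the simplified Beta equation. Part values are hypothetical examples;
# EVGA has not published the Beta or nominal resistance of its thermistors.
import math

def ntc_temp_c(r_measured, r_nominal=10_000.0, t_nominal_c=25.0, beta=3950.0):
    """Return temperature in Celsius from a measured NTC resistance (ohms)."""
    t_nominal_k = t_nominal_c + 273.15
    # Beta equation: 1/T = 1/T0 + (1/B) * ln(R/R0)
    inv_t = (1.0 / t_nominal_k) + (1.0 / beta) * math.log(r_measured / r_nominal)
    return (1.0 / inv_t) - 273.15

# Resistance falls as the monitored component heats up:
print(f"{ntc_temp_c(10_000):.1f} C")  # 25.0 C at nominal resistance
print(f"{ntc_temp_c(1_750):.1f} C")   # ~70 C with these assumed part values
```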

Although this is absolutely a unique, innovative approach to GPU cooling – no one else does it, after all – we found its usefulness primarily relegated to noise output. The dual-fan ACX cooler was already enough to keep the GPU cool (and the FETs, with the help of some thermal pads), and ICX is still a dual-fan cooler. The ICX sensors primarily add a toy for enthusiasts to play with, as they won’t improve gaming performance in any meaningful way, though those enthusiasts could benefit from fine-tuning the fan curve to reduce VRM fan speeds. This mostly benefits noise levels: the VRM fan doesn’t need to spin all that fast (FETs can take ~125C before they start losing efficiency in any meaningful way), so the GPU and VRM fans can spin asynchronously to improve the noise profile. Out of the box, EVGA’s fan curve is a bit aggressive, we think – but we’ll talk about that later.
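To illustrate the asynchronous-curve idea, here’s a minimal sketch of what a quieter custom setup might look like. The breakpoints are hypothetical examples for illustration only, not EVGA’s stock curve or Precision’s API:

```python
# Minimal sketch of an asynchronous fan-curve idea: the VRM fan can idle far
# lower than the GPU fan, because FETs tolerate much higher temperatures
# (~125C) before efficiency meaningfully degrades. Breakpoints below are
# hypothetical examples, not EVGA's stock curve.

def fan_duty(temp_c, curve):
    """Linearly interpolate fan duty (%) from (temp_C, duty_%) breakpoints."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]

gpu_curve = [(30, 25), (60, 45), (75, 70), (84, 100)]   # keep GPU well cooled
vrm_curve = [(40, 20), (80, 35), (100, 60), (115, 100)] # FETs fine far hotter

print(f"{fan_duty(70, gpu_curve):.0f}%")  # GPU fan ramps to ~62%
print(f"{fan_duty(70, vrm_curve):.0f}%")  # VRM fan stays slow at ~31%
```

The point is simply that at a GPU-limited 70C, the VRM fan can sit near idle while the GPU fan does the work, cutting total noise.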

Thanks to GamersNexus reader ‘Grant,’ we were able to obtain a loaner nVidia Titan Xp (2017) card for review and thermal analysis. Grant purchased the card for machine learning and wanted to liquid cool the GPU, which happens to be something with which we’re well-versed. In the process, we’ll be reviewing the Titan Xp from a gaming standpoint, tearing it down, analyzing the PCB & VRM, and building it back into a liquid-cooled card. All the benchmarking is already done, but we’re opening our Titan Xp content string with a tear-down of the card.

Disassembling Founders Edition nVidia graphics cards tends to be a little more tool-intensive than most other GPU tear-downs. NVidia uses 2.0mm & 2.5mm hex-socket screws (Allen keys required) to secure the shroud to the baseplate, and then the baseplate to the PCB; additionally, a batch of ~16x 4mm hex heads socket through the PCB and into the baseplate, each of which hosts a small Phillips screw for the backplate.

The disassembly tutorial continues after this video version:

AMD’s taken a page out of nVidia’s book, apparently, and nVidia probably took that page from Apple – or any number of other companies that elect to re-use product names. The new Radeon Pro Duo uses the same name as last year’s launch, but has updated the internals.

The RX 580, as we learned in the review process, isn’t all that different from its origins in the RX 480. The primary difference is in voltage and frequency afforded to the GPU proper, with other changes manifesting in maturation of the process over the past year of manufacturing. This means most optimizations are relegated to power (when idle – not under load) and frequency headroom. Gains on the new cards are not from anything fancy – just driving more power through under load.
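As a rough, back-of-envelope sketch of why that costs power under load: dynamic power scales approximately with frequency times voltage squared. The boost clocks below are the reference values (1266MHz RX 480, 1340MHz RX 580); the voltages are hypothetical examples, not measured figures:

```python
# Back-of-envelope sketch of why the RX 580's gains cost power under load:
# dynamic power scales roughly as P ~ f * V^2. Clocks are reference boost
# values; the voltages are hypothetical illustrations, not measurements.

def relative_power(f0, v0, f1, v1):
    """Approximate dynamic-power ratio of (f1, v1) versus (f0, v0)."""
    return (f1 / f0) * (v1 / v0) ** 2

# ~6% more frequency plus a modest assumed voltage bump:
ratio = relative_power(1266, 1.06, 1340, 1.12)
print(f"{ratio:.2f}x dynamic power")  # ~1.18x power for ~6% more clock
```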

Still, we were curious as to whether AMD’s drivers would permit cross-RX series multi-GPU. We decided to throw an MSI RX 580 Gaming X and MSI RX 480 Gaming X into a configuration to get things close, then see what’d happen.

The short of it is that this works. There is no explicit inhibitor built in to prevent users from running CrossFire with RX 400 and RX 500 series cards, as long as you’re pairing 470/570 or 480/580. The GPU is the same, and frequency will, for the most part, just be matched to the slowest card.

We think this will be a common use case, too. It makes sense: If you’re a current owner of an RX 480 and have been considering CrossFire (though we didn’t necessarily recommend it in previous content), the RX 580 will make the most sense for a secondary GPU. Well, primary, really – but you get the idea. The RX 400 series cards will hit EOL and cease production in short order, if not already, which means that prices will stagnate and then skyrocket. That’s just what retailers do. Buying a 580, then, makes far more sense if you’re dying for a CrossFire configuration, and you could even move the 580 to the top slot for best performance in single-GPU scenarios.

Our third and final interview featuring Scott Wasson, current AMD RTG team member and former EIC of Tech Report, has just gone live with information on GPU architecture. This video focuses on a handful of reader and viewer questions, pooled largely from our Patreon backer Discord, with the big item being “GPU IPC.” Patreon backer “Streetguru” submitted the question, asking why a ~1300-1400MHz RX 480 can perform comparably to an ~1800MHz GTX 1060. It’s a good question – it’s easy to say “architecture,” but to learn more about the why aspect, we turned to Wasson.

The main event starts at 1:04, with some follow-up questions scattered throughout Wasson’s explanation. We talk about pipeline stage length and its impact on performance, wider versus narrower machines with frequencies that match, and voltage “spent” on each stage.
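To put rough numbers on the width-versus-frequency point: peak FP32 throughput is roughly two operations (one FMA) per shader per clock, so a wider machine can keep pace with a faster-clocked, narrower one. The sketch below uses the cards’ shader counts and approximate operating clocks; it shows peak theoretical throughput only, and real games land well below peak depending on utilization, which is where the architecture discussion comes in:

```python
# Worked arithmetic behind the "wide vs. fast" question: peak FP32 throughput
# is roughly 2 ops (FMA) x shader count x clock. Real-world performance sits
# well below these peaks and depends on how well each architecture keeps its
# units fed.

def peak_tflops(shaders, clock_mhz):
    return 2 * shaders * clock_mhz * 1e6 / 1e12

print(f"RX 480:   {peak_tflops(2304, 1266):.1f} TFLOPS")  # wide, lower clock
print(f"GTX 1060: {peak_tflops(1280, 1800):.1f} TFLOPS")  # narrow, high clock
```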

We’ll leave this content piece primarily to video, as Wasson does a good job of conveying the information quickly.

AMD’s Polaris refresh primarily features a BIOS overhaul, which assists in power management during idle or low-load workloads, but the cards also ship with natively higher clocks and additional overvoltage headroom. Technically, an RX 400-series card could be flashed to its 500-series counterpart (the change between the two series is that small), though we haven’t begun investigating that just yet. This is not meant to be an upgrade for existing 400-series users, but an option for buyers in the market for a completely new system.

We’ve already opened our RX 580 coverage with our MSI RX 580 Gaming X review, a $245 card that competes closely with the EVGA GTX 1060 SSC ($250) alternative from nVidia. Performance traded blows back and forth depending on the game, with power draw boosted over the 400 series under load and lowered when idle. This review of the Gigabyte RX 570 4GB Aorus card benchmarks performance versus the RX 470, 480, and 580, and the GTX 1050 Ti and 1060. We’re looking at power consumption, thermals, and FPS.

There’s no new architecture to speak of here. Our RX 480 initial review from last year covers all relevant aspects of architecture for the RX 500 series; if you’re behind on Polaris (or it’s been a while) and need a refresher on what’s happening at a silicon level, check our initial RX 480 review.

AMD’s got a new strategy: Don’t give anyone time to blink between product launches. The company’s been firing off round after round of products for the past month, starting with Ryzen 7, then Ryzen 5, and now Polaris Refresh. The product cannon will eventually be reloaded with Vega, but that’s not for today.

The RX 500 series officially arrives to market today, primarily carried in on the backs of the RX 580 and RX 570 Polaris 10 GPUs. From an architectural perspective, there’s nothing new – if you know Polaris and the RX 400 series, you know the RX 500 series. This is not an exciting, bombastic launch that requires delving into some unexplored arch; in fact, our original RX 480 review heavily detailed Polaris architecture, and that’s all relevant information to today’s RX 580 launch. If you’re not up to speed on Polaris, our review from last year is a good place to start (though the numbers are now out of date, the information is still accurate).

Both the RX 580 and RX 570 will be available as of this article’s publication. The RX 580 we’re reviewing should be listed here once retailer embargo lifts, with our RX 570 model posting here. Our RX 570 review goes live tomorrow. We’re spacing them out to allow for better per-card depth, having just come off of a series of 1080 Ti reviews (Xtreme, Gaming X).

Our Gigabyte GTX 1080 Ti Aorus Xtreme ($750) review looks at one of the largest video cards in the 1080 Ti family, matched here against the MSI 1080 Ti Gaming X. Our tests today cover thermals (most heavily), noise levels, gaming performance, and overclocking, with particular interest in the efficacy of Gigabyte’s copper insert in the backplate. The Aorus Xtreme is a heavyweight in all departments – size being one of them – and its $750 price matches the MSI Gaming X directly. A major point of differentiation is Gigabyte’s bigger focus on RGB LEDs, though the three-fan design is also interesting from a thermal and noise perspective. We’ll look at that more on page 3.

We’ve already posted a tear-down of this card (and friend of the site ‘Buildzoid’ has posted his PCB analysis), but we’ll recap some of the PCB and cooler basics on this first page. The card uses a 3-fan cooler (with smaller fans than the Gaming X-type cards, but more of them) and a large aluminum heatsink, ultimately taking up nearly 3 PCI-e slots. It’s the same GPU and memory underneath as all other GTX 1080 Ti cards, with differences primarily in the cooling and power management departments. Clock, of course, does have some pre-OC applied to help boost over the reference model. Gigabyte is shipping the Xtreme variant of the 1080 Ti at 1632/1746MHz (OC mode) or 1607/1721MHz (gaming mode), toggleable through software if not manually overclocking.
