Hardware Guides

Our Titan Xp Hybrid mod is done, soon to be shipped back to its owner in its new condition. Liquid cooling mods have, in the past, served as a means to better understand how a GPU could perform given a good cooler, and are often conducted on cards with reference coolers. The Titan Xp won’t have AIB partner cooler models, and so building a Hybrid card gives us a glimpse into what could have been.

It’s also not a hard mod to do – an hour tops, maybe a bit more for those who are more hesitant – and costs $100 for the Hybrid kit. Against the $1200 purchase for the card, that’s not a tall order.

In today’s benchmarks and conclusion of the Titan Xp Hybrid mod, we’ll cover thermals and noise levels extensively, look at overclocking, and throw in some gaming benchmarks.

 

GN resident overclocker ‘Buildzoid’ just finished digging through the details of EVGA’s GTX 1080 Ti FTW3 ($780) video card, noting that the card is one of the most overbuilt 1080 Tis we’ve seen yet. The FTW3 over-engineers its VRM and power delivery in equal measure with its cooling solution, the latter of which we detailed in our 1080 Ti FTW3 tear-down a few days ago.

Much of this has to do with the FTW VRM discussion of last year, something we closed the book on in November. Our conclusion was that the cards were operating within thermal spec, but that there were supply-side QA issues that happened to fall on EVGA. The engineering team decided to design for this by over-engineering every aspect of the VRM on the new ICX and 1080 Ti cards, something we see in today’s PCB analysis:

Our GTX 1080 Ti SC2 review was met with several comments (on YouTube, at least) asking where the FTW3 coverage was. Turns out, EVGA didn’t even have those cards until two days ago, and we had ours overnighted the same day. We’ve got initial testing under way, but wanted to share the tear-down process early to spoil some of the board. This tear-down of the EVGA GTX 1080 Ti FTW3 ($780) exposes the PCB and VRM design, fan header placement, and cooler design for the FTW3. We’re working with GN resident overclocker ‘Buildzoid’ for a full PCB + VRM analysis in the coming days, but have preliminary information at the ready.

EVGA’s 1080 Ti FTW3 is one of the most overbuilt PCBs we’ve seen in recent history. As stated in our SC2 review, the EVGA team has gone absolutely mental with thermal pad placement (following last year’s incident), and that’s carried over to the FTW3. But it’s more than just thermal pads (on literally every component, even those that have no business being cooled), it’s also the VRM design. This is a 10+2 phase card with doubling and dual FETs all across the board, using Alpha Omega Semiconductor E6930s for all the FETs. We’ll save the rest of the PCB + VRM discussion (including amperage and thermal capabilities) for Buildzoid’s deep-dive, which we highly encourage watching. That’ll go live within a few days.

We just posted the second part of our Titan Xp Hybrid mod, detailing the build-up process for adding CLCs to the Titan Xp. The process is identical to the one we detailed for the GTX 1080 Ti FE card, since the PCB is effectively identical between the two devices.

For this build, we added thermocouples to the VRAM and VRM components to determine whether Hybrid mods help or hurt VRAM temperatures (and, with that part of testing done, we have some interesting results). Final testing and benchmarking are being run now, with plans to publish by Monday.

In the meantime, check out part 2 below:

Thanks to GamersNexus reader ‘Grant,’ we were able to obtain a loaner nVidia Titan Xp (2017) card for review and thermal analysis. Grant purchased the card for machine learning and wanted to liquid cool the GPU, which happens to be something with which we’re well-versed. In the process, we’ll be reviewing the Titan Xp from a gaming standpoint, tearing it down, analyzing the PCB & VRM, and building it back into a liquid-cooled card. All the benchmarking is already done, but we’re opening our Titan Xp content string with a tear-down of the card.

Disassembling Founders Edition nVidia graphics cards tends to be a little more tool-intensive than most other GPU tear-downs. NVidia uses 2.0mm & 2.5mm Allen keys to secure the shroud to the baseplate, and then the baseplate to the PCB; additionally, a batch of ~16x 4mm hex heads socket through the PCB and into the baseplate, each of which hosts a small Phillips screw for the backplate.

The disassembly tutorial continues after this video version:

The RX 580, as we learned in the review process, isn’t all that different from its origins in the RX 480. The primary difference is in the voltage and frequency afforded to the GPU proper, with other changes stemming from maturation of the manufacturing process over the past year. This means most optimizations are relegated to power (when idle – not under load) and frequency headroom. Gains on the new cards are not from anything fancy – just driving more power through the GPU under load.

Still, we were curious as to whether AMD’s drivers would permit cross-RX series multi-GPU. We decided to throw an MSI RX 580 Gaming X and MSI RX 480 Gaming X into a configuration to get things close, then see what’d happen.

The short of it is that this works. There is no explicit inhibitor built in to forbid users from running CrossFire with RX 400 and RX 500 series cards, as long as you’re pairing a 470 with a 570 or a 480 with a 580. The GPU is the same, and frequency will just be matched to the slowest card, for the most part.

We think this will be a common use case, too. It makes sense: If you’re a current owner of an RX 480 and have been considering CrossFire (though we didn’t necessarily recommend it in previous content), the RX 580 will make the most sense for a secondary GPU. Well, primary, really – but you get the idea. The RX 400 series cards will see EOL and cease production in short order, if not already, which means that prices will stagnate and then skyrocket. That’s just what retailers do. Buying a 580, then, makes far more sense if you’re dying for a CrossFire configuration, and you could even move the 580 to the top slot for the best performance in single-GPU scenarios.

Our third and final interview featuring Scott Wasson, current AMD RTG team member and former EIC of Tech Report, has just gone live with information on GPU architecture. This video focuses more on a handful of reader and viewer questions, pooled largely from our Patreon backer Discord, with the big item being “GPU IPC.” Patreon backer “Streetguru” submitted the question, asking why a ~1300-1400MHz RX 480 could perform comparably to an ~1800MHz GTX 1060 card. It’s a good question – it’s easy to say “architecture,” but to learn more about the why of it, we turned to Wasson.

The main event starts at 1:04, with some follow-up questions scattered throughout Wasson’s explanation. We talk about pipeline stage length and its impact on performance, wider versus narrower machines with frequencies that match, and voltage “spent” on each stage.

We’ll leave this content piece primarily to the video, as Wasson does a good job of conveying the information quickly.
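To put rough numbers on the “wider versus narrower” point from the question – with the caveat that the shader counts, clocks, and peak-throughput math below are our own approximations for illustration, not figures from the interview – a wider GPU at a lower clock can post higher theoretical throughput than a narrower GPU at a higher clock:

```python
# Back-of-envelope illustration of "wider vs. narrower" GPUs.
# Peak FP32 throughput ~= shader count x 2 ops/clock (FMA) x clock speed.
# Shader counts and clocks are approximate public specs, used only for scale;
# real frame rates also depend on utilization, scheduling, and memory bandwidth.

def peak_fp32_tflops(shaders: int, clock_mhz: float) -> float:
    """Theoretical single-precision throughput in TFLOPS."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

rx_480 = peak_fp32_tflops(shaders=2304, clock_mhz=1350)    # wider machine, lower clock
gtx_1060 = peak_fp32_tflops(shaders=1280, clock_mhz=1800)  # narrower machine, higher clock

print(f"RX 480  @ ~1350MHz: {rx_480:.1f} TFLOPS")    # ~6.2
print(f"GTX 1060 @ ~1800MHz: {gtx_1060:.1f} TFLOPS")  # ~4.6
```

Peak TFLOPS obviously isn’t frame rate – how much of that width actually gets fed each clock is the architecture question Wasson digs into – but it shows why clock speed alone doesn’t settle the comparison.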

In light of both the House and Senate voting to reverse forthcoming privacy regulations, interest in privacy measures that can be taken by the end user is no doubt piqued. While there is no comprehensive solution to end all privacy woes—outside of, you know, stringent privacy laws—there are a few different steps that can be taken. A VPN (Virtual Private Network) is the big one, although VPNs come with a few caveats of their own. The Tor software offers the most ways to anonymize a user’s online presence and more, although it can be involved to set up and use. Smaller actions include adjusting DNS settings and using the HTTPS Everywhere extension.

Read on, as we delve into each of these in a bit more detail. This guide serves as a tutorial for setting up a VPN and protecting your privacy online.
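As one small, concrete example of the “adjusting DNS settings” idea – a minimal sketch of our own, not something from the guide itself – the lookup below goes to Google’s public DNS-over-HTTPS JSON API rather than the ISP’s default resolver, so the hostnames being resolved don’t cross the wire as plain-text UDP:

```python
# Minimal sketch: resolve a hostname via Google's public DNS-over-HTTPS JSON
# API (https://dns.google/resolve) instead of the ISP's default resolver.
# The query travels inside TLS rather than plain-text UDP on port 53, so
# intermediaries can't trivially log which hostnames are being looked up.
import requests  # third-party: pip install requests

def resolve_doh(hostname: str, record_type: str = "A") -> list[str]:
    resp = requests.get(
        "https://dns.google/resolve",
        params={"name": hostname, "type": record_type},
        timeout=5,
    )
    resp.raise_for_status()
    return [answer["data"] for answer in resp.json().get("Answer", [])]

print(resolve_doh("gamersnexus.net"))
```

This only covers DNS lookups – the sites you then connect to are still visible by IP – which is where the VPN or Tor comes in.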

Prior to the Ryzen launch, we discovered an issue with GTA V testing that would cause high-speed CPUs of a particular variety to stutter when achieving high framerates. Our first video didn’t conclude with a root cause, but we now believe the game is running into engine constraints – present in other RAGE engine games – that trigger choppy behavior on those CPUs. Originally, we only saw this on the best i5s – older-gen i5 CPUs were not affected, as they were not fast enough to exceed the framerate limiter in GTA V (~187FPS, or thereabouts), and so never encountered the stutters. The newest i5 CPUs, like the 7600K and 6600K, would post high framerates, but lose consistency in frametimes. For the end user, the solution would (interestingly) be to increase graphics quality or resolution, or to otherwise bring FPS down to around the 120-165 mark.

Then Ryzen came out, and then Ryzen 5 came out. With R5, we encountered a few stutters in GTA V when SMT was enabled and the CPU was operating under conditions that permitted it to achieve the same high framerates as Intel’s Core i5-7600K CPUs. To better illustrate, we can actually turn down graphics settings to the point of forcing framerates to the max on 4C/8T R5 CPUs, relinquishing some of the performance constraint, and then encounter hard stuttering. In short: a higher framerate overall would result in a much worse experience for the player, on both i5 and R5 CPUs. The 4C/8T R5 CPUs exhibited the same stutter behavior as the i5 CPUs most heavily when SMT was disabled, at which point we spit out a graph like this:

 
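For readers who want to put a number on that kind of stutter rather than eyeball a frametime plot, the usual approach is to look at the slowest tail of the frametime log – e.g., 1% and 0.1% lows. The sketch below is purely illustrative, using hypothetical frametime data rather than our capture tooling:

```python
# Illustrative only: compute average FPS and 1% / 0.1% low FPS from a list of
# frametimes (in milliseconds). Stutter shows up as a large gap between the
# average and the lows, even when the average looks high.
def fps_metrics(frametimes_ms: list[float]) -> dict[str, float]:
    worst_first = sorted(frametimes_ms, reverse=True)     # slowest frames first
    def low_fps(fraction: float) -> float:
        count = max(1, int(len(worst_first) * fraction))  # worst 1% / 0.1% of frames
        slowest = worst_first[:count]
        return 1000.0 / (sum(slowest) / len(slowest))
    return {
        "avg_fps": 1000.0 / (sum(frametimes_ms) / len(frametimes_ms)),
        "1%_low_fps": low_fps(0.01),
        "0.1%_low_fps": low_fps(0.001),
    }

# Hypothetical capped-out run: mostly ~5.3ms frames (~187FPS) with a few 40ms spikes.
frametimes = [5.3] * 990 + [40.0] * 10
print(fps_metrics(frametimes))  # high average FPS, but the 1% lows collapse to ~25FPS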

This content marks the beginning of our in-depth VR testing efforts, part of an ongoing test pattern that hopes to determine distinct advantages and disadvantages of today’s hardware. VR hasn’t been a performance-focused content topic for us before, but we believe it’s an important one for this release of Kaby Lake & Ryzen CPUs: Both brands have boasted high VR performance, “VR Ready” tags, and other marketing that hasn’t been validated – mostly because it’s hard to do so. We’re leveraging a hardware capture rig to intercept frames to the headsets, FCAT VR, and a suite of five games across the Oculus Rift & HTC Vive to benchmark the R7 1700 vs. the i7-7700K. This testing includes benchmarks at stock and overclocked configurations, totaling four devices under test (DUTs) across two headsets and five games. Although this is “just” 20 total tests (with multiple passes), the process takes significantly longer than testing our entire suite of GPUs: executing 20 of these VR benchmarks, ignoring parity tests, takes several days, whereas we could run the same count for a GPU suite and have it done in a day.

VR benchmarking is hard, as it turns out, and there are a number of imperfections in any existing test methodology for VR. We’ve got a solution to testing that has proven reliable, but in no way do we claim it to be perfect. Fortunately, by combining hardware and software capture, we’re able to validate numbers for each test pass. Using multiple test passes over the past five months of working with FCAT VR, we’ve also been able to build up a database that gives us a clear margin of error; to this end, we’ve added error bars to the bar graphs to help illustrate when results are within usual variance.
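As a simplified illustration of what those error bars represent – a generic sketch with made-up pass values, not our actual FCAT VR pipeline – each bar is the mean of repeated passes of the same test, and the error bar spans the pass-to-pass variance, so two results whose bars overlap shouldn’t be treated as meaningfully different:

```python
# Simplified sketch of deriving error bars from repeated benchmark passes:
# plot the mean of the passes and use the sample standard deviation as the
# +/- range. Not our actual FCAT VR pipeline; the values are hypothetical.
from statistics import mean, stdev

def summarize_passes(fps_results: list[float]) -> tuple[float, float]:
    """Return (mean FPS, +/- error) for repeated passes of one test."""
    return mean(fps_results), stdev(fps_results)

passes = [89.2, 90.1, 88.7, 89.6]        # hypothetical average-FPS passes
avg, err = summarize_passes(passes)
print(f"{avg:.1f} FPS +/- {err:.1f}")    # overlapping error bars => within usual variance
```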

