Hardware Guides

After pointing out that Intel’s budget-option Pentium G4560 CPU somewhat invalidates the Intel i3 lineup, particularly when that lineup is flanked by i5s and R5s, the next question was how good a GPU can be paired with the G4560. Someone buying a $70 CPU won’t likely be buying a GTX 1080 – and probably not a 1070 – but we wanted to see how far up the scale we could go before encountering a CPU bottleneck. This kind of test has all manner of variables, naturally, so we’ve done our best to constrain them; the biggest is the choice of games tested. Depending on graphics settings, GPU constraints could be imposed all over the place. We decided to opt for what we thought was a somewhat realistic test: We took the G4560, paired it with GPUs ranging from ~$115 to ~$600, and then configured graphics to high/ultra at 1080p. We then included titles known to choke on the CPU, titles known to be more GPU-constrained, and titles balanced in the middle. This gives a wide breadth of tested content (FPS, RTS, and popular titles) from which we can draw some conclusions.
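To illustrate the shape of the data we’re looking for, here’s a minimal sketch – with hypothetical FPS numbers, not our results – of how a CPU bottleneck shows up: average FPS climbs with each GPU tier until the CPU becomes the limiter, at which point the gains flatten out.

```python
# Hypothetical numbers for illustration only -- not our benchmark results.
# Once the CPU becomes the limiter, faster GPUs stop adding meaningful FPS.
avg_fps_by_gpu = {
    "GPU tier 1 (~$115)": 58,
    "GPU tier 2": 96,
    "GPU tier 3": 118,
    "GPU tier 4": 122,   # gain shrinks: the G4560 is now the limiter
    "GPU tier 5 (~$600)": 123,
}

def first_bottlenecked_gpu(results, threshold=0.05):
    """Return the first GPU whose gain over the previous tier is under 5%."""
    gpus = list(results)
    for prev, curr in zip(gpus, gpus[1:]):
        gain = (results[curr] - results[prev]) / results[prev]
        if gain < threshold:
            return curr
    return None

print(first_bottlenecked_gpu(avg_fps_by_gpu))  # -> "GPU tier 4"
```

In practice we read this off the charts rather than a script, but the logic is the same: the tier at which scaling flattens marks the point past which the G4560 is the constraint.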

We are using the Pentium G4560 for this test, naturally. Included in our Intel Pentium G4560 GPU bottleneck test are the following GPUs (listed in order of price):

We came away from our revisit of the once-king Sandy Bridge 2600K and 2500K CPUs impressed by the staying power of products that came out in Q1 2011, considering Intel’s unimpressive gains since that time.

At the time of Sandy Bridge’s release, AMD’s flagship CPUs were 45nm K10-based Phenom IIs, designed to compete on price/performance with the 45nm Lynnfield (Nehalem i5) quad-cores. Later that year, AMD’s underwhelming Bulldozer architecture would launch and eventually replace the Phenom line. Given that we’ve already looked at Intel’s Q1 2011 offerings, we decided to revisit AMD’s Phenom II CPUs in 2017, including the Phenom II X6 1090T (Black Edition) and Phenom II X6 1055T. These benchmarks look at AMD Phenom II performance in gaming and production workloads for the modern era, including comparisons to the similarly aged Sandy Bridge CPUs, modern Ryzen 5 & 7 CPUs, and modern Intel CPUs.

Our Titan Xp Hybrid mod is done, soon to be shipped back to its owner in its new condition. Liquid cooling mods have historically served as a means to better understand how a GPU could perform given a better cooler, and are often conducted on cards with reference coolers. The Titan Xp won’t have AIB partner cooler models, so building a Hybrid card gives us a glimpse into what could have been.

It’s also not a hard mod to do – an hour tops, maybe a bit more for those who are more hesitant – and costs $100 for the Hybrid kit. Against the $1200 purchase price of the card, that’s not a big ask.

In today’s benchmarks and conclusion of the Titan Xp Hybrid mod, we’ll cover thermals and noise levels extensively, look at overclocking, and throw in some gaming benchmarks.

GN resident overclocker ‘Buildzoid’ just finished digging through the details of EVGA’s GTX 1080 Ti FTW3 ($780) video card, noting that the card is one of the most overbuilt 1080 Tis we’ve seen yet. The FTW3 over-engineers its power delivery (VRM) and its cooling solution in equal measure, the latter of which we detailed in our 1080 Ti FTW3 tear-down a few days ago.

Much of this has to do with the FTW VRM discussion of last year, something we closed the book on in November. Our conclusion was that the cards were operating within thermal spec, but that there were supply-side QA issues that happened to fall on EVGA. The engineering team decided to design for this by over-engineering every aspect of the VRM on the new ICX and 1080 Ti cards, something we see in today’s PCB analysis:

Our GTX 1080 Ti SC2 review was met with several comments (on YouTube, at least) asking where the FTW3 coverage was. As it turns out, EVGA didn’t even have those cards until two days ago, and we had ours overnighted the same day. We’ve got initial testing underway, but wanted to share the tear-down process early as a preview of the board. This tear-down of the EVGA GTX 1080 Ti FTW3 ($780) exposes the PCB and VRM design, fan header placement, and cooler design for the FTW3. We’re working with GN resident overclocker ‘Buildzoid’ on a full PCB + VRM analysis in the coming days, but have preliminary information at the ready.

EVGA’s 1080 Ti FTW3 is one of the most overbuilt PCBs we’ve seen in recent history. As stated in our SC2 review, the EVGA team has gone absolutely mental with thermal pad placement (following last year’s incident), and that’s carried over to the FTW3. But it’s more than just thermal pads (on literally every component, even those that have no business being cooled) – it’s also the VRM design. This is a 10+2 phase card with doubling and dual FETs all across the board, using Alpha & Omega Semiconductor E6930s for all the FETs. We’ll save the rest of the PCB + VRM discussion (including amperage and thermal capabilities) for Buildzoid’s deep-dive, which we highly encourage watching. That’ll go live within a few days.

We just posted the second part of our Titan Xp Hybrid mod, detailing the build-up process for adding a CLC to the Titan Xp. The process is the same one we detailed for the GTX 1080 Ti FE card, since the PCB is effectively identical between the two devices.

For this build, we added thermocouples to the VRAM and VRM components to try to determine whether Hybrid mods help or hurt VRAM temperatures (and, with that part of testing done, we have some interesting results). Final testing and benchmarking are being run now, with plans to publish by Monday.

In the meantime, check out part 2 below:

Thanks to GamersNexus reader ‘Grant,’ we were able to obtain a loaner nVidia Titan Xp (2017) card for review and thermal analysis. Grant purchased the card for machine learning and wanted to liquid cool the GPU, which happens to be something we’re well-versed in. In the process, we’ll be reviewing the Titan Xp from a gaming standpoint, tearing it down, analyzing the PCB & VRM, and building it back into a liquid-cooled card. All the benchmarking is already done, but we’re opening our Titan Xp content series with a tear-down of the card.

Disassembling Founders Edition nVidia graphics cards tends to be a little more tool-intensive than most other GPU tear-downs. nVidia uses 2.0mm & 2.5mm Allen keys to secure the shroud to the baseplate, and then the baseplate to the PCB; additionally, a batch of ~16x 4mm hex heads socket through the PCB and into the baseplate, each of which hosts a small Phillips screw for the backplate.

The disassembly tutorial continues after this video version:

The RX 580, as we learned in the review process, isn’t all that different from its origins in the RX 480. The primary difference is in the voltage and frequency afforded to the GPU proper, with other changes coming from maturation of the manufacturing process over the past year. This means most optimizations are relegated to power (when idle – not under load) and frequency headroom. Gains on the new cards are not from anything fancy – just from driving more power through the silicon under load.

Still, we were curious as to whether AMD’s drivers would permit cross-series multi-GPU. We decided to throw an MSI RX 580 Gaming X and an MSI RX 480 Gaming X into one configuration – as close a pairing as we could get – and see what would happen.

The short of it is that this works. There is no explicit inhibitor built in to forbid users from running CrossFire with RX 400 and RX 500 series cards, as long as you’re pairing a 470 with a 570 or a 480 with a 580. The GPU itself is the same, and the clock frequency will, for the most part, simply be matched to the slower card.

We think this will be a common use case, too. It makes sense: If you’re a current owner of an RX 480 and have been considering CrossFire (though we didn’t necessarily recommend it in previous content), the RX 580 will make the most sense for a secondary GPU. Well, primary, really – but you get the idea. The RX 400 series cards will see EOL and cease production in short order, if not already, which means that prices will stagnate and then skyrocket. That’s just what retailers do. Buying a 580, then, makes far more sense if you’re dying for a CrossFire configuration, and you could even move the 580 to the top slot for best performance in single-GPU scenarios.

Our third and final interview featuring Scott Wasson, current AMD RTG team member and former EIC of Tech Report, has just gone live with information on GPU architecture. This video focuses on a handful of reader and viewer questions, pooled largely from our Patreon backer Discord, with the big item being “GPU IPC.” Patreon backer “Streetguru” submitted the question, asking why a ~1300-1400MHz RX 480 can perform comparably to an ~1800MHz GTX 1060. It’s a good question – it’s easy to say “architecture,” but to learn more about the why of it, we turned to Wasson.

The main event starts at 1:04, with some follow-up questions scattered throughout Wasson’s explanation. We talk about pipeline stage length and its impact on performance, wider versus narrower machines with frequencies that match, and voltage “spent” on each stage.
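As a back-of-envelope illustration of the wider-versus-narrower point (our arithmetic, not Wasson’s – the shader counts are published specs, and the clocks are the rough figures from the question above): the RX 480 is the wider machine, the GTX 1060 the faster-clocked one, and raw per-second throughput lands in the same neighborhood, which is why clock speed alone tells you little.

```python
# Rough peak FP32 throughput: shaders x 2 ops/clock (fused multiply-add) x clock.
def peak_tflops(shaders, clock_ghz, ops_per_clock=2):
    return shaders * ops_per_clock * clock_ghz / 1000.0

rx_480   = peak_tflops(2304, 1.3)  # wide machine, lower clock
gtx_1060 = peak_tflops(1280, 1.8)  # narrower machine, higher clock
print(f"RX 480: {rx_480:.1f} TFLOPS, GTX 1060: {gtx_1060:.1f} TFLOPS")
# -> RX 480: 6.0 TFLOPS, GTX 1060: 4.6 TFLOPS
```

How efficiently each architecture keeps those units fed is the part that raw TFLOPS can’t capture – and that’s the “architecture” answer Wasson unpacks in the video.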

We’ll leave this content piece primarily to video, as Wasson does a good job of conveying the information quickly.

In light of both the House and Senate voting to reverse forthcoming privacy regulations, interest in privacy measures that can be taken by the end user is no doubt piqued. While there is no comprehensive solution to end all privacy woes – outside of, you know, stringent privacy laws – there are a few different steps that can be taken. A VPN (Virtual Private Network) is the big one, although VPNs come with a few caveats of their own. The Tor software offers the most ways to anonymize a user’s online presence and more, although it can be involved. Smaller actions include adjusting DNS settings and using the HTTPS Everywhere extension.

Read on, as we will delve into these in a bit more detail. This guide serves as a tutorial for setting up a VPN and protecting your privacy online.
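As one small, scriptable sanity check for the VPN step (a minimal sketch – it assumes the public IP-echo service at api.ipify.org is reachable from your machine): compare your visible public IP before and after connecting. If the address doesn’t change to the VPN endpoint’s, your traffic isn’t actually being tunneled.

```python
# Minimal VPN sanity check: your visible public IP should change after connecting.
# Assumes the public IP-echo service at api.ipify.org is reachable.
from urllib.request import urlopen

def public_ip():
    with urlopen("https://api.ipify.org") as response:
        return response.read().decode("ascii")

print("Current public IP:", public_ip())
# Run once before connecting the VPN and once after -- the second address
# should belong to the VPN endpoint, not your ISP.
```

The same check also catches a VPN connection that has silently dropped and fallen back to your normal route.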
