NVidia’s Titan Xp 2017 video card was announced without any pre-briefing for us, making it the second recent Titan X-class card to take us by surprise on launch day. The Titan Xp, as it turns out, isn’t necessarily targeted at gaming – though it does still bear the GeForce GTX mark. The Titan Xp follows the previous Titan X (which we called “Titan XP” to reduce confusion with the Maxwell-based Titan X before it), and knocks the 2016 Titan X out of its $1200 price bracket.

With the Titan Xp 2017 now firmly socketed into the $1200 category, there’s a $450-$500 gap between the GTX 1080 Ti at $700 MSRP ($750 common price) and the Titan Xp. Even with that big a gap, though, diminishing returns in gaming and consumer workloads are to be expected. Today, we’re benchmarking and reviewing the nVidia Titan Xp for gaming specifically, with additional thermal, power, and noise tests included. This card may be better deployed for neural net and deep learning applications, but that won’t stop enthusiasts from buying it simply to have “the best.” For them, we’d like to have some benchmarks online.

Razer is pulling back the curtain on a pair of high-end gaming mice: the wireless Razer Lancehead and the wired Razer Lancehead Tournament Edition. Razer touts the new mice as being “tournament-grade” in terms of accuracy, performance, and reliability. The two variants of the Razer Lancehead share many features; the sensor and Razer’s proprietary “Adaptive Frequency Technology” are the chief differentiators.

Razer Lancehead

The wireless Razer Lancehead—much like the refreshed Diamondback and high-end Mamba series—uses a 5G laser sensor with up to 50g acceleration and 16,000 DPI/210 inches per second tracking. The refreshed Diamondback and Mamba/Mamba TE all used a Philips Twin Eye sensor. It is unclear if that is the case with the Razer Lancehead, but given the specs, it’s plausible.
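For a sense of scale on those numbers, here’s a quick back-of-the-envelope calculation (ours, not Razer’s) of how many counts the sensor would generate at its rated limits; the 1000Hz polling rate below is an assumption, not a quoted spec:

# Back-of-the-envelope: counts generated at the sensor's rated limits
# (DPI and tracking speed from the spec sheet above).
DPI = 16_000          # counts per inch
MAX_SPEED_IPS = 210   # inches per second
POLL_RATE_HZ = 1_000  # assumed 1000Hz USB polling, not a quoted spec

counts_per_second = DPI * MAX_SPEED_IPS
counts_per_report = counts_per_second / POLL_RATE_HZ

print(f"{counts_per_second:,} counts/s")                  # 3,360,000 counts/s
print(f"{counts_per_report:,.0f} counts per 1ms report")  # 3,360

In other words, at full rated speed the sensor would be pushing a few thousand counts per report, well beyond what a typical swipe actually demands.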

EVGA’s GTX 1080 Ti SC2 ($720) card uses the same ICX cooler that we reviewed back in February, where we extensively detailed how the new solution works (including information on the negative temperature coefficient thermistors and accuracy validation of those sensors). To get caught up on ICX, we’d strongly recommend reading the first page of that review, then checking the thermal analysis for A/B testing versus ACX in an identical environment. As a fun addition, we’re also A/B testing the faceplate – it’s got all those holes in it, so we thought we’d close them off and see if they actually help with cooling.

The fast version is basically this: EVGA, responding to concerns about ACX last year, decided to fully reinvent its flagship cooler to better monitor and cool power components in addition to the GPU component. The company did this by introducing NTC thermistors to its PCB, used for measuring GPU backside temperature (rather useless in a vacuum, but more of a validation thing when considering last year’s backplate testing), memory temperature, and power component temperature. There are thermistors placed adjacent to 5 MOSFETs, 3 memory modules, and the GPU backside. The thermistors are not embedded in the package, but placed close enough to get an accurate reading for thermals in each potential hotspot. We previously validated these thermistors versus our own thermocouples, finding that EVGA’s readings were accurate to reality.
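EVGA doesn’t publish which thermistor parts or divider values it uses, but as a rough illustration of how an NTC reading becomes a temperature, here’s a minimal sketch using the common beta-parameter model; all component values below are hypothetical:

import math

# Minimal sketch: convert an NTC thermistor reading into a temperature using
# the beta-parameter model. Component values are hypothetical -- EVGA does not
# publish the exact parts or divider used on the ICX PCB.
R0 = 10_000.0       # thermistor resistance at 25C (ohms), hypothetical
T0 = 298.15         # 25C in kelvin
BETA = 3950.0       # beta constant from a typical NTC datasheet, hypothetical
R_FIXED = 10_000.0  # fixed resistor in the divider (ohms), hypothetical
V_SUPPLY = 3.3      # divider supply voltage (volts), hypothetical

def ntc_temperature_c(v_measured: float) -> float:
    """Voltage measured across the thermistor -> temperature in Celsius."""
    # Solve the voltage divider for the thermistor's resistance.
    r_ntc = R_FIXED * v_measured / (V_SUPPLY - v_measured)
    # Beta-parameter form of the Steinhart-Hart equation.
    inv_t = 1.0 / T0 + math.log(r_ntc / R0) / BETA
    return 1.0 / inv_t - 273.15

print(round(ntc_temperature_c(1.65), 1))  # 1.65V -> 25.0C with these values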

Although this is absolutely a unique, innovative approach to GPU cooling – no one else does it, after all – we found its usefulness to be primarily relegated to noise output. After all, a dual-fan ACX cooler was already enough to keep the GPU cool (and the FETs, with the help of some thermal pads), and ICX is still a dual-fan cooler. The ICX sensors primarily add a toy for enthusiasts to play with, as they won’t improve gaming performance in any meaningful way, though those enthusiasts could benefit from fine-tuning the fan curve to reduce VRM fan speeds. This would benefit noise levels, as the VRM fan doesn’t need to spin all that fast (FETs can take ~125C before they start losing efficiency in any meaningful way), and so the GPU and VRM fans can spin asynchronously to help with the noise profile. Out of the box, EVGA’s fan curve is a bit aggressive, we think – but we’ll talk about that later.
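To make the asynchronous-fan idea concrete, below is a sketch of the kind of split curve we’re describing. The breakpoints are invented for illustration and are not EVGA’s stock values:

# Sketch of a split (asynchronous) fan curve: keep the VRM/memory fan lazier
# than the GPU fan, since FETs tolerate far higher temperatures than the GPU
# core. Breakpoints are illustrative only -- not EVGA's stock curve.
GPU_CURVE = [(40, 30), (60, 45), (75, 70), (85, 100)]    # (temp C, fan %)
VRM_CURVE = [(50, 25), (80, 35), (100, 60), (115, 100)]  # (temp C, fan %)

def fan_percent(curve, temp_c):
    """Linearly interpolate a fan duty cycle from a (temp, %) curve."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t1, p1), (t2, p2) in zip(curve, curve[1:]):
        if temp_c <= t2:
            return p1 + (p2 - p1) * (temp_c - t1) / (t2 - t1)
    return curve[-1][1]

print(fan_percent(GPU_CURVE, 70))  # ~61.7% on the GPU fan at 70C
print(fan_percent(VRM_CURVE, 70))  # ~31.7% on the VRM fan at the same 70C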

Thanks to GamersNexus reader ‘Grant,’ we were able to obtain a loaner nVidia Titan Xp (2017) card for review and thermal analysis. Grant purchased the card for machine learning and wanted to liquid cool the GPU, which happens to be something with which we’re well-versed. In the process, we’ll be reviewing the Titan Xp from a gaming standpoint, tearing it down, analyzing the PCB & VRM, and building it back into a liquid-cooled card. All the benchmarking is already done, but we’re opening our Titan Xp content string with a tear-down of the card.

Disassembling Founders Edition nVidia graphics cards tends to be a little more tool-intensive than most other GPU tear-downs. NVidia uses screws requiring 2.0mm & 2.5mm Allen keys to secure the shroud to the baseplate, and then the baseplate to the PCB; additionally, a batch of ~16x 4mm hex heads socket through the PCB and into the baseplate, each of which hosts a small Phillips screw for the backplate.

The disassembly tutorial continues after this video version:

AMD Radeon Pro Duo 2.0: Same Name, New GPUs


AMD’s taken a page out of nVidia’s book, apparently, and nVidia probably took that page from Apple – or any number of other companies that elect to re-use product names. The new Radeon Pro Duo uses the same name as last year’s launch, but has updated the internals.

DDR4 prices continue to increase with no sign of slowing down, but this week Newegg has a sale on a 16GB kit of DDR4 from G.SKILL for $102. Sadly, that counts as a good deal in the current market. Mechanical storage has become cheaper as more of the market adopts SATA and M.2 SSDs, and there’s currently a sale on a Seagate 4TB mechanical drive for $100. We’ve also highlighted a decent deal on a 750W PSU from Thermaltake, in addition to a discount on the Ryzen 7 1700 CPU.

The RX 580, as we learned in the review process, isn’t all that different from its origins in the RX 480. The primary difference is in voltage and frequency afforded to the GPU proper, with other changes manifesting in maturation of the process over the past year of manufacturing. This means most optimizations are relegated to power (when idle – not under load) and frequency headroom. Gains on the new cards are not from anything fancy – just driving more power through under load.

Still, we were curious as to whether AMD’s drivers would permit cross-RX-series multi-GPU. We decided to throw an MSI RX 580 Gaming X and an MSI RX 480 Gaming X into the same system, keeping the card models as close as possible, and see what’d happen.

The short of it is that this works. There is no explicit inhibitor built in to forbid users from running CrossFire with RX 400 and RX 500 series cards, as long as you’re doing 470/570 or 480/580. The GPU is the same, and frequency will just be matched to the slowest card, for the most part.

We think this will be a common use case, too. It makes sense: if you’re a current owner of an RX 480 and have been considering CrossFire (though we didn’t necessarily recommend it in previous content), the RX 580 makes the most sense for a secondary GPU. Well, primary, really – but you get the idea. The RX 400 series cards will see EOL and cease production in short order, if not already, which means that prices will stagnate and then skyrocket. That’s just what retailers do. Buying a 580, then, makes far more sense if you’re dying for a CrossFire configuration, and you could even move the 580 to the top slot for best performance in single-GPU scenarios.

Our third and final interview featuring Scott Wasson, current AMD RTG team member and former EIC of Tech Report, has just gone live with information on GPU architecture. This video focuses on a handful of reader and viewer questions, pooled largely from our Patreon backer Discord, with the big item being “GPU IPC.” Patreon backer “Streetguru” submitted the question, asking why a ~1300-1400MHz RX 480 can perform comparably to an ~1800MHz GTX 1060. It’s a good question – it’s easy to say “architecture,” but to learn more about the why, we turned to Wasson.
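As a rough framing of the question (ours, not Wasson’s answer): on paper, raw FP32 throughput at those clocks actually favors the RX 480. The shader counts below are the published figures for each card, and the math is simply shaders x 2 ops per FMA x clock:

# Back-of-the-envelope FP32 throughput for the two cards in the question.
# Shader counts are the published figures; clocks are the ballpark numbers
# from the question above.
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000  # 2 ops per clock per shader (FMA)

rx_480 = fp32_tflops(2304, 1.35)    # ~6.2 TFLOPS
gtx_1060 = fp32_tflops(1280, 1.80)  # ~4.6 TFLOPS

print(f"RX 480:   {rx_480:.1f} TFLOPS")
print(f"GTX 1060: {gtx_1060:.1f} TFLOPS")

Raw throughput favors the RX 480, yet gaming performance lands close between the two, which is exactly why “architecture” is the interesting part of the answer.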

The main event starts at 1:04, with some follow-up questions scattered throughout Wasson’s explanation. We talk about pipeline stage length and its impact on performance, wider versus narrower machines and the clocks each can sustain, and the voltage “spent” on each pipeline stage.
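As a hedged illustration of that “voltage spent per stage” point, the usual first-order model is dynamic power proportional to capacitance x voltage-squared x frequency. The numbers below are invented purely to show the trade-off, not measured values for any real GPU:

# First-order dynamic power model: P ~ units * V^2 * f.
# All figures are invented for illustration; the point is the V^2 term, which
# is why a wider machine at lower clock and voltage can match a narrower,
# faster one on throughput while spending less power.
def relative_power(units: int, voltage: float, freq_ghz: float) -> float:
    return units * voltage**2 * freq_ghz

def relative_throughput(units: int, freq_ghz: float) -> float:
    return units * freq_ghz

wide   = dict(units=28, voltage=0.95, freq_ghz=1.3)   # wider, lower clock
narrow = dict(units=20, voltage=1.10, freq_ghz=1.8)   # narrower, higher clock

for name, cfg in (("wide", wide), ("narrow", narrow)):
    print(name,
          f"throughput={relative_throughput(cfg['units'], cfg['freq_ghz']):.1f}",
          f"power={relative_power(cfg['units'], cfg['voltage'], cfg['freq_ghz']):.1f}")
# wide:   throughput=36.4, power=32.9
# narrow: throughput=36.0, power=43.6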

We’ll leave this content piece primarily to the video, as Wasson does a good job of conveying the information quickly.

AMD’s Polaris refresh primarily features a BIOS overhaul, which assists in power management during idle or low-load workloads, but the cards also ship with natively higher clocks and additional overvoltage headroom. Technically, an RX 400-series card could be flashed to its 500-series counterpart, though we haven’t begun investigating that just yet. The reasoning is that the change between the two series is so small; this is not meant to be an upgrade for existing 400-series users, but an option for buyers in the market for a completely new system.

We’ve already opened our RX 580 coverage with the MSI RX 580 Gaming X review, a $245 card that competes closely with the EVGA GTX 1060 SSC ($250) alternative from nVidia. Performance was close enough for back-and-forth trades depending on the game, with power draw boosted over the 400 series under load and lowered when idle. This review of the Gigabyte RX 570 4GB Aorus card benchmarks performance versus the RX 470, 480, 580, and GTX 1050 Ti and 1060 cards. We're looking at power consumption, thermals, and FPS.

There’s no new architecture to speak of here. Our RX 480 initial review from last year covers all relevant aspects of architecture for the RX 500 series; if you’re behind on Polaris (or it’s been a while) and need a refresher on what’s happening at a silicon level, check our initial RX 480 review.

It’s been a few months since our last PC build--in fact, it was published well before Ryzen was released. For our first post-Ryzen build, we’ve pulled together some of the components we liked best in testing to make an affordable ultrawide gaming machine. As we did in January, we pulled parts out of inventory and actually assembled and tested this PC to back up our recommendations--we’ll try to continue doing this going forward.

This gaming PC build is priced at just over $1000 -- about $1200, depending on rebates -- and is made for UltraWide 3440x1440 gaming. Our goal is to take reasonably affordable parts and show that UltraWide 1440p gaming is feasible, even while retaining high settings, without buying the most expensive GPUs and CPUs on the market. We’re only using parts in this build that we actually have, so that partially dictates cost (yes, you might be able to do some things cheaper -- like the motherboard), but it also means that we’ve had time to build, validate, and use the system in a real environment. In these early days of Ryzen as a new uarch, that’s important. We’ve done the hard work of troubleshooting a functional build. All you’d have to do is assemble it, configure BIOS, and go.

As a note: This build is also readily capable of production workloads. CUDA acceleration on the GTX 1070 will work well for Premiere renders, and the CPU thread-count will assist in CPU acceleration (like for streaming).
