This episode of Ask GN opens by answering the most common question we’ve seen in the past 24 hours: should you buy now or wait for Volta? That starts us off, followed by clarification of VRM quality, a history lesson on AM4 motherboards at launch and HIS’ existence, and silicon death from overclocking. The episode runs about 25 minutes, with each question timestamped within the video. The timestamps and questions are also marked below, if you’d like to jump to a particular topic of interest.
The Volta topic, we think, is among the most interesting and common for questions right now. This seems to come around for every new architecture, and our answers are generally the same. Find out more below!
The pre-Christmas holiday sales continue in the PC hardware world, with some lingering Black Friday and Cyber Monday deals still available. Right now, the Cooler Master MasterCase Pro 5 is heavily discounted, alongside the Corsair Carbide SPEC-04 case and the Logitech G900 mouse.
NVIDIA introduced its new Titan V GPU, which the company heralds as the “world’s most powerful GPU for the PC.” The Titan V graphics card is targeted at scientific calculation and simulation, and very clearly drops any and all “GTX” or “gaming” branding.
The Titan V hosts 21.1B transistors (for perspective: the 1080 Ti has 12B, the P100 has 15.3B), is capable of driving 110TFLOPS of Tensor compute, and uses the Volta GPU architecture. We are uncertain of the lower-level specs and do not presently have a block diagram for the card; we have asked for both sets of data.
AMD’s partner cards have been on hold for review for a while now. We first covered the Vega 64 Strix when we received it, around October 8th. The PowerColor card came in before Thanksgiving in the US, and immediately exhibited similar clock reporting and frequency bugginess on older driver revisions. AMD’s driver version 17.11.4 solved some of those problems – theoretically, anyway. There are still known issues with clock behavior in 17.11.4, but we wanted to test whether the drivers would play nice with the partner cards. For now, our policy is this: (1) we will review the cards immediately upon consumer availability or pre-order, as that is when people will need to know whether they’re any good; (2) we will review the cards when either the manufacturer declares them ready, or when the cards appear to be functioning properly.
This benchmark is looking at the second option: We’re testing whether the ASUS Strix Vega 64 and PowerColor Red Devil 64 are ready for benchmarking, and looking at how they stack up against the reference RX Vega 64. Theoretically, the cards should run slightly higher clocks, and therefore perform better. Now, PowerColor has set its clock target at 1632MHz across the board, but “slightly higher clocks” doesn’t just mean the clock target – it also means power budget, which board partners control. Either one of these, particularly in combination with superior cooling, should result in higher sustained boost clocks, and therefore higher framerates or scores.
We need some clarity on this issue, it seems.
TLDR: Some AMD RX 560 graphics cards are selling with 2 CUs disabled, resulting in 896 streaming processors rather than the initially advertised 1024 (64 SPs per CU). Here’s the deal: that card already exists, and it’s called an RX 460; in fact, the first two lines of our initial RX 560 review explicitly state that the driving differentiator between the 460 and the 560, aside from the boosted clocks, was two additional pre-enabled CUs. AMD RX 460s could already be unlocked to 16 CUs; the RX 560 was a card that offered that configuration stock, rather than forcing a VBIOS flash and driver signature.
The RX 560 with 2 CUs disabled, then, is not a new graphics card. It is an RX 460. We keep getting requests to test the “new” RX 560 against the “old” RX 560 with 1024 SPs. We already did: the RX 560 review contains numbers versus the RX 460, which is (literally) a 14CU RX 560. It is a rebrand, and likely an attempt to dump stock before end of year.
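The CU-to-SP arithmetic above is simple enough to sanity-check in a few lines. A minimal sketch (the 64-SPs-per-CU figure and the CU counts come from the article; the function name is ours):

```python
# Polaris-era AMD GPUs: each Compute Unit (CU) contains 64 streaming processors (SPs).
SPS_PER_CU = 64

def stream_processors(compute_units: int) -> int:
    """Return the total SP count for a GPU with the given number of enabled CUs."""
    return compute_units * SPS_PER_CU

# Full RX 560: 16 CUs enabled -> 1024 SPs, as initially advertised.
print(stream_processors(16))  # 1024
# Cut-down "RX 560" (i.e. an RX 460): 2 CUs disabled -> 14 CUs -> 896 SPs.
print(stream_processors(14))  # 896
```

The 896-SP figure falls out directly: disabling 2 of 16 CUs removes 2 × 64 = 128 SPs from the 1024-SP part.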
Recapping our previous X299 VRM thermal coverage, we found the ASUS X299 Rampage VI Extreme motherboard to operate against its throttle point when pushing higher overclocks (>4GHz) on the i9-7980XE CPU. The conclusion of that content was, ultimately, that ASUS wasn’t necessarily at fault, but that we must ask whether it is reasonable to expect such a board to take the 500-600W throughput of an overclocked 7980XE CPU. EVGA has now arrived on the scene with its X299 DARK motherboard, seemingly the first non-WS motherboard this year to use a fully finned VRM heatsink. Our EVGA X299 DARK review will initially look at temperatures and VRM throttling on the board, and ultimately look into how much the heatsink design impacts performance.
EVGA went crazy with its X299 DARK motherboard. The craziest thing it did, evidently, was add a real heatsink: the heatsink has actual fins, through which a heatpipe routes toward the IO and into another, decidedly less finned, large aluminum block. The tiny fans on top of the board look a little silly, but we also found them unnecessary in most use cases: just having a real heatsink gets the board far enough, it turns out, and the brilliance of the PCH fan is that it pushes air through the M.2 slots and the heatsink near the IO.
EVGA’s X299 DARK motherboard uses some brilliant design choices, but also some that are pretty basic. A heatsink with fins, for one, is about as obvious as it gets: more surface area spreads heat further, and lets fans more readily dissipate that heat. The four extra phases on the motherboard further help EVGA spread heat over a wider area. EVGA individually places thermal pads on each MOSFET rather than using one large strip, which is mostly just good attention to detail; theoretically, this improves cooling performance, but not necessarily measurably. Two fans sit atop the heatsink and run upwards of 10,000RPM, with a third, larger fan located over the PCH. The PCH only consumes a few watts and has no need for active cooling, but the fan is positioned such that (A) it’s larger, and therefore quieter and more effective, and (B) it can push air down the M.2 chamber for active cooling, then force that air into the IO shroud. The second half of the VRM heatsink (connected to the finned sink via heatpipe) hides under that shroud, and the airflow from the PCH fan passes through it before exhausting out of the IO shield. Making a 90-degree turn does mean losing about 30% of the pressure, and the heatsink is far from the PCH, but it’s enough to get heat out of the hotbox that the shroud creates.
Here's an example of what clock throttling looks like when encountering VRM temperature limits, as demonstrated in our Rampage VI Extreme content:
The revolution of 200mm fans was a short-lived one. Large fans are still around, but the brief, bombastic era of sticking a 200mm fan in every slot didn’t last long: the CM HAF X, NZXT Phantom 820, SilverStone Raven RV02 (180mm), Throne & Thor, and 500R all carry designs that have largely been replaced in the market. That replacement comes in the form of an obsession with the word “sleek” in marketing materials, which generally means flat, unvented paneling that conflicts with the poorer static pressure performance of large fans. That’s not to say 200mm fans are inherently good or bad, just that the industry has trended away from them.
That is, until the Cooler Master H500P, which runs 2x MasterFan MF200R units dead-center, fully garnished with RGB LEDs. We didn’t necessarily like the H500P in its stock configuration (though we did fix it), but we know the case is popular, and it’s the best test bench available for 200mm fans. There’s a good chance that purchasers of the NF-A20 are buying them for the H500P.
And that’s what we’re reviewing today: the Noctua NF-A20 200mm fans versus the Cooler Master MasterFan MF200Rs that come stock with the H500P. The MF200R fans will almost certainly become available separately at some point, but presently ship only with the H500P.
Jon Peddie Research reports that the AIB market is likely returning to normal seasonal trends, meaning the market will be flat or moderately down from Q4 2017 through Q1 2018.
In a typical year, the AIB market is flat/down in Q1, down in Q2, up in Q3, and flat/up in Q4. The most dramatic change is usually from Q2 to Q3, on average a 14.4% increase (over the past 10 years). Q3 2016 was roughly twice that average with more than 15 million AIBs shipped, 29.1% more than Q2 and a 21.5% increase year-over-year.
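The percentages above can be tied together with basic growth arithmetic. A quick sketch using only the figures quoted from the JPR report (the back-calculated Q2 2016 number is illustrative, derived from the “more than 15 million” Q3 figure, not stated by JPR):

```python
def pct_change(current: float, previous: float) -> float:
    """Percent change from one period to the next (QoQ or YoY)."""
    return (current - previous) / previous * 100

# The article's Q3 2016 jump (29.1% QoQ) against the 10-year Q2->Q3 average (14.4%):
avg_q2_to_q3 = 14.4
q3_2016_growth = 29.1
print(round(q3_2016_growth / avg_q2_to_q3, 2))  # 2.02 -> "roughly twice that average"

# Back out the implied Q2 2016 shipments from ~15M units in Q3 at +29.1% QoQ:
q3_units_m = 15.0
q2_units_m = q3_units_m / (1 + q3_2016_growth / 100)
print(round(q2_units_m, 1))  # ~11.6 million AIBs
```

This is just a consistency check on the reported figures; the actual quarterly unit counts come from JPR’s data.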
Cyber Monday is over, but “Black Friday” is now “Black November.” December is also Black November. Next year is Black 2018. Welcome to the future, where the deals are infinite and yet 8GB of memory still costs nearly $100.
Regardless, we’ve got a few that are worthwhile: There’s an i7-7700K for $290, an EVGA SuperNova 650 for $65, and an AMD bundle kit below.
“Chassis” is a pretty loose term here. The Thermaltake Core P90 follows the Core P3 and Core P5 lines, but only insofar as being an open-air, semi-exposed, bench-style “case.” It’s more of a mounting board for parts, really, presenting them in a triangular layout with the board and VGA on flanking sides.
The case includes 2x 5mm tempered glass side panels (though we think it might be a decent bench platform without the glass), mounts the power supply within the central frame, and is dotted with cable routing holes on both component-hosting panels. This case remains wall-mountable, just like its P3 and P5 predecessors, though it may be a bit unwieldy to get onto the stud mounts, if for no other reason than radiator support up to 480mm. That’s a lot of liquid to hang on the wall.