NVidia introduced its new Titan V GPU, which the company heralds as the “world’s most powerful GPU for the PC.” The Titan V graphics card is targeted at scientific calculations and simulation, and very clearly drops any and all “GTX” or “gaming” branding.

The Titan V hosts 21.1B transistors (for perspective: the 1080 Ti has 12B, the P100 has 15.3B), is capable of driving 110 TFLOPS of Tensor compute, and uses the Volta GPU architecture. We are uncertain of the lower-level specs and do not presently have a block diagram for the card. We have asked for both sets of data.

AMD’s partner cards have been on hold for review for a while now. We first covered the Vega 64 Strix when we received it, around October 8th. The PowerColor card came in before Thanksgiving in the US, and it immediately exhibited similar clock-reporting and frequency bugginess with older driver revisions. AMD has since released driver version 17.11.4, which solved some of those problems – theoretically, anyway. There are still known issues with clock behavior in 17.11.4, but we wanted to test whether the drivers would play nicely with the partner cards. For now, our policy is this: (1) we will review the cards immediately upon consumer availability or pre-order, as that is when people need to know whether they’re any good; or (2) we will review the cards when the manufacturer declares them ready, or when the cards appear to be functioning properly.

This benchmark looks at the second option: we’re testing whether the ASUS Strix Vega 64 and PowerColor Red Devil 64 are ready for benchmarking, and how they stack up against the reference RX Vega 64. Theoretically, the partner cards should run slightly higher clocks and should therefore perform better. PowerColor has set its clock target at 1632MHz across the board, but “slightly higher clocks” doesn’t just mean the clock target – it also means the power budget, which board partners control. Either of these, particularly in combination with superior cooling, should result in higher sustained boost clocks, which would in turn yield higher framerates or scores.

We need some clarity on this issue, it seems.

TLDR: Some AMD RX 560 graphics cards are selling with 2 CUs disabled, resulting in 896 streaming processors instead of the initially advertised 1024 (64 SPs per CU). Here’s the deal: that card already exists, and it’s called an RX 460; in fact, the first two lines of our initial RX 560 review explicitly state that the driving differentiator between the 460 and 560, aside from the boosted clocks, was a pre-enabled set of 2 CUs. The AMD RX 460 could already be unlocked to 16 CUs, and the RX 560 simply offered that configuration stock, rather than forcing a VBIOS flash and driver signature.
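For those keeping score, the CU-to-SP math is simple enough to sanity-check. Below is a minimal sketch using only the 64-SPs-per-CU figure cited above:

```python
# Stream processor counts derived from compute unit (CU) counts,
# using the 64 SPs-per-CU figure cited in the article.
SP_PER_CU = 64

def stream_processors(compute_units: int) -> int:
    """Total stream processors for a given CU count."""
    return compute_units * SP_PER_CU

print(stream_processors(16))  # 1024 -- RX 560 as originally advertised (16 CUs)
print(stream_processors(14))  # 896  -- cut-down "RX 560" / RX 460 (14 CUs, 2 disabled)
```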

The RX 560 with 2 CUs disabled, then, is not a new graphics card. It is an RX 460. We keep getting requests to test the “new” RX 560 versus the “old” RX 560 with 1024 SPs. We already have: the RX 560 review contains numbers versus the RX 460, which is (literally) a 14-CU RX 560. It is a rebrand, and it’s likely an attempt to dump stock for EOY.

Jon Peddie Research reports that the AIB market is likely returning to normal seasonal trends, meaning the market will be flat or moderately down from Q4 2017 through Q1 2018.

In a typical year, the AIB market is flat/down in Q1, down in Q2, up in Q3, and flat/up in Q4. The most dramatic change is usually from Q2 to Q3, on average a 14.4% increase (over the past 10 years). Q3 2016 was roughly twice that average with more than 15 million AIBs shipped, 29.1% more than Q2 and a 21.5% increase year-over-year.
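Just to make the stated comparison concrete, here’s a quick back-of-the-envelope check using only the figures quoted above (the implied Q2 2016 volume is our own back-calculation, not a JPR figure):

```python
# Sanity check on the seasonal figures quoted above (JPR data as cited in this article).
avg_q2_to_q3_growth = 0.144  # 10-year average Q2 -> Q3 increase
q3_2016_growth = 0.291       # Q3 2016 increase over Q2 2016
q3_2016_shipments_m = 15.0   # "more than 15 million" AIBs shipped in Q3 2016

# "Roughly twice that average":
print(round(q3_2016_growth / avg_q2_to_q3_growth, 2))  # ~2.02x the 10-year average

# Implied Q2 2016 volume, back-calculated from the Q3 figure and its quarterly growth:
print(round(q3_2016_shipments_m / (1 + q3_2016_growth), 1))  # ~11.6 million AIBs
```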

After a year of non-stop GPU and CPU launches, a GPU round-up is much needed to recap all the data for each price point. We’ll be looking at strict head-to-head comparisons for each price category, including cards priced at $100-$140, $180-$250, $400-$500, and then the Ti in its own category, of course. As noted in the video, a graphics card round-up is particularly difficult this year: chaos in the market has thrown off easy price comparisons, making it difficult to determine the best choice between cards. Historically, we’ve been able to rely on MSRP to get a comparison price (+/-$20, generally) for both AMD and nVidia; neither the partners nor the retailers strayed too far from that recommendation until the joint mining & gaming booms of this year. Fortunately, much of that pandemonium has slowed, and cards are slowly returning to the prices where they sat about 6-8 months ago.

Another point of difficulty, as always, is that price-matched video cards will often outperform one another in different types of workloads. A good example is Vega vs. Pascal: generally speaking – and part of this is drivers – Pascal ends up favored in DirectX 11 games, while Vega ends up favored in games with asynchronous compute workloads (DOOM with Vulkan, Sniper with Dx12). That won’t always be true, but for the heavyweight Vulkan/Dx12 titles, it seems to be. You’ll have to exercise some thought, consider the advantages of each architecture, and then look at the types of games you expect to be playing. Another fortunate note is that, even if you choose “wrong” (you anticipated Vulkan adoption, but got Dx11), a lot of the cards are still within a couple of percentage points of their direct-price competition. It’s hard to go too wrong, short of buying a bad partner cooler design, but that’s another story.

Almost as painfully as for our DDR4 RAM sales article, we trudged through video card sales and “sales” alike in an attempt to find gold in a strapped market. Video card sales weren’t as exciting as in previous years, with some cards settling down to simply MSRP – in other words, half off – and others seeing $10-$20 drops. We found a couple of good ones nonetheless, including GTX 1080, RX 570, RX 560, and GTX 1060 sales. Find those below.

Having gone over the best CPUs, cases, and some motherboards (with coolers coming soon), we’re now looking at the best GTX 1080 Tis of the year. Contrary to popular belief, the model of cooler does actually matter for video cards. We’ll be going through thermal and noise data for a few of the 1080 Tis we’ve tested this year, including MOSFET, VRAM, and GPU temperatures, noise-normalized performance at 40dBA, and PCB and VRM quality. As always with these guides, you can find links to all products discussed in the description below.

Rounding up the GTX 1080 Tis means that we’re primarily focused on cooler and PCB build quality: noise, noise-normalized thermals, raw thermals, and VRM design are at the forefront of competition among same-GPU parts. Ultimately, as far as gaming and overclocking performance go, much of that is dictated by silicon-level quality variance, and that’s nearly random. For that reason, we must differentiate board partner GPUs by thermals, noise, and potential for low-thermal overclocking (quality VRMs).

Today, we’re rounding up the best GTX 1080 Ti graphics cards that we’ve reviewed this year, across categories of Best Overall, Best for Modding, Best Value, Best Technology, and Best PCB. Gaming performance is functionally the same on all of them: silicon variance is the larger dictator of performance, with thermals the next governor; after all, a Pascal GPU under 60C is a higher-clocked, happier Pascal GPU, and that will drive framerate more than advertised clocks will.

NVIDIA’s Battlefront II Game Ready driver version 388.31 shipped this week in preparation for the game’s worldwide launch. In possibly more positive news for the vast number of redditors enraged by EA’s defense of grinding, the driver is also updated for Injustice 2 compatibility and boasts double-digit % performance increases in Destiny 2 at higher resolutions.

Battlefront 2 is the headliner for this driver release, but this chart is about all NVIDIA has to say on the subject for now:

This week’s hardware news recap primarily focuses on Intel’s MINIX implementation and creator Andrew Tanenbaum’s thoughts on the previously unknown adoption of his OS, along with some new information on the AMD + Intel multi-chip module (MCM) that’s coming to market. Supporting news items for the week include some GN-style commentary on a new “gaming” chair with case fans in it, updates on nVidia’s quarterly earnings, Corsair’s new “fastest” memory, and EK’s 560mm radiators.

Find the show notes after the embedded video.

Everyone’s been asking why the GTX 1070 Ti exists, noting that the flanking GTX 1080 and GTX 1070 cards largely invalidate its narrow price positioning. In a span of $100-$150, nVidia manages to segment three products, thus spurring the questions. We think the opposite: the 1070 Ti has plenty of reason to exist, but the 1080 is now the less desirable of the options. Regardless of which (largely irrelevant) viewpoint you take, there is now a 1070, a 1070 Ti, and a 1080, and they’re all close enough that one of them doesn’t need to exist. One should die – it’s just a matter of which. It doesn’t make sense to kill the 1070 – it’s too far from the GTX 1080, at 1920 vs. 2560 cores, and it fills a lower-end market. The 1070 Ti is brand new, so that’s not dying today. The 1080, though, has been encroached upon by the 1070 Ti, which is just one SM and some Micron memory shy of a model number a full ten digits higher.

For the basics, the GTX 1070 Ti is functionally a GTX 1080 with one SM neutered. nVidia has disabled a single streaming multiprocessor, which contains 128 CUDA cores and 8 texture mapping units, dropping the total to 2432 CUDA cores. That compares to 2560 cores on the 1080 and 1920 cores on the 1070. The GTX 1070 Ti is much closer in relation to a 1080 than a 1070, and its $450-$480 average list price reinforces that, as GTX 1080s were available in that range before the mining explosion (when on sale, granted).
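To make the core-count arithmetic concrete, here is a minimal sketch assuming GP104’s published layout of 128 CUDA cores per SM (20 SMs on the full die):

```python
# CUDA core counts derived from enabled SM counts on GP104 (128 CUDA cores per SM).
CORES_PER_SM = 128

gp104_variants = {
    "GTX 1080":    20,  # full GP104
    "GTX 1070 Ti": 19,  # one SM disabled
    "GTX 1070":    15,  # five SMs disabled
}

for card, sm_count in gp104_variants.items():
    print(f"{card}: {sm_count} SMs x {CORES_PER_SM} = {sm_count * CORES_PER_SM} CUDA cores")
# GTX 1080:    20 SMs x 128 = 2560 CUDA cores
# GTX 1070 Ti: 19 SMs x 128 = 2432 CUDA cores
# GTX 1070:    15 SMs x 128 = 1920 CUDA cores
```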

