
NVIDIA GTX Titan X (Pascal) Review vs. GTX 1080, SLI 1070s

Posted on October 26, 2016

The Titan X Hybrid mod we hand-crafted for a viewer allowed the card to stretch its boost an additional ~200MHz beyond the spec. This was done for Sam, the owner who loaned us the Titan XP, and was completed back in August. We also ran benchmarks before tearing the card down, albeit on drivers from mid-August, and never did publish a review of the card.

This content revisits the Titan XP for a review from a gaming standpoint. We'd generally recommend such a device for production workloads or CUDA-accelerated render/3D work, but that doesn't change the fact that the card is marketed as a top-of-the-line gaming device with GeForce branding. From that perspective, we're reviewing the GTX Titan X (Pascal) for its gaming performance versus the GTX 1080, hopefully providing a better understanding of value at each price-point. The Titan X (Pascal) card is priced at $1200 from nVidia directly.

Review content will focus on thermal, FPS, and overclocking performance of the GTX Titan X (Pascal) GP102 GPU. If you're curious to learn more about the card, our previous Titan XP Hybrid coverage can be found here:

NVIDIA Titan X (Pascal) Specs

NVIDIA Pascal vs. Maxwell Specs Comparison
| | Titan X | GTX 1080 | GTX 1070 | GTX 1060 | GTX 980 Ti | GTX 980 | GTX 960 |
|---|---|---|---|---|---|---|---|
| GPU | GP102-? Pascal | GP104-400 Pascal | GP104-200 Pascal | GP106 Pascal | GM200 Maxwell | GM204 Maxwell | GM206 |
| Transistor Count | 12B | 7.2B | 7.2B | 4.4B | 8B | 5.2B | 2.94B |
| Fab Process | 16nm FinFET | 16nm FinFET | 16nm FinFET | 16nm FinFET | 28nm | 28nm | 28nm |
| CUDA Cores | 3584 | 2560 | 1920 | 1280 | 2816 | 2048 | 1024 |
| GPCs | 6 | 4 | 3 | 2 | 6 | 4 | 2 |
| SMs | 28 | 20 | 15 | 10 | 22 | 16 | 8 |
| TPCs | 28 | 20 | 15 | 10 | - | - | - |
| TMUs | 224 | 160 | 120 | 80 | 176 | 128 | 64 |
| Core Clock | 1417MHz | 1607MHz | 1506MHz | 1506MHz | 1000MHz | 1126MHz | 1126MHz |
| Boost Clock | 1530MHz | 1733MHz | 1683MHz | 1708MHz | 1075MHz | 1216MHz | 1178MHz |
| FP32 TFLOPs | 11TFLOPs | 9TFLOPs | 6.5TFLOPs | 4.4TFLOPs | 5.63TFLOPs | 5TFLOPs | 2.4TFLOPs |
| Memory Type | GDDR5X | GDDR5X | GDDR5 | GDDR5 | GDDR5 | GDDR5 | GDDR5 |
| Memory Capacity | 12GB | 8GB | 8GB | 6GB | 6GB | 4GB | 2GB, 4GB |
| Memory Clock | 10Gbps | 10Gbps GDDR5X | 8Gbps | 8Gbps | 7Gbps GDDR5 | 7Gbps GDDR5 | 7Gbps |
| Memory Interface | 384-bit | 256-bit | 256-bit | 192-bit | 384-bit | 256-bit | 128-bit |
| Memory Bandwidth | 480GB/s | 320.32GB/s | 256GB/s | 192GB/s | 336GB/s | 224GB/s | 115GB/s |
| TDP | 250W | 180W | 150W | 120W | 250W | 165W | 120W |
| Power Connectors | 1x 8-pin + 1x 6-pin | 1x 8-pin | 1x 8-pin | 1x 6-pin | 1x 8-pin + 1x 6-pin | 2x 6-pin | 1x 6-pin |
| Release Date | 8/2/2016 | 5/27/2016 | 6/10/2016 | 7/19/2016 | 6/01/2015 | 9/18/2014 | 01/22/15 |
| Release Price | $1200 | Reference: $700, MSRP: $600 | Reference: $450, MSRP: $380 | Reference: $300, MSRP: $250 | $650 | $550 | $200 |

NVIDIA Titan X (Pascal) GP102 Architecture

titan-xpascal-gp102-block-diagram

The Titan XP uses a GP102 GPU under Pascal architecture, the largest Pascal chip presently available on a GeForce-branded card. The only current Pascal chip that's equipped in a flashier fashion than the GP102 is GP100, used for the Tesla P100 Accelerator and not meant for gaming. We detailed GP100 a few months ago, if interested in learning more about Pascal's intricacies.

GP102 hosts a total of 6 GPCs, for 28 SMs and 3584 CUDA cores. This isn't the biggest Pascal chip out there, in terms of total SMs, but it's the biggest in the GeForce line. GP102 follows GP104's architecture and splits GPCs into sets of five SMs, as opposed to the GP100 GPC-SM alignment of 10 streaming multiprocessors per GPC. This is why we often remind folks that cores and SMs can't be compared cross-generation, or sometimes even intra-generation. The GP100 chip used for the Tesla P100 Accelerator is built for simulation and deep learning, which means it's got a completely different implementation of cores. GP100 makes use of FP64 cores in a way that the Titan XP does not, and so is more suitable for double-precision tasks than the FP32-focused Titan X card of this generation.

gtx-titan-x-p-3

gtx-titan-x-p-2

Also like GP104-400, the Titan XP runs 8 TMUs per SM, totaling 224 texture map units. The rated boost clock is 1531MHz with the stock cooler, and the card hits 1700MHz or higher under load. Note also that we pushed to nearly 1800MHz in our liquid-cooled content without needing to overclock – that's GPU Boost 3.0 in action. Boost detects thermal headroom and increases the operating frequency until a new limiter is encountered (normally power or voltage).
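For illustration only, here's a heavily simplified sketch of that control loop. The real GPU Boost 3.0 algorithm, its bins, and its thresholds are NVIDIA's and aren't public; the constants below are placeholders loosely based on the numbers in this review.

```python
# Toy model of the boost behavior described above: raise the clock while
# thermal/power headroom exists, back off when a limiter is hit.
# Illustration only; not NVIDIA's actual GPU Boost 3.0 implementation.

THERMAL_LIMIT_C = 84      # approximate diode limit observed on the stock card
POWER_LIMIT_W   = 250     # reference TDP from the spec table
CLOCK_STEP_MHZ  = 12.5    # Pascal adjusts frequency in roughly 12.5MHz bins

def boost_step(clock_mhz, diode_c, board_power_w, base_mhz=1417, ceiling_mhz=2000):
    """Return the next clock bin given current temperature and power draw."""
    if diode_c >= THERMAL_LIMIT_C or board_power_w >= POWER_LIMIT_W:
        # A limiter was hit: step down along the frequency curve.
        return max(base_mhz, clock_mhz - CLOCK_STEP_MHZ)
    # Headroom available: opportunistically step up.
    return min(ceiling_mhz, clock_mhz + CLOCK_STEP_MHZ)
```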

Cache size also plays a big role in the Titan XP. The Titan XP expands L2 cache to 3072KB, over the 2048KB on the GTX 1080. On-board VRAM is 12GB for the Titan XP, versus 8GB for the GTX 1080. Both use GDDR5X at 10Gbps native.

The Titan XP's GP102 hosts 12 billion transistors and, despite its FP32 focus, introduces new INT8 instructions for deep learning. We're not benchmarking that today, though.
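As a quick sanity check on the spec table above, the headline figures can be reproduced with standard back-of-envelope math (two FP32 operations per core per clock for a fused multiply-add, and bus width times per-pin data rate for bandwidth). This is generic arithmetic, not a formula specific to this review:

```python
# Back-of-envelope check on the spec-table figures.
cuda_cores  = 3584        # 28 SMs x 128 FP32 cores per SM on GP102
boost_clock = 1.531e9     # Hz, rated boost
bus_width   = 384         # bits
mem_rate    = 10e9        # 10Gbps GDDR5X per pin

fp32_tflops = 2 * cuda_cores * boost_clock / 1e12   # 2 FLOPs per core per clock (FMA)
bandwidth   = bus_width / 8 * mem_rate / 1e9        # bits -> bytes, then GB/s

print(f"{fp32_tflops:.1f} TFLOPs")   # ~11.0, matching the ~11TFLOPs listed
print(f"{bandwidth:.0f} GB/s")       # 480, matching the table
```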

NVIDIA Titan X (Pascal) Tear-Down, PCB, & Cooler

Below are a few photos from our tear-down content:

titan-x-hybrid-teardown-1

titan-x-hybrid-teardown-3

titan-x-hybrid-teardown-4

titan-x-hybrid-teardown-5

The disassembly process for the Titan XP is almost exactly the same as for the GTX 1080. The card uses a vapor chamber cooler for its thermal solution, coupled with a blower fan under the usual PWM control. An aluminum baseplate rests between the PCB and the faceplate, using thermal pads to conduct heat into the plate. The blower fan pushes air through fins atop the baseplate, dissipating heat and pulling it away from the VRAM and VRM. This air is eventually pushed through the vapor chamber cooler's fin stack, then out the back of the card.

titan-x-hybrid-teardown-6

The PCB hosts two power connectors and a third set of solder points for an additional 8-pin header, located at the right side of the board. The core VRM uses a 7-phase power design, with the memory VRM using a 2-phase design.

Continue to page 2 for test methodology.


Test Methodology

Game Test Methodology

Note that all thermals, FPS, noise, and overclocking tests were performed before our initial tear-down, so the only difference you'll see in numbers emerges after we applied a liquid cooler to the Titan XP. This data was collected in August of 2016, since it was our only chance to work with the card.

We tested using our GPU test bench, detailed in the table below. Our thanks to supporting hardware vendors for supplying some of the test components.

AMD 16.8.1 drivers were used for the RX 470 & 460 graphics cards. 16.7.2 were used for testing GTA V & DOOM (incl. Vulkan patch) on the RX 480. Drivers 16.6.2 were used for all other devices or games. NVidia's 372.54 drivers were used for game (FPS) testing on the GTX 1080 and 1060 (and Titan XP). The 368.69 drivers were used for other devices. Game settings were manually controlled for the DUT. All games were run at presets defined in their respective charts. We disable brand-supported technologies in games, like The Witcher 3's HairWorks and HBAO. All other game settings are defined in respective game benchmarks, which we publish separately from GPU reviews. Our test courses, in the event manual testing is executed, are also uploaded within that content. This allows others to replicate our results by studying our bench courses.

Windows 10-64 build 10586 was used for testing.

Each game was tested for 30 seconds in an identical scenario, then repeated three times for parity.

Average FPS, 1% low, and 0.1% low times are measured. We do not measure maximum or minimum FPS results as we consider these numbers to be pure outliers. Instead, we take an average of the lowest 1% of results (1% low) to show real-world, noticeable dips; we then take an average of the lowest 0.1% of results for severe spikes.
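For readers who want to reproduce the metric, here's a minimal sketch of how a 1% low / 0.1% low calculation can be performed on a frametime log. Our in-house tooling may differ in implementation details; this is illustrative only.

```python
import numpy as np

def low_metrics(frametimes_ms):
    """Compute AVG FPS, 1% low, and 0.1% low as described above:
    averages of the slowest 1% / 0.1% of frames, expressed as FPS."""
    fps = 1000.0 / np.asarray(frametimes_ms, dtype=float)
    fps_sorted = np.sort(fps)                            # ascending: slowest frames first
    n = len(fps_sorted)
    one_pct    = fps_sorted[:max(1, n // 100)].mean()    # average of the lowest 1%
    point1_pct = fps_sorted[:max(1, n // 1000)].mean()   # average of the lowest 0.1%
    return fps.mean(), one_pct, point1_pct
```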

| GN Test Bench 2015 | Name | Courtesy Of | Cost |
|---|---|---|---|
| Video Card | This is what we're testing! | - | - |
| CPU | Intel i7-5930K | iBUYPOWER | $580 |
| Memory | Corsair Dominator 32GB 3200MHz | Corsair | $210 |
| Motherboard | EVGA X99 Classified | GamersNexus | $365 |
| Power Supply | NZXT 1200W HALE90 V2 | NZXT | $300 |
| SSD | HyperX Savage SSD | Kingston Tech. | $130 |
| Case | Top Deck Tech Station | GamersNexus | $250 |
| CPU Cooler | NZXT Kraken X41 CLC | NZXT | $110 |

For Dx12 and Vulkan API testing, we use built-in benchmark tools and rely upon log generation for our metrics. That data is reported at the engine level.

Video Cards Tested

Thermal Test Methodology

We strongly believe that our thermal testing methodology is among the best on this side of the tech-media industry. We've validated our testing methodology with thermal chambers and have proven near-perfect accuracy of results.

Conducting thermal tests requires careful measurement of temperatures in the surrounding environment. We control for ambient by constantly measuring temperatures with K-Type thermocouples and infrared readers. We then produce charts using a Delta T(emperature) over Ambient value. This value subtracts the thermo-logged ambient value from the measured diode temperatures, producing a delta report of thermals. AIDA64 is used for logging thermals of silicon components, including the GPU diode. We additionally log core utilization and frequencies to ensure all components are firing as expected. Voltage levels are measured in addition to fan speeds, frequencies, and thermals. GPU-Z is deployed for redundancy and validation against AIDA64.

All open bench fans are configured to their maximum speed and connected straight to the PSU. This ensures minimal variance when testing, as automatically controlled fan speeds will reduce reliability of benchmarking. The CPU fan is set to use a custom fan curve that was devised in-house after a series of testing. We use a custom-built open air bench that mounts the CPU radiator out of the way of the airflow channels influencing the GPU, so the CPU heat is dumped where it will have no measurable impact on GPU temperatures.

We use an AMPROBE multi-diode thermocouple reader to log ambient actively. This ambient measurement is used to monitor fluctuations and is subtracted from absolute GPU diode readings to produce a delta value. For these tests, we configured the thermocouple reader's logging interval to 1s, matching the logging interval of GPU-Z and AIDA64. Data is calculated using a custom, in-house spreadsheet and software solution.
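As an illustration of that calculation, a minimal sketch is below. It subtracts the 1s-interval ambient log from the GPU diode log to produce the delta values we chart; the CSV column names are placeholders, not the actual AIDA64 or AMPROBE export headers.

```python
import csv

def delta_over_ambient(gpu_log_csv, ambient_log_csv):
    """Subtract the 1s-interval ambient log from the GPU diode log to
    produce delta T over ambient. Column names are placeholders."""
    with open(gpu_log_csv) as f:
        gpu = [float(row["gpu_diode_c"]) for row in csv.DictReader(f)]
    with open(ambient_log_csv) as f:
        ambient = [float(row["ambient_c"]) for row in csv.DictReader(f)]
    n = min(len(gpu), len(ambient))          # both logs use a 1s interval
    return [g - a for g, a in zip(gpu[:n], ambient[:n])]
```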

Endurance tests are conducted for new architectures or devices of particular interest, like the GTX 1080, R9 Fury X, or GTX 980 Ti Hybrid from EVGA. These endurance tests report temperature versus frequency (sometimes versus FPS), providing a look at how cards interact in real-world gaming scenarios over extended periods of time. Because benchmarks do not inherently burn-in a card for a reasonable play period, we use this test method as a net to isolate and discover issues of thermal throttling or frequency tolerance to temperature.

Our test starts with a two-minute idle period to gauge non-gaming performance. A script automatically triggers the beginning of a GPU-intensive benchmark running MSI Kombustor – Titan Lakes for 1080s. Because we use an in-house script, we are able to perfectly execute and align our tests between passes.
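The outline below is a rough sketch of what such a script does, not our actual in-house tooling: an idle period, then a workload launch at a fixed point on the logging timeline. The Kombustor path is a placeholder, and workload selection (Titan Lakes) is assumed to be configured in the application itself.

```python
import subprocess
import time

IDLE_SECONDS = 120                                 # two-minute idle period described above
KOMBUSTOR_EXE = r"C:\path\to\MSIKombustor.exe"     # placeholder path, not a real install location

def run_thermal_pass(load_seconds=1800):
    """Rough outline of an automated pass: idle, then launch the burn-in
    workload so every pass starts at the same point on the logging timeline."""
    time.sleep(IDLE_SECONDS)                       # gauge non-gaming temperatures
    proc = subprocess.Popen([KOMBUSTOR_EXE])       # workload is pre-configured in-app
    time.sleep(load_seconds)                       # load period while AIDA64/GPU-Z log at 1s
    proc.terminate()
```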

Power Testing Methodology

Power consumption is measured at the system level. You can read a full power consumption guide and watt requirements here. When reading power consumption charts, do not read them as GPU-specific requirements; this is system-level power draw.

Power draw is measured during a FireStrike Extreme - GFX2 run. We are currently rebuilding our power benchmark.

Note that all thermals, FPS, noise, and overclocking tests were performed before our initial tear-down, so the only difference you'll see in numbers emerges after we applied a liquid cooler to the Titan XP. We've already produced that content, but will briefly revisit here.


Titan X Pascal Temperature

We're keeping the thermal discussion short, since it was covered heavily in our previously published Hybrid GTX Titan XP mod content. The results are interesting, though, and absolutely worth a watch or a read if you've not already seen them.

Recapping the basics, the reference GTX Titan X design hits its thermal limit at 83-84C GPU diode temperature, non-delta, and that's what creates the Boost specification of 1531MHz. The clock actually automatically boosts higher through GPU Boost 3.0 when that thermal limit is bypassed by superior cooling. Our Hybrid mod, for instance, brings us down to 19.85C delta T from 59.4C on the FE cooler, which uses just a vapor chamber and blower fan. That's a reduction of ~40C for the load temperature.

As for idle temperatures, those are a little lower than the 1080 for a few reasons: (1) the die size is 471mm^2 on the GP102 chip, whereas the GP104 chip is 314mm^2, and this extra surface area helps dissipate heat; (2) for liquid testing, we improved our implementation by keeping the baseplate on the Titan X Hybrid, something we did not do for the 1080 Hybrid. These results are for the stock Titan XP card. We've dropped thermals by 40C and increased clock-rate without overclocking on our Hybrid mod, showing clear headroom for the stock unit to boost higher with a better cooler.

titan-x-pascal-temps-eq

This next section is an excerpt from our Hybrid results article:

Titan X Pascal Endurance Test

59.4C delta T is the hottest load temperature on the bench, actually, and that puts the GPU diode value in the 84-85C range. The GTX 1080 limits itself at around 83C, as we've extensively shown, and uses normal Boost 3.0 functionality to down-clock the card along the volt-frequency curve, reducing thermals by temporarily reducing performance. Once the GPU is satisfied with its new resting temperature, it attempts again to increase the clock-rate. This cycle repeats ad infinitum while under load, and is a normal part of GPU functionality.

What is sub-optimal, however, is heavy clock-limiting that results in drastically reduced clock-rates. Let's look at that.

titan-x-pascal-stock-endurance

The above is an endurance chart for the stock Titan XP, before applying our liquid cooling solution.

During our endurance test, we plotted the Titan XP with its stock cooler as limiting at around 84C. Every hit to 84C caused an immediate drop in clock-rate, and the clock-rate got stuck around 1544MHz, but sometimes would spike to 1670MHz or drop as low as the 1400s. The spec calls for a 1531MHz boost, on paper, and the card mostly achieves this. The chart makes it pretty clear that our clock-rate is spiking hard, and it's a result of thermals – not power limit. We occasionally warmed to 85C or 86C, but the card mostly choked clock-rate to keep itself at 83C. And, as we'll find in a moment, the card's spec is well under its actual operating potential.

Here's a look at a small cross-section of raw data to show what's going on.

| Time (s) | Core MHz | GPU Diode (C) |
|---|---|---|
| 348 | 1556.5 | 83 |
| 349 | 1556.5 | 83 |
| 350 | 1493 | 83 |
| 351 | 1531 | 84 |
| 352 | 1657.5 | 84 |
| 353 | 1645 | 84 |
| 354 | 1645 | 84 |
| 355 | 1632.5 | 84 |
| 356 | 1531 | 84 |
| 357 | 1468 | 83 |
| 358 | 1493 | 83 |
| 359 | 1544 | 84 |
| 360 | 1544 | 83 |
| 361 | 1531 | 83 |
| 362 | 1569 | 84 |
| 363 | 1531 | 84 |
| 364 | 1531 | 83 |
| 365 | 1518.5 | 83 |
| 366 | 1506 | 83 |
| 367 | 1506 | 83 |
| 368 | 1518.5 | 83 |
| 369 | 1531 | 83 |
| 370 | 1531 | 83 |
| 371 | 1531 | 83 |
| 372 | 1556.5 | 83 |

You can see that the clock has a range of more than 100MHz (in this sample of data, we've got a range of 189.5MHz, with a high of 1657.5MHz and low of 1468MHz). This clock-rate swing presents itself in 0.1% and 1% low values for gamers, but for production, it'll mostly manifest as an overall loss of efficiency and slow-down in render times. Because the card is so fast already, though, it might not be apparent that the slow-down exists – at least, not until after fixing the reference design. Then it's more obvious.
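Those figures come straight from the excerpt; a trivial check in Python:

```python
# Core clock samples from the excerpt above (t = 348s to 372s, stock cooler).
core_mhz = [1556.5, 1556.5, 1493, 1531, 1657.5, 1645, 1645, 1632.5, 1531,
            1468, 1493, 1544, 1544, 1531, 1569, 1531, 1531, 1518.5, 1506,
            1506, 1518.5, 1531, 1531, 1531, 1556.5]

clock_range = max(core_mhz) - min(core_mhz)
print(max(core_mhz), min(core_mhz), clock_range)   # 1657.5, 1468, 189.5
print(sum(core_mhz) / len(core_mhz))               # mean of this excerpt only, not the full run
```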

Let's put that into perspective.

titan-x-pascal-hybrid-endurance

| Time (s) | Core MHz | GPU Diode (C) |
|---|---|---|
| 349 | 1771.5 | 41 |
| 350 | 1759 | 41 |
| 351 | 1784.5 | 41 |
| 352 | 1759 | 41 |
| 353 | 1759 | 41 |
| 354 | 1721 | 41 |
| 355 | 1759 | 41 |
| 356 | 1784.5 | 41 |
| 357 | 1771.5 | 41 |
| 358 | 1721 | 41 |
| 359 | 1733.5 | 41 |
| 360 | 1759 | 41 |
| 361 | 1721 | 41 |
| 362 | 1759 | 41 |
| 363 | 1721 | 41 |
| 364 | 1746.5 | 41 |
| 365 | 1721 | 41 |
| 366 | 1721 | 41 |
| 367 | 1721 | 41 |
| 368 | 1733.5 | 41 |
| 369 | 1721 | 41 |
| 370 | 1721 | 41 |
| 371 | 1695.5 | 40 |
| 372 | 1695.5 | 41 |
| 373 | 1695.5 | 41 |

(Above table: Some raw data from the Hybrid endurance run).

So, the original chart plots us as hovering in the 1468-1657+ range with an 83-84C diode, averaging at around 1531MHz. With our Hybrid mod, we brought the Titan X Pascal up to nearly 1800MHz – and that's with absolutely no overclock at all. Again, this is running stock, which means that the card's spec sheet is under its actual potential performance, and that the cooler is “stealing” speed from the chip. On average, we're moving from 1531MHz with the stock cooler to an average of about 1784MHz with the liquid cooler. We've improved the clock-rate of the stock card by more than 200MHz just by fixing nVidia's poorly performing air cooler.

The only reason we're still seeing a spiky frequency plot is that the card is now choking on power, not thermals. We've resolved the thermal constraint and are now hitting power constraints, which can be resolved simply by increasing the power limit of the card. Of course, applying an OC will re-create the spiked behavior, but fixing the cooler and increasing the power limit (with no OC) will flatten overall clock-rate.


DOOM Benchmark - Titan X Pascal vs. GTX 1080

We test DOOM with both OpenGL 4.5 and Vulkan, the latter of which is presented as an average in comparative charts. Let's start just with OpenGL results.

At 1440p with Ultra settings, the Titan XP is posting an AVG FPS of 138, coupled with nearly 100FPS 1% low and about 89FPS 0.1% lows. The Gigabyte GTX 1080 Xtreme Water Force card is next in line, at about 128FPS AVG and similar lows. It's not until we get to the GTX 1080 FE that there's a reasonable gap – created almost entirely by the clock-rate difference between all the cards – where the 1080 FE pushes 109FPS AVG.

titan-xp-doom-ogl-1440p

titan-xp-doom-vulk-1440p

For Vulkan at 1440p, the comparison shows an FPS output of 155 AVG for the Titan XP, followed by a GTX 1080 at about 128FPS. That's a performance difference of roughly 21%, or 35% for the FE variant. We need to tax the cards a little harder to show any visible degradation in performance, though.
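For reference, the comparative percentages quoted throughout these FPS sections are simple percent differences relative to the slower card. A worked example with the Vulkan numbers above:

```python
# How the comparative percentages in these results are derived:
# percent gain of the faster card over the slower one.
titan_xp_avg = 155.0   # Vulkan 1440p AVG FPS from the chart above
gtx_1080_avg = 128.0

pct_gain = (titan_xp_avg - gtx_1080_avg) / gtx_1080_avg * 100
print(f"{pct_gain:.0f}%")   # ~21%, matching the figure in the text
```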

titan-xp-doom-ogl-4k

At 4K with OpenGL, we've only got a few cards present on the bench. The GTX Titan XP pushes an AVG FPS of 81, with lows north of 60FPS. The GTX 1080 reference card performs at 60FPS AVG, with 51FPS 1% low and 49FPS 0.1% low metrics.

titan-xp-doom-vulk-4k

Vulkan posts the Titan XP and GTX 1080 at roughly the same performance output, since we're becoming bound by the resolution.

Ashes of the Singularity - Titan XP at 4K/High

Ashes will soon be shown the door on our benchmarks, as the emergence of Battlefield 1 and Gears of War 4 has created more popular DirectX 12 options for benchmarking. It's still here for now, though, as a mainstay title that executes the new APIs with greater dexterity than some of its peers.

titan-xp-ashes-4k-fps

Here's a look at the FPS split between Dx12 and Dx11 at 4K High. The Titan XP gets capped in DirectX 11, as demonstrated by the proximity of the GTX 1080 FTW performance, but is granted more rope in Dx12.

titan-xp-ashes-frametimes

Here's the average frametime output at 4K/high with each device on the bench, just as a point of reference. We're looking at AVG frametimes of ~12.3ms on the Titan XP and ~13ms on the GTX 1080 cards, placing both within a 16ms refresh cycle for V-sync users on 60Hz displays. Dx11, however, puts us nearing 20ms, and so we'd see the occasional stutter with the older API.

GTA V 4K Benchmark - Titan X Pascal vs. SLI GTX 1070s, GTX 1080

We tested GTA V at 4K with Very High & Ultra settings, but threw in a 1080p test just to demonstrate a point: At 1080p, we're clearly hitting a CPU bottleneck that's limiting GPU performance. The Titan XP is operating at 136FPS AVG with 95 and 85FPS 1% and 0.1% lows, respectively, which is trailed just barely by the GTX 1080 suite. If you've got plans to buy either device, it's probably worthwhile to invest in a higher resolution screen.

Performance is only limited in this fashion because we've saturated the CPU to the point that it can no longer keep up with the GPU, and so the GPU is operating at less than 100% of its full load potential. This happens most frequently with draw calls (requests for primitives/polys), as the CPU must also do work for every new frame produced. The GPU is not the only device driving high framerates.

titan-x-pascal-gtav-1080p

That point made, let's move on to 4K testing. The GTX Titan X is outclassed by SLI GTX 1070s in this benchmark, which jointly post a 79FPS AVG against the Titan X's 73FPS AVG. The single Titan X, as is often the case with single vs. multi-GPU configurations, outputs better frametimes than the SLI cards. Even so, the performance metrics are high enough on each device to be more or less identical with regard to user perception of framerate throughput. The difference between the configurations is about 8%, for reference, with a significant price difference. That said, there's not always good (or even positive) scaling for multi-GPU setups.

titan-x-pascal-gtav-4k

The GTX 1080 AIB partner cards operate around 59FPS AVG, or a gap of approximately 23% from the GeForce Titan X. Below these, the CrossFire RX 480s rest at 58FPS AVG, with the GTX 1080 FE at 56FPS AVG.

Mirror's Edge Catalyst Benchmark - GTX Titan X vs. GTX 1080

titan-x-pascal-mec-1440p

Moving on to Mirror's Edge: Catalyst at 1440p, stock Titan XP performance runs at approximately 111FPS AVG, with lows reasonably close to the average and consistently above 60FPS. The Gigabyte GTX 1080 Xtreme Water Force ($770) card we reviewed, along with all the other GTX 1080s, rests at 95FPS AVG. The reference GTX 1080 operates at 89FPS AVG, putting the Titan XP reference card about 24% ahead.

titan-x-pascal-mec-4k

Moving to 4K, the Titan X (Pascal) card is able to sustain High quality settings at 3840x2160 with an FPS of 60.3 AVG, about 4-5FPS ahead of the liquid-cooled GTX 1080 GPUs. This framerate also places the Titan XP ~5FPS ahead of the SLI GTX 1070s. The Sea Hawk and 1080 Gaming X both run at around 51FPS AVG, with the 1070 Gaming X way down at 40.67FPS AVG and 30FPS lows.

Call of Duty: Black Ops III Benchmark - GTX Titan X vs. SLI 1070s, GTX 1080

titan-xp-blops3-1440p

Following its performance optimization patch closer to the title's launch, Call of Duty: Black Ops III has remained one of the best-optimized titles on our benchmark. The Titan X Pascal card lands behind the SLI GTX 1070s on our bench, trailing by 4FPS in average FPS. This is sort of a moot point at this high of a framerate, but the Titan XP does hold tighter frametime performance (less latency with a single GPU -- communication is simplified) than the dual GTX 1070s. Either way, considering the price difference, we'd take the slight hit to frametimes in favor of a cheaper, similarly-paced solution. That's not always going to be the case, but it is with BLOPS3. Again, in games where SLI doesn't scale or doesn't scale well, you end up with effectively one GTX 1070 -- a far cry from the performance of a GTX Titan X.

As for single card performance at 1440p, the GTX 1080 cards land closer to ~143FPS, or about 40FPS slower than the GTX Titan X (a percentage change of ~26%).

titan-xp-blops3-4k

The stack remains similar at 4K, with the GTX Titan XP at ~92FPS AVG, sandwiched between the RX 480s in CrossFire (BLOPS3 does tend to favor AMD) and GTX 1070s in SLI. The GTX 1080 cards, led by the FTW Hybrid at 73FPS, tend to trail the Titan XP again by about 26%.

Metro: Last Light Benchmark - 1080p, 1440p, & 4K on Titan X Pascal

titan-xp-mll-1080p

Metro: Last Light does become a bit bound by CPU at 1080p with Very High quality and High tessellation, but we're still showing some data to better illustrate the stack-up.

Let's move to something more reasonable.

titan-xp-mll-1440p

MLL is another of the better-optimized titles for multi-GPU scaling, and thus plants the GTX 1070s in SLI a few frames ahead of the Titan XP's average FPS. As with previous tests, the Titan XP manages to output tighter frametimes -- again, expected, as we're running a simplified pipeline with only one GPU -- but is otherwise comparable to SLI 1070 performance.

The GTX 1080s follow, led by the 1080 Sea Hawk/Hydro GFX & FTW Hybrid at ~97FPS AVG. That's a performance gap of approximately 16%, a good deal smaller than what we saw with BLOPS3.

titan-xp-mll-4k

Performance tightens at 4K, with the SLI 1070s now only 1 frame ahead of the Titan XP's average. The GTX 1080 FTW Hybrid sits at ~67FPS AVG, 13FPS behind the Titan XP, creating a percentage change of 20%. The Titan XP is handling the high demand of Metro: Last Light's UHD resolution output with greater grace than the 1080 FTW Hybrid, though both remain fully "playable" and north of the 60FPS marker. Given the price disparity, that is an important point to tally.

The Division Benchmark - Titan XP vs. GTX 1080

We've axed The Division from our bench going forward, but it gets one last hurrah with the Titan XP review.

titan-xp-division-1440p

The Division operates at ~120FPS AVG on the Titan XP with 1440p/High settings, followed next by the SLI GTX 1070s at about 1FPS behind. The GTX 1080 is next, at 98FPS, followed then by the Fury X at 71FPS AVG. There's about a 20FPS performance gap between the Titan XP and GTX 1080, or ~21%.

titan-xp-division-4k

The performance distribution at 4K follows what we saw in Metro: Last Light, where the SLI 1070 cards begin to fall further behind the Titan XP on grounds of pure pixel processing power. The GTX Titan XP is now at 69FPS AVG and has 0.1% lows sustained at 41FPS, followed very distantly by 0.1% lows (perceived as stutters, in this case) of 23.7FPS on the 1070s in SLI.

NVidia's GTX 1080, as improved by the Corsair, MSI, and EVGA models on the bench, operates at around 55FPS AVG, or a difference of ~13-14FPS.


GTX Titan XP Overclocking Results

Below is our overclock stepping table for the GTX Titan XP card. With a maxed-out power target (120%) and no core offset, the average clock-rate runs about 100MHz higher than at the stock 100% target – but we're still thermally limited. Regardless, the maximum stable overclock landed around a 1911MHz average frequency, with a 175MHz offset to core and 450MHz offset to memory. The peak clock-rate was 1923.5MHz, however brief. This was with a 3500RPM fan speed, so noise levels get a bit more significant.

The liquid-cooled variant, for the curious, was able to sustain a maximum overclock of 2012MHz, or 1974MHz average.

| Peak Core CLK (MHz) | Core Clock (MHz) | Core Offset (MHz) | Mem CLK (MHz) | Mem Offset (MHz) | Power Target (%) | Peak vCore (V) | Fan Target (%) | 5m Test | 60m Endurance |
|---|---|---|---|---|---|---|---|---|---|
| 1695.5 | ~1620 | 0 | 1251.5 | 0 | 100 | 0.95 | 50 | P | P |
| 1746.5 | 1733.5 | 0 | 1251.5 | 0 | 120 | 1.025 | 65 | P | P |
| 1822.5 | 1809.5 | 100 | 1251.5 | 0 | 120 | 1.025 | 65 | P | - |
| 1860 | 1835 | 125 | 1251.5 | 0 | 120 | 1.025 | 65 | P | P |
| 1873 | 1847.5 | 150 | 1251.5 | 0 | 120 | 1.025 | 66 | P | - |
| 1923.5 | 1911 | 175 | 1251.5 | 0 | 120 | 1.025 | 66 | P | - |
| 1923.5 | 1911 | 175 | 1325.5 | 300 | 120 | 1.025 | 66 | P | - |
| 1923.5 | 1911 | 175 | 1377 | 500 | 120 | 1.025 | 66 | P | - |
| 1923.5 | 1911 | 175 | 1400.6 | 600 | 120 | 1.025 | 66 | F (Driver Crash) | - |
| 1987 | 1911 | 175 | 1363.5 | 450 | 120 | 1.025 | 70 | P | P |
| - | - | 200 | 1363.5 | 450 | 120 | 1.025 | 70 | F (Driver Crash) | - |

Hybrid mod:

| Peak Core CLK (MHz) | Core Clock (MHz) | Core Offset (MHz) | Mem CLK (MHz) | Mem Offset (MHz) | Power Target (%) | Peak vCore (V) | Fan Target (%) | 5m Test | 60m Endurance |
|---|---|---|---|---|---|---|---|---|---|
| 1987 | 1923.5 | 175 | 1363.5 | 450 | 120 | 1.031 | 23 (Auto) | P | P |
| 1987 | 1936 | 200 | 1363.5 | 450 | 120 | 1.031 | 23 (Auto) | P | - |
| 1999.5 | 1961.5 | 225 | 1363.5 | 450 | 120 | 1.031 | 23 (Auto) | P | - |
| 2012 | 1974 | 250 | 1363.5 | 450 | 120 | 1.031 | 23 (Auto) | P | - |
| 2037.5 | 1999.5 | 275 | 1363.5 | 450 | 120 | 1.031 | 23 (Auto) | F | - |
| 2012 | 1974 | 250 | 1363.5 | 500 | 120 | 1.031 | 23 (Auto) | P | P |
| 2012 | 1974 | 250 | 1363.5 | 600 | 120 | 1.031 | 23 (Auto) | P | P (Worse FPS) |
| 2012 | 1974 | 250 | 1363.5 | 450 | 120 | 1.031 | 23 (Auto) | P | P |

Here's a quick look at overclocking performance. We're seeing fairly substantial gains in some titles, nearing 10FPS in GTA V and Mirror's Edge Catalyst.

titan-x-oc-gtav-4k 1

titan-x-oc-mec-1440p 1

titan-x-oc-mordor-4k 1

Continue to page 6 for the conclusion.


Titan X (Pascal) Review Conclusion

To reiterate the Hybrid research content: nVidia is operating at spec for its clock-rate, but the spec could actually be higher with a superior cooler. Just changing to a liquid cooler increased our average FPS by about 3.5% to 5%, depending on the title – and that's with no overclock. It feels almost wasteful to use the reference cooler on the GTX Titan X, but with no AIB partner variants available, the reference design is the model buyers will get.

The Titan XP is still priced north of $1000, for the most part, with the GTX 1080 resting closer to $700. In its absolute best performing scenarios, the Titan XP is able to outperform a GTX 1080 FE by roughly 30%, and posts best-case gains over AIB partner 1080s upwards of 25%. But we've got to keep the bigger picture in mind: A GTX 1080 is already capable of running almost every game we've tested at 4K with roughly 60FPS framerates. For most enthusiasts at the high-end, we'd wager that's enough.

gtx-titan-x-p-4

The extra ~$500 doesn't gain a tangible framerate improvement at this point, since we're already so high in FPS output at 1440p and in some 4K scenarios. For the most part, the GTX 1080 makes more sense as a top-of-the-line gaming card.

The Titan XP may make more sense for render and CUDA-accelerated applications, once more of them fully support the Pascal architecture. The extra VRAM is the biggest differentiator and will stretch its legs more thoroughly in animation and CUDA-accelerated renders.

Editorial, Test Lead: Steve “Lelldorianx” Burke
Test Technician: Andie “Draguelian” Burke
Video Producer: Andrew “ColossalCake” Coleman