With no warning whatsoever, we received word tonight that nVidia's new version of the Titan X has been officially announced. The company likes to re-use names -- see: four products named "Shield" -- and has re-issued the "Titan X" badge for use on a new Pascal-powered GPU. The Titan X will be using GP102, a significantly denser chip than the GTX 1080's GP104-400 GPU.
GP102 is a 12B transistor chip with 11 TFLOPS of FP32 compute performance and 3584 CUDA cores clocked at 1.53GHz, and the card leverages 12GB of GDDR5X memory at 480GB/s of memory bandwidth. We're assuming the Titan X's GDDR5X memory also operates at 10GHz effective, like that of its GTX 1080 predecessor.
Here's a quickly assembled specs table. We've done some calculations to fill the gaps (a "?" denotes a specification we've extrapolated, not one confirmed by nVidia). Unless nVidia is using an architecture closer to GP100 (detailed in great depth here), this should be fairly accurate.
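For the curious, the extrapolation above is simple arithmetic. A minimal sketch of the math, assuming one fused multiply-add (two FP32 operations) per core per clock, and assuming the 10GHz effective GDDR5X speed carried over from the GTX 1080 -- note that the 384-bit bus width is our inference from the published 480GB/s figure, not an nVidia-confirmed spec:

```python
# Back-of-envelope math behind the Titan X (Pascal) figures.
CUDA_CORES = 3584
BOOST_CLOCK_GHZ = 1.53
FLOPS_PER_CORE_PER_CLOCK = 2  # one fused multiply-add per clock

tflops = CUDA_CORES * BOOST_CLOCK_GHZ * FLOPS_PER_CORE_PER_CLOCK / 1000
print(f"FP32 compute: {tflops:.2f} TFLOPS")  # ~10.97, marketed as 11

MEM_SPEED_GBPS = 10   # effective GDDR5X data rate (assumed, as on GTX 1080)
BUS_WIDTH_BITS = 384  # inferred: 480 GB/s * 8 / 10 Gbps = 384-bit
bandwidth_gbs = MEM_SPEED_GBPS * BUS_WIDTH_BITS / 8
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 480
```

The ~10.97 TFLOPS result lands right at nVidia's rounded "11 TFLOPs" marketing figure, which is why we're comfortable listing the clock and core count as effectively confirmed.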
Our thermal benchmarking has expanded to the point that the tests form our most comprehensive section of any review. For this content, we dig deep into endurance testing with nVidia's just-launched GeForce GTX 1060 Founders Edition card, comparing it to the MSI GTX 1060 Gaming X. The validation testing yields interesting results, particularly with regard to potential throttle points and dips in clock-rate. More on that in a bit.
Today marks the launch of the GTX 1060 ($250-$300), announced about ten days ago. The GTX 1060 fills the mid-range of the market as a 6GB solution built on GP106, using the 16nm FinFET process node that debuted with Pascal.
Our GTX 1060 Founders Edition & MSI 1060 Gaming X review looks at FPS (particularly vs. the 1070 and RX 480), Vulkan & Dx12 performance, thermals, noise, power, and overclocking results.
One of the newest memory technologies on the market is HBM (High Bandwidth Memory), introduced on the R9 Fury X. HBM stacks four memory dies atop an interposer (packaged on the substrate) for higher-density modules, while also bringing down power consumption and reducing physical transaction distance. HBM is not located on the GPU die itself, but on the GPU package – much closer than PCB-bound GDDR5/5X memory modules.
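The net effect of stacking is that HBM trades high clocks for an extremely wide bus. A rough sketch using the publicly listed Fury X (HBM1) and R9 390X (GDDR5) memory specs – these figures come from those cards' spec sheets, not from this article:

```python
# How HBM trades data rate for bus width, versus GDDR5.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s = width (bits) * rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# Fury X HBM1: 4 stacks x 1024-bit channels, 1 Gbps effective
hbm = bandwidth_gbs(4096, 1.0)
# R9 390X GDDR5: 512-bit bus, 6 Gbps effective
gddr5 = bandwidth_gbs(512, 6.0)

print(f"HBM1:  {hbm:.0f} GB/s")    # 512
print(f"GDDR5: {gddr5:.0f} GB/s")  # 384
```

Despite running at a fraction of GDDR5's effective clock, the 4096-bit aggregate bus pushes HBM's total bandwidth higher, with lower power draw per transferred bit.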
NVidia's new GP106-equipped GeForce GTX 1060 has been announced as of today, alongside partial specs, a release date, and some software. The GTX 1060 will utilize nVidia's new GP106 GPU, a Pascal rendition which cuts down on SM and CUDA core count from preceding GP104 chips (GP104-400 and GP104-200, detailed in our GTX 1080 and GTX 1070 reviews).
The GeForce GTX 1060 uses the same Pascal architecture, with the same improvements we've already discussed heavily. That includes 16nm FinFET, delta color compression advancements allowing 8:1 compression on some memory transactions, and pre-emptive compute functions that aid in asynchronous tasks. New to the GTX 1060 is GP106, which is a cut-down Pascal chip that houses 1280 CUDA cores, operating at a maximum Boost frequency of 1.7GHz. For comparative purposes, a chart with known 1060, 1070, and 1080 specs has been pasted below. The GTX 1070 has 1920 CUDA cores and the 1080 has 2560 CUDA cores.
AMD's RX 480 launch introduces the Polaris architecture to the world, arranging an alliterative architecture assortment from both GPU vendors (Pascal, if you're curious, is the other). This is AMD's answer to the largest market segment, shipping in 4GB and 8GB variants that are priced at $200 and $240, respectively.
During the RX 480 press briefing, AMD strongly defended its stance on maturing and tuning its architectures to extract the maximum possible performance prior to an architectural shift. “We don't have a billion dollars to spend on a single architecture,” said AMD SVP & Chief Architect Raja Koduri, clearly referencing nVidia's boastful Order of 10 unveil. Koduri went on to praise his team for doing an “amazing job with existing products,” but welcomed the arrival of a new 14nm FinFET process node to usurp the long-standing ubiquity of 28nm planar process.
The AMD RX 480 8GB is on the bench for review today. In this RX 480 8GB review, we benchmark framerate (FPS) & frametime performance, overclocking, thermals, clockrate vs. time endurance, fan RPMs, and noise levels.
AMD today followed-up its Radeon RX 480 Polaris announcement with the unveil of its RX 470 and RX 460 graphics cards. Quickly recapping, the RX 480 will ship with >5TFLOPS of compute performance (depending on pre-OC or other specs) and sells for ~$200 MSRP at 4GB, or more than that for 8GB – we're guessing $230 to $250 for most AIB cards. Now, with the announcement of the RX 470 and RX 460, AMD has opened up the low-end of the market with a new focus on “cool and efficient” graphics solutions. Coming out of the company which used to associate itself with volcanic islands, high-heat reference coolers (remedied with the Fiji series), and high power draw, the Polaris architecture promises a more power/thermal-conscious GPU.
After a one-to-two week break for our Asia trip, which included factory tours in China and Taiwan and then Computex proper, we're back with another episode of Ask GN. This time, we address questions on rumors of the 1080 Ti's release window, the impact of overclocking on component lifespan, and the importance of CPUs in an era of increasingly GPU-bound workloads.
The week's questions are listed with timestamps below the video embed. Be sure to check out last week's episode for more of this content style. Leave questions on the Ask GN YouTube page for inclusion in next week's episode!
AMD's 14nm FinFET Radeon RX 480 was just announced at Computex, using the new Polaris 10 architecture. The AMD Radeon RX 480 GPU uses Polaris 10 to deliver >5TFLOPS of compute for $200, at 150W TDP, and ships in 4GB & 8GB GDDR5 SKUs. We have not confirmed whether the 8GB model costs more; the exact language was “RX 480 set to drive premium VR experiences into the hands of millions of consumers, priced from just $199.”
“From,” of course, means “starting at” – so it could be that the 8GB model costs more. Regardless, AMD has firmly entered the mid-range market with its 8GB RX 480, landing where the R9 380X and GTX 960 4GB presently rest. (Update: We emailed and confirmed that the 4GB model is $200. Pricing for the 8GB model is not yet finalized -- probably $250+.)
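Using only the figures AMD has announced (>5 TFLOPS of compute, 150W TDP, $200 for the confirmed 4GB SKU), the value proposition works out roughly as follows – treating 5 TFLOPS as a lower bound:

```python
# Rough perf-per-watt and perf-per-dollar from AMD's announced figures only.
COMPUTE_TFLOPS = 5.0  # lower bound; AMD says ">5 TFLOPS"
TDP_WATTS = 150
PRICE_USD = 200       # confirmed 4GB pricing

gflops_per_watt = COMPUTE_TFLOPS * 1000 / TDP_WATTS
dollars_per_tflop = PRICE_USD / COMPUTE_TFLOPS

print(f"{gflops_per_watt:.1f} GFLOPS/W")  # ~33.3
print(f"${dollars_per_tflop:.0f}/TFLOP")  # ~$40
```

Roughly $40 per TFLOP at a 150W envelope is what puts the RX 480 squarely in the mid-range value segment AMD is targeting; both numbers will shift once final clocks and 8GB pricing are confirmed.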
AMD is rumored to be skipping the high-end market with Polaris architectures 10 & 11, likely aiming to fill that demand with Vega instead. Vega is on the roadmap for public delivery later in 2016.
The GTX 1080's epochal launch all but overshadowed its cut-down counterpart – that is, until the price was unveiled. NVidia's GTX 1070 is promised at an initial $450 price-point for the Founders Edition (explained here), or an MSRP of $380 for board partner models. The GTX 1070 replaces nVidia's GTX 970 in the vertical, but promises superior performance to previous high-end models like the 980 and 980 Ti; we'll validate those claims in our testing below, following an initial architecture overview.
The GeForce GTX 1070 ($450) uses a Pascal GP104-200 chip. The architecture is identical to the GTX 1080 and its GP104-400 GPU, but cuts down on SM presence (and core count) to create a mid-range version of the new 16nm FinFET architecture. This new node from TSMC nearly halves the feature size of Maxwell's 28nm planar process, and switches the company over to FinFET transistor architecture for reduced power leakage and improved overall performance-per-watt efficiency. The shift is symptomatic of an industry moving toward ever-smaller devices with greater concern for the power envelope, and has been reflected in nVidia's architectures since Fermi (the notoriously hot-running GTX 400 series) and AMD's since Fiji (sort of – Polaris claims to make a bigger push in this direction). On the CPU side, Intel has been driving this trend for several generations now, with its upcoming 10nm process promising to further extend mobile device endurance and transistor density.
Had investigators walked into our Thermal-Lab-And-Video-Set Conglomerate, they'd have been greeted with a horror show worthy of a police report: Two video cards fully dissected – one methodically, the other brutally – with parts strewn in escalating dismemberment across the anti-static mat.
Judging by some of the comments, you'd think we'd committed a crime by taking apart a new GTX 1080 – but that's the job. Frankly, it didn't really matter if the thing died in the process. We're here to make content and test products for points of failure and success, not to preserve them.