The GTX 980's placement in notebooks heralded the now-present era of desktop GPUs in laptops, but was still something of a trial run for the tech. nVidia and AMD have both introduced their Pascal and Polaris architectures in full, uncut versions to notebooks this generation, with performance generally within about 10% of an equivalent desktop build. Despite the desktop-level power, battery life should also improve as a result of an overall reduction in power consumption by GPU and CPU alike – and by almost every other component, for that matter, like DDR4, which requires lower voltage and draws less power than DDR3.
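
To put a rough number on the DDR4 point – a first-order estimate assuming dynamic power scales with the square of supply voltage, not a measured figure – the voltage drop from DDR3's 1.5V standard to DDR4's 1.2V alone is worth roughly a third of the DRAM's dynamic power:

```python
# Rough first-order estimate of the DRAM power saving mentioned above.
# Assumes dynamic power scales with the square of supply voltage and
# ignores frequency, I/O termination, and other real-world factors.

ddr3_v, ddr4_v = 1.5, 1.2  # standard DDR3 vs. DDR4 supply voltages
scaling = (ddr4_v / ddr3_v) ** 2
print(f"~{(1 - scaling) * 100:.0f}% lower dynamic power")  # prints ~36%
```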

Today, we're looking at the MSI GE62VR 6RF Apache Pro laptop with a GTX 1060 & i7-6700HQ, priced at $1600. The benchmarks follow our previous notebook GTX 1070 vs. GTX 1080 tests, but with proper depth and hands-on time. Note also that we already wrote about the GE62VR's bloatware problem.

In this review, we'll be testing the GE62VR's FPS on the GTX 1060, temperatures, noise levels, and build quality.

MSI and system integrator CyberPower are selling the new GT83VR Titan SLI notebook, which ships with K-SKU Intel CPUs and dual GTX 1070 or GTX 1080 GPUs. The move away from M-suffixed cards means these GPUs are effectively identical to their desktop counterparts, with the exception of the notebook GTX 1070's core increase and clock reduction.

That difference, just to quickly clear it away, results in 2048 CUDA cores on the notebook 1070 (vs. 1920 on the desktop) and a boost clock-rate of 1645MHz on the notebook (1683MHz on the desktop). Despite all the talk about the 1060, 1070, and 1080 notebook models, we haven't yet gotten into the SLI models for this generation.
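
Napkin math shows why that trade roughly washes out. Using the usual figure of two FMA ops per core per clock for theoretical FP32 throughput – a ceiling, not a predictor of real-world FPS – the two configurations land within a few percent of one another:

```python
# Theoretical FP32 throughput for the notebook vs. desktop GTX 1070,
# using the core counts and boost clocks above (2 FMA ops per core
# per clock). A ceiling figure only; real-world FPS will differ.

def tflops(cores, clock_mhz):
    return 2 * cores * clock_mhz / 1e6

notebook = tflops(2048, 1645)  # ~6.74 TFLOPS
desktop = tflops(1920, 1683)   # ~6.46 TFLOPS
print(f"Notebook: {notebook / desktop:.1%} of desktop")  # ~104%
```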

The GTX 1060 3GB ($200) card's existence is curious. The card was rumored to exist prior to the 1060 6GB's official announcement, and the rumor was quickly dismissed as mythological. Exactly one month later, nVidia did announce a 3GB GTX 1060 variant – but with one fewer SM, reducing the core count by 10%. That drops the GTX 1060 from 1280 CUDA cores to 1152 CUDA cores (at 128 cores per SM), alongside 8 fewer TMUs. Of course, there's also the memory reduction from 6GB to 3GB.

The rest of the specs, however, remain the same. The clock-rate has the same baseline 1708MHz boost target, the memory speed remains 8Gbps effective, and the GPU itself is still a declared GP106-400 chip (rev A1, for our sample). That makes this most of the way toward a GTX 1060 as initially announced, aside from the disabled SM and halved VRAM. Still, nVidia's marketing language declared a 5% performance loss against the 6GB card (despite a 10% reduction in cores), and so we decided to put those claims to the test.
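
For reference, the napkin math below walks through that core cut; that nVidia claims only a ~5% performance hit from a 10% reduction in theoretical throughput suggests games aren't purely shader-bound at these clocks:

```python
# The spec arithmetic behind the SM cut, using the figures above:
# disabling one of ten SMs (128 CUDA cores each) takes 1280 cores
# down to 1152, a 10% reduction at identical clocks.

sms, cores_per_sm = 10, 128
full = sms * cores_per_sm        # 1280 cores (GTX 1060 6GB)
cut = (sms - 1) * cores_per_sm   # 1152 cores (GTX 1060 3GB)
print(f"Core reduction: {1 - cut / full:.0%}")  # prints 10%
```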

In this benchmark, we'll be reviewing EVGA GTX 1060 3GB vs. GTX 1060 6GB performance in a clock-for-clock test, with 100% of the focus on FPS. The goal here is not to look at the potential for marginally changed thermals (which hinges more on the AIB cooler than anything) or potentially decreased power, but to look strictly at the FPS impact of the GTX 1060 3GB card's changes. In this regard, we're very much answering the “is a 1060 6GB worth it?” question, just in a less search-engine-friendly fashion. The GTX 1060s will be clocked the same, within normal GPU Boost 3.0 variance, and will differ only in SM & VRAM count.

For those curious, we previously took this same magnifying glass to the RX 480 8GB & 4GB cards, pitting the two against one another in a head-to-head. In that scenario, AMD also reduced the memory clock of the 4GB models, but the rest remained the same.

The review is forthcoming – within a few hours – but we decided to tear down EVGA's GTX 1080 FTW Hybrid ahead of publication. The card is more advanced in its PCB and cooling solution than what we saw in the Corsair Hydro GFX / MSI Sea Hawk X tear-down, primarily because EVGA is deploying a Gigabyte-like coldplate that conducts heat from the VRAM to the CLC coldplate. It's an interesting fusion of cooling solutions, and one which makes GPU temperatures look higher than seems reasonable on the surface – prompting the tear-down – but is actually cooling multiple devices.

Anyway, here's a video of the tear-down process – photos to follow.

No reference card has impressed us this generation, insofar as the enthusiast market is concerned. Primary complaints have consisted of thermal limitations or excessive heat generation, though the cards retain reasonable use cases with SIs and mITX form factor deployments. For our core audience, it's made more sense to recommend AIB partner models for superior cooling, pre-overclocks, and (normally) lower prices.

But that's not always the case – sometimes, as with today's review unit, the price climbs. This new child of Corsair and MSI carries on the Hydro GFX and Sea Hawk branding, respectively, and is posted at ~$750. The card is the product of a partnership between the two companies, with MSI providing access to the GP104-400 chip and a reference board (FE PCB), and Corsair providing an H55 CLC and SP120L radiator fan. The companies sell their cards separately, but are selling the same product: MSI calls it the “Sea Hawk GTX 1080” ($750), and Corsair sells it only on its webstore as the “Hydro GFX GTX 1080.” The combination is one we first looked at with the Sea Hawk 980 Ti vs. the EVGA 980 Ti Hybrid, and we'll be making the EVGA FTW Hybrid vs. Hydro GFX 1080 comparison in the next few days.

For now, we're reviewing the Corsair Hydro GFX GTX 1080 liquid-cooled GPU for thermal performance, endurance throttling, noise, power, FPS, and overclocking potential. We will primarily refer to the card as the Hydro GFX, as Corsair is the company responsible for providing the loaner review sample. Know that it is the same card as the Sea Hawk.

We've just finished testing this build, and the results are equal parts exciting and intriguing – but those will be published following this content. We're still crunching data and making charts for part 3.

In the meantime, the tear-down of our reader's loaner Titan X (Pascal) GPU has led to a relatively easy assembly with an EVGA Hybrid kit liquid cooler. The mounting points on the Titan XP are identical to those of a GTX 1080, and components can be used almost completely interchangeably between the two cards. The hole spacing on the Titan XP is the same as on the GTX 1080, which matches the 980 Ti and 1070, and is very similar to the GTX 1060 (which has a different base plate).

Here's the new video of the Titan X build, if you missed it:

With thanks to GamersNexus viewer Sam, we were able to procure a loaner Titan X (Pascal) graphics card whilst visiting London. We were there for nVidia's GTX 10 Series laptop unveil anyway and, since we weren't sampled the Titan X, this proved the best chance at getting hands-on.

The Titan X (Pascal) GP102-400 GPU runs warmer than the GTX 1080's GP104-400 chip, as we'll show in benchmarks in Part 3 of this series, but still shows promise as a fairly capable overclocker. We've already managed +175MHz offsets on the core with the stock cooler, but want to improve clock-rate stability over time and against thermals. The easiest way to do that – as we've found with the 1080 Hybrid, 1060 Hybrid, and 480 Hybrid – is to put the card under water cooling (or propylene glycol, anyway).

In this first part of our DIY Titan XP “Hybrid” build log, we'll tear the card down to its bones and look at the PCB, cooling solution, and potential problem points for the liquid cooling build.

Here's the video, though separate notes and photos are below:

Pascal has mobilized, officially launching in notebooks today. The GTX 1080, 1070, and 1060 – the full desktop GPUs – will be available in notebooks, similar to the GTX 980 non-M launch from last year. Those earlier 980 laptops were a bit of an experiment, from what nVidia's laptop team told us, and their success led to wider implementation of the line-up for Pascal.

We had an opportunity to perform preliminary benchmarks using some of our usual test suite while at the London unveil event, including frametime analysis (1% / 0.1% lows) with Shadow of Mordor. Testing was conducted using the exact same settings as we use in our own benchmarks, and we used some of our own software to validate that results were clean.
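
For readers unfamiliar with the metric, 1% and 0.1% lows characterize the worst frames rather than the average. Our full processing pipeline isn't detailed here, but a minimal sketch of one common approach – average the slowest 1% (or 0.1%) of frametimes, then convert back to FPS – would look like this:

```python
# Minimal sketch of one common way to compute 1% / 0.1% low FPS from
# per-frame frametimes (milliseconds). This is an illustrative
# assumption, not our exact processing pipeline.

def low_fps(frametimes_ms, fraction):
    """Average the slowest `fraction` of frames, return as FPS."""
    worst = sorted(frametimes_ms, reverse=True)
    count = max(1, int(len(worst) * fraction))
    avg_ms = sum(worst[:count]) / count
    return 1000.0 / avg_ms

# Real passes log thousands of frames; a short list just for shape:
frametimes = [16.7, 16.9, 15.8, 33.4, 16.5, 17.1, 50.2, 16.6]
print(f"1% low: {low_fps(frametimes, 0.01):.1f} FPS")
print(f"0.1% low: {low_fps(frametimes, 0.001):.1f} FPS")
```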

Before getting to preliminary GTX 1080 & GTX 1070 notebook FPS benchmarks on the Clevo P775 and MSI GT62, we'll run through laptop overclocking, specification differences in the GTX 1070, and 120Hz display updates. Note also that we've got at least three notebooks on the way for testing, and will be publishing reviews through the month. Our own initial benchmarks are further down.

The GTX 1060 Hybrid series has come to a close. This project encountered an unexpected speed bump, whereupon we inserted a copper shim (changing the stack to silicon > TIM > shim > TIM > coldplate) to bridge contact between the CLC and GPU. This obviously sacrifices some efficiency, as we're inserting two layers of ~6W/mK TIM between ~400W/mK copper, but it's still better than air cooling with a finned heatsink.
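
To put a rough number on that sacrifice – using assumed layer thicknesses and contact area, not measurements from this build – the conductive resistance of a flat layer is R = t / (k × A), and the added layers stack in series:

```python
# Rough series-resistance estimate for the shimmed stack. Thicknesses,
# contact area, and the 120W load are assumptions for illustration,
# not measurements from this build.

def layer_r(thickness_m, k_w_mk, area_m2):
    """Conductive thermal resistance of a flat layer: R = t / (k * A)."""
    return thickness_m / (k_w_mk * area_m2)

area = 200e-6                      # ~200 mm^2 contact patch (assumed)
tim = layer_r(50e-6, 6.0, area)    # one ~50um TIM layer at ~6 W/mK
shim = layer_r(1e-3, 400.0, area)  # ~1mm copper shim at ~400 W/mK

# Penalty vs. a direct die > TIM > coldplate mount: one extra TIM
# layer plus the shim itself.
added = tim + shim
print(f"Added resistance: {added:.3f} K/W")           # ~0.054 K/W
print(f"Extra delta-T at 120W: {120 * added:.1f} K")  # ~6.5 K
```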

Our previous Hybrid projects (see: 1080, RX 480) axed the baseplate, thereby losing some VRAM and VRM cooling potential. For this project, we filed down the edges of the baseplate's GPU cut-out to accommodate the protruding EVGA coldplate. This allowed us to keep the baseplate, granting better conduction to the VRAM and VRM. The blower fan is also still operating, but with the cover removed from the shroud (the “window”), we're losing some pressure and air before it reaches the VRM. After speaking to a few AIB partners, we determined that the cooling was still sufficient for our purposes. An open-air bench case fan was positioned to blast air into the “window” hole, keeping things a little cooler on average.

With no warning whatsoever, we received word tonight that nVidia's new version of the Titan X has been officially announced. The company likes to re-use names – see: four products named “Shield” – and has re-issued the “Titan X” badge for use on a new Pascal-powered GPU. The Titan X will be using GP102, a significantly denser chip than the GTX 1080's GP104-400 GPU.

GP102 is a 12B transistor chip with 11 TFLOPS of FP32 compute performance and 3584 CUDA cores clocked at 1.53GHz, and the card leverages 12GB of GDDR5X memory at 480GB/s memory bandwidth. We're assuming the Titan X's GDDR5X memory also operates at 10Gbps effective, like its GTX 1080 predecessor.

Here's a thrown-together specs table. We are doing some calculations here (a ? denotes a specification that we've extrapolated, one which is not confirmed). Unless nVidia is using an architecture more similar to the GP100 (detailed in great depth here), this should be fairly accurate.
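
The arithmetic behind those extrapolations is simple enough to show. Assuming the usual Pascal figure of two FP32 ops per core per clock, and the GTX 1080's 10Gbps effective memory speed, the announced numbers back out an 11 TFLOPS ceiling and a 384-bit bus:

```python
# Sanity-check of the announced and extrapolated specs, assuming the
# usual Pascal arithmetic: FP32 FLOPS = 2 ops * cores * clock, and
# bandwidth = effective memory speed * bus width / 8.

cores, boost_ghz = 3584, 1.53
tflops = 2 * cores * boost_ghz / 1000
print(f"{tflops:.1f} TFLOPS FP32")  # ~11.0, matching nVidia's figure

bandwidth_gbs, mem_gbps = 480, 10   # 10Gbps assumed, as on the GTX 1080
bus_bits = bandwidth_gbs * 8 / mem_gbps
print(f"{bus_bits:.0f}-bit memory bus")  # 384-bit, our extrapolated value
```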
