We've already looked extensively at the GTX 1060 3GB vs. GTX 1060 6GB buying options and covered the RX 480 4GB vs. 8GB question, but we haven't yet tested the 3GB & 4GB SKUs head-to-head. In this content, we're using the latest drivers to benchmark the GTX 1060 3GB against the RX 480 4GB specifically, determining which delivers the better framerate for the price.

Each of the lower-VRAM SKUs carries a few other tweaks in addition to its reduced memory capacity. The GTX 1060 3GB, for instance, also eliminates one of its SMs; that kills 128 CUDA cores and 8 TMUs, dragging the card down from 1280 cores / 80 TMUs to 1152 cores / 72 TMUs. AMD's RX 480 4GB, meanwhile, has a lower minimum memory specification to help manage cost: a minimum memory speed of ~1750MHz (~7Gbps effective), whereas the RX 480 8GB runs 2000MHz (8Gbps effective).
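For context on what those memory speeds mean in bandwidth terms, peak bandwidth is simply the effective per-pin rate multiplied by the bus width. Below is a minimal sketch of that math, assuming the publicly listed bus widths (192-bit for the GTX 1060, 256-bit for the RX 480), which aren't stated above:

```python
# Rough peak memory bandwidth: effective rate (Gbps per pin) * bus width (bits) / 8.
# Bus widths are assumed from public spec sheets, not from this article.
def peak_bandwidth_gbs(effective_gbps, bus_width_bits):
    return effective_gbps * bus_width_bits / 8

cards = {
    "GTX 1060 3GB (8Gbps, 192-bit)": (8.0, 192),
    "RX 480 4GB (7Gbps, 256-bit)": (7.0, 256),
    "RX 480 8GB (8Gbps, 256-bit)": (8.0, 256),
}

for name, (rate, bus) in cards.items():
    print(f"{name}: ~{peak_bandwidth_gbs(rate, bus):.0f} GB/s")
# -> ~192 GB/s, ~224 GB/s, and ~256 GB/s, respectively.
```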

Tuesday, upon its institution on the Gregorian calendar, was deemed “product release day” by our long dead-and-rotted ancestors. Today marks the official announcement of the nVidia GTX 1050 and GTX 1050 Ti cards on the GP107 GPU, though additional product announcements will go live on our site by 10AM EST.

The GTX 1050 and 1050 Ti video cards are based on the GP107 GPU with Pascal architecture, sticking to the same SM layout as on previous Pascal GPUs (exception: GP100). Because this is a news announcement, we won't have products in hand for at least another day – but we can fly through the hard specs today and then advise that you return this week for our reviews.

We've still got a few content pieces left over from our recent tour of LA-based hardware manufacturers. One of those pieces, filmed with no notice and sort of on a whim, is our tear-down of an EVGA GTX 1080 Classified video card. EVGA's Jacob Freeman had one available and was game to watch a live, no-preparation tear-down of the card on camera.

This is the most meticulously built GTX 1080 we have yet torn to the bones. The card has an intensely over-built VRM with high-quality inductors and power stages, using doublers to achieve its 14-phase power design (7x2). An additional three phases are set aside for memory, cooled in tandem with the core VRM, GPU, and VRAM by an ACX 3.0 cooler. The PCB and cooler meet through a set of screws, each anchored to an adhesive (preventing direct contact between screw and PCB – unnecessary, but a nice touch), with the faceplate and accessories mounted via Allen-keyed screws.

It's an exceptionally easy card to disassemble. The unit is rated to draw 245W through the board (30W more than the 215W draw of the GTX 1080 Hybrid), theoretically targeting high sustained overclocks with its master/slave power target boost. It's not news that Pascal devices seem to cap their maximum frequency at around 2050-2100MHz, but there are still merits to an over-built VRM: heat is spread across a larger area of the cooler, and less efficiency is lost to heat or low-quality phases. The Classified is also a prime target for modification using something like the EK Predator 280 or open loop cooling. Easy disassembly and high performance match well with liquid.

We had a clerical error in our original Gears of War 4 GPU benchmark, but that's been fully rectified with this content. The error was a mix of several variables: three different folks working on the benchmarks, and a game with roughly 40 graphics settings. We were also using our custom Python script (which works perfectly) to interpret PresentMon, a new tool for FPS capture, and that threw enough production changes into the mix that we had to unpublish the content and correct it.
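Our actual script isn't reproduced here, but as a minimal sketch of what "interpreting PresentMon" means: PresentMon logs per-frame present intervals to a CSV (the MsBetweenPresents column), and average FPS falls out of those frametimes. The file name below is just a placeholder.

```python
# Sketch only: average FPS from a PresentMon CSV capture.
import csv
import statistics

def average_fps(csv_path):
    # MsBetweenPresents is the per-frame present interval, in milliseconds.
    with open(csv_path, newline="") as f:
        frametimes_ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    return 1000.0 / statistics.mean(frametimes_ms)

# Placeholder file name for illustration:
# print(f"AVG FPS: {average_fps('gears4_capture.csv'):.1f}")
```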

All of our tests, though, were good. That's the good news. The error was in chart generation, where nVidia and AMD cards were put on the same charts using different settings, creating an unintentional misrepresentation of our data. And as a reminder, that data was valid and accurate – it just wasn't put in the right place. My apologies for that. Thankfully, we caught that early and have fixed everything.

I've been in communication with AMD and nVidia all morning, so everyone is clear on what's going on. Our 4K charts were completely accurate, but the others needed a rework. We've corrected the charts and have added several new, accurately presented tests to add value to the original benchmark. Some of that includes, for instance, new tests that properly compare Ultra performance on nVidia vs AMD, tests that look at the 3GB vs 6GB GTX 1060, and more. Gears of War 4 is among the titles distributed to both PC and Xbox, generally leveraging UWP as a link.

Gears of War 4 is a DirectX 12 title. To this end, the game requires Windows 10 to play – Anniversary Edition, to be specific about what Microsoft forces users to install – and grants lower-level access to the GPU via the engine. Asynchronous compute is now supported in Gears of War 4, useful for both nVidia and AMD, and dozens of graphics settings make for a brilliantly complex assortment of options for PC enthusiasts. In this regard, The Coalition has done well to deliver a PC title of high flexibility, going the next step further to meticulously detail the options with CPU, GPU, and memory intensity indicators. Configure the game in an ambitious way, and it'll warn the user of any specific setting that may cause issues on the detected hardware.

That's incredible, honestly. This takes what GTA V did by adding a VRAM slider, then furthers it several steps. We cannot commend The Coalition enough for not only supporting PC players, but for doing so in a way which is so explicitly built for fine-tuning and maximizing hardware on the market.

In this benchmark of Gears of War 4, we'll test the FPS of various GPUs at Ultra and High settings (4K, 1440p, 1080p), furthering our tests by splashing in an FPS scaling chart across Low, Medium, High, and Ultra graphics. The benchmarks include the GTX 1080, 1070, 1060, RX 480, 470, and 460, and then further include last gen's GTX 980 Ti, 970, 960, and 950 with AMD's R9 Fury X, R9 390X, and R9 380X.

Buildzoid returns this week to analyze the PCB and VRM of Gigabyte's GTX 1080 Xtreme Water Force GPU, providing new insight into the card's overclocking capabilities. We showed a maximum overclock of 2151.5MHz on the Gigabyte GTX 1080 Xtreme Water Force, but the card's stable OC landed at just 2100.5MHz. Compared to the FTW Hybrid (2151.5MHz overclock sustained) and MSI Sea Hawk 1080 (2050MHz overclock sustained), the Gigabyte Xtreme Water Force's overkill VRM & cooling land it between the two competitors.

But we talk about all of that in the review; today, we're focused on the PCB and VRM exclusively.

The card uses a 12-phase core voltage VRM with a 2-phase memory voltage VRM, relying on Fairchild Semiconductor and uPI Micro for most of the other components. Learn more here:

Implementation of liquid coolers on GPUs makes far more sense than on the standard CPU. We've shown in testing that actual performance can improve as a result of a better cooling solution on a GPU, particularly when replacing weak blower fan or reference cooler configurations. With nVidia cards, Boost 3.0 dictates clock-rate based upon a few parameters, one of which (temperature) is remedied with more efficient GPU cooling solutions. On the AMD side of things, our RX 480 Hybrid mod garnered some additional overclocking headroom (~50MHz), but primarily reduced noise output.

Clock-rate also stabilizes with better cooling solutions (and that includes well-designed air cooling), which helps sustain more consistent frametimes and tighten frame latency. We call these 1% and 0.1% lows, though that presentation of the data is still looking at frametimes at the 99th and 99.9th percentile.
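As a rough sketch of how those figures fall out of a frametime capture (our exact pipeline may differ in details), the 1% and 0.1% lows are the 99th and 99.9th percentile frametimes re-expressed as FPS:

```python
# Sketch: 1% and 0.1% low FPS from a list of frametimes in milliseconds.
def low_fps(frametimes_ms):
    ordered = sorted(frametimes_ms)  # ascending, so the slowest frames sit at the end
    idx_99 = min(len(ordered) - 1, int(len(ordered) * 0.99))
    idx_999 = min(len(ordered) - 1, int(len(ordered) * 0.999))
    # Dividing 1000 by a frametime converts milliseconds back into FPS.
    return 1000.0 / ordered[idx_99], 1000.0 / ordered[idx_999]

# Example: a capture averaging ~16.7ms per frame with a handful of 40ms spikes
# reports ~60 FPS average, but a 0.1% low of ~25 FPS.
```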

The EVGA GTX 1080 Hybrid has thus far had the most interesting cooling solution we've torn down on an AIO-cooled GPU this generation, but Gigabyte's Xtreme Water Force card threatens to take that title. In this review, we'll benchmark the Gigabyte GTX 1080 Xtreme Water Force card vs. the EVGA 1080 FTW Hybrid and MSI/Corsair 1080 Sea Hawk. Testing is focused primarily on thermals and noise, with FPS and overclocking thrown into the mix.

A quick thanks to viewer and reader Sean for loaning us this card, since Gigabyte doesn't respond to our sample requests.

As we board planes for our impending trip to Southern California (office tours upcoming), we've just finalized the Gigabyte GTX 1080 Xtreme Water Force tear-down coverage. The Gigabyte GTX 1080 Xtreme Water Force makes use of a similar cooling philosophy to the EVGA GTX 1080 FTW Hybrid, which we recently tore down and reviewed vs. the Corsair Hydro GFX.

Gigabyte's using a closed-loop liquid cooler to deal with the heat generation on the GP104-400 GPU, but isn't taking the “hybrid” approach that its competitors have taken. There's no VRM/VRAM blower fan for this unit; instead, the power and memory components are cooled by additional copper and aluminum heatsinks, which are bridged by a heatpipe. The copper plate (mounted atop the VRAM) transfers its heat to the coldplate of what we believe to be a Cooler Master CLC, which then sinks everything for dissipation by the 120mm radiator.

The GTX 1060 3GB ($200) card's existence is curious. The card was initially rumored to exist prior to the 1060 6GB's official announcement, and was quickly debunked as mythological. Exactly one month later, nVidia did announce a 3GB GTX 1060 variant – but with one fewer SM, reducing the core count by 10%. That drops the GTX 1060 from 1280 CUDA cores to 1152 CUDA cores (128 cores per SM), alongside 8 fewer TMUs. Of course, there's also the memory reduction from 6GB to 3GB.

The rest of the specs, however, remain the same. The clock-rate has the same baseline 1708MHz boost target, the memory speed remains 8Gbps effective, and the GPU itself is still a declared GP106-400 chip (rev A1, for our sample). That makes this most of the way toward a GTX 1060 as initially announced, aside from the disabled SM and halved VRAM. Still, nVidia's marketing language declared a 5% performance loss from the 6GB card (despite a 10% reduction in cores), and so we decided to put those claims to the test.
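The paper math behind that claim is simple enough to sanity-check (this is our arithmetic, not nVidia's):

```python
# One disabled SM costs 128 CUDA cores and 8 TMUs.
full_cores, cut_cores = 1280, 1152
reduction = 1 - cut_cores / full_cores
print(f"Core reduction: {reduction:.0%}")  # -> 10%
# nVidia claims roughly 5% lost performance despite that 10% core deficit,
# which is exactly what the clock-for-clock tests below are meant to check.
```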

In this benchmark, we'll be reviewing EVGA GTX 1060 3GB vs. GTX 1060 6GB performance in a clock-for-clock test, with 100% of the focus on FPS. The goal here is not to look at the potential for marginally changed thermals (which hinges more on the AIB cooler than anything) or potentially decreased power, but to instead look strictly at the impact on FPS from the GTX 1060 3GB card's changes. In this regard, we're very much answering the “is a 1060 6GB worth it?” question, just in a less SEF fashion. The GTX 1060s will be clocked the same, within normal GPU Boost 3.0 variance, and will be differentiated only by SM & VRAM count.

For those curious, we previously took this magnifying glass to the RX 480 8GB & 4GB cards, where we pitted the two against one another in a versus. In that scenario, AMD also reduced the memory clock of the 4GB models, but the rest remained the same.

Upon returning home from PAX, we quickly noticed that the pile of boxes included an MSI GTX 1080 Sea Hawk EK graphics card, which uses a pre-applied GPU water block for open loop cooling. This approach is more traditional and in-depth than what we've shown with the AIO / CLC solutions for GPUs, like what the EVGA GTX 1080 FTW Hybrid uses (review here).

The Sea Hawk EK ($783) partners with, obviously, EK WB for the liquid cooling solution, and uses a full coverage block atop a custom MSI PCB for cooling. The biggest difference in such a setup is coverage of the VRAM, MOSFETs, capacitor bank, and PWM. The acrylic is channeled out for the inductors, so their heat is not directly conducted to the water block; this would increase liquid temperature unnecessarily, anyway.

We won't be fully reviewing this card. It's just not within our time budget right now, and we'd have to build up a wet bench for testing open loop components; that said, we'll soon be testing other EK parts – the Predator, mostly – so keep an eye out for that. The Sea Hawk EK was sent by MSI before our review schedule was confirmed, so we decided to tear it apart while we've got it and see what's underneath.

As we reported on August 4, the class action lawsuit against nVidia has been settled in court. The final payout amount is pending approval (full resolution by December, in theory), but owners of the GTX 970 may now submit claims to retrieve a $30 payment per GTX 970 purchased, should those owners feel entitled to the funds.

Claims can be filed on the GTX 970 Settlement website. The claim filing deadline is November 30, 2016, with the final approval hearing scheduled for December 7, 2016. Claims must be filed before the deadline and will not be paid out until after the final approval hearing goes through.

