The Story So Far: A Failure to Communicate
My excitement gave way to genuine confusion and concern when I found that I couldn't overclock the GTX 980 Extreme as well as my reference model. The overclock had a low ceiling due to severe voltage limitations.
Breaking into the BIOS with Kepler BIOS Tweaker reveals voltage limitations that are a bit frightening, including a locked 1.212v GPU vCore maximum overvoltage. The reference GTX 980 is capable of 1.256v; that extra headroom is enough to make a world of difference in high overclocks and stability. Note that ASUS' Strix and EVGA's OC units were also originally capped at 1.212v, but neither pushes the extreme OC narrative (or extreme price) that Zotac has advertised.
So we contacted Zotac. This was recapped in the stop-gap article I wrote about the issue. It should first be stated that the company's US PR and marketing team did everything in its power to address my admonishment and my insistence that the video card should be better; there's only so much the US team can do before relying upon the overseas development team to deliver solutions, though. And even then, there's only so much that team can do before realizing that preliminary engineering doomed the product from the start.
I told them that the BIOS needed an update. Users should be able to at least match the voltage of reference, if not exceed it, I'd insisted. I further argued that the card's total TDP allowance (+10% over base TDP) should be increased to permit additional power as the voltage is raised. This, of course, assumed that the company fixed the voltage cap, because users would then rapidly run into the TDP cap instead. The Zotac card is built to handle the extra power and should be able to cool itself, but there were deeper electrical issues at play; I hadn't realized this yet, though.
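To illustrate why the TDP ceiling matters once voltage moves: dynamic power scales roughly with V² at a given clock, so even a modest overvolt eats into the +10% allowance. A back-of-envelope sketch (the V² scaling is a general approximation, not a Zotac-published figure):

```python
# Back-of-envelope estimate: dynamic power scales roughly with f * V^2.
# Holding frequency constant, raising vCore from Zotac's 1.212v lock to
# the reference-class 1.256v raises power draw by the square of the ratio.
locked_v = 1.212     # Zotac's vBIOS vCore ceiling (volts)
reference_v = 1.256  # reference GTX 980 maximum overvolt (volts)

power_scale = (reference_v / locked_v) ** 2
extra_power_pct = (power_scale - 1) * 100
print(f"~{extra_power_pct:.1f}% extra power from the voltage bump alone")
# ~7.4% extra power from the voltage bump alone
```

In other words, matching reference voltage by itself would consume most of the +10% power budget before any clock increase is applied, which is why the two caps have to be raised together.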
We delayed the review by two weeks to allow Zotac the proper chance to deliver updated BIOS. Several follow-up calls and discussions were had, and no matter how many times I reiterated “I really want this card to work – I do. But we need better BIOS,” we never got that.
What I received was an updated version of their already-buggy “FireStorm” software.
The software is allegedly used for overclocking. Anyone who's got a few minutes of GPU overclocking experience will tell you that Afterburner or Precision are both acceptable (even good) tools for overclocking. For the most part, the hardware is software-agnostic. Finer control can be granted in some instances by software, but at the end of the day, we're primarily tweaking voltage and clocks – something that free software does very well.
Up until this point, no software at all would allow an overvolt matching reference, let alone exceeding it. Similarly, FireStorm had failed on several occasions to save profiles (making OC testing a tremendous inconvenience), had misreported clocks, failed to report wattage, failed to control fan speed, and even failed so catastrophically as to require a hard reboot. The tool would regularly report successful OCs at insane clocks – like 2GHz (over the ~1393MHz boost CLK base) – even though GPU-z reported stock output.
I couldn't trust the tool.
Still, they sent me an updated version to resolve this voltage issue, so I humored everyone involved.
I immediately noticed something: The updated version of FireStorm allowed me to overvolt the GPU to 1.26v. Jump back a few paragraphs, and you might remember me stating that it was initially locked to 1.212v (1212mV). A software update alone should not unlock extra voltage headroom; you'd think that if the software were the issue, Precision or Afterburner (we tried both) would have bypassed any FireStorm bugs causing the limitation. After some questioning and thorn-like perseverance, it was determined that OC+ interacts with FireStorm in a way that resembles DRM (like the iPod requiring iTunes). Striking out against all that overclockers – hackers by nature – stand for, Zotac's components will not overvolt beyond 1.212v when using other software. That'd be fine if FireStorm worked.
And I gave a preview above as to why, but there's more. FireStorm not only misreports clocks with great unpredictability, it regularly lies about voltage and doesn't even apply the mV overvolt set by the user. See the screenshot below:
We don't even have voltage readings part of the time, and when we do have them, overvolts do nothing. Literally nothing – they move a bar on the screen. This is all shown in my video on page 1. Sometimes FireStorm actually does report that voltage increases, but it's impossible to validate that through third-party solutions, according to Zotac. We just have to trust FireStorm.
Fact-checking with GPU-z only further underlines all of these concerns. In overclocking, as you'll find out below, GPU-z never reported higher than 1.20v even when the FireStorm software exceeded this number (remember: This is the only software we can use to meet 1.26v). We even had a special internal-only version of the tool that allowed up to 1.6v, but that still didn't work. Zotac told us in an email that GPU-z could not be used to accurately measure the voltage output and that we'd have to rely on FireStorm. This lack of GPU-z support is because GPU-z reads from vBIOS, which is locked to 1.212v.
I'd be OK with this revelation if FireStorm lent any confidence in its reporting.
The last two weeks have had enough back-and-forth that my trust in the software development team is wearing thin, but let's assume Zotac's claim is true: How, as a third-party reviewer, am I supposed to gather reliable, accurate metrics without using a third-party utility to check Zotac's work? I wouldn't be so concerned with this if FireStorm actually applied the voltage I asked it to. Or if FireStorm could apply a new voltage after being reset without requiring a software restart. Or if FireStorm would show me voltage at all without requiring a system reboot half the time.
But that's not how it is. FireStorm does not work. It's broken. If Zotac hadn't constrained their hardware with FireStorm to force usage, I could work around it – but alas, we've got to work with what we have given the DRM-like nature of the product.
That still doesn't answer a key question: Why is the voltage limited to 1.212v in vBIOS? Why isn't it at least at the reference 1.256v?
We were told by Zotac on several occasions that nVidia imposes a voltage limit on board partners that restricts the 980 Extreme to the voltages we experienced. When I pointed to competing products – Gigabyte's cheaper G1 included – and the reference device, but still received the same answer, I decided to call nVidia. After some fact-checking of my own between numerous sources, nVidia, and a personal tear-down of the card, it rapidly became clear that Zotac is using a non-reference VRM and PWM solution. Visual inspection gives this away, though the point is affirmed by all of the voltage testing we performed; marketing gimmickry and OC+ made discovery of the deeper issues difficult.
Based strictly on my own tear-down and voltage testing of the card, it appears that the VRM is weaker than the reference solution. It is purely our speculation that Zotac cut corners either to rush the device to market (ahead of spec) or to lower production costs, potentially complicated by the custom PCB built for the Extreme (which sees use in other models, too).
When pressed on the low voltage, Zotac's PM team told us that they were trying to limit RMAs (warranty fulfillment) by locking voltage; then they told us that their software would unlock the voltage with OC Plus, which immediately invalidated the previous statement. Think about that cognitive dissonance: We're limiting warranty claims by restricting voltage – unless you use our software, which you're effectively forced (and encouraged) to use, in which case you can overvolt higher.
Smells like a last-minute defense against mounting pressure.
The PMs continued to blow smoke, stating that their video card holds some of the highest overclocks in the region. When pressed for proof – settings that I could replicate – I was given a cropped screenshot showing only the voltage and memory clock, nothing else. No core clock, no % TDP, no proof that this was even applied (one can screenshot any setting before application) or that a GTX 980 Extreme was used.
I had difficulty determining what was going on behind the scenes: Either Zotac has grossly incompetent engineers and product managers, was lying to me, is deeply embarrassed, jumped the gun and is now paying for it – or some mix of all of these.
I'm not sure I'd want to buy a video card from a company that acts like this when faced with criticism from a reviewer. Let me be clear: When we find an issue with a product we're testing, we always raise the concern with the manufacturer first to determine if it can be resolved. This should have been resolvable with a BIOS update, but after 1-2 weeks of gaslighting, it became clear that Zotac either lacked the interest or the ability to deliver one – or that there are deep electrical issues with the 980 Extreme.
The sad thing is that we haven't even gotten to the other concerns yet, like the tremendous size of the GTX 980 Extreme, which effectively consumes 2.5 expansion slots internally. This makes SLI a bit tight and eliminates other expansion options, depending on how close your PCI-e slots are on the board.
It certainly doesn't overclock, though, and that's really all we need to know. Let's dig into exactly why this thing is worse than reference on the next page, where benchmarking is performed.