AMD launched its RX 580 2048 silently in China a few months ago, damaging its brand credibility by rebranding the RX 570 as an RX 580. The point of having two distinct names is that they represent different products. The RX 580 2048 has 2048 FPUs (or streaming processors), which is exactly what the RX 570 has. The RX 580 2048 is also clocked a few MHz higher, a difference fully attainable with an overclocked RX 570. Working with GamersNexus contacts in Taiwan, who in turn worked with contacts in China, we managed to obtain this China-only product so we could take a closer look at why, exactly, AMD thinks an RX 570 Ti deserves the name “RX 580.”

Taking an existing product with a relatively good reputation and rebuilding it as a worse product isn’t new. Don’t get us wrong: The RX 570, which is what the RX 580 2048 is, is a reasonably good card, especially at its new prices of roughly $150 (Newegg) to $180 elsewhere. That said, an RX 580 2048 is, by definition, not an RX 580. That’s lying. It is an RX 570, or maybe an RX 575, if AMD thinks a 40MHz clock difference deserves a new SKU. AMD is pulling the same deceitful trick that NVIDIA pulled with its GT 1030 DDR4 card. It’s misleading and predatory toward consumers who may not understand the significance of the suffix “2048.” Shoppers looking for an RX 580 will still find one – except it isn’t one, and branding the RX 580 2048 as an RX 580 is disgraceful.
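To put the 40MHz in perspective: peak FP32 throughput scales linearly with clock at a fixed shader count, so the theoretical gap between the two cards can be estimated directly. A back-of-the-envelope sketch – the ~1244MHz figure is the reference RX 570 boost clock and is our assumption, not something stated in this article:

```python
# Peak FP32 throughput = shaders x 2 ops/clock (FMA) x clock speed.
SHADERS = 2048  # identical on the RX 570 and "RX 580 2048"

def peak_tflops(shaders: int, clock_mhz: float) -> float:
    """Theoretical single-precision TFLOPS for a GCN GPU."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

rx570 = peak_tflops(SHADERS, 1244)       # reference RX 570 boost (assumed)
rx580_2048 = peak_tflops(SHADERS, 1284)  # +40MHz, per the difference above

print(f"RX 570:      {rx570:.2f} TFLOPS")
print(f"RX 580 2048: {rx580_2048:.2f} TFLOPS "
      f"({100 * (rx580_2048 / rx570 - 1):.1f}% higher)")
```

A ~3% theoretical uplift, well within reach of a mild RX 570 overclock – which is the crux of the naming complaint.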

We have a separate video scheduled to hit our channel with a tear-down of the card, in case you’re curious about build quality. Today, we’re using the DATALAND RX 580 2048 as our vessel for testing AMD’s new GPU. Keep in mind that, for all our scorn toward the GPU, DATALAND is somewhat unfortunately the host. DATALAND didn’t make the GPU – they just put it on the PCB and under the cooler (which is actually not bad). DATALAND (迪兰) also appears to work alongside TUL, the parent company of PowerColor.

We paid about $180 USD for this card, which puts it around where some RX 570s sell for (though others are available for ~$150). Keep in mind that pricing in China will be a bit higher than in the US, on average.

Finding the “best" workstation GPU isn't as straightforward as finding the best case, best gaming CPU, or best gaming GPU. While games typically scale reliably from one GPU to the next, applications can deliver wildly varying performance. Those gains and losses could be chalked up to architecture, drivers, and also whether or not we're dealing with a true workstation GPU versus a gaming GPU trying to fill in for workstation purposes.

In this content, we're going to be taking a look at current workstation GPU performance across a range of tests to figure out if there is such a thing as a champion among them all. Or, at the very least, we'll figure out how AMD differs from NVIDIA, and how the gaming cards differ from their workstation counterparts. Part of this will look at Quadro vs. RTX or GTX cards, for instance, and WX vs. RX cards for workstation applications. We have GPU benchmarks for video editing (Adobe Premiere), 3D modeling and rendering (Blender, V-Ray, 3ds Max, Maya), AutoCAD, SolidWorks, Redshift, Octane Bench, and more.

Though NVIDIA's Quadro RTX lineup has been available for a few months, review samples have been slow to escape the grasp of NVIDIA, and if we had to guess why, it's likely because little software is available that can take advantage of the new features right now. That excludes deep-learning tests, which can benefit from the Tensor cores; for optimizations derived from the RT core, we're still waiting. It seems likely that Chaos Group's V-Ray will be one of the first renderers to market with support for NVIDIA's RTX, though Redshift, Octane, Arnold, Renderman, and many others have planned support.

The great thing for those planning to go with a gaming GPU for workstation use is that where rendering is concerned, the performance between gaming and workstation cards is going to be largely equivalent. Where performance can improve on workstation cards is with viewport performance optimizations; ultimately, the smoother the viewport, the less tedious it is to manipulate a scene.

Across all of the results ahead, you'll see that there are many angles from which to view workstation GPUs, and that there isn't really such a thing as a one-size-fits-all choice – not like there is on the gaming side. There is such a thing as an ultimate choice, though, so if you're not afraid of spending substantially more than on the gaming equivalents for the best performance, there are models vying for your attention.

The RTX 2080 Ti failures aren’t as widespread as they might have seemed from initial Reddit threads, but they are absolutely real. When discussing internally whether we thought the issue of artifacting and dying RTX cards had been blown out of proportion by the internet, we had two frames of mind: On one side, the level of attention did seem disproportionate to the size of the issue, particularly as RMA rates are within the norm. Partners are still often under 1% and retailers are under 3.5%, which is standard. The other frame of mind is that, actually, nothing was blown out of proportion for people who spent $1250 and received a brick in return. For those affected buyers, the artifacting is absolutely a real issue, and it deserves real attention.

This content marks the closing of a storyline for us. We published previous videos detailing a few of the failures on viewers’ cards (loaned to GN by their owners), including an unrelated 1350MHz lock and BSOD issue. We also tested cards in our livestream to show what the artifacting looks like, seen here. Today, we’re mostly looking at thermals, firmware, the OS, and downclocking impact, and concluding what the problem isn’t (rather than what it definitively is).

With over a dozen cards mailed in to us, we had a lot to sort through over the past week. This issue certainly exists in a very real way for those who spent $1200+ on an unusable video card, but it isn’t affecting everyone. It’s far from “widespread,” fortunately, and our present understanding is that RMA rates remain within reason for most of the industry. That said, NVIDIA’s response times to some RMA requests have been slow, according to our viewers, and replacements can take upwards of a month given supply constraints in some regions. That’s a problem.

This content stars our viewers and readers. We charted the most popular video cards over the launch period for NVIDIA’s RTX devices, as we were curious whether GTX or RTX gained more sales in this time. We’ve also got some AMD data toward the end, but the focus here is on the shifting momentum between Pascal and Turing architectures and what consumers want.

We’re looking exclusively at what our viewers and readers have purchased over the two-month launch window since RTX was announced. This samples several hundred purchases, but is by no means a representative sample of the whole market. Keep in mind that we have a lot of sampling biases here, the primary one being that it’s our audience – these are people who are more enthusiast-leaning, likely buy higher-end, and probably follow at least some of our suggestions. You can’t extrapolate this data market-wide, but it is an interesting cross-section for our audience.

Although the year is winding down, hardware announcements are still heavy through the midpoint of November: NVIDIA pushed a major driver update and has done well to address BSOD issues, the company has added new suppliers to its memory list (a good thing), and RTX should start getting support once Windows updates roll out. On the flip-side, AMD is pushing 7nm CPU and GPU discussion as high-end server parts hit the market.

Show notes below the embedded video.

Hardware news coverage has largely followed the RTX 2080 Ti story over the past week, a story of cards dying in unknown quantities. We have been investigating the issue further and have a few leads on what's going on, but are waiting for some of the dead cards to arrive at our office before proceeding. We also learned about Pascal stock depletion, a curious topic given the slow uptake on RTX.

Further news items include industry discussion on Intel's outsourcing to TSMC, its hiring of former AMD graphics staff, and its dealings with 14nm shortages. Only one rumor is present this week, and it concerns the nearly confirmed RX 590.

EVGA’s RTX 2070 XC Ultra gave us an opportunity to compare the differences between NVIDIA’s varied RTX 2070 SKUs, including a low-end TU106-400 and a higher-end TU106-400A. The difference between these, we’ve learned, is one of pre-selection (binning) for the ability to attain higher clocks. The XC Ultra runs significantly higher under Boost behavior than the 2070 Black does, which means that there’s now more to consider in the $70 price gap between the cards than just the cooler. This appears to be one of the tools available to board partners so that they can reach the $500 MSRP floor, but there is a performance cost as a result. With Pascal, the performance cost effectively boiled down to one predicated on thermal and power headroom, but not necessarily chip quality. Turing is different, and chip quality is now a potential limiter.

In this review of the EVGA RTX 2070 XC Ultra, we’ll also be discussing performance variability between the two 2070 GPU SKUs. These theories should extrapolate out to other NVIDIA cards with these sub-GPU options. Note that we are just going to focus on the 2070s today. If you want to see how we compare the 2070’s value versus Vega or Pascal, check our 2070 review and Vega 56 power mod content pieces.

The real discussion is going to be in overclocking and thermals, as gaming performance typically isn’t too varied intra-GPU. That said, the GPU changes between these two (technically), so that’ll make for an interesting data point.

Intel broke its silence this week in response to media reports that its 10nm process had "died," denying the claims outright and reaffirming a target delivery of 2019. This follows reports from SemiAccurate – a site that has previously predicted 10nm production issues accurately – that development of the current 10nm process had been discontinued. We've also seen plenty of AMD news items this week, including a slumped earnings report, Vega 20 rumors, and RX 590 rumors.

The show notes are below the video, as always, for those favoring reading.

We’re resurrecting our AMD RX Vega 56 powerplay tables mod to challenge the RTX 2070, a card that competes in an entirely different price class. It’s a lightweight versus heavyweight boxing match, except the lightweight has a gun.

For our Vega 56 card, priced between $370 and $400 depending on sales, we will be shoving an extra 200W+ of power into the core to attempt to match the RTX 2070’s stock performance. We strongly praised Vega 56 at launch for its easily modded nature, but the card has faced fierce competition from the 1070 Ti and 1070. It was also constantly out of stock or massively overpriced throughout the mining boom, which acted as a death knell for Vega during those months. With mining now dying down and Vega becoming available to normal people again, pricing is competitive and compelling, and NVIDIA’s own recent fumbles have created an opening in the market.

We will be working with a PowerColor RX Vega 56 Red Dragon card, a 242% power target, and matching it versus an EVGA RTX 2070 Black. The price difference is about $370-$400 vs. $500-$550, depending on where you buy your parts. We are using registry entries to trick the Vega 56 card into a power limit that exceeds the stock maximum of +50%, allowing us to go to +242%. This was done with the help of Buildzoid last year.
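For reference, soft PowerPlay table mods of this kind work by writing a binary blob that AMD's driver reads at startup. A minimal sketch of the mechanism is below; the `PP_PhmSoftPowerPlayTable` value name and key layout are per common community tooling, not GN's exact procedure, the adapter subkey index varies per system, and the table bytes themselves (an edited dump with raised power-limit fields) are deliberately elided:

```python
# Sketch: applying a soft PowerPlay table on Windows. The value name and key
# layout are assumptions based on common community tooling; the table payload
# is elided -- it comes from a dump edited with a PowerPlay table editor.
import sys

DISPLAY_CLASS_GUID = "{4d36e968-e325-11ce-bfc1-08002be10318}"  # display adapter class

def adapter_key_path(index: int) -> str:
    """Registry subkey for the display adapter at a given index (e.g. 0000)."""
    return rf"SYSTEM\CurrentControlSet\Control\Class\{DISPLAY_CLASS_GUID}\{index:04d}"

def apply_powerplay_table(index: int, table_bytes: bytes) -> None:
    """Write the modified table; the driver picks it up on the next restart."""
    import winreg  # Windows-only stdlib module
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, adapter_key_path(index),
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "PP_PhmSoftPowerPlayTable", 0,
                          winreg.REG_BINARY, table_bytes)

if __name__ == "__main__" and sys.platform == "win32":
    # apply_powerplay_table(0, table_bytes) -- table_bytes not included here.
    pass
```

Because the driver only consults this value at load time, a reboot (or driver restart) is needed before the new limit takes effect.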

One final note: We must warn that we aren’t sure of the long-term impact of running Vega 56 with this much power going through it. If you want to do this yourself, be advised that long-term damage is a possibility for which we cannot account.

After the post-apocalyptic hellscape that was the RTX 2080 launch, NVIDIA is following it up with lessons learned for the RTX 2070 launch. By and large, technical media took issue with the 2080’s price hike, which came without a proper introduction to its namesake feature – that’d be “RTX” – which remains unused on the 2070. This time, however, the RTX 2070 launches at a much more tenable price of $500 to $600, putting it at rough price parity with lingering GTX 1080 stock. It becomes easier to overlook missing features (provided the buyer isn’t purchasing for those features) when price and performance parity are achieved with existing products and rendering techniques. That parity is what the RTX 2070 has going for it most.

Our EVGA RTX 2070 Black review will focus on gaming benchmarks vs. the GTX 1070, GTX 970, Vega 64, and other cards, as well as in-depth thermal testing and noise testing. We will not be recapping architecture in this content; instead, we recommend you check out our Turing architecture deep-dive from the RTX 2080 launch.


