Hardware Guides

Finding the “best” workstation GPU isn't as straightforward as finding the best case, best gaming CPU, or best gaming GPU. While games typically scale reliably from one GPU to the next, applications can deliver wildly varying performance. Those gains and losses could be chalked up to architecture, drivers, or whether we're dealing with a true workstation GPU versus a gaming GPU filling in for workstation purposes.

In this content, we're going to take a look at current workstation GPU performance across a range of tests to figure out whether there is such a thing as a champion among them all. Or, at the very least, we'll figure out how AMD differs from NVIDIA, and how the gaming cards differ from their workstation counterparts. Part of this will look at Quadro vs. RTX or GTX cards, for instance, and WX vs. RX cards for workstation applications. We have GPU benchmarks for video editing (Adobe Premiere), 3D modeling and rendering (Blender, V-Ray, 3ds Max, Maya), AutoCAD, SolidWorks, Redshift, Octane Bench, and more.

Though NVIDIA's Quadro RTX lineup has been available for a few months, review samples have been slow to escape the grasp of NVIDIA, and if we had to guess why, it's likely because few software solutions can take advantage of the new features right now. The exception is deep-learning work, which can already benefit from the Tensor cores; for optimizations built on the RT cores, we're still waiting. It seems likely that Chaos Group's V-Ray is going to be one of the first plugins on the market with support for NVIDIA's RTX, though Redshift, Octane, Arnold, Renderman, and many others have planned support.

The great thing for those planning to go with a gaming GPU for workstation use is that, where rendering is concerned, performance between gaming and workstation cards is going to be largely equivalent. Where workstation cards can pull ahead is in viewport optimizations; ultimately, the smoother the viewport, the less tedious it is to manipulate a scene.

Across all of the results ahead, you'll see that there are many angles from which to view workstation GPUs, and that there isn't really such a thing as a one-size-fits-all pick, not like there is on the gaming side. There is such a thing as an ultimate choice, though, so if you're not afraid of spending substantially above the gaming equivalents for the best performance, there are models vying for your attention.

As we get into the holiday spirit here at GN, it’s time for our year-end round-ups and best of series—probably some of our favorite content. These guides provide a snapshot of what the year had to offer in certain spaces, like SSDs, for instance. You can check our most recent guides for the Best Cases of 2018 and Best CPUs of 2018.

These guides will also help users navigate the overwhelming amount of Black Friday and Cyber Monday marketing ahead of us all. SSD prices have been especially good lately, and the holidays should certainly net opportunities for even better deals.

That said, buying something just because it’s cheap isn’t ever a good idea, really; better to know what’s best first, then buy cheap—or cheaper than usual, anyway. This guide will take the legwork out of distinguishing what the year’s best SSDs are based on use case and price. Today, we're looking at the best SSDs for gaming PCs, workstations, budget PC builds, and for cheap, high-capacity storage. 1TB SSDs are more affordable than ever now, and we'll explore some of those listings.

As we continue our awards shows for end of year (see also: Best Cases of 2018), we’re now recapping some of the best and worst CPU launches of the year. The categories include best overall value, most well-rounded, best hobbyist production, best budget gaming, most fun to overclock, and biggest disappointment. We’ll be walking through a year of testing data as we recap the most memorable products leading into Black Friday and holiday sales. As always, links to the products are provided below, alongside our article for a written recap. The video is embedded for the more visual audience.

We’ll be mailing out GN Award Crystals to the companies behind the year’s most important products. The award crystal is a 3D laser-engraved GN tear-down logo with extreme attention to detail and, although the products have to earn the award, you can buy one for yourself at store.gamersnexus.net.

As a reminder here, data isn’t the focus today. We’re recapping coverage, so we’re pulling charts sparingly and as needed from a year’s worth of CPU reviews. For those older pieces, keep in mind that some of the tests are using older data. For full detail on any CPU in this video, you’ll want to check our original reviews. Keep in mind that the most recent review – that’ll be the 9600K or 9980XE review – will contain the most up-to-date test data with the most up-to-date Windows and game versions.

Awards Show: Best & Worst PC Cases of 2018


It’s time for the annual GN Awards series, starting off with the best – and worst – cases of 2018. Using our database of over 160 case test results, we crawled through our reviews for the year to pull the cases with the best out-of-the-box thermals, the best noise levels, the best quality on a budget, the best design, and the best all-around performance, plus the most overhyped and most disappointing cases. We hit every price category in this round-up and cover cases that are both subjectively and objectively good. Links will be provided for anyone shopping this season.

Leading into Black Friday and Cyber Monday, let's walk through the best and worst PC cases of 2018.

Every manufacturer featured in this content will receive one of our Large GN Awards for the Best Of categories – no award for the worst categories, sadly. The GN Award Crystal is only given out for prestige, featuring a detailed 3D laser-engraved GN tear-down logo with fine detail, like VRM components, fans, and electrical circuitry in the design. Although manufacturers have to earn their award, you can buy one for yourself on store.gamersnexus.net in large and medium sizes.

The RTX 2080 Ti failures aren’t as widespread as they might have seemed from initial reddit threads, but they are absolutely real. When discussing internally whether we thought the issue of artifacting and dying RTX cards had been blown out of proportion by the internet, we had two frames of mind: On one side, the level of attention did seem disproportionate to the size of the issue, particularly as RMA rates are within the norm. Partners are still often under 1% and retailers are under 3.5%, which is standard. The other frame of mind is that, actually, nothing was blown out of proportion for people who spent $1250 and received a brick in return. For those affected buyers, the artifacting is absolutely a real issue, and it deserves real attention.

This content marks the closing of a storyline for us. We published previous videos detailing a few of the failures on our viewers’ cards (on loan to GN), including an unrelated 1350MHz lock and BSOD issue. We also tested cards in our livestream to show what the artifacting looks like. Today, we’re mostly looking at thermals, firmware, the OS, and downclocking impact, and arriving at a conclusion about what the problem isn’t (rather than what it 100% is).

With over a dozen cards mailed in to us, we had a lot to sort through over the past week. This issue certainly exists in a very real way for those who spent $1200+ on an unusable video card, but it isn’t affecting everyone. It’s far from “widespread,” fortunately, and our present understanding is that RMA rates remain within reason for most of the industry. That said, NVIDIA’s response times to some RMA requests have been slow, from what our viewers have expressed, and replacements can take upwards of a month given supply constraints in some regions. That’s a problem.

Respected manufacturers of silence-focused PC cases like be quiet! and Fractal Design use a number of tricks to keep noise levels down. These often include specially designed fans, thick pads of noise-damping foam, sealed front panels, and elaborately baffled vents. We tend to prefer high airflow to silence when given a choice, and it usually is presented that way: as a choice. The reality is that it doesn’t have to be a choice, and that an airflow-oriented case can, with minor work, achieve equivalent noise levels to a silence-focused case (while offering better thermals).

Our testing tends to reinforce that idea of a choice: our baseline results are measured with the case fans at maximum speed and therefore maximum noise, making cases like the SilverStone RL06 sound like jet engines. The baseline torture tests are good for consistency, for showcasing maximum performance, and for highlighting the performance differences between cases, but they don’t represent how most users run their PCs 24/7. Instead, most users would likely turn the fans down to an acceptable noise level, maybe even the same level as intentionally quiet cases like the Silent Base 601.

Our thesis for this benchmark paper proposes that fans can be turned down far enough to match the noise levels of a silence-focused case while still achieving superior thermal performance. The candidates chosen as a case study were the SilverStone RL06 and the be quiet! Silent Base 601. The RL06 is one of the best-ventilated and noisiest cases we’ve tested in the past couple of years, while the SB601 is silence-focused with restricted airflow.

One variable that we aren’t equipped to measure is the type of noise. Volume is one thing, but frequency and subjective annoyance matter, too. For the most part, noise-damping foam addresses high-frequency whines and shorter wavelengths, while thicker paneling addresses low-frequency hums and longer wavelengths. For today’s testing, we are focusing entirely on noise levels at 20” and testing thermals at normalized volumes.
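To make the normalization step concrete, here is a minimal sketch of how a fan speed can be matched to a noise target. The fan-curve numbers below are purely illustrative, not our measured data, and numpy’s interpolation stands in for however you map speed to noise in practice:

```python
# A minimal sketch of noise-normalized fan tuning, assuming a measured
# fan-speed-vs-noise curve for the case at 20 inches. The values below
# are illustrative placeholders, not GN test data.
import numpy as np

rpm = np.array([600, 800, 1000, 1200, 1400])    # case fan speed
dba = np.array([32.0, 34.5, 37.5, 41.0, 45.0])  # noise measured at 20"

TARGET_DBA = 36.0  # e.g., roughly where a silence-focused case sits

# Interpolate the fan speed that hits the noise target, then re-run the
# thermal torture test at that fixed speed for an apples-to-apples result.
normalized_rpm = np.interp(TARGET_DBA, dba, rpm)
print(f"run thermals with case fans at ~{normalized_rpm:.0f} RPM")
```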

Intel’s TDP has long been questioned, but this generation put the 95W TDP under fire as users noticed media outlets measuring power consumption well over 100W on most boards. It isn’t uncommon to see the 9900K at 150W or more in some AVX workloads, like Blender, far exceeding the 95W number. Aside from TDP being an imperfect specification for power, there’s also a lot that isn’t understood about it – including by motherboard manufacturers, apparently. All manufacturers are exceeding Intel’s guidance for turbo boost duration in some way, which causes the uncharacteristically high power consumption that produces unfairly advantaged performance results. The other side of this is that the 9900K looks much hotter in some tests.
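For readers who want the mechanics: Intel’s turbo budget is governed by two power limits (PL1 and PL2) and a time constant (tau) applied to a moving average of package power. The sketch below simulates that behavior with assumed example values – PL2 and tau are exactly the knobs board vendors stretch, and Intel’s recommended values vary by platform, so treat the numbers as placeholders:

```python
# Minimal simulation of PL1/PL2/tau turbo behavior. All values are
# assumed examples: PL1 = 95 W (the TDP), PL2 = 1.25 * PL1, tau = 28 s.
PL1, PL2, TAU = 95.0, 118.75, 28.0  # watts, watts, seconds
DT = 1.0                            # simulation step, seconds

ewma = 0.0        # moving average of package power
alpha = DT / TAU
prev = PL2

for t in range(120):
    # While the averaged power is under PL1, the CPU may draw up to PL2.
    draw = PL2 if ewma < PL1 else PL1
    if prev == PL2 and draw == PL1:
        print(f"turbo budget exhausted at ~{t} s; clamping to PL1")
    ewma = ewma * (1 - alpha) + draw * alpha
    prev = draw

# Under these assumed values, the chip boosts for roughly 45 seconds and
# then settles at PL1. Raising TAU or PL2 in BIOS delays or removes that
# clamp entirely, which is how a "95 W" CPU ends up drawing 150 W+
# indefinitely in a sustained workload.
```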

We previously deep-dived on MCE (Multi-Core Enhancement) practices with the 8700K, revealing the performance variance that can occur when motherboard makers “cheat” results by boosting CPUs out of spec. MCE has become less of a problem with Z390 – namely because it is now disabled by default on all boards we’ve tested – but boosted BCLKs are the new issue.

If you think Cinebench is a reliable benchmark, we’ve got a histogram of all of our test results for the Intel i9-9900K at presumably stock settings:

[Chart: histogram of Intel i9-9900K Cinebench results across Z390 motherboards]

(Yes, the scale starts at a non-zero value: given a range of results from 1976 to 2300, we had to zoom in on the axis for a better histogram view.)

The scale is shrunken and starts at a non-zero value because the results are tightly clustered, but you can still see that we’re ranging from 1976 cb marks to 2300 cb marks, which is a massive range. That’s the difference between a heavily overclocked R7 2700 and an overclocked 7900X, except this is all on a single CPU. The only difference is that we used 5 different motherboards for these tests, along with a mix of auto, XMP, and MCE settings. The discussion today focuses on when it is considered “cheating” to modify CPU settings via BIOS without the user’s awareness of those changes. The most common change is to the base clock, where BIOS might report a value of 100.00, but actually produce a value of 100.8 or 100.9 on the CPU. This functionally pre-overclocks the chip, but does so in a way that is hard for most users to ever notice.
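As a worked example of why the BCLK trick is so hard to notice, here is the arithmetic. The 47x multiplier is just an illustrative all-core value, not a claim about any specific board’s configuration:

```python
# Effective clock from a quietly raised BCLK on an assumed 47x multiplier.
MULTIPLIER = 47

for bclk in (100.0, 100.8, 100.9):          # MHz, as reported vs. actual
    freq_ghz = MULTIPLIER * bclk / 1000
    print(f"BCLK {bclk:5.1f} MHz -> {freq_ghz:.3f} GHz "
          f"({(bclk / 100.0 - 1) * 100:+.1f}% vs. spec)")

# Cinebench scales roughly with clocks, so +0.9% BCLK is worth on the
# order of 2100 cb * 0.009 ~= 19 cb -- real, but small against the ~320 cb
# spread in the histogram above, most of which comes from differing power
# and boost behavior rather than BCLK alone.
```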

We’re resurrecting our AMD RX Vega 56 powerplay tables mod to challenge the RTX 2070, a card that competes in an entirely different price class. It’s a lightweight versus heavyweight boxing match, except the lightweight has a gun.

For our Vega 56 card, priced between $370 and $400 depending on sales, we will be shoving an extra 200W+ of power into the core to attempt to match the RTX 2070’s stock performance. We strongly praised Vega 56 at launch for its easily modded nature, but the card has faced fierce competition from the 1070 Ti and 1070. It was also constantly out of stock or massively overpriced throughout the mining boom, which acted as a death knell for Vega during those months. With mining now dying down and Vega becoming available to normal people again, pricing is competitive and compelling, and NVIDIA’s own recent fumbles have created an opening in the market.

We will be working with a PowerColor RX Vega 56 Red Dragon card at a 242% power target, matching it against an EVGA RTX 2070 Black. The price difference is about $370-$400 vs. $500-$550, depending on where you buy your parts. We are using registry entries to trick the Vega 56 card into a power limit that exceeds the stock maximum of +50%, allowing us to go to 242%. This was done with the help of Buildzoid last year.
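For context on what those percentages mean in watts, here is the back-of-envelope math. We are assuming a ~165 W stock core power limit for Vega 56 and reading the 242% figure as a total power target relative to stock, which lines up with the “extra 200W+” claim above; if it were instead an offset on top of 100%, the ceiling would be higher still:

```python
# Back-of-envelope for the powerplay-table power targets (assumed values).
STOCK_LIMIT_W = 165.0  # assumed stock GPU core power limit for Vega 56
WATTMAN_MAX   = 1.50   # stock Wattman slider tops out at +50%
MOD_TARGET    = 2.42   # 242% target unlocked via the registry mod

print(f"stock ceiling (+50%): {STOCK_LIMIT_W * WATTMAN_MAX:.0f} W")
print(f"modded target (242%): {STOCK_LIMIT_W * MOD_TARGET:.0f} W "
      f"(~{STOCK_LIMIT_W * (MOD_TARGET - 1):.0f} W over stock)")
```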

One final note: We must warn that we aren’t sure of the long-term impact of running Vega 56 with this much power going through it. If you want to do this yourself, be advised that long-term damage is a possibility for which we cannot account.

After our launch-day investigation into delidding the 9900K and finding its shortcomings, we’ve been working on a follow-up involving lapping the inside of the IHS and applying liquid metal to close the story on improvement potential with the delid process. We’re also returning to bring everyone back to reality on delidding the 9900K, because it’s not as easy as it may look from what you’re seeing online.

We already know that it’s possible to see a performance improvement, based on our previous content and Roman’s own testing, but we’ve also said that Intel’s solder is an improvement over its previous Dow Corning paste. Considering that, in our testing, high-end Hydronaut paste performs close to the solder, that’s good news compared to the older thermal compound. Intel also needed to make the change for more thermal headroom, so everyone benefits – but it is possible to outperform the stock solder.
