Hardware Guides

Today’s benchmark is a case study in the truest sense of the phrase: we are benchmarking a single-sample, overweight video card to test the performance impact of its severe sag. The Gigabyte GTX 1080 Ti Xtreme was poorly received by our outlet when we reviewed it in 2017, primarily for its needlessly large size, which amounted to worse thermal and acoustic performance than smaller, cheaper competitors. The card is heavy and constructed with through-bolts and a complicated assortment of hardware, whereas the competition achieved smaller, more effective designs that didn’t sag.

As is tradition, we put the GTX 1080 Ti Xtreme in one of our production machines alongside some of the other worst hardware we’ve worked with, and so the 1080 Ti Xtreme has been in use in a “real” system for about a year. That time has allowed nature – mostly gravity – to take its course, slowly pulling the 1080 Ti Xtreme apart. Now, after a year of forced labor in our oldest rendering rig, we get to see the real side effects of a needlessly heavy card that’s poorly reinforced internally. We’ll be testing the impact of GPU sag in today’s content.

We’re revisiting the Intel i7-7700K today, following its not-so-distant launch in January of 2017 at about $340 USD. The 7700K was shortly followed by the i7-8700K – still selling well today – which launched later in the same year but with an additional two cores and four threads. That was a big gain, and one which stacked atop the 7700K’s already relatively high overclocking potential and routine 4.9 to 5GHz OCs. This revisit looks at how the 7700K compares to modern Coffee Lake 8000- and 9000-series CPUs (like the 9700K), alongside modern Ryzen CPUs from the Zen+ generation.

For a quick reminder of 7700K specs versus “modern” CPUs – or at least as “modern” as a launch one year later can be – remember that the 7700K was the last of the 4C/8T parts in the i7 line, still using hyper-threading to hit 8T. The 8700K was the next launch in the family, releasing at 6C/12T and changing the lineup substantially at a similar, albeit slightly higher, price point. The 9900K was the next remarkable launch, but it exited the price category and became more of a low-end HEDT CPU. The 9700K is the truer follow-up to the 7700K, though it oddly regresses to an 8T configuration from the 8700K’s 12T configuration; it instead uses 8 physical cores for all 8 threads, rather than hyper-threading on 6. Separately, the 7700K critically operated with 8MB of total cache, as opposed to 12MB on the 9700K. The price also changed, with the 7700K closer to $340 and the 9700K at $400 to $430, depending. Even taking the $400 mark, that’s more than an inflation adjustment can account for.
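
As a quick sanity check on that inflation point, here’s a minimal sketch of the arithmetic, assuming a roughly 2% average annual US CPI rate over the two years between launches (the rate and the window are illustrative assumptions, not figures from our reporting):

```python
# Rough inflation check for the 7700K -> 9700K price comparison.
# The ~2% annual CPI rate is an assumed round number, not an official figure.
launch_price_7700k = 340.0    # USD, January 2017
annual_inflation = 0.02       # assumed average US CPI rate
years = 2                     # early 2017 -> late 2018

adjusted = launch_price_7700k * (1 + annual_inflation) ** years
print(f"7700K launch price in late-2018 dollars: ~${adjusted:.0f}")                     # ~$354
print(f"9700K at $400 is ~${400 - adjusted:.0f} beyond the inflation-adjusted figure")  # ~$46
```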

We’re revisiting the 7700K today, looking at whether buyers truly got the short straw with the subsequent and uncharacteristically rapid release of the 8700K. Note also, however, that the 8700K didn’t really see a proper release at the end of 2017; that was more of a paper launch, with few products actually available. Regardless, the feeling is the same for the 7700K buyer.

We already reviewed an individual NVIDIA Titan RTX over here, used first for gaming, overclocking, thermal, power, and acoustic testing. We may look at production workloads later, but that’ll wait. We’re primarily waiting for our go-to applications to add RT and Tensor Core support for 3D art. After replacing our bugged Titan RTX (the one that was clock-locked), we were able to proceed with SLI (NVLink) testing for the dual Titan RTX cards. Keep in mind that NVLink is no different from SLI when using these gaming bridges, aside from increased bandwidth, and so we still rely upon AFR and independent resources.

As a reminder, these cards really aren’t built for the way we’re testing them. You’d want a Titan RTX card as a cheaper alternative to Quadros, but with the memory capacity to handle heavy ML/DL or rendering workloads. For games, that extra (expensive) memory goes unused, thus diminishing the value of the Titan RTX cards in the face of a single 2080 Ti.

This is really just for fun, in all honesty. We’ll look at a theoretical “best” gaming GPU setup today, then talk about what you should buy instead.

Finding the “best” workstation GPU isn't as straightforward as finding the best case, best gaming CPU, or best gaming GPU. While games typically scale reliably from one to the next, applications can deliver wildly varying performance. Those gains and losses could be chalked up to architecture, drivers, and whether or not we're dealing with a true workstation GPU versus a gaming GPU trying to fill in for workstation purposes.

In this content, we're going to be taking a look at current workstation GPU performance across a range of tests to figure out if there is such a thing as a champion among them all. Or, at the very least, we'll figure out how AMD differs from NVIDIA, and how the gaming cards differ from their workstation counterparts. Part of this will look at Quadro vs. RTX or GTX cards, for instance, and WX vs. RX cards for workstation applications. We have GPU benchmarks for video editing (Adobe Premiere), 3D modeling and rendering (Blender, V-Ray, 3ds Max, Maya), AutoCAD, SolidWorks, Redshift, Octane Bench, and more.

Though NVIDIA's Quadro RTX lineup has been available for a few months, review samples have been slow to escape the grasp of NVIDIA, and if we had to guess why, it's likely because few software solutions are currently available that can take advantage of the new features. That excludes deep-learning tests, which can benefit from the Tensor cores, but for optimizations derived from the RT cores, we're still waiting. It seems likely that Chaos Group's V-Ray will be one of the first plugins to hit the market with support for NVIDIA's RTX, though Redshift, Octane, Arnold, Renderman, and many others have planned support.

The great thing for those planning to go with a gaming GPU for workstation use is that, where rendering is concerned, performance between gaming and workstation cards is going to be largely equivalent. Where workstation cards can pull ahead is in viewport performance optimizations; ultimately, the smoother the viewport, the less tedious it is to manipulate a scene.

Across all of the results ahead, you'll see that there are many angles from which to view workstation GPUs, and that there isn't really such a thing as a one-size-fits-all choice – not like there is on the gaming side. There is such a thing as an ultimate choice, though, so if you're not afraid of spending substantially above the gaming equivalents for the best performance, there are models vying for your attention.

As we get into the holiday spirit here at GN, it’s time for our year-end round-ups and best of series—probably some of our favorite content. These guides provide a snapshot of what the year had to offer in certain spaces, like SSDs, for instance. You can check our most recent guides for the Best Cases of 2018 and Best CPUs of 2018.

These guides will also help users navigate the overwhelming amount of Black Friday and Cyber Monday marketing ahead of us all. SSD prices have been especially good lately, and the holidays should certainly net opportunities for even better deals.

That said, buying something just because it’s cheap isn’t ever a good idea, really; better to know what’s best first, then buy cheap—or cheaper than usual, anyway. This guide will take the legwork out of distinguishing what the year’s best SSDs are based on use case and price. Today, we're looking at the best SSDs for gaming PCs, workstations, budget PC builds, and for cheap, high-capacity storage. 1TB SSDs are more affordable than ever now, and we'll explore some of those listings.

As we continue our awards shows for end of year (see also: Best Cases of 2018), we’re now recapping some of the best and worst CPU launches of the year. The categories include best overall value, most well-rounded, best hobbyist production, best budget gaming, most fun to overclock, and biggest disappointment. We’ll be walking through a year of testing data as we recap the most memorable products leading into Black Friday and holiday sales. As always, links to the products are provided below, alongside our article for a written recap. The video is embedded for the more visual audience.

We’ll be mailing out GN Award Crystals to the companies for their most important products of the year. The award crystal is a 3D laser-engraved GN tear-down logo with extreme attention to detail and, although the products have to earn the award, you can buy one for yourself at store.gamersnexus.net.

As a reminder here, data isn’t the focus today. We’re recapping coverage, so we’re pulling charts sparingly and as needed from a year’s worth of CPU reviews. For those older pieces, keep in mind that some of the tests are using older data. For full detail on any CPU in this video, you’ll want to check our original reviews. Keep in mind that the most recent review – that’ll be the 9600K or 9980XE review – will contain the most up-to-date test data with the most up-to-date Windows and game versions.

Awards Show: Best & Worst PC Cases of 2018

Published November 20, 2018 at 3:57 pm

It’s time for the annual GN Awards series, starting off with the best – and worst – cases of 2018. Using our database of over 160 case test results, we crawled through our reviews for the year to pull the cases with the best out-of-the-box thermals, best noise levels, best quality on a budget, best design, and best all-around performance, alongside the most overhyped and most disappointing cases. We hit every price category in this round-up and cover cases that are both subjectively and objectively good. Links will be provided for anyone shopping this season.

Leading into Black Friday and Cyber Monday, let's walk through the best and worst PC cases of 2018.

Every manufacturer featured in this content will receive one of our Large GN Awards for the Best Of categories – no award for the worst categories, sadly. The GN Award Crystal is only given out for prestige, featuring a 3D laser-engraved GN tear-down logo with fine detail, like the VRM components, fans, and electrical circuitry in the design. Although manufacturers have to earn their award, you can buy one for yourself on store.gamersnexus.net in large and medium sizes.

The RTX 2080 Ti failures aren’t as widespread as they might have seemed from initial reddit threads, but they are absolutely real. When discussing internally whether we thought the issue of artifacting and dying RTX cards had been blown out of proportion by the internet, we had two frames of mind: On one side, the level of attention did seem disproportionate to the size of the issue, particularly as RMA rates are within the norm. Partners are still often under 1% and retailers are under 3.5%, which is standard. The other frame of mind is that, actually, nothing was blown out of proportion for people who spent $1250 and received a brick in return. For those affected buyers, the artifacting is absolutely a real issue, and it deserves real attention.

This content marks the closing of a storyline for us. We published previous videos detailing a few of the failures on our viewers’ cards (on loan to GN), including an unrelated 1350MHz clock-lock and BSOD issue. We also tested cards on our livestream to show what the artifacting looks like, seen here. Today, we’re mostly looking at thermals, firmware, the OS, and downclocking impact, working toward a conclusion about what the problem isn’t (rather than what it 100% is).

With over a dozen cards mailed in to us, we had a lot to sort through over the past week. This issue certainly exists in a very real way for those who spent $1200+ on an unusable video card, but it isn’t affecting everyone. It’s far from “widespread,” fortunately, and our present understanding is that RMA rates remain within reason for most of the industry. That said, NVIDIA’s response times to some RMA requests have been slow, from what our viewers have expressed, and replacements can take upwards of a month given supply constraints in some regions. That’s a problem.

Respected manufacturers of silence-focused PC cases like be quiet! and Fractal Design use a number of tricks to keep noise levels down. These often include specially designed fans, thick pads of noise-damping foam, sealed front panels, and elaborately baffled vents. We tend to prefer high airflow to silence when given a choice, and it usually is presented that way: as a choice. The reality is that it doesn’t have to be a choice, and that an airflow-oriented case can, with minor work, achieve equivalent noise levels to a silence-focused case (while offering better thermals).

Our testing tends to reinforce that idea of a choice: our baseline results are measured with the case fans at maximum speed and therefore maximum noise, making cases like the SilverStone RL06 sound like jet engines. The baseline torture tests are good for consistency, for showcasing maximum performance, and for highlighting the performance differences between cases, but they don’t represent how most users run their PCs 24/7. Instead, most users would likely turn the fans down to an acceptable noise level – maybe even the same level as intentionally quiet cases like the Silent Base 601.

Our thesis for this benchmark paper proposes that fans can be turned down far enough to match the noise levels of a silence-focused case while still achieving superior thermal performance. The candidates chosen as a case study were the SilverStone Redline 06 and the be quiet! Silent Base 601. The RL06 is one of the best-ventilated and noisiest cases we’ve tested in the past couple of years, while the SB601 is silence-focused with restricted airflow.

One variable that we aren’t equipped to measure is the type of noise. Volume is one thing, but the frequency and subjective annoying-ness matter too. For the most part, noise damping foam addresses concerns of high-frequency whines and shorter wavelengths, while thicker paneling addresses low-frequency hums and longer wavelengths. For today’s testing, we are entirely focusing on noise level at 20” and testing thermals at normalized volumes.
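
For context on why foam targets the high end while panel mass targets the low end, wavelength scales inversely with frequency. Here’s a minimal sketch of that relationship, assuming a speed of sound of roughly 343 m/s in room-temperature air:

```python
# Wavelength = speed of sound / frequency: thin foam can absorb short-wavelength
# (high-frequency) whine, while long-wavelength (low-frequency) hum needs panel mass.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C (assumed)

def wavelength_m(frequency_hz: float) -> float:
    return SPEED_OF_SOUND / frequency_hz

for freq_hz in (100, 1000, 5000):  # low hum, midrange, high whine
    print(f"{freq_hz:>5} Hz -> ~{wavelength_m(freq_hz) * 100:.0f} cm wavelength")
# 100 Hz -> ~343 cm, 1000 Hz -> ~34 cm, 5000 Hz -> ~7 cm
```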

Intel’s TDP has long been questioned, but this particular generation put the 95W TDP under fire as users noticed media outlets measuring power consumption at well over 100W on most boards. It isn’t uncommon to see the 9900K at 150W or more in some AVX workloads, like Blender, far and away exceeding the 95W number. Aside from TDP being an imperfect specification for power, there’s also a lot that isn’t understood about it – including by motherboard manufacturers, apparently. All manufacturers are exceeding Intel’s guidance for turbo boosting duration in some way, which causes the uncharacteristically high power consumption that produces unfairly advantaged performance results. The other end of this is that the 9900K looks much hotter in some tests.
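
To illustrate how that boost-duration guidance interacts with TDP, here’s a minimal sketch of a simplified power-limit model. The reference values (PL1 = 95W TDP, PL2 ≈ 119W, tau ≈ 28s) are commonly cited guidance used here as assumptions, and real silicon uses an exponentially weighted averaging window rather than this hard cutoff:

```python
# Simplified model of turbo power limits, showing why measured power can exceed
# the 95 W TDP when boards lift the boost duration. PL1/PL2/tau are assumed
# reference figures; real hardware uses an exponentially weighted window.

def average_power(workload_w, pl1_w, pl2_w, tau_s, duration_s):
    """Average package power over a long, steady all-core load."""
    boosted = min(workload_w, pl2_w)       # power allowed while the boost timer runs
    sustained = min(workload_w, pl1_w)     # power allowed after tau expires
    boost_time = min(tau_s, duration_s)
    return (boosted * boost_time + sustained * (duration_s - boost_time)) / duration_s

# Reference guidance: a 150 W Blender-style load averages near the 95 W TDP.
print(average_power(150, pl1_w=95, pl2_w=119, tau_s=28, duration_s=600))       # ~96 W

# Typical enthusiast board: PL2 and tau effectively uncapped, so power never drops.
print(average_power(150, pl1_w=95, pl2_w=4096, tau_s=10**9, duration_s=600))   # 150 W
```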
