Hardware Guides

This is the article version of our recent tour of a cable factory in Dongguan, China. The factory is SanDian, used by Cooler Master (and other companies you know) to manufacture front panel connectors, USB cables, Type-C cables, and more. This script was written for the video that's embedded below, but we have also pulled screenshots to make a written version. Note that references to "on screen" will be referring to the video portion.

USB 3.1 Type-C front panel cables are between 4x and 10x more expensive than USB 2.0 front panel cables, which explains why Type-C is still somewhat rare in PC cases. For USB 3.1 Gen2 Type-C connectors with fully validated speeds, the cost is roughly 7x that of the original USB 3.0 cables. That cost comes down to how the cables are made: Raw materials carry an expense, but there's also tremendous time expense in manufacturing and assembling USB 3.1 Type-C cables. Today's tour of SanDian, a cable factory that partners with Cooler Master, shows how cables are made. This includes USB 3.1 Type-C, USB 2.0, and front panel connectors. Note that USB 3.1 is being rebranded to USB 3.2 going forward, but it's the same process.

Our initial AMD Radeon VII liquid cooling mod was modified after the coverage went live. We ended up switching to a Thermaltake Floe 360 radiator (with different fans) due to uneven contact and manufacturing defects in the Alphacool GPX coldplate. Going with the Asetek cooler worked much better, dropping our thermals significantly and allowing increased overclocking and stock boosting headroom. The new drivers (19.2.3) also fixed most of the overclocking defects we originally found, making it possible to actually progress with this mod.

As an important foreword, note that overclocking with AMD's drivers must be validated with performance at every step of the way. Configured frequencies are not the same as actual frequencies, so you might type "2030MHz" for core and get, for instance, 1950-2000MHz out. For this reason, and because the reported frequency is regularly wrong (e.g. "16000MHz"), it is critical that any overclock be validated with measured performance. Without validation, some "overclocks" can actually bring performance below stock while appearing to boost frequency. This is very important for overclocking Radeon VII properly.
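As a rough sketch of that validation loop (the helper functions below are hypothetical placeholders, not real AMD driver APIs), the idea looks something like this:

```python
def validate_overclock(requested_mhz, baseline_fps,
                       set_core_clock, read_reported_clock, run_benchmark):
    """Return True only if measured FPS actually improves over stock.

    set_core_clock(), read_reported_clock(), and run_benchmark() are
    placeholder helpers for illustration, not real AMD driver calls.
    """
    set_core_clock(requested_mhz)       # what you type into Wattman
    reported = read_reported_clock()    # what the driver claims afterward
    fps = run_benchmark()               # what actually matters

    print(f"Requested {requested_mhz} MHz, reported {reported} MHz, measured {fps:.1f} FPS")

    # A reported clock far above the request (e.g. "16000MHz") is a misread:
    # ignore it and trust the benchmark result instead.
    if reported > requested_mhz * 1.5:
        print("Reported clock is implausible; trusting FPS only.")

    # If FPS did not improve over stock, the "overclock" is not real,
    # no matter what frequency is displayed.
    return fps > baseline_fps
```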

We recently revisited the AMD R9 290X from October of 2013, and now it's time to look back at the GTX 780 Ti from November of 2013. The 780 Ti shipped for $700 MSRP and landed as NVIDIA's flagship against AMD's freshly-launched flagship. It was a different era: Memory capacity was limited to 3GB on the 780 Ti, memory frequency was a blazing 7Gbps, and core clock was 875MHz stock or 928MHz boost, using the old Boost 2.0 algorithm that kept a fixed clock in gaming. Overclocking headroom was also greater, giving us a bigger upward punch than modern NVIDIA overclocking might permit. Our overclocks on the 780 Ti reference (with fan set to 93%) allowed it to exceed the expected performance of an average partner model board, so we have a fairly full range of performance on the 780 Ti.

NVIDIA's architecture has undergone significant changes since Kepler and the 780 Ti, one of which has been a change in CUDA core efficiency. When NVIDIA moved from Kepler to Maxwell, there was nearly a 40% gain in how efficiently the CUDA cores process input. A 1:1 Maxwell versus Kepler comparison, were such a thing possible, would position Maxwell as superior in efficiency and performance-per-watt, if not just outright performance. It is no surprise, then, that the 780 Ti's 2880 CUDA cores, although a high count even by today's standards (an RTX 2060 has 1920, but outperforms the 780 Ti), will underperform when compared to modern architectures. This is amplified by significant memory changes, capacity being the most notable, where the GTX 780 Ti's standard configuration was limited to 3GB of ~7Gbps GDDR5.
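As a rough, illustrative calculation only (the ~40% figure covers Kepler-to-Maxwell alone, and later architectures gained further), raw core count clearly overstates the 780 Ti's position:

```python
# Back-of-the-envelope illustration only: raw core count vs. per-core efficiency.
# The 1.4x factor is the approximate Kepler-to-Maxwell per-core gain described above;
# Pascal and Turing improvements beyond that are not modeled here.

kepler_cores = 2880   # GTX 780 Ti
turing_cores = 1920   # RTX 2060

kepler_equivalent_throughput = kepler_cores * 1.0
maxwell_like_throughput = turing_cores * 1.4

print(kepler_equivalent_throughput)  # 2880.0 "Kepler-core units"
print(maxwell_like_throughput)       # 2688.0 -- nearly even before clocks, memory,
                                     # and later architectural gains are counted
```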

Apex Legends is one of the most-watched games right now and is among the top Battle Royale genre of games. Running on the Titanfall engine and with some revamped Titanfall assets, the game is a fast-paced FPS with relatively high poly count models and long view distances. For this reason, we’re benchmarking a series of GPUs to find the “best” video card for Apex Legends at each price category.

Our testing first included some discovery and research benchmarks, where we dug into various multiplayer zones and the practice mode to find the most heavily loaded areas to use for benchmarking. We also unlocked the framerate for this, so we aren't bumping against the 144FPS cap or any other limitation. This will help determine which cards can play the game at max settings – or near-max, anyway.

Metro: Exodus is the next title to include NVIDIA RTX technology, leveraging Microsoft’s DXR. We already looked at the RTX implementation from a qualitative standpoint (in video), talking about the pros and cons of global illumination via RTX, and now we’re back to benchmark the performance from a quantitative standpoint.

The Metro series has long been used as a benchmarking standard. As always with a built-in benchmark, one of the most important things to look at is how accurately it represents the "real" game. Being inconsistent with in-game performance doesn't necessarily invalidate a benchmark's usefulness, though; it just means the results have to be viewed in the right light. Without accuracy to in-game performance, benchmark tools mostly become synthetic benchmarks: They're good for relative performance measurements between cards, but not necessarily absolute performance. That's completely fine, too, as relative performance is mostly what we look for in reviews. The only (really) important thing is that performance scaling is consistent between cards in both the pre-built benchmark and in-game testing.
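As a sketch of that consistency check, using made-up FPS numbers purely for illustration:

```python
# Hypothetical FPS numbers, purely to illustrate the scaling-consistency check.
builtin_bench = {"Card A": 90.0, "Card B": 60.0}   # built-in benchmark averages
in_game       = {"Card A": 75.0, "Card B": 50.0}   # manual in-game pass averages

builtin_ratio = builtin_bench["Card A"] / builtin_bench["Card B"]   # 1.5x
in_game_ratio = in_game["Card A"] / in_game["Card B"]               # 1.5x

# Absolute numbers differ (the built-in run is lighter here), but the relative
# scaling matches, so the built-in benchmark remains useful for comparing cards.
print(f"Built-in scaling: {builtin_ratio:.2f}x, in-game scaling: {in_game_ratio:.2f}x")
```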

Finding something to actually leverage the increased memory bandwidth of Radeon VII is a challenge. Few games will genuinely use more memory than what's found on an RTX 2080, let alone the 16GB on the Radeon VII, and most VRAM capacity utilization reporting is wildly inaccurate, as it only reports allocated memory and not necessarily used memory. To best benchmark the potential advantages of Radeon VII, which come down primarily to memory bandwidth, we set up a targeted feature test looking at anti-aliasing and high-resolution benchmarks. Consider this an academic exercise on Radeon VII's capabilities.
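As a rough illustration of why high resolutions and MSAA lean on memory (a simplified calculation that ignores compression and everything else a frame allocates):

```python
# Simplified render-target math only: real drivers use compression and many more
# buffers, so treat this as a rough lower bound, not a VRAM estimate.

width, height = 3840, 2160     # 4K
bytes_per_sample = 4           # 32-bit color
msaa_samples = 8               # 8x MSAA stores one color/depth value per subsample

color_bytes = width * height * bytes_per_sample * msaa_samples
depth_bytes = width * height * 4 * msaa_samples   # 32-bit depth at the same sample count

print(f"{(color_bytes + depth_bytes) / 1024**2:.0f} MB for one 4K 8xMSAA color+depth target")
# ~506 MB -- and that's before textures, geometry, or any other allocations.
```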

The AMD Radeon VII embargo for “unboxings” has lifted and, although we don’t participate in the marketing that is a content-filtered “unboxing,” a regular part of our box-opening process involves taking the product apart. For today, restrictions are placed on performance discussion and product review, but we are free to show the product and handle it physically. You’ll have to check back for the review, which should likely coincide with the release date of February 7.

This content is primarily video, as our tear-downs show the experience of taking the product apart (and discoveries as we go), but we’ll recap the main point of interest here. Text continues after the embedded video:

GPU manufacturer Visiontek is old enough to have accumulated a warehouse of unsold, refurbished cards. Once in a while, they’ll clear stock by selling them off in cheap mystery boxes. It’s been a long time since we last reported on these boxes, and GPU development has moved forward quite a bit, so we wanted to see what we could get for our money. PCIe cards were $10 for higher-end and $5 for lower, and AGP and PCI cards were both $5. On the off chance that Visiontek would recognize Steve’s name and send him better-than-average cards, we placed two identical orders, one in Steve’s name and one in mine (Patrick). Each order was for one better PCIe card, one worse, one PCI, and one AGP.

The AMD R9 290X, a 2013 release, was the once-flagship of the 200 series, later superseded by the 390X refresh, (sort of) the Fury X, and eventually the RX-series cards. The R9 290X typically ran with 4GB of memory, although the 390X made 8GB somewhat commonplace, and was a strong performer for early 1440p gaming and high-quality 1080p gaming. The goal posts have moved, of course, as time has mandated that games get more difficult to render, but the 290X is still a strong enough card to warrant a revisit in 2019.

The R9 290X still has some impressive traits today, and those influence results to the point of being clearly visible at certain resolutions. One of the most noteworthy features is its 64 ROPs (the units that convert output into a bitmapped image), alongside its 176 TMUs. The ROPs help performance scale as resolution increases, something that also correlates with higher anti-aliasing values (same idea – sampling more times per pixel or drawing more pixels). For this reason, we'll want to pay careful attention to performance scaling at 1080p, 1440p, and 4K versus another device, like the RX 580. The RX 580 is a powerful card for its price-point, often managing comparable performance to the 290X while running half the ROPs and 144 TMUs, but the 290X can close the gap (mildly) at higher resolutions. This isn't particularly useful to know, but it is interesting, and it illustrates how specific parts of the GPU can change the performance stack under different rendering conditions.
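As a rough illustration of the ROP argument, using reference specifications and ignoring everything else that influences real performance:

```python
# Reference-spec fill-rate comparison (clocks in GHz). This ignores memory
# bandwidth, delta compression, and real boost behavior; it only illustrates
# why the 290X's extra ROPs matter more as resolution (pixel count) rises.

r9_290x_rops, r9_290x_clock = 64, 1.00    # "up to" reference clock
rx_580_rops,  rx_580_clock  = 32, 1.34    # reference boost clock

fill_290x = r9_290x_rops * r9_290x_clock  # ~64.0 Gpixels/s
fill_580  = rx_580_rops * rx_580_clock    # ~42.9 Gpixels/s

print(f"R9 290X: {fill_290x:.1f} Gpixel/s vs. RX 580: {fill_580:.1f} Gpixel/s")
```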

Today, we're testing with a reference R9 290X that's been run both stock and overclocked, giving us a look at bottom-end performance and average partner model or OC performance. This should cover most of the spectrum of R9 290X cards.

Today's benchmark is a case study by the truest definition of the phrase: We are benchmarking a single sample of an overweight video card to test the performance impact of its severe sag. The Gigabyte GTX 1080 Ti Xtreme was poorly received by our outlet when we reviewed it in 2017, primarily for its needlessly large size, which amounted to worse thermal and acoustic performance than smaller, cheaper competitors. The card is heavy and constructed with through-bolts and a complicated assortment of hardware, whereas the competition achieved smaller, more effective designs that didn't sag.

As is tradition, we put the GTX 1080 Ti Xtreme in one of our production machines alongside some of the other worst hardware we've worked with, and so the 1080 Ti Xtreme was in use in a "real" system for about a year. That amount of time has allowed nature – mostly gravity – to take its course, and the passage of time has slowly pulled the 1080 Ti Xtreme apart. Now, after a year of forced labor in our oldest rendering rig, we get to see the real side effects of a needlessly heavy card that's poorly reinforced internally. We'll be testing the impact of GPU sag in today's content.
