CaseLabs Revisit: What We Lost, Ft. Magnum SMA-8

Published April 02, 2019 at 11:06 pm

CaseLabs was a small manufacturer of high-end PC cases that went out of business in August of last year, bankrupted by a combination of new (American) tariffs and the loss of a major account, not to mention an ongoing legal battle with Thermaltake. We’d been in contact with CaseLabs in the months leading up to the company’s demise and received one of the SMA8-A Magnum enclosures for review. With about a month to spare before the company shuttered, we had no idea that it’d soon be over for CaseLabs, and since we were in the middle of a move into our office, we shelved the review until the dust settled. By the time that dust settled, the company was done for. The review stopped being a priority after that (since reviews of products that nobody can buy aren’t especially helpful), and the case has been sitting in storage ever since, unopened. Now that even more time has passed, it’s worth a revisit to see what everyone is missing out on with CaseLabs gone.

For this “review,” we’re really focusing on build quality, some basic history, and what we lost with CaseLabs’ unique approach to cases. Our case reviews typically focus on thermals and acoustics, and on mainstream rather than boutique, ultra-expensive cases, so our usual review process is not well-suited to the CaseLabs SMA8. This case is meant for servers (we’re building one in the case now) or for dual-loop liquid setups, so our standard review test bench really doesn’t work here -- it fits, technically, and we did run some thermal tests for posterity, but that’s not at all the focus of what we’re doing.

Industry news isn't always as appealing as product news for some of our audience, but this week of industry news is interesting: For one, Tom Petersen, Distinguished Engineer at NVIDIA, will be moving to Intel; for two, ASUS accidentally infected its users with malware after previously being called out for poor security practices. Show notes for the news are below the video embed, for those who prefer the written format.

This is the article version of our recent tour of a cable factory in Dongguan, China. The factory is SanDian, used by Cooler Master (and other companies you know) to manufacture front panel connectors, USB cables, Type-C cables, and more. This script was written for the video that's embedded below, but we have also pulled screenshots to make a written version. Note that references to "on screen" refer to the video portion.

USB 3.1 Type-C front panel cables are between 4x and 10x more expensive than USB 2.0 front panel cables, which explains why Type-C is still somewhat rare in PC cases. For USB 3.1 Gen2 Type-C connectors with fully validated speeds, the cost is about 7x that of the original USB 3.0 cables. That cost is all in how the cables are made: Raw materials carry an expense, but there’s also a tremendous time expense in manufacturing and assembling USB 3.1 Type-C cables. Today’s tour of SanDian, a cable factory that partners with Cooler Master, shows how cables are made. This includes USB 3.1 Type-C, USB 2.0, and front panel connectors. Note that USB 3.1 is being rebranded to USB 3.2 going forward, but it’s the same process.
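As a rough illustration of those multipliers, here is a minimal Python sketch that applies them to hypothetical baseline prices; the figures below are placeholders, and only the ratios come from the tour.

# Illustrative sketch only: the baseline prices are hypothetical placeholders,
# not figures from SanDian or Cooler Master. Only the multipliers (4-10x over
# USB 2.0, ~7x over USB 3.0 for validated Gen2 Type-C) come from the article.
usb2_front_panel = 0.50   # hypothetical per-cable cost, arbitrary units
usb3_front_panel = 1.00   # hypothetical per-cable cost, arbitrary units

type_c_low = usb2_front_panel * 4             # low end of the 4-10x range over USB 2.0
type_c_high = usb2_front_panel * 10           # high end of the 4-10x range
type_c_gen2_validated = usb3_front_panel * 7  # ~7x an original USB 3.0 cable

print(f"USB 3.1 Type-C front panel cable: ~{type_c_low:.2f} to ~{type_c_high:.2f} (arbitrary units)")
print(f"Fully validated Gen2 Type-C: ~{type_c_gen2_validated:.2f} (arbitrary units)")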

Our hardware news coverage has some more uplifting stories this week, primarily driven by the steepest price drop in DRAM since 2011. System builders who've looked on in horror as prices steadily climbed to 2x and 3x the 2016 rate may finally find some peace in 2019's price projections, most of which are becoming reality with each passing week. Other news is less positive, like Intel's record CPU shortages causing further trouble for its wider-reaching partnerships, or hackers exploiting WinRAR, but it can't all be good.

Find the show notes below the video embed, as always.

We’re still in China for our factory and lab tours, but we managed to coordinate with home base to get enough testing on the GTX 1660 done that a review became possible. Patrick ran the tests this time, then we just put the charts and script together from Dongguan, China.

This is a partner launch, so no NVIDIA direct sampling was done and, to our knowledge, no Founders Edition board will exist. Reference PCBs will exist, as always, but partners have control over most of the cooler design for this launch.

Our review will look at the EVGA GTX 1660 dual-fan model, which has an MSRP of $250 and lands $30 cheaper than the baseline GTX 1660 Ti pricing. The cheapest GTX 1660s will sell for about $220, but our $250 unit today has a higher power target allowance for overclocking and a better cooler. The higher power target is the most interesting difference, as overclocked performance can stretch upward toward the GTX 1660 Ti at its $280 price point.

We’ll get straight to the review today. Our focus will be on games, with some additional thermal and power tests toward the end. Again, as a reminder, we’re doing this remotely, so we don’t have as many non-gaming charts as we normally would, but we still have a complete review.

The Corsair Crystal 680X is the newer, larger sibling to the 280X, a micro-ATX case that we reviewed back in June. The similarity in appearance is obvious, but Corsair has used the past year to make many changes, and the result is something more than just a scaled-up 280X and perhaps closer to a Lian Li O11 Dynamic.

First is the door, which is a step up from the old version. Instead of four thumbscrews, the panel is set on hinges and held shut with a magnet. This is a better-looking and better-functioning option. It’d be nice to have a way to lock the door in place even more securely during transportation, but that’s a minor issue and systems of this size rarely move.

Removing the front panel is a more elaborate process than usual, but it's also rarely necessary. The filter and fans are both mounted on a removable tray, and everything else is easily accessible through the side of the case. Fan trays (or radiator brackets, or whatever you want to call them) are always an improvement. If for some reason the panel does need to be removed, it involves removing three screws from inside the case, popping the plastic section off, and removing a further four screws from outside. The plastic half is held on by metal clips that function the same way as the plastic clips in the 280X, but are easier to release. Despite appearances, the glass pane is still not intended to be slid out, although it could be freed from its frame by removing many more screws.

We revised our initial AMD Radeon VII liquid cooling mod after the coverage went live. We ended up switching to a Thermaltake Floe 360 radiator (with different fans) due to uneven contact and manufacturing defects in the Alphacool GPX coldplate. Going with the Asetek cooler worked much better, dropping our thermals significantly and allowing increased overclocking and stock boosting headroom. The new drivers (19.2.3) also fixed most of the overclocking defects we originally found, making it possible to actually progress with this mod.

As an important foreword, note that overclocking with AMD’s drivers must be validated with performance testing at every step of the way. Configured frequencies are not the same as actual frequencies, so you might type “2030MHz” for core and get, for instance, 1950-2000MHz out. For this reason, and because frequency regularly misreports (e.g. “16000MHz”), it is critical that any overclock be validated with performance. Without validation, some “overclocks” can actually bring performance below stock while appearing to run at a boosted frequency. This is essential to overclocking Radeon VII properly.
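As a minimal Python sketch of that validation loop, assuming a hypothetical run_benchmark() helper that returns average FPS for a fixed, repeatable scene (the helper and tolerance are placeholders, not our exact tooling):

# Minimal sketch of the validation loop described above; run_benchmark() is a
# hypothetical placeholder for any fixed, repeatable benchmark pass.
def run_benchmark() -> float:
    """Placeholder: run a repeatable benchmark scene and return average FPS."""
    raise NotImplementedError  # swap in a real benchmark runner

def overclock_is_valid(stock_fps: float, tolerance: float = 0.01) -> bool:
    # Reported MHz alone proves nothing on Radeon VII -- the configured clock can
    # misreport or silently downclock, so only measured performance counts.
    oc_fps = run_benchmark()
    return oc_fps > stock_fps * (1 + tolerance)

# Usage: measure stock performance once, then re-validate after every
# frequency or voltage step:
#   stock_fps = run_benchmark()
#   if not overclock_is_valid(stock_fps):
#       print("Configured clock did not translate into real performance; step back.")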

Hardware news is busy this week, as it always is, but we also have some news of our own. Part of GN's team will be in Taiwan and China over the next few weeks, with the rest at home base taking care of testing. For the Taiwan and China trip, we'll be visiting numerous factories for tour videos, walkthroughs, and showcases of how products are made at a lower level. We also have several excursions to tech landmarks planned, so you'll want to check back regularly as we make this special trip. Check our YT channel daily for uploads. Coverage from the Asia trip will likely start going live around 3/6.

The best news of the week is undoubtedly the expected and continued decrease in memory prices, particularly DRAM prices, as 2019 trudges onward. DRAMeXchange, the leading source of memory prices in the industry, now points toward an overall downtrend in pricing even for desktop system memory. This follows the significantly inflated memory prices of the past few years, which themselves followed unprecedentedly low prices circa 2016. Aside from this (uplifting) news topic, we also talk about the GN #SomethingPositive charity drive, AMD's price clarifications on Vega, and WinRAR's elimination of a 14-year-old exploit that had existed in a third-party library in its software.

Show notes below the video embed, as always.

We recently revisited the AMD R9 290X from October of 2013, and now it’s time to look back at the GTX 780 Ti from November of 2013. The 780 Ti shipped for $700 MSRP and landed as NVIDIA’s flagship answer to AMD’s freshly launched flagship. It was a different era: Memory capacity was limited to 3GB on the 780 Ti, memory frequency was a blazing 7Gbps, and core clock was 875MHz stock or 928MHz boost, using the old Boost 2.0 algorithm that kept a fixed clock in gaming. Overclocking headroom was also more generous, giving us a bigger upward punch than modern NVIDIA overclocking might permit. Our overclocks on the 780 Ti reference card (with fan set to 93%) allowed it to exceed the expected performance of an average partner model board, so we have a fairly full range of performance on the 780 Ti.

NVIDIA’s architecture has undergone significant changes since Kepler and the 780 Ti, one of which has been a change in CUDA core efficiency. When NVIDIA moved from Kepler to Maxwell, there was nearly a 40% gain in how efficiently CUDA cores processed input. A 1:1 Maxwell versus Kepler comparison, were such a thing possible, would position Maxwell as superior in efficiency and performance-per-watt, if not just outright performance. It is no surprise, then, that the 780 Ti’s 2880 CUDA cores, a count that is high even by today’s standards (an RTX 2060 has 1920, yet outperforms the 780 Ti), underperform when compared to modern architectures. This is amplified by significant memory changes, capacity being the most notable, where the GTX 780 Ti’s standard configuration was limited to 3GB and ~7Gbps GDDR5.
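To put rough numbers on the per-core point, here is a simplified Python illustration that holds core count and clock constant and applies only the ~40% Kepler-to-Maxwell gain; real performance also depends on clocks, memory, and later architectural improvements that aren't modeled here.

# Simplified illustration only: hold core count and clock constant, normalize
# Kepler per-core throughput to 1.0, and apply the ~40% per-core gain cited for
# Maxwell. Clocks, memory bandwidth, and post-Maxwell gains are left out.
cores = 2880                   # GTX 780 Ti core count, held constant for a 1:1 comparison
kepler_work = cores * 1.0      # normalized per-core throughput (Kepler)
maxwell_work = cores * 1.4     # ~40% more work per core (Maxwell-class)

print(f"Relative throughput at equal cores and clocks: Kepler {kepler_work:.0f}, Maxwell {maxwell_work:.0f}")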

