Liquid-cooled video cards have carved out a niche in the performance market, granting greater power efficiency through mitigation of temperature-driven power leakage, substantially reduced thermals, and improved overclocking headroom. We've previously talked about the EVGA GTX 980 Ti Hybrid and AMD R9 Fury X, both of which exhibited substantially bolstered performance over previous top-of-the-line models. More manufacturers have seen the potential of liquid-cooled graphics, with MSI and Corsair now joining forces to produce their own 980 Ti + CLC combination.

This joint venture between MSI and Corsair sees the creation of a liquid-cooled GTX 980 Ti, using the existing Corsair H55 ($60), an Asetek-supplied CLC. Depending on which company you ask, the graphics card is named either the “MSI Sea Hawk GTX 980 Ti” ($750) or the “Corsair Hydro GFX 980 Ti.” Both will have independent listings on retail websites; the cards are identical aside from branding. The MSI & Corsair solution employs what is typically a CPU liquid cooler, bracketing the H55 CLC to the GPU using a Corsair HG10 GPU CLC mount. EVGA's solution, meanwhile, uses a CLC with a coldplate extruded for GPU-specific package sizes, which could impact cooling. We'll look into that below.

For purposes of this review, we'll refer to the card interchangeably as the Hydro GFX and the Sea Hawk. Our MSI Sea Hawk GTX 980 Ti review benchmarks gaming (FPS) performance, temperatures, overclocking, power consumption, and value, pitting the card head-to-head against the EVGA 980 Ti Hybrid. The liquid-cooled 980 Ti cards are in a class of their own, exceeding the base 980 Ti price by a minimum of $50 across all manufacturers.

Our most recent interview with Cloud Imperium Games' Chris Roberts became a two-parter, following an initial discussion on DirectX 12 and Vulkan APIs. Part two dives deeper into the render pipeline, network and render optimization, zoning, data organization, and other low-level topics relating to Star Citizen. A lot of that content strays from direct Star Citizen discussion, but covers the underlying framework and “behind-the-scenes” development operations.

Previous encounters with Roberts have seen us discussing the game's zoning & instancing plans in great depth; since then, Roberts has brought up the system numerous times, expressing similar excitement each time. It's clear that the zoning and instancing architecture has required a clever approach to problem solving. In a pre-shoot conversation, Roberts told us that he “loves engineering problems,” and considered the instancing system to be one of the larger engineering challenges facing Star Citizen. The topic of instancing was revisited in this sit-down, though at a lower, more technical level.

A notebook-desktop graphics disparity has generally relegated portables to custom, lower-performance silicon when matched against desktop alternatives. The limitation is almost entirely tied to the thermal and power constraints imposed by a small chassis – especially for laptops aiming for a sub-1” thickness. For all the laptops we've helped readers reflow and re-paste with fresh thermal compound, it's clear that there's good reason to reduce the thermal envelope of a mobile GPU.

CPU-GPU thermal equilibrium is often reached in notebooks as a result of a shared cooling solution – normally a single copper heatpipe feeding into a single fan for dissipation. Until recently – the revolution of lower-TDP components spurred on by nVidia, Intel, and now AMD – the primary bottleneck to a notebook's performance has been that thermal headroom. Silicon-level optimizations have improved performance-per-watt to the point of tremendous gains with, for instance, the GTX 980M – but even the 980M exhibited a performance deficit of 35% against a “real” GTX 980.
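As a rough illustration of why a shared cooler caps performance, consider a minimal steady-state sketch: total dissipation is bounded by the temperature delta divided by the cooler's thermal resistance, and the CPU and GPU split whatever budget remains. Every figure below is a hypothetical assumption for illustration, not a measured value.

```python
# Minimal steady-state model of a shared notebook cooling solution.
# All figures are hypothetical assumptions for illustration, not measurements.

AMBIENT_C = 25.0         # ambient air temperature (deg C)
THROTTLE_C = 95.0        # assumed silicon throttle point (deg C)
R_COOLER_C_PER_W = 0.5   # assumed thermal resistance of heatpipe + fan (deg C/W)

def shared_budget_watts(ambient_c: float, limit_c: float, r_cooler: float) -> float:
    """Total CPU+GPU power the shared cooler can dissipate before throttling."""
    return (limit_c - ambient_c) / r_cooler

budget = shared_budget_watts(AMBIENT_C, THROTTLE_C, R_COOLER_C_PER_W)
cpu_tdp = 45.0  # assumed mobile CPU TDP (W)

print(f"Shared cooling budget: {budget:.0f} W")           # 140 W in this sketch
print(f"Remaining for the GPU: {budget - cpu_tdp:.0f} W")  # 95 W in this sketch
```

Under these made-up numbers, the GPU is left with roughly 95W – well short of a desktop GTX 980's 165W TDP, which is the gap a fully-enabled notebook chip has to close through binning and efficiency.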

NVidia aims to change that. The newest product in the company's lineup is actually a year old – it's the GTX 980, but in notebooks. Until this point, there's been the GTX 980M, but the GTX 980 “proper,” as we'll call it, hasn't been unleashed in notebooks in a fully unlocked form. We recently met with nVidia for an in-person hands-on with the new GTX 980 notebooks and have first impressions; a full review is due once we've got laptops in hand in the next week or so.

It's fitting that, following our giant post about AMD's recent downturn, I'd encounter my old ATi X800 Pro video card. I'd owned machines equipped with VGAs before this one, but this was the first standalone video card I ever bought. The model I purchased came equipped with a massive, top-of-the-line 256MB of GDDR3, a memory technology that ATi – independent of AMD at this time – had recently introduced.

The ATi Radeon X800 Pro used ATi's R420 GPU and was released in 2004, shipping in 256MB and 512MB capacities. For those complaining about the current stagnation on the 28nm process node, this GPU sat on a 130nm process – massive in comparison to today's nodes.

One video card to the next. We just reviewed MSI's R9 390X Gaming 8GB card at the mid-to-high range and the A10-7870K APU at the low-end, and now we're moving on to nVidia's newest product: the GeForce GTX 950.

NVidia's new GTX 950 is priced at $160, but scales up to $180 for some pre-overclocked models. The ASUS Strix GTX 950 that we received for testing is a $170 unit. These prices, then, land the GTX 950 in an awkward bracket; the GTX 750 Ti holds the budget class firmly below it and the R9 380 & GTX 960 hold the mid-range market above it.

The new GeForce GTX 950 graphics card is built on Maxwell architecture – the same GM206 GPU found in the GTX 960 – and hosts 2GB of GDDR5 memory on a 128-bit interface. More on that momentarily. The big marketing point for nVidia has been reduced input latency for MOBA games, something that's being pushed through GeForce Experience (GFE) in the immediate future.

This review benchmarks nVidia's new GeForce GTX 950 graphics card in the Witcher 3, GTA V, and other games, ranking it against the R9 380, GTX 960, 750 Ti, and others.

The hardware industry has been spitting out launches at a rate difficult to follow. Over the last few months, we've reviewed the GTX 980 Ti Hybrid (which won Editor's Choice & Best of Bench awards), the R9 Fury X, the R9 390 & 380, an A10-7870K APU, and Intel's i7-6700K.

We've returned to the world of graphics to look at MSI's take on the AMD Radeon R9 390X, part of the R300 series of refreshed GPUs. The R300 series adapts existing R200 architecture to the modern era, filling some of the market gap while AMD readies its Fiji platform. R300 video cards are targeted purely at gaming at an affordable price-point, something AMD has clung to for a number of years now.

This review of AMD's Radeon R9 390X benchmarks the MSI “Gaming” brand of the card, measuring FPS in the Witcher 3 & more, alongside power and thermal metrics. The MSI Radeon R9 390X Gaming 8G is priced at $430. This video card was provided by iBUYPOWER as a loaner for independent review.

Despite an ongoing period of general growth for the tech sector and desktop computing space, Jon Peddie Research today released a report indicating an 11% decline in last quarter's GPU shipments.

The report indicates that embedded GPUs and IGPs are eroding dGPU sales. Year-over-year, total dGPU shipments fell 18.8%, combining the 21.7% desktop graphics decline with a 16.9% notebook dGPU decline. Note that this report represents the entire dedicated graphics industry and is not a direct representation of the gaming-only market, to which this website caters more directly.
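For clarity on how those segment figures combine: the total is a shipment-weighted average of the two declines. The desktop/notebook split below is a hypothetical assumption chosen only so the arithmetic reproduces the reported total; JPR's actual segment volumes aren't restated here.

```python
# Shipment-weighted combination of segment declines (illustrative only).
# The 40/60 desktop/notebook split is an assumption picked to reproduce
# the reported total, not a figure from the JPR report.

desktop_decline = 0.217    # reported desktop dGPU decline, year-over-year
notebook_decline = 0.169   # reported notebook dGPU decline, year-over-year
desktop_share = 0.40       # hypothetical desktop share of total dGPU shipments

total = desktop_decline * desktop_share + notebook_decline * (1 - desktop_share)
print(f"Combined dGPU decline: {total:.1%}")  # ~18.8%, matching the report
```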

The Fury X has been a challenging video card to review. This is AMD's best attempt at competition and, as it so happens, the card includes two items of critical importance: A new GPU architecture and the world's first implementation of high-bandwidth memory.

Some system builders may recall AMD's HD 4870, a video card that was once a quickly-recommended solution for mid-to-high range builds. The 4870 was the world's first graphics card to incorporate the high-speed GDDR5 memory solution, reinforcing AMD's record of technological leaps in the memory field. Prior to the AMD acquisition, graphics manufacturer ATI designed the GDDR3 memory that remained in use all the way until GDDR5 (GDDR4 had a lifecycle of roughly a year, but was also first instituted on ATI devices).

Our R9 Fury X analysis is still forthcoming, but we interrupted other tests to quickly analyze driver performance between the pre-release press drivers and launch day consumer drivers.

All testing was conducted using a retail Fury X, as we were unable to obtain press sampling. This benchmark specifically tests performance of the R9 Fury X using the B8, B9, and release (15.15.1004) drivers against one another.

The purpose of this test is to address rumors that the Fury X would exhibit improved performance with the launch-day drivers (15.15.1004), with some speculation indicating that the press drivers were less performant.


AMD's most recent video card launch was September of 2014, introducing the R9 285 ($243) on the slightly updated Tonga GPU. Tonga was positioned to take the place of the Tahiti products, namely the HD 7970 and its refresh, the R9 280. The Radeon 7970 video card shipped in late 2011 on the Tahiti GPU, a die built on TSMC's still-active 28nm process, and was refreshed as the R9 280, then updated, improved, and refreshed again as the Tonga-equipped R9 285. At its core, the 285 offered effectively identical on-paper specs (with some changes, like a 256-bit memory bus against the predecessor's 384-bit bus), but introduced a suite of optimizations that yielded marginally improved performance over the R9 280.
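To put that bus-width change in perspective, theoretical peak memory bandwidth is just the bus width in bytes multiplied by the effective memory data rate. The sketch below uses the commonly cited reference clocks for the two cards – treat them as approximate.

```python
# Theoretical peak memory bandwidth: (bus width in bits / 8) * effective data rate.
# Memory clocks are the commonly cited reference specs; treat as approximate.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak GDDR5 bandwidth in GB/s."""
    return (bus_width_bits / 8) * data_rate_gbps

r9_280 = peak_bandwidth_gbs(384, 5.0)  # wider Tahiti bus: ~240 GB/s
r9_285 = peak_bandwidth_gbs(256, 5.5)  # narrower Tonga bus: ~176 GB/s

print(f"R9 280: {r9_280:.0f} GB/s | R9 285: {r9_285:.0f} GB/s")
```

Tonga's delta color compression recovers some of that raw deficit, which is part of how the 285 managed marginally improved performance despite the narrower bus.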

All of this is to say that it's been a number of years since AMD has introduced truly new architecture. Tahiti's been around nearly four years now; Hawaii shipped in 2013 as a same-node refresh of Tahiti (more CUs, ROPs, and geometry / rasterizer processors); and Fiji – the anticipated new GPU – won't ship for a bit longer. Filling that space is another refresh line, the Radeon 300 series of video cards.

AMD's lull in technological advancement on the hardware side has allowed competitor nVidia to entrench itself in some largely unchallenged market segments, like the high-end with the GTX 980 Ti ($650) and the mid-range with the GTX 960 ($200). The long-awaited R9 300 series video cards have finally arrived, though, and while they aren't hosting new GPUs or deploying a smaller fab process, the cards do offer marginally increased clockrates and other small changes.

This review benchmarks the AMD R9 390 and AMD R9 380 graphics cards against the preceding R9 280, R9 290(X), GTX 960, and other devices. The R7 370 and R7 360 also launch today, but won't be reviewed here.
