Regardless of how its mechanics pan out, Star Citizen is slated to claim the throne as one of the most graphically intensive PC games in recent history. This is something we discussed with CIG's Chris Roberts back when the Kickstarter was still running, diving into the graphics technology and the team's intent to fully utilize all tools available to them.

We've been trying to perform frequent benchmarks of Star Citizen as the game progresses. This progress monitor comes with a massive disclaimer, though, and is something we'll revisit shortly: The game isn't finished.

The recent launch of the GTX 980 Ti, R9 Fury X, and AMD 300 series cards almost demands a revisit to Star Citizen's video card performance. This graphics benchmark looks at GPU performance in Star Citizen's 1.1.3 build, testing framerates at various settings and resolutions.

Following our initial review of AMD's new R9 390 ($330) and R9 380 ($220) video cards, we took the final opportunity prior to loaner returns to overclock the devices. Overclocking the AMD 300 series graphics cards is a slightly different experience from nVidia overclocking, but the approach is methodologically the same: We tune the clockrate, power, and memory speeds, then test for stability.

The R9 390 and R9 380 are already pushed close to their limits. The architectural refresh added about 50MHz to the operating frequency of each card, with some power and memory clock changes tacked on. The end result is that the GPU is nearly maxed-out as shipped, but a small amount of headroom remains to play with. This overclocking guide and benchmark for the R9 390 & R9 380 looks at the maximum clockrate achievable through tweaking.
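
For a mental model of that tune-then-test loop, here's a minimal sketch in Python. The `apply_core_offset` and `run_stress_test` callables are hypothetical stand-ins for manual work in a tuning utility, not a real API; the point is the step-and-verify process.

```python
# Hypothetical sketch of the tune-then-test loop: raise the core clock offset
# in small steps, verify stability at each step, and settle on the last value
# that passed. apply_core_offset() and run_stress_test() are stand-ins for
# manual work in a tuning utility; they are not a real API.

CORE_STEP_MHZ = 10       # small increments; the 300 series has little headroom
MAX_OFFSET_MHZ = 150     # arbitrary safety cap for the sketch

def find_stable_core_offset(apply_core_offset, run_stress_test):
    last_stable = 0
    offset = CORE_STEP_MHZ
    while offset <= MAX_OFFSET_MHZ:
        apply_core_offset(offset)
        if not run_stress_test(duration_s=600):  # e.g., a looped benchmark run
            break                                # instability: stop stepping up
        last_stable = offset
        offset += CORE_STEP_MHZ
    apply_core_offset(last_stable)               # fall back to the last good value
    return last_stable
```

The same loop repeats for the memory clock and power settings, re-validating stability after each change.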

All these tests were performed with Sapphire's “Nitro” series of AMD 300 cards, specifically using the Sapphire Nitro R9 390 Tri-X and Sapphire Nitro R9 380 Dual-X cards. Results will be different for other hardware.

Working with the GTX 980 Ti ($650) proved that nVidia could supplant its own device for lower cost, limiting the use cases of the Titan X primarily to those with excessive memory requirements.

In our GTX 980 Ti overclocking endeavors, we quickly discovered that the card encountered thermal bounds at higher clockrates. Driver failures and device instability were exhibited at frequencies exceeding ~1444MHz; a ~40% boost in clockrate is admirable, but the reference cooler's thermal limits kept us from pushing further. The outcome of our modest overclocking effort was a roughly 19% performance gain (measured in FPS) across a selection of our benchmark titles, enough to propel the 980 Ti beyond the Titan X in gaming performance. Most games cared more about the raw clock speed of the lower CUDA-count 980 Ti than about the Titan X's memory capacity.

Our initial review of the $650 GTX 980 Ti, published just over twelve hours prior to this post, mentioned an additional posting focusing on the card's overclocking headroom. The GTX 980 Ti runs GM200, the same GPU found in nVidia's Titan X video card, and is driven by Maxwell's new overclocking ruleset.

Maxwell, as we've written in a how-to guide before, overclocks differently from other architectures. nVidia's newest design institutes a power percent target (“Power % Target”) that increments power provisioning to the die to grant OC headroom. Unfortunately, this target can't be raised beyond what the BIOS natively allows (without a hack, anyway), which means we're sharing watts between the core clock, memory clock, and voltage increase. Overclocking on Maxwell offers some granularity without making things too complicated, though it's not until we get hands-on with board partner video cards that we'll know the true OC ceiling of the 980 Ti.
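
To illustrate that budget-sharing idea, the toy model below treats the core offset, memory offset, and voltage bump as competing draws on one fixed power allowance capped by the BIOS. The weights and cap are invented purely for illustration and don't reflect real GM200 power tables.

```python
# Toy model of Maxwell's shared power budget. The weights and cap below are
# invented for illustration; real GM200 power tables live in the BIOS and are
# not exposed like this.

BIOS_POWER_CAP_PCT = 110   # assumed example: BIOS allows a 110% power target

def fits_power_budget(core_offset_mhz, mem_offset_mhz, extra_mv,
                      power_target_pct=BIOS_POWER_CAP_PCT):
    # Each knob "spends" part of the same headroom (made-up linear costs).
    cost = (core_offset_mhz * 0.05    # core clock is the hungriest knob
            + mem_offset_mhz * 0.01   # memory is comparatively cheap
            + extra_mv * 0.10)        # voltage costs power too (simplified)
    headroom = min(power_target_pct, BIOS_POWER_CAP_PCT) - 100
    return cost <= headroom

# A +100MHz core, +200MHz memory, +20mV bump against the 110% target:
# 5 + 2 + 2 = 9 "points" of cost against 10 points of headroom, so it fits.
print(fits_power_budget(100, 200, 20))   # True
```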

This post showcases our GTX 980 Ti initial overclock on the reference cooler, yielding a considerable framerate gain in game benchmarks.

Following unrelenting rumors pertaining to its pricing and existence, nVidia's GTX 980 Ti is now an officially-announced product and will be available in the immediate future. The GTX 980 Ti was assigned an intensely competitive $650 price-point, planting the device firmly in a position to usurp the 780 Ti's positioning in nVidia's stack.

The 980 Ti redeploys the GTX 980's “The World's Most Advanced GPU” marketing language, a careful indication of single-GPU performance against price-adjacent dual-GPU solutions. This video card takes the market positioning of the original GTX 780 Ti Kepler device in the vertical, slotting in above the GTX 980 and below the Titan X in nVidia's bottom-up stack.

Until Pascal arrives, nVidia is sticking with its maturing Maxwell architecture. The GTX 980 Ti uses the same memory subsystem and compression technology as previous Maxwell devices.

This GTX 980 Ti review benchmarks the video card's performance against the GTX 980, Titan X, 780 Ti, 290X, and other devices, analyzing FPS output across our suite of test bench titles. Among others tested, the Witcher 3, GTA V, and Metro: Last Light all make an appearance.

During the GTA V craze, we posted a texture resolution comparison that showcased the drastic change in game visuals from texture settings. The GTA content also revealed VRAM consumption and the effectively non-existent impact on framerates by the texture setting. The Witcher 3 has a similar “texture quality” setting in its game graphics options, something we briefly mentioned in our Witcher 3 GPU benchmark.

This Witcher 3 ($60) texture quality comparison shows screenshots with settings at Ultra, High, Normal, and Low using a 4K resolution. We also measured the maximum VRAM consumption for each setting in the game, hoping to determine whether VRAM-limited devices could benefit from dropping texture quality. Finally, in-game FPS was measured as a means to determine the “cost” of higher quality textures.
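
The bookkeeping behind those three measurements is simple enough to sketch. The sample values below are placeholders rather than our results, and the capture method (GPU-Z logging or similar) is left out of scope.

```python
# Bookkeeping sketch for the texture-setting comparison: for each setting,
# record peak VRAM (MB) and average FPS from sampled frametimes (ms).
# Sample values are placeholders, not measured results.

def summarize(vram_samples_mb, frametimes_ms):
    peak_vram = max(vram_samples_mb)
    avg_fps = 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))
    return peak_vram, avg_fps

runs = {
    "Ultra": ([2300, 2410, 2395], [17.2, 16.9, 17.5]),  # placeholder samples
    "Low":   ([1800, 1850, 1840], [16.8, 16.6, 17.0]),
}

for setting, (vram, frametimes) in runs.items():
    peak, fps = summarize(vram, frametimes)
    print(f"{setting}: peak VRAM {peak}MB, average {fps:.1f}FPS")
```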

Benchmarking the Witcher 3 proved to be more cumbersome than any game we've ever benchmarked. CD Projekt Red's game doesn't front the tremendously overwhelming assortment of options that GTA V does – all of which we tested, by the way – but it was still a time-consuming piece of software to analyze. This is largely due to optimization issues across the board, but we'll dive into that momentarily.

In this Witcher 3 – Wild Hunt PC benchmark, we compare the FPS of graphics cards across resolutions (1080p, 1440p, 4K) and settings to uncover achievable framerates. Among others, we tested SLI GTX 980s, a Titan X, GTX 960s, last-gen cards, and AMD's R9 290X, 285, and 270X. Game settings were tweaked for the fairest comparison (per the methodology below), but we primarily checked FPS at 1080p (ultra, medium, low), 1440p (ultra, medium), and 4K (ultra, medium).
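
For a sense of scale, the test plan is effectively a cross product of cards, resolutions, and presets; the sketch below (card list abbreviated) just counts the combinations.

```python
# The test plan is a cross product of cards, resolutions, and presets.
# Card list abbreviated for the sketch; the real bench covers more hardware.
from itertools import product

cards = ["SLI GTX 980", "Titan X", "GTX 960", "R9 290X", "R9 285", "R9 270X"]
configs = [("1080p", "Ultra"), ("1080p", "Medium"), ("1080p", "Low"),
           ("1440p", "Ultra"), ("1440p", "Medium"),
           ("4K", "Ultra"), ("4K", "Medium")]

test_passes = [(card, res, preset) for card, (res, preset) in product(cards, configs)]
print(len(test_passes), "configurations before repeat passes")  # 42
```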

That's a big matrix.

Let's get started.

It's been a while since our last card-specific GTX 980 review – and that last one wasn't exactly glowing. Despite the depressing reality that 6 months is “old” in the world of computer hardware, the GTX 980 and its GM204 GPU have both remained near the top of single-GPU benchmarks. The only single-GPU – meaning one GPU on the card – AIC that's managed to outpace the GTX 980 is the Titan X, and that's $1000.

This review looks at PNY's GTX 980 XLR8 Pro ($570) video card, an ironclad-like AIC with pre-overclocked specs. Alongside the XLR8 Pro graphics card, we threw in the reference GTX 980 (from nVidia) and MSI's Gaming 4G GTX 980 (from CyberPower) when benchmarking.

The performance disparity between same-architecture desktop and mobile GPUs has historically been comparable to multi-generational gaps in desktop components. Recent advancements by GPU manufacturers have closed the mobile performance gap to within about 10% of desktop counterparts, an impressive feat that results in low-TDP, highly performant laptops with longer battery life.

Battery life has long been a joke for gaming laptops. Notebooks that yield gaming prowess of any measure are normally, affectionately, named “desktop replacements” and never disconnected from the wall. As modern architectures have improved process nodes and reduced power requirements, it's finally become possible for gaming laptops to operate for a moderate amount of time on battery. Battery life is dictated by a few key points: active power consumption of the components, thermal levels of the system and battery, and power efficiency elsewhere in the stack (S0iX on CPUs, DevSleep with SSDs, for instance).
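
As a back-of-the-envelope illustration of the first of those points, runtime is roughly battery capacity divided by average system draw. The wattages below are hypothetical examples, not measurements from any laptop.

```python
# Back-of-the-envelope runtime estimate: battery capacity (Wh) divided by
# average system draw (W). All numbers are hypothetical examples.

battery_wh = 60.0   # example: a 60Wh notebook battery

for scenario, avg_draw_w in [("light desktop use", 12.0),
                             ("gaming on battery", 55.0)]:
    hours = battery_wh / avg_draw_w
    print(f"{scenario}: ~{hours:.1f} hours")
```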

Following the comparatively bombastic launch of the HyperX Predator SSD, an M.2 SSD fitted to a PCI-e adapter, Kingston this week launched its “Savage” SATA SSD. The Savage SSD assumes the modern branding efforts fronted by HyperX, which has streamlined its product lineup into a hierarchical Fury, Savage, Beast/Predator suite. These efforts eliminate long-standing names like “Genesis” and “Blu,” replacing them with – although sometimes silly – names that are more cohesive in their branding initiative.

The new Savage SSD sees integration of the Phison PS3110-S10 controller, usurping the long-standing HyperX 3K SSD and its SandForce 2nd Gen controller from Kingston's mid-range hot-seat. HyperX's Savage operates on the aging SATA III interface; this ensures claustrophobic post-overhead transfer limitations that can't be bypassed without a faster interface, largely thanks to information transfer protocols that consume substantial bandwidth. 8b/10b encoding, for example, eats into the SATA III 6Gbps spec to the point of reducing its usable throughput to just 4.8Gbps (~600MB/s). This means that, at some point, the argument of SATA SSD selection based upon speed loses merit. Other aspects – endurance and encryption, for two easy ones – should be held in higher regard when conducting the pre-purchase research process.
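
The 8b/10b arithmetic is easy to check: ten bits go over the wire for every eight bits of payload, so the usable ceiling falls out as below.

```python
# SATA III line rate vs. usable throughput after 8b/10b encoding: ten bits on
# the wire carry eight bits of data, so 20% of the line rate is overhead.

line_rate_gbps = 6.0
usable_gbps = line_rate_gbps * 8 / 10   # 4.8Gbps of payload
usable_mb_s = usable_gbps * 1000 / 8    # ~600MB/s ceiling, before protocol
                                        # overhead and drive limits

print(f"{usable_gbps}Gbps of payload, roughly {usable_mb_s:.0f}MB/s")
```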
