Hardware Guides

Best Gaming Video Cards Under $200 (2016)

Published November 15, 2016 at 2:23 pm

So begin our buyer's guides for the season. The first of our Black Friday & holiday buyer's guides focuses on the top video cards under $200, highlighting ideal graphics cards for 1080p gaming. We've reviewed each of the GPUs used in these video cards, and we can use that benchmark data to determine the top performers for the dollar.

This generation's releases offer, in order of ascending MSRP, the RX 460 ($100), GTX 1050 ($110), GTX 1050 Ti ($140), RX 470 ($170), RX 480 4GB ($200), and GTX 1060 3GB ($200). A few active sales offer rebates and discounts that drop a few noteworthy cards, like the 4GB RX 480 and 3GB GTX 1060, below MSRP. The same is true for at least one RX 470.

As we've drawn a clear price line between each of the major GPUs that presently exist in this segment, we're making it a point to specifically highlight cards that are discounted or offer higher performance per dollar. This is a quick reference guide for graphics cards under $200; for the full details and all the caveats, always refer back to our reviews.

Virtual reality has begun its charge to drive technological development for the immediate future. For better or worse, we've seen the backpacks, the new wireless tether agents, the "VR cases," the VR 5.25" panels -- it's all VR, all day. We still believe that, although the technology is ready, game development has a way to travel yet -- but now is the time to start thinking about how VR works.

NVIDIA's Tom Petersen, Director of Technical Marketing, recently joined GamersNexus to discuss the virtual reality pipeline and the VR equivalent to frametimes, stutters, and tearing. Petersen explained that a "warp miss" or "drop frame" (both unfinalized terminology) are responsible for an unpleasant experience in VR, but that the consequences are far worse for stutters given the biology involved in VR.
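To put rough numbers on the deadline Petersen describes, here is a minimal sketch, assuming a 90 Hz headset (the refresh rate of the 2016-era Rift and Vive); the "warp miss" check below is our illustrative simplification, not NVIDIA's implementation:

```python
# Hypothetical sketch: frame-time budget for a VR headset.
# Assumes a 90 Hz refresh rate; "warp miss" is Petersen's
# (unfinalized) term, modeled here simply as a blown deadline.

def frame_budget_ms(refresh_hz: float) -> float:
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / refresh_hz

def is_warp_miss(frame_time_ms: float, refresh_hz: float = 90.0) -> bool:
    """A frame that takes longer than its budget misses the warp deadline."""
    return frame_time_ms > frame_budget_ms(refresh_hz)

print(round(frame_budget_ms(90.0), 1))  # ~11.1 ms per frame at 90 Hz
print(is_warp_miss(14.0))               # True: a 14 ms frame misses
```

At 90 Hz, the renderer has roughly 11.1 ms per frame; anything longer risks the stutter-like artifacts discussed in the interview.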

In the video below, we talk with Petersen about the VR pipeline and its equivalencies to a traditional game refresh pipeline. Excerpts and quotations are below.


EVGA VRM Test Planning: New Thermocouples

Published November 10, 2016 at 8:30 am

We're working on finalizing our validation of the EVGA VRM concerns that arose recently, addressed by the company with the introduction of a new VBIOS and optional thermal pad solution. We tested each of these updates in our previous content piece, showing a marked improvement from the more aggressive fan speed curve.

Now, that stated, we still wanted to dig deeper. Our initial testing did apply one thermocouple to the VRM area of the video card, but we weren't satisfied with the application of that probe. It was enough to validate our imaging results, which were built around validating Tom's Hardware DE's results, but we needed to isolate a few variables to learn more about EVGA's VRM.

This tutorial walks through the process of installing EVGA's thermal pad mod kit on GTX 1080 FTW, 1070 FTW, and non-FTW cards of similar PCB design. Our first article on EVGA's MOSFET and VRM temperatures can be found here, but we more recently posted thermographic imaging and testing data pertaining to EVGA's solution to its VRM problems. If you're out of the loop, start with that content, then come back here for a tutorial on applying EVGA's fix.

The thermal mod kit from EVGA includes two thermal pads, for which we have specified the dimensions below (width/height), a tube of thermal compound, and some instructions. That kit is provided free to affected EVGA customers, but you could also buy your own thermal pads (~$7) of comparable size if EVGA cannot fulfill a request.

We've had enough suggestions lately to revisit older hardware that we thought it was time. The GTX 770 2GB cards first shipped in May of 2013, making the GPU more than three years old, and launched at a $400 price-point. That makes the GTX 1070 the most linear upgrade -- it's a direct path in nomenclature and in price, also around $400 -- but it's not alone in this market. The RX 480 assuredly outperforms the GTX 770, as does the GTX 1060. More curious, though, is the once mighty GTX 770's performance in relation to the GTX 1050, RX 460, and 1050 Ti, all of which can be had below $140.

It's probably about time for an upgrade for GTX 770 owners. Don't get us wrong: The GTX 770 2GB can still hold its ground just fine, but only with the assistance of settings reductions when playing modern AAA titles. Even for "just" 1080p performance, the likes of Ultra and High aren't necessarily feasible in games like Battlefield 1.

Resolution is a worthwhile side-point, too. Last time we talked about the GTX 770 in depth, 1080p was really the only resolution worth considering from a review standpoint. We certainly didn't have 4K monitors in the lab yet, and 1440p was still only a small fraction of the market. With 1920x1080 holding more than 80% of the gaming market today, it's easy to believe that the share was even greater in 2013.

Things are changing, though, and the industry is evolving. We talked about this in our GTX 1060 and RX 480 reviews, both devices that are capable of 1440p gaming with relatively high graphics settings. Considering the price of each card, around $240-$250 for the bottom line devices, that's a major accomplishment for this year's GPU architectures.

Part 1 of our interview with AMD's RTG SVP & Chief Architect went live earlier this week, where Raja Koduri talked about shader intrinsic functions that eliminate abstraction layers between hardware and software. In this second and final part of our discussion, we continue on the subject of hardware advancements and limitations of Moore's law, the burden on software to optimize performance to meet hardware capabilities, and GPUOpen.

The conversation started with GPUOpen and new, low-level APIs – DirectX 12 and Vulkan, mainly – which were a key point of discussion during our recent Battlefield 1 benchmark. Koduri emphasized that these low-overhead APIs kick-started an internal effort to open the black box that is the GPU, and begin the process of removing “black magic” (read: abstraction layers) from the game-to-GPU pipeline. The effort was spearheaded by Mantle, now subsumed by Vulkan, and has continued through GPUOpen.

We've already looked extensively at the GTX 1060 3GB vs. GTX 1060 6GB buying options and covered the RX 480 4GB vs. 8GB options, but we haven't yet tested the 3GB & 4GB SKUs head-to-head. In this content, we're using the latest drivers to specifically benchmark the GTX 1060 3GB versus the RX 480 4GB cards to determine which has the best framerate for the price.
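The "framerate for the price" comparison can be reduced to a simple metric. The sketch below uses placeholder FPS values, not GamersNexus benchmark data, purely to show the arithmetic:

```python
# Sketch of an FPS-per-dollar metric. The avg_fps figures are
# hypothetical placeholders, not measured benchmark results.

def fps_per_dollar(avg_fps: float, price_usd: float) -> float:
    """Higher is better: average framerate normalized by card price."""
    return avg_fps / price_usd

cards = {
    "GTX 1060 3GB": {"price": 200, "avg_fps": 60.0},  # hypothetical
    "RX 480 4GB":   {"price": 200, "avg_fps": 58.0},  # hypothetical
}

for name, c in cards.items():
    print(f"{name}: {fps_per_dollar(c['avg_fps'], c['price']):.3f} FPS/$")
```

At equal $200 price points, the comparison collapses to raw framerate; discounts on either card shift the value calculus accordingly.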

Each of the lower VRAM spec SKUs has a few other tweaks in addition to its memory capacity reduction. The GTX 1060 3GB, for instance, also eliminates one of its SMs. In turn, that kills 128 CUDA cores and 8 TMUs, dragging the 1060 down from 1280 cores / 80 TMUs to 1152 cores / 72 TMUs on the GTX 1060 3GB model. AMD's RX 480 4GB card, meanwhile, has a lower minimum specification for memory to assist in cost management. The RX 480 4GB has a minimum memory speed of ~1750MHz (or ~7Gbps effective), whereas the RX 480 8GB model runs 2000MHz (8Gbps effective).
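The memory-speed difference translates directly into bandwidth. As a rough sketch (using the standard GDDR5 quad-data-rate relationship and the cards' published bus widths: 256-bit on the RX 480, 192-bit on the GTX 1060):

```python
# Sketch of the GDDR5 bandwidth math behind the spec differences above.
# GDDR5 transfers four bits per clock per pin, so a ~1750 MHz memory
# clock yields ~7 Gbps effective per pin.

def effective_gbps(mem_clock_mhz: float) -> float:
    """GDDR5 effective data rate per pin (quad data rate)."""
    return mem_clock_mhz * 4 / 1000

def bandwidth_gbs(mem_clock_mhz: float, bus_width_bits: int) -> float:
    """Total memory bandwidth in GB/s."""
    return effective_gbps(mem_clock_mhz) * bus_width_bits / 8

print(bandwidth_gbs(1750, 256))  # RX 480 4GB (256-bit): 224.0 GB/s
print(bandwidth_gbs(2000, 256))  # RX 480 8GB (256-bit): 256.0 GB/s
print(bandwidth_gbs(2000, 192))  # GTX 1060 (192-bit):   192.0 GB/s
```

So the 4GB RX 480's minimum memory spec gives up roughly 12% of the 8GB model's bandwidth, while the GTX 1060's narrower bus is offset by its different architecture and compression.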

NZXT's new Kraken X42, X52, and X62 liquid coolers were announced today, all using the new Asetek Gen5 pump with substantial custom modifications. The most direct Gen5 competition would be from Corsair, makers of the H115i and H100iV2, each priced to compete with the Kraken X42 ($130) and X52. The Corsair units, however, are using an unmodified Asetek platform from top-to-bottom, aside from a couple of Corsair fans. NZXT's newest endeavor had its components dictated by NZXT, including a custom (and fairly complex) PCB for fan speed, pump speed, and RGB control, planted under a custom pump plate with infinity mirror finish. The unit has gone so far as to demand a double-elbow barb for pose-able tubes, rather than the out-the-top setup of the Asetek stock platform – that's some fastidious design.

As for how we know all of this, it's because we've already disassembled a unit. We decided to dismantle one of our test-complete models to learn about its internals, since we're still waiting for the X52 and X62 models to be review-ready. We've got a few more tests to run.

Before getting to the tear-down, let's run through the specs, price, and availability of NZXT's new Kraken X42, X52, and X62 closed-loop liquid coolers. 

Abstraction layers that sit between the game code and hardware create transactional overhead that worsens software performance on CPUs and GPUs. This has been a major discussion point as DirectX 12 and Vulkan have rolled out to the market, particularly with DOOM's successful implementation. Long-standing API incumbent Dx 11 sits unmoving between the game engine and the hardware, preventing developers from leveraging specific system resources to efficiently execute game functions or rendering.

Contrary to this, it is possible, for example, to optimize tessellation performance by making explicit changes in how its execution is handled on Pascal, Polaris, Maxwell, or Hawaii architectures. A developer could accelerate performance by directly commanding the GPU to execute code on a reserved set of compute units, or could leverage asynchronous shaders to process render tasks without getting “stuck” behind other instructions in the pipeline. This can't be done with higher-level APIs like Dx 11, but DirectX 12 and Vulkan both allow this lower-level hardware access; you may have seen this referred to as “direct to metal,” or “programming to the metal.” These phrases reference that explicit hardware access, and have historically been used to describe what Xbox and PlayStation consoles enable for developers. It wasn't until recently that this level of support came to PC.

In our recent return trip to California (see also: Corsair validation lab tour), we visited AMD's offices to discuss shader intrinsic functions and performance acceleration on GPUs by leveraging low-level APIs.

We toured Corsair's new offices about a year ago, where we briefly posted about some of the validation facilities and the then-new logo. Now, with the offices fully populated, we're revisiting to talk wind tunnels, thermal chambers, and test vehicles for CPU coolers and fans. Corsair Thermal Engineer Bobby Kinstle walks us through the test processes for determining on-box specs, explaining hundreds of thousands of dollars worth of validation equipment along the way.

This relates to some of our previous content, where we got time with a local thermal chamber to validate our own methodology. You might also be interested to learn about when and why we use delta values for cooler efficacy measurements, and why we sometimes go with straight diode temperatures (like thermal limits on GPUs).
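The delta-value approach mentioned above is simple arithmetic; a minimal sketch, with made-up temperatures for illustration:

```python
# Minimal sketch of delta-over-ambient cooler measurement.
# Reporting delta T removes run-to-run ambient variation from results.

def delta_t(diode_c: float, ambient_c: float) -> float:
    """Cooler efficacy as temperature rise over ambient, in degrees C."""
    return diode_c - ambient_c

# Two runs in rooms at different temperatures produce the same delta:
print(delta_t(52.0, 22.0))  # 30.0
print(delta_t(54.0, 24.0))  # 30.0
```

Absolute diode temperatures still matter when a component's thermal limit is the question (as with GPU throttle points), which is why we sometimes report those directly instead.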

Video here (posting remotely -- can't embed): https://www.youtube.com/watch?v=Mf1uI2-I05o

