Consoles have long touted the phrase “close to the metal” to explain that game developers face fewer software-side obstacles between their application and the hardware. One of the largest obstacles (and enablers) for PC gaming has been DirectX, an API that provides sweeping hardware compatibility (and better backwards compatibility), but also throttles performance with its tremendous overhead. Mantle, an effort of debatable value, first marketed itself as a replacement for Dx11, proclaiming DirectX to be dead. Its primary advantage mirrored console development: removing overhead to allow greater software-hardware performance. Then DirectX 12 showed up.
DirectX is a Microsoft API that has been the dominant programming interface for games for years. Mantle 1.0 is AMD's now-abandoned API, deprecated as developers shift to adopt Dx12. The remnants of Mantle's codebase are being adapted into OpenGL, a graphics API that holds only a minimal share of the desktop market.
About a year ago, we published a piece notifying our readers that hoax HDMI-to-VGA passive cables did absolutely nothing for the buyer; we called them “fake,” because a passive cable is electrically incapable of transforming a signal, and therefore cannot serve as a digital-to-analog adapter without some sort of active conversion taking place. There are a few hardware-side exceptions, but they are rare.
It was in this same article that we mentioned “SATA III cables” vs. “SATA II cables,” noting that the two cables are functionally identical; the transfer rates are the same between a “SATA III” cable and a “SATA II” cable. The difference, as defined by the official SATA specification, is a lock-in clip to ensure unshaken contact. After the piece went viral through Lifehacker, this simple statement of fact was met with a somewhat disheartening amount of resistance from an audience we don't usually cater to. Today, we had enough spare time to reinforce our statements with objective benchmarking.
In town for GTC, we decided to stop over in Fremont, California to tour Corsair's new US headquarters. The company moved to its new location in November and has only just begun filling the entire space, but critical business components were in full operation during our visit. Among these components are the various test and engineering labs, which provided a hands-off look at some of the test equipment deployed by the memory giant and cooling manufacturer.
Jumping straight into equipment discussion feels unfair, though – that slider image deserves demystifying. Looming above is Corsair's new logo, spotted just outside the building before our tour. The logo varies only slightly from the company's current sails logo, introducing harder edges for a more 'modern' design. This change follows an unbelievably polarizing debate among gamers over the unveiling of Corsair's “gaming forged” logo, a crossed-scimitar design intended for some peripherals.
Stepping into Valve’s full-room virtual reality experience resulted in a nervous excitement that's rare to come by. Seated quietly in the center of the room, HTC’s “Vive” HMD, a pair of controllers, and a headset all awaited my arrival.
Our initial review of nVidia's new GTX 960 looked at ASUS' Strix model of the card, a $210 unit with a custom cooler and an emphasis on silence. We declared the GTX 960 a formidable competitor in its price range, remarking that its software-side support and power made it a primary choice for 1080p gaming. AMD's closest competitor is the R9 280 – a powerful alternative for users who don't mind a slightly higher TDP and less frequent driver updates – priced closer to $170 after rebates.
As nVidia continues to push SLI as an actionable configuration, the question of SLI compatibility with video games is raised once again. Not all games adequately support SLI and, for this reason, we've historically recommended a single, more powerful GPU over two mid-range options in SLI.
Our daily activities include extensive graphics benchmarking of various video cards and games, often including configuration, OC, and performance tweaks. As part of these benchmarks, we publish tables comparing FPS across the most popular graphics cards, ultimately helping determine the true requirements for gaming at a high FPS.
Although our test methodology includes extra steps to ensure an isolated, clean operating environment for benchmarking, the basics of testing can be executed on everyday gaming systems. This article explains how to benchmark your graphics card, framerate (FPS), and video games to determine whether your PC can play a game. Note that we've simplified our methodology for implementation outside of a more professional environment.
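To make the simplified methodology concrete, the core math of a framerate benchmark can be sketched in a few lines: converting a log of frametimes (milliseconds per frame, the kind of output capture tools like FRAPS produce) into an average FPS and a 1% low. This is an illustrative sketch with made-up sample numbers, not our actual test suite.

```python
def fps_summary(frametimes_ms):
    """Return (average FPS, 1% low FPS) for a list of frametimes in ms."""
    # Average FPS: total frames divided by total seconds elapsed
    avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
    # 1% lows: the FPS equivalent of the slowest 1% of frames,
    # a common way to quantify stutter that averages hide
    worst = sorted(frametimes_ms, reverse=True)
    slowest = worst[:max(1, len(worst) // 100)]
    one_pct_low = 1000.0 * len(slowest) / sum(slowest)
    return avg_fps, one_pct_low

# Hypothetical capture: mostly 60 FPS frames with two stutters
frametimes = [16.7] * 98 + [33.3, 40.0]
avg, low = fps_summary(frametimes)
print(round(avg, 1), round(low, 1))  # prints: 58.5 25.0
```

The gap between the two numbers is the point: a run can average near 60 FPS while its worst frames dip to an effective 25 FPS, which is what players perceive as stutter.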
Recently, the monitor industry has amusingly reminded me of laundry detergent. It seems like everybody is coming out with detergents that are four times as potent, and the monitor industry isn't too different in its marketing language. With the rising popularity of 4K, it's just a matter of time until the norm is to have a monitor with four times as many pixels as a 1080p screen.
The normalization of 4K monitors is certainly exciting, but current-gen GPUs still struggle to play games at such a high resolution. Similarly, prices for 4K monitors may be dropping, but they remain high for the average gamer. Luckily, 2560x1440 screens are a reasonable compromise between performance, pixels, and price.
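The pixel math behind the “four times as many pixels” claim is easy to verify directly; a quick sketch using the standard resolutions (the code is just arithmetic):

```python
# Pixel-count comparison for common gaming resolutions
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K UHD": (3840, 2160),
}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(pixels["4K UHD"] / pixels["1080p"])           # prints: 4.0
print(round(pixels["1440p"] / pixels["1080p"], 2))  # prints: 1.78
```

At roughly 1.78x the pixels of 1080p, 1440p asks far less of a GPU than 4K's 4x while still delivering a visible density improvement.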
This round-up looks at some of the best 1440p displays on the market, particularly with a focus on gaming needs.
Save CPUs, all component manufacturing in the PC hardware industry centers on the same core philosophy: design a PCB, design the aesthetics and/or heatsink, then purchase the semiconductor or Flash supply and build a product. In the case of video cards, board partners are responsible for designing aftermarket coolers (and PCBs, if straying from reference), but purchase the GPU itself from AMD or nVidia. The “hard work” is done by the GPU engineers and fabrication plants, but that's not to trivialize the thermal engineering that board partners invest in coolers.
When our readers ask us which version of a particular video card is “best,” we have to take into account several use-case factors and objective design factors. Fully passive cooling solutions may be best for gaming HTPCs like this one, but can't be deployed for higher-TDP graphics hardware. That's where various aftermarket designs come into play, each prioritizing noise, dissipation, and flair to varying degrees.
Our coverage of last year's best PC enclosures has remained some of our most popular content to date, and as is CES tradition, we're updating the coverage for 2015. The previous years have gone through trends of mini-ITX / SFF boxes (the Steam Box craze, now dying down) and larger, enthusiast-priced boxes. This year's CES trends saw a lull from major case manufacturers like Corsair, Cooler Master (reeling from a lawsuit by Asetek), and NZXT, but welcomed budget-friendly enclosures and high-end works of art. Users seeking more mid-range enclosures will be left waiting a while longer, it seems.
As a part of our new website design – pending completion before CES – we've set forth on a mission to define several aspects of GPU technology with greater specificity than we've done previously. One of these aspects is texture fill-rate (or filter rate) and the role of the TMU, or Texture Mapping Unit.
When listing GPU specifications, we often enumerate the clockrate and TMU count, among other specs. These two items are directly related to one another, each used to extrapolate the “texture filter rate” of the GPU. The terms “Texture Fill-Rate” and “Texture Filter Rate” can be used interchangeably. For demonstration purposes, here is a specifications table for the GTX 980 (just because it's recent):
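The extrapolation itself is simple multiplication: texture fill-rate is the core clock times the TMU count. A minimal sketch, using the GTX 980's reference figures (1126 MHz base clock, 128 TMUs) as the worked example:

```python
def texture_fill_rate_gtps(clock_mhz, tmu_count):
    """Texture fill-rate in GTexels/s: textures filtered per second,
    in billions, at the given core clock."""
    return clock_mhz * 1e6 * tmu_count / 1e9

# GTX 980 reference: 1126 MHz base clock x 128 TMUs
print(round(texture_fill_rate_gtps(1126, 128), 1))  # prints: 144.1
```

That 144.1 GTexel/s result lines up with the card's commonly cited base-clock fill-rate; boost clocks push the effective number higher.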