Retailers and manufacturers are always happy to give consumers purchasing options: Spend an extra $30 for buyer's insurance, another $50 for an extended warranty, untold thousands on a car to add Bluetooth, and, in the case of video cards, an extra $20 for a "faster" card in the form of a pre-overclock or "SuperClock."
We've explained overclocking as it pertains to GPUs in the past, but never looked specifically at pre-overclocked or SuperClocked cards. The realistic intent of higher-clocked GPUs is to serve users who are too busy or lazy to overclock, who would prefer to have an expert do it for them, or who are legitimately unaware of overclocking or afraid of it. Some of the high-end overclocking cards are binned with hotter chips (chips that can overclock higher), but not all SuperClocked and pre-overclocked cards are like this; many of the available options are just overclocked versions of the stock card.
SSD benchmarks generally include two fundamental file I/O tests: Sequential and 4K random R/W. At a top level, sequential tests consist of large, individual file transfers (think: media files), which is more indicative of media consumption and large-file rendering / compilation. 4K random tests employ thousands of I/O requests of roughly 4KB each, generally producing results that are more indicative of what a user might experience in a Windows or application-heavy environment.
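To make the two access patterns concrete, here's a minimal sketch of each in Python. This is purely illustrative: real SSD benchmarks (CrystalDiskMark, Iometer, and the like) bypass the OS cache and use deep queue depths, so the absolute numbers here mean little, but the shape of the workloads matches the description above.

```python
# Illustrative sketch of the two I/O patterns SSD benchmarks exercise, using a
# small temporary scratch file. Not a real benchmark: the OS page cache and
# Python overhead dominate at this scale.
import os
import random
import tempfile
import time

CHUNK = 4096                      # 4KB, the "4K random" request size
FILE_SIZE = 4 * 1024 * 1024       # 4MB scratch file, kept tiny for illustration

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

def sequential_read(path):
    """Read the file front-to-back in large chunks (media-file pattern)."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):    # 1MB requests
            pass
    return time.perf_counter() - start

def random_read(path, requests=256):
    """Issue small reads at random 4KB-aligned offsets (OS/app-heavy pattern)."""
    offsets = [random.randrange(FILE_SIZE // CHUNK) * CHUNK
               for _ in range(requests)]
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(CHUNK)
    return time.perf_counter() - start

seq_t = sequential_read(path)
rand_t = random_read(path)
print(f"sequential: {seq_t:.4f}s, 4K random: {rand_t:.4f}s")
os.remove(path)
```

On a mechanical drive the random pattern is dramatically slower per byte because of seek latency; on an SSD the gap narrows, which is exactly why 4K random performance is the more interesting differentiator between drives.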
Theoretically, this would also be the test to which gamers should pay the most attention. A "pure gaming" environment (not using professional work applications) will be almost entirely exposed to small, random I/O requests generated within the host OS, games, and core applications. A particularly piratical gamer -- or just someone consuming large movie and audio files with great regularity -- would also find use in monitoring sequential I/O in benchmarks.
This article looks at a few things: What types of I/O requests do games spawn most heavily, and, with this in mind, what makes for the best gaming SSD? There are a few caveats here that we'll go through in a moment -- namely, exactly how "noticeable" various SSDs will be in games when it comes to performance. We used tracing software to analyze input / output operations while playing five recent AAA titles and ended up with surprisingly varied results.
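The analysis step boils down to classifying each traced request by operation and size. The sketch below shows that post-processing on a hypothetical trace format (one "OP SIZE_IN_BYTES" pair per line); real tracing tools export their own formats, so treat this only as an outline of the bucketing logic.

```python
# Minimal sketch of post-processing an I/O trace: bucket each request as
# small/random-like (<=64KB) or large/sequential-like. The one-line-per-request
# "OP SIZE" trace format here is hypothetical.
from collections import Counter

def bucket_requests(trace_lines):
    """Count I/O requests by operation and size class."""
    buckets = Counter()
    for line in trace_lines:
        op, size = line.split()
        kind = "small" if int(size) <= 64 * 1024 else "large"
        buckets[f"{op.lower()}_{kind}"] += 1
    return buckets

# Made-up example trace: three small requests, one large read.
trace = ["READ 4096", "READ 4096", "WRITE 16384", "READ 1048576"]
print(bucket_requests(trace))
```

A trace dominated by the "small" buckets would point toward 4K random performance mattering most for that game; a "large"-heavy trace points toward sequential throughput.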
UPDATE: Clarified several instances of "file" vs. "I/O" usage.
This test was spawned out of a general lack of equipment in the GN lab. We've got a few monitors available for testing, but of the three best units (120Hz displays), only one natively operates at 1920x1080; the others -- fabled unicorns among monitors -- run at 2048x1152 and 1920x1200.
The 1920x1080 120Hz display isn't always available for our game GPU benchmarks, making it desirable to use one of the larger displays at a lower-than-native resolution (for consistent / comparable testing). In the interest of honest benchmarking, we decided to double-check an existing suspicion that forcing lower-than-native resolutions would not negatively impact FPS or produce synthetic artifacts that do not exist at native resolutions.
The hypothesis says "nope, should be identical performance other than visual scaling." Let's see if running a monitor at a non-native resolution will negatively impact testing with artifacts or lower FPS.
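Testing the hypothesis is just a matter of running the same benchmark pass on both displays and comparing average FPS. A minimal sketch of that comparison, with made-up placeholder numbers standing in for real per-second FPS logs:

```python
# Compare average FPS between a native-resolution run and a forced
# lower-than-native run of the same benchmark pass. The sample values
# below are hypothetical placeholders, not measured data.
from statistics import mean

def relative_fps_delta(native_fps, scaled_fps):
    """Percent difference of the scaled run's average FPS vs. the native run's."""
    native_avg = mean(native_fps)
    return (mean(scaled_fps) - native_avg) / native_avg * 100

native = [118, 121, 119, 120, 122]   # hypothetical native-1920x1080 samples
scaled = [119, 120, 121, 118, 121]   # hypothetical forced-1920x1080 samples
delta = relative_fps_delta(native, scaled)
print(f"{delta:+.2f}% vs. native")
```

A delta within normal run-to-run variance (roughly one to two percent) would support the "should be identical performance" hypothesis; anything larger, repeated across passes, would suggest the scaler is not free.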
The delay of Valve's Steam Machine (or Steam Box) has forced the hand of systems manufacturers. Alienware, Gigabyte (with the BRIX), and now Zotac have all begun shipping their would-have-been Steam Machines as DIY mini-PCs. Valve has disallowed the shipment of officially branded Steam Machines until the completion of its haptic controller, leaving system manufacturers scrambling to untie the resources dedicated to machines that were originally slated for a 2014 launch.
In an official capacity, Gigabyte's BRIX Pro and Zotac's EN760 are not "Steam Machines" -- at least, not by branding -- but they might as well be. The EN760 (Newegg page) ships in two models: The EN760 and EN760 Plus. The base model ships without RAM or permanent storage for $540; the Plus edition includes a single 8GB stick of 1600MHz RAM and a 1TB 5400RPM HDD. Both units are outfitted with an 860M mobile GPU, an i5-4200U mobile CPU, and a custom board design to fit in a 7.4" x 7.4" x 2" (188 x 188 x 51mm) shell.
Cars have always been a beacon for visual FX presentations. This is evidenced by nVidia's obsession with real-time ray-tracing in every demonstration the company has ever fronted; AMD isn't much better off -- their multi-GPU showcases almost always feature a vehicle. Cars are somewhat easy to grasp as a visual marvel for just about any onlooker, especially investors and non-gamers, so it makes sense.
There's no argument that RAM has become commoditized in the marketplace. This has been reinforced by a further emphasis on appearance and the prevalence of high-capacity modules at relatively stable prices. DDR3 DRAM fabrication has also improved its yields steadily through the years, making high-frequency memory more abundant than ever.
As it turns out, RAM also feels like a relatively uninteresting component when selecting parts for a new system -- such is the nature of a stable product. It's similar to buying gas, in that regard; serious enthusiasts might deliberate over suppliers and octane specifications, but most users just fill up at the most convenient and affordable source. That's not to diminish the importance of quality RAM, though it does currently feel like a fairly stagnant market. Things will change in the face of DDR4.
The Watch Dogs launch has been a worrisome one for PC hardware enthusiasts. We've heard tales of shockingly low framerates and poor optimization since Watch Dogs was leaked ahead of shipment, but without official driver support from AMD and with only limited support from nVidia, it was too early to call performance. Until this morning.
At launch, AMD pushed out its 14.6 beta drivers alongside nVidia's 337.88 beta drivers. Each promised performance gains in excess of 25% for specific GPU / Watch Dogs configurations. As we did with Titanfall, I've run Watch Dogs through our full suite of GPUs and two CPUs (for parity) to determine which video cards are best for different use cases of the game. It's never as clear-cut as "buy this card because it performs best," so we've broken down how different cards perform at various settings.
In this Watch Dogs PC video card & CPU benchmark, we look at the FPS of the GTX 780 Ti, 770, 750 Ti, R9 290X, R9 270X, R7 250X, and HD 7850; to determine whether Watch Dogs is bottlenecked at the CPU level, I also tested an i5-3570K CPU against a more modern i7-4770K CPU. The benchmark charts below give a good idea of which video cards are required to produce playable framerates at Ultra/Max, High, and Medium settings.
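For readers curious how chart numbers like these are typically derived, here's a small sketch computing average FPS and a "1% low" figure from a raw frametime log (milliseconds per frame, as exported by tools like FRAPS). The sample data is illustrative only, not from our Watch Dogs runs.

```python
# Derive average FPS and 1% low FPS from a per-frame frametime log (ms).
# The sample frametimes below are made up for illustration.
def fps_metrics(frametimes_ms):
    """Return (average FPS, 1% low FPS) for a list of per-frame times in ms."""
    avg_fps = 1000 * len(frametimes_ms) / sum(frametimes_ms)
    worst = sorted(frametimes_ms, reverse=True)   # slowest frames first
    slice_len = max(1, len(worst) // 100)         # the worst 1% of frames
    one_pct_low = 1000 * slice_len / sum(worst[:slice_len])
    return avg_fps, one_pct_low

sample = [16.7] * 95 + [33.3] * 5   # mostly ~60 FPS with a few ~30 FPS spikes
avg, low = fps_metrics(sample)
print(f"avg: {avg:.1f} FPS, 1% low: {low:.1f} FPS")
```

The 1% low matters because a healthy average can hide stutter: in the sample above, the handful of slow frames drags the 1% low far below the average even though most frames render quickly.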
Update: Our very critical review of Watch Dogs is now online here.
Titanfall's official launch brings us back to the topic of video card performance in the Source Engine-based game. When we originally benchmarked how various video cards performed in Titanfall, we clearly noted that the pre-release state of the game and lack of official driver support likely contributed to SLI microstuttering, CrossFire catastrophic failure, and overall odd performance. We're now back with a full report using the latest beta drivers (with Titanfall profiles and support) and the full version of the game.
In this Titanfall PC video card benchmark, we look at the FPS of the GTX 760, GTX 650 Ti Boost, GTX 750, R9 270X, R7 260X, 7850, the A10-5800K 7660D APU, and Intel's HD4000. I threw a GTX 580 in there for fun. Our thanks to MSI for providing the 750, 260X, and 270X for these tests.
Memory has a tendency to get largely overlooked when building a new system. Capacity and frequency steal the show, but beyond that, it's largely treated as a check-the-reviews component. Still, a few guidelines exist, like not mixing-and-matching kits and purchasing in matched pairs where dual-channel is applicable. These rules make sense, especially to those of us who've been building systems for a decade or more: Mixing kits was a surefire way to encounter stability or compatibility issues in the past (and is still questionable -- I don't recommend it), and as for dual-channel, no one wanted to cut their speeds in half.
When we visited MSI in California during our 2013 visit (when we also showed how RAM is made), they showed us several high-end laptops that all featured a single stick of memory. I questioned this choice since, surely, it made more sense to use 2x4GB rather than 1x8GB from a performance standpoint. The MSI reps noted that "in [their] testing, there was almost no difference between dual-channel performance and normal performance." I tested this upon return home (results published in that MSI link) and found that, from a very quick test, they looked to be right. I never got to definitively prove where / if dual-channel would be sorely missed, though I did hypothesize that it'd be in video encoding and rendering.
In this benchmark, we'll look at dual-channel vs. single-channel platform performance in Adobe Premiere, gaming, video encoding, transcoding, number crunching, and daily use. The aim is to debunk or confirm a few myths about computer memory. I've seen a lot of forum posts claiming (without supporting data) that dual-channel is somehow completely useless, and, to the same tune, we've seen similar counter-arguments that buying anything less than two sticks of RAM is foolish. Both arguments have merit.
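If you want a quick, crude probe of your own system's memory throughput before and after changing channel configuration, something like the sketch below works. Be warned that this is a rough assumption-laden stand-in for a real tool like AIDA64's memory benchmark: caches, the allocator, and Python overhead all intrude, so only the relative numbers between your two configurations mean anything.

```python
# Very crude memory-throughput probe: time a large in-memory buffer copy.
# Run once on single-channel and once on dual-channel and compare only the
# relative results; absolute numbers are distorted by Python overhead.
import time

def copy_bandwidth_gbs(size_mb=256, passes=5):
    """Estimate copy throughput of a size_mb buffer in GB/s (best of passes)."""
    src = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(passes):
        start = time.perf_counter()
        dst = bytes(src)              # one full read + write of the buffer
        best = min(best, time.perf_counter() - start)
        del dst
    return (size_mb / 1024) / best

print(f"~{copy_bandwidth_gbs():.1f} GB/s effective copy throughput")
```

Because a copy both reads and writes the buffer, the true memory traffic is roughly double the reported figure; again, it's the single- vs. dual-channel ratio that's interesting, not the raw value.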
Reviewing a specific type of product with great repetition often gets boring -- especially when we've already seen the best-of-the-best for the current generation. We see a lot of the same, rehashed ideas when looking at cases and a lot of the same suppliers when it comes to CPU heatsinks. Thankfully, every now and then we see truly innovative advancements in each product line, often serving as welcome reminders of why all these tests are fun and worthwhile.
We looked at NZXT's H440 back at CES 2014, where the company showcased their new enclosure in a top secret suite at Circus-Circus; after the show concluded, we ran a "best gaming cases of CES 2014" article that proclaimed the H440 to be "an innovator" in the space. So, if it's not clear, I've been excited to finally test this enclosure and see how it feels to build with and benchmark.
In this NZXT H440 case benchmark & review, we look at what has rapidly become our favorite mid-tower ATX gaming enclosure on the PC market. First, the video review: