Ubisoft launched all of its AAA titles in one go for the holiday season, it seems. Only days after the buggy launch of Assassin's Creed Unity ($60) – a game we found to use nearly 4GB of VRAM in GPU benchmarking – the company pushed Far Cry 4 ($60) into retail channels. Ubisoft continued its nVidia partnership with Far Cry 4, which includes soft shadows, HBAO+, fine-tuned god rays and lighting FX, and other GameWorks-enabled technologies. Perhaps owing to this partnership, we found that AMD cards suffer substantially in Far Cry 4 on PC.
Our Far Cry 4 GPU FPS benchmark analyzes the best video cards for playing Far Cry 4 at max (Ultra) settings. We also tested lower settings for optimization on more modest GPU configurations. Our tests benchmarked framerates on the GTX 980 vs. GTX 780 Ti, 770, R9 290X, 270X, 7850, and more. RAM and VRAM consumption were both monitored during playtests, with CPU bottlenecking discovered on some configurations.
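An average framerate alone can hide stutter, which is why benchmarks like these also watch for dips. For readers reproducing the tests, a frametime log (as produced by FRAPS' benchmark mode, for instance) reduces to average FPS and 1% low FPS along these lines – the function and sample data below are an illustrative sketch, not our actual test tooling:

```python
# Sketch: reduce a frametime log (milliseconds per frame) to average FPS
# and 1% low FPS. The sample input is hypothetical, not real benchmark data.

def fps_stats(frametimes_ms):
    """Return (average FPS, 1% low FPS) from per-frame render times in ms."""
    avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
    # 1% low: mean FPS of the slowest 1% of frames (at least one frame).
    worst = sorted(frametimes_ms, reverse=True)
    slice_len = max(1, len(worst) // 100)
    one_pct_low = 1000.0 * slice_len / sum(worst[:slice_len])
    return avg_fps, one_pct_low

# Hypothetical capture: mostly ~16.7 ms frames with two 40 ms hitches.
avg, low = fps_stats([16.7] * 98 + [40.0, 40.0])
print(f"avg: {avg:.1f} FPS, 1% low: {low:.1f} FPS")  # avg: 58.3 FPS, 1% low: 25.0 FPS
```

The point of the 1% low figure is visible in the sample: two hitches barely move the average but halve the low, matching the stutter a player actually feels.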
Update: For those interested in playing Far Cry 4 near max settings, we just put together this PC build guide for a DIY FC4 PC.
In a somewhat promising turn for the industry, Assassin's Creed Unity ($60) uses almost all of the VRAM we were able to throw at it. We'll get into that shortly. Regardless of the game's mechanics and value (reviewed here), there's no arguing that Assassin's Creed Unity has some of the most graphically-impressive visuals ever produced for a PC game. In coordination with nVidia and its GameWorks suite (detailed), Ubisoft implemented new Percentage Closer Soft Shadows, TXAA, and ShadowWorks technology to soften and blur the edges of cast shadows. Not all of these technologies require nVidia video cards, though.
In this GPU benchmark, we look at the best video cards for Assassin's Creed Unity for max (Ultra High) settings and other settings; our test pits the GTX 980 vs. GTX 780 Ti, 770, R9 290X, 270X, and more. Low settings tests are also included. Further, we checked RAM and VRAM consumption while playing ACU, hoping to further determine the game's most demanded resource.
As we tend to do with new game releases – GRID, Titanfall, and Watch_Dogs included – we decided to put Borderlands: The Pre-Sequel through its performance paces. We originally spoke about Borderlands: The Pre-Sequel at PAX, where we got hands-on with the game and discussed gameplay mechanics. Since then, the title has shipped at the now-normal $60 price tag, complete with the usual nVidia partnership and an Unreal Engine foundation.
FRAPS has been in the gameplay capture business for over a decade now, and was inarguably the best solution for early gameplay video recordings. The advent of casual streaming and competitive eSports has finally pushed recorded game content to widespread consumption. ISP-provided data rates have mostly stabilized to usable levels, which helps in the production and consumption of high bit-rate content.
nVidia announced ShadowPlay as a FRAPS alternative last year; it is only compatible with nVidia devices. The tool uses an integrated H.264 video encoder on Kepler and Maxwell hardware, ensuring most of the performance drag is loaded on the GPU rather than the CPU; moreover, it's loaded onto specific components of the GPU that are built for video encoding and largely unused while gaming.
FRAPS performs no live encoding and records raw frame output, which theoretically means it will have the best quality (lossless), but it also demands the most storage and CPU cycles.
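The storage gap between raw capture and live hardware encoding is easy to estimate from first principles. As a rough sketch – the RGB24 pixel format and the 50 Mbps encoded bitrate below are illustrative assumptions, not measured FRAPS or ShadowPlay figures:

```python
# Back-of-envelope comparison: uncompressed video capture vs. an encoded stream.
# All inputs are illustrative assumptions, not measured FRAPS/ShadowPlay numbers.

def raw_capture_mbps(width, height, fps, bytes_per_pixel=3):
    """Uncompressed video data rate in megabits per second (RGB24 assumed)."""
    return width * height * bytes_per_pixel * 8 * fps / 1e6

raw = raw_capture_mbps(1920, 1080, 60)   # raw 1080p60 capture
encoded = 50.0                           # assumed H.264 bitrate, in Mbps
print(f"raw 1080p60: {raw:.0f} Mbps (~{raw / 8 / 1000:.2f} GB/s to disk)")
print(f"an assumed 50 Mbps H.264 stream is ~{raw / encoded:.0f}x smaller")
```

Even at a generous assumed bitrate, the encoded stream comes out dozens of times smaller than raw frames, which is why FRAPS recordings chew through disks while ShadowPlay captures stay manageable.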
Cars have always been a beacon for visual FX presentations. This is evidenced by nVidia's obsession with real-time ray-tracing in every demonstration the company has ever fronted; and AMD is no different, as its multi-GPU demonstrations almost always include some vehicle showcase. Cars are somewhat easy to grasp as a visual marvel for just about any onlooker, especially investors and non-gamers, so it makes sense.
The Watch Dogs launch has been a worrisome one for PC hardware enthusiasts. We've heard tell of shockingly low framerates and poor optimization since Watch Dogs was leaked ahead of shipment, but without official driver support from AMD and limited support from nVidia, it was too early to call performance. Until this morning.
At launch, AMD pushed out its 14.6 beta drivers alongside nVidia's 337.88 beta drivers. Each promised performance gains in excess of 25% for specific GPU / Watch Dogs configurations. As we did with Titanfall, I've run Watch Dogs through our full suite of GPUs and two CPUs (for parity) to determine which video cards are best for different use cases of the game. It's never as clear-cut as "buy this card because it performs best," so we've broken down how different cards perform at various settings.
In this Watch Dogs PC video card & CPU benchmark, we look at the FPS of the GTX 780 Ti, 770, 750 Ti, R9 290X, R9 270X, R7 250X, and HD 7850; to determine whether Watch Dogs is bottlenecked on a CPU level, I also tested an i5-3570K CPU vs. a more modern i7-4770K CPU. The benchmark charts below give a good idea of what video cards are required to produce playable framerates at Ultra/Max, High, and Medium settings.
Update: Our very critical review of Watch Dogs is now online here.
Titanfall's official launch brings us back to the topic of video card performance in the Source Engine-based game. When we originally benchmarked how various video cards performed in Titanfall, we clearly noted that the pre-release state of the game and lack of official driver support likely contributed to SLI microstuttering, CrossFire catastrophic failure, and overall odd performance. We're now back with a full report using the latest beta drivers (with Titanfall profiles and support) and the full version of the game.
In this Titanfall PC video card benchmark, we look at the FPS of the GTX 760, GTX 650 Ti Boost, GTX 750, R9 270X, R7 260X, 7850, the A10-5800K 7660D APU, and Intel's HD4000. I threw a GTX 580 in there for fun. Our thanks to MSI for providing the 750, 260X, and 270X for these tests.