One of our most commonly received Ask GN questions is “which video card manufacturer is 'the best?'” (scare quotes added). The truth is, as we've often said, they're all similar in the most critical regard – the GPU itself is the same. If MSI sells an R9 380X and PowerColor sells an R9 380X, both use the same GPU (Tonga) and silicon; core performance will be nearly identical. The same is true for GTX cards – EVGA and PNY both sell GTX 960 video cards, and all of their models implement the same GM206 GPU. The differences are generally rooted in factory pre-overclocks, cooling solutions, support and warranties, and aesthetics.

Across all of our content, we've spent hours and tens of thousands of words discussing which video cards perform best in various categories. That's great – but sometimes it's fun to do something different. This video allows each video card manufacturer one minute to explain who makes the best graphics cards for gaming. It's a speed-round, to be sure.

We spoke exclusively with the Creative Assembly team about its game engine optimization for the upcoming Total War: Warhammer. Major moves to optimize and refactor the game engine include DirectX 12 integration, better CPU thread management (decoupling the logic and render threads), and GPU-assigned processing to lighten the CPU load.
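To make the thread decoupling concrete – and this is purely our own illustrative sketch, not Creative Assembly's engine code – the idea is that the simulation (logic) thread publishes each completed game state, while the render thread grabs the newest one at its own pace:

```cpp
// Minimal sketch of decoupled logic and render threads. The simulation
// publishes its latest completed state into a mutex-guarded slot; the
// renderer copies out whatever is newest. All names are hypothetical.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>

struct GameState { long tick = 0; };   // stand-in for real simulation data

std::mutex stateMutex;
GameState sharedState;                 // last fully simulated state
std::atomic<bool> running{true};

void logicThread() {                   // fixed-step simulation loop
    GameState local;
    while (running) {
        local.tick++;                  // advance the simulation
        {
            std::lock_guard<std::mutex> lock(stateMutex);
            sharedState = local;       // publish the completed state
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}

void renderThread() {                  // draws whatever state is newest
    while (running) {
        GameState snapshot;
        {
            std::lock_guard<std::mutex> lock(stateMutex);
            snapshot = sharedState;    // brief copy; simulation barely blocks
        }
        std::printf("rendering tick %ld\n", snapshot.tick);
        std::this_thread::sleep_for(std::chrono::milliseconds(8));
    }
}

int main() {
    std::thread logic(logicThread), render(renderThread);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    running = false;
    logic.join();
    render.join();
}
```

The payoff of this structure is that a slow frame on either side no longer stalls the other: the renderer simply redraws the last published state until the simulation catches up.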

The interview with Al Bickham, Studio Communications Manager at Creative Assembly, can be found in its entirety below. We hope to soon revisit the topic of DirectX 12 support within the Total War: Warhammer engine.

5MB of storage once required 50 spinning platters and a dedicated computer, demanding a 16 square-foot area for its residence. The first hard drive wasn't particularly fast at 1200RPM, with seek latencies through the roof (imagine a head assembly seeking across 50 platters) – but it was the most advanced storage of its time.

That device was the IBM 305 RAMAC; it leased for the inflation-adjusted equivalent of $30,000 per month, and single instruction execution required between 30ms and 50ms (IRW phases). The IBM 305 RAMAC did roughly 100,000 bits per second, or 0.0125MB/s. Today, the average 128GB microSD card costs ~$50 – one time – and executes read/write instructions at 671,000,000 bits per second, or 80MB/s. And this is one of our slowest forms of flash storage. The microSD card is roughly the size of a fingernail (15x11x1.0mm), and filling a 16 square-foot area with them would yield terabytes upon terabytes of storage.
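For those checking the math, note the unit wrinkle: the RAMAC's 0.0125MB/s figure uses decimal megabytes, while the microSD's 80MB/s uses binary megabytes. Computing raw bytes per second sidesteps the convention clash:

```cpp
// Sanity check on the throughput figures above, in raw bytes per second.
#include <cstdio>

int main() {
    const double ramacBits   = 100000.0;    // IBM 305 RAMAC, bits per second
    const double microsdBits = 671000000.0; // 128GB microSD, bits per second

    std::printf("RAMAC:   %.0f bytes/s\n", ramacBits / 8.0);    // 12,500
    std::printf("microSD: %.0f bytes/s\n", microsdBits / 8.0);  // 83,875,000
    std::printf("Ratio:   ~%.0fx\n", microsdBits / ramacBits);  // ~6,710x
}
```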


The 305 RAMAC was a creation of 1956. Following last week's GTC, we had the opportunity to see the RAMAC and other early computing creations at the Computer History Museum in Mountain View, California. The museum encompasses most of computing history, from the abacus to early Texas Instruments machines (like the TI-99/4A), and previously housed a working Babbage Difference Engine – a mechanical computer designed in the 1800s. In our recent tour of the Computer History Museum, we focused on the predecessors to modern computing – the first hard drive, first supercomputers, first transistorized computers, mercury and core memory, and vacuum tube computing.

Pascal is the imminent GPU architecture from nVidia, poised to compete (briefly) with AMD's Polaris, which will later give way to AMD's Vega and Navi architectures. Pascal will shift nVidia onto the new memory technology introduced on AMD's Fury X, but in its updated HBM2 form (High Bandwidth Memory, version 2); Intel is expected to debut HBM2 on its Xeon Phi HPC CPUs later this year. View previous GTC coverage of Mars 2030 here.

HBM2 operates on a 4096-bit memory bus with a maximum theoretical throughput of 1TB/s. HBM version 1, for reference, operated at 128GB/s per stack on a 1024-bit wide memory bus. On the Fury X – again, for reference – four stacks calculated out to approximately 512GB/s. HBM2 will double the theoretical memory bandwidth of HBM1.
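These figures fall straight out of bus width and per-pin data rate – peak GB/s = bus width in bits × Gbps per pin ÷ 8. A quick sketch, assuming the commonly cited per-pin rates of ~1Gbps for HBM1 and ~2Gbps for HBM2:

```cpp
// Peak theoretical memory bandwidth from bus width and per-pin data rate.
#include <cstdio>

// busBits: memory bus width in bits; gbpsPerPin: per-pin data rate in Gbps
double peakGBps(int busBits, double gbpsPerPin) {
    return busBits * gbpsPerPin / 8.0;
}

int main() {
    std::printf("HBM1, one stack (1024-bit @ 1Gbps): %4.0f GB/s\n", peakGBps(1024, 1.0)); // 128
    std::printf("HBM1, Fury X    (4096-bit @ 1Gbps): %4.0f GB/s\n", peakGBps(4096, 1.0)); // 512
    std::printf("HBM2, 4 stacks  (4096-bit @ 2Gbps): %4.0f GB/s\n", peakGBps(4096, 2.0)); // 1024, ~1TB/s
}
```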

NVidia's GPU Technology Conference (GTC 2016) kicked off with a keynote from CEO Jen-Hsun Huang, who frontloaded the event with topics on AI, software development kits, self-driving cars, machine learning, and VR. Of what we've seen so far, the most interesting has been the new “Mars 2030” VR demo, which used photogrammetry to rebuild Mars from satellite flybys. The Mars 2030 VR demo was helmed by computer industry icon Steve Wozniak, whom Huang selected for his recently vocalized wish to fly one-way to Mars. Wozniak, providing the most candid form of stage presence, declared “wow! I'm getting dizzy! I'm gonna fall out of this chair.”

Huang: “... Well, Woz, that was not a helpful comment.”

But the exchange sums up the presentation well – somewhat playful, experimental with technology, and entertaining.

Our initial GTC keynote coverage consists primarily of VR and SDK talking points, with a focus on the Mars 2030 demo.

We’re covering the GPU Technology Conference in San Jose this week – a show overflowing with low-level information on graphics silicon and VR – and so have themed our Ask GN episode 14 around silicon products.

This week’s episode talks CPU thread assignment & simultaneous multi-threading, VR-ready IGPs, the future of the IGP & CPU, and DX12 topics. We also briefly talk Linux gaming, but that requires a lengthier future video for proper depth.

If you’ve got questions for next week’s episode, as always, leave them below or in the video comments section (which is where we check first).

Video card drivers are almost as important as the hardware with which they interface; without stable and ongoing driver support, a GPU can't be fully utilized or exercised to its strengths in the field. AMD has long battled to improve the perception of its drivers – a fight we endorsed upon the release of Catalyst successor Radeon Settings – and has continued that battle at GDC 2016.

“For a long time, people keep saying, 'well, AMD has great hardware – what about our drivers?'” AMD Corporate VP Roy Taylor told us in an interview, “I don't want to hear that anymore, all right?” The response was given in our interview following AMD's Capsaicin event, which featured industry luminaries in game development and VR.

This isn't news to anyone who's followed the site through our Pascal and GDDR5X posts, but new leaks by “benchlife.info” indicate nVidia's intent to use both HBM and GDDR5X. The Chinese-language site has previously proven reliable in its leaks.

GPU architecture has come to a head with memory. Pascal will host HBM2 on its high-end devices, but HBM2's cost would make low-end and mid-range cards (the equivalent of a current GTX 960) impossibly expensive. NVidia instead plans to deploy Micron's new, higher-bit-rate GDDR5X memory as a cheaper alternative to HBM2; GDDR5X is more expensive than GDDR5, landing it between the outgoing and incoming memory technologies in product cost.
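The bandwidth argument for GDDR5X is simple: per-pin data rates roughly double versus GDDR5 (Micron has cited 10-14Gbps against GDDR5's typical 7-8Gbps), so even a narrow mid-range bus gains meaningfully. A sketch with illustrative numbers – the bus width and rates below are assumptions, not confirmed Pascal specs:

```cpp
// Mid-range bandwidth comparison: GDDR5 vs. GDDR5X on a GTX 960-class
// 128-bit bus. Data rates are commonly cited figures, not product specs.
#include <cstdio>

int main() {
    const int busBits = 128;          // hypothetical mid-range bus width
    const double gddr5Rate  = 7.0;    // Gbps per pin, typical GDDR5
    const double gddr5xRate = 10.0;   // Gbps per pin, entry-level GDDR5X

    std::printf("GDDR5:  %.0f GB/s\n", busBits * gddr5Rate  / 8.0);  // 112
    std::printf("GDDR5X: %.0f GB/s\n", busBits * gddr5xRate / 8.0);  // 160
}
```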

It's been a few months since our “Ask GN” series had its last installment. We got eleven episodes deep, then proceeded to plunge into the non-stop game testing and benchmarking of the fourth quarter. At last, following fan requests and interest, we've proudly resurrected the series – not the only thing resurrected this week, either.

So, amid the resurrections of Games for Windows Live and RollerCoaster Tycoon's re-re-announced mod support, we figured we'd brighten the week with something more promising: cherry-picked DirectX & Vulkan topics, classic GPU battles, and power supply testing questions. There's a bonus question at the end, too.

Ashes of the Singularity has become the poster child for early DirectX 12 benchmarking, if only because it was first to market with ground-up DirectX 12 and DirectX 11 support. Just minutes ago, the game officially updated its early build to include its DirectX 12 Benchmark Version 2, making critical changes that include cross-brand multi-GPU support. The benchmark also made updates to improve the reliability and reproducibility of results, primarily by giving all units 'god mode,' so inconsistent deaths don't impact the workload.

For this benchmark, we tested explicit multi-GPU functionality by using AMD and nVidia cards at the same time – something we're calling “SLIFire” for ease. The test specifically pairs MSI R9 390X Gaming 8G and MSI GTX 970 Gaming 4G cards against 2x GTX 970, 1x GTX 970, and 1x R9 390X configurations for baseline comparisons.
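What makes the mixed-vendor pairing possible is that DirectX 12's explicit multi-adapter model hands GPU enumeration and workload splitting to the application rather than the driver. Below is a minimal, illustrative sketch of the first step – vendor-agnostic adapter enumeration through DXGI (Windows-only; link against dxgi.lib) – not Oxide's actual code:

```cpp
// Enumerate every graphics adapter in the system, regardless of vendor.
// Under explicit multi-adapter, each of these could back its own D3D12
// device, with the engine deciding how work is split between them.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        wprintf(L"Adapter %u: %s (%zu MB dedicated VRAM)\n",
                i, desc.Description, desc.DedicatedVideoMemory >> 20);
    }
}
```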
