Radeon Software Crimson ReLive Edition 17.4.1 is now live. Along with some bug fixes, the bulk of this release is additional VR support.

AMD is making good on their promise to support asynchronous reprojection for both Oculus Rift and SteamVR. Oculus’ “Asynchronous Spacewarp” is now usable on R9 Fury, 290 and 390 series cards, while SteamVR’s “Asynchronous Reprojection” is usable on RX 480 and 470s with Windows 10.

The GPU market has been shaken up recently by the release of the nVidia GTX 1080 Ti and the approach of AMD's inevitable Vega launch. Discounts on GTX 1080s and RX 400-series cards are now widespread, so we've highlighted some deals for those looking to upgrade or build a new PC in 2017.

Benchmarking Mass Effect: Andromeda immediately revealed a few considerations for our finalized testing. Frametimes, for instance, differed markedly on the first test pass. The game also prides itself on casting players into a variety of environs, including ship interiors, planet surfaces of varying geometric complexity (generally simpler), and space stations with high poly density. Given this variety, we prefaced our final benchmarking with an extensive study period to research the game's performance in various areas, then determine which area best represented the whole experience.

Our Mass Effect: Andromeda benchmark starts with definitions of settings (like framebuffer format), then goes through research, then the final benchmarks at 4K, 1440p, and 1080p.

Buildzoid's latest contribution to our site is his analysis of the GTX 1080 Ti Founders Edition PCB and VRM, including some additional thoughts on shunt modding the card for additional OC headroom. We already reviewed the GTX 1080 Ti here, modded it for increased performance with liquid cooling, and we're now back to see if nVidia's reference board is any good.

This time, it turns out, the board is seriously overbuilt and a good option for waterblock users (or users who'd like to do a Hybrid mod like we did, considering the thermal limitations of the FE cooler). nVidia's main shortcoming with the 1080 Ti FE is its FE cooler, which limits clock boosting headroom even when operating stock. Here's Buildzoid's analysis:

We’ve fixed the GTX 1080 Ti Founders Edition ($700) card. As stated in the initial review, the card performed reasonably close to nVidia’s “35% > 1080” metric at 4K resolutions, but generally fell closer to 25-30% faster. That’s really not bad – but it could be better, even with the reference PCB. It’s the cooler that’s holding nVidia’s card back, as seems to be the trend given GPU Boost 3.0 + FE cooler designs. A reference card is more versatile for deployment to the SIs and wider channel, but for our audience, we can rebuild it. We have the technology.

“Technology,” here, mostly meaning “propylene glycol.”

AMD is set to roll out 17.3.2 Radeon drivers bound for the highly anticipated Mass Effect: Andromeda, for which we recently discussed graphics settings and recommended specs.

The new drivers mostly prime the RX 400-series cards for the upcoming Mass Effect launch, most notably the RX 480 8GB, for which AMD notes a 12% performance increase over driver version 17.3.1. Additionally, the drivers add an “AMD optimized” tessellation profile.

Our review of the nVidia GTX 1080 Ti Founders Edition card went live earlier this morning, largely receiving praise for gains in performance while remaining the subject of criticism from a thermal standpoint. As we've often done, we decided to fix it. Modding the GTX 1080 Ti will bring our card up to higher native clock-rates by eliminating the thermal limitation, and can be done with the help of an EVGA Hybrid kit and a reference design. We've got both, and started the project prior to departing for PAX East this weekend.

This is part 1, the tear-down. As the content is being published, we are already on-site in Boston for the event, so part 2 will not see light until early next week. We hope to finalize our data on VRM/FET and GPU temperatures (related to clock speed) immediately following PAX East. These projects are always exciting, as they help us learn more about how a GPU behaves. We did similar projects for the RX 480 and GTX 1080 at launch last year.

Here's part 1:

The GTX 1080 Ti posed a fun opportunity to roll out our new GPU test bench, something we’ve been working on since the end of last year. The updated bench puts a new emphasis on thermal testing, borrowing methodology from our EVGA ICX review, and now analyzes cooler efficacy as it pertains to non-GPU components (read: MOSFETs, backplate, VRAM).

In addition to this, of course, we’ll be conducting a new suite of game FPS benchmarks, running synthetics, and preparing for overclocking and noise testing. The last two items won’t make it into today’s content, given that PAX is hours away, but they’re coming. We will also be starting our Hybrid series today; check back shortly for that.

If it’s not obvious, we’re reviewing nVidia’s GTX 1080 Ti Founders Edition card today, a follow-up to the GTX 1080 and generation-old 980 Ti. Included on the benches are the 1080, 1080 Ti, 1070, 980 Ti, and in some tests, an RX 480 to represent the $250 market. We’re still adding cards to this brand-new bench, but that’s where we’re starting. Please exercise patience as we continue to iterate on this platform and build a new dataset. Last year’s was built up over an entire launch cycle.

With nVidia’s recent GTX 1080 Ti announcement and GTX 1080 price cut, graphics cards have seen reductions in cost this week. As stated in our last sales post, hardware sales are hard to come by right now, but we have still found some deals worth noting: an RX 480 8GB for $200, and a GTX 1080 for $500. DDR4 prices are still high, but some savings can be had on a couple of DDR4 kits by G.SKILL.

The finer distinctions between DDR and GDDR can easily be masked by the impressive on-paper specs of the newer GDDR5 standards, often inviting an obvious question with a not-so-obvious answer: Why can’t GDDR5 serve as system memory?

In a simple response, it’s analogous to why a GPU cannot suffice as a CPU. More precisely, CPUs are composed of a few complex cores using complex instruction sets, alongside on-die cache and often integrated graphics. This makes the CPU suitable for the multitude of latency-sensitive tasks often beset upon it; however, that aptness comes at a cost: a cost paid in silicon. Conversely, GPUs can apportion more of their die space to cores by using simpler, reduced-instruction-set cores. As such, GPUs can feature hundreds, if not thousands, of cores designed to process huge amounts of data in parallel. Whereas CPUs are optimized to process tasks in a serial/sequential manner with as little latency as possible, GPUs have a parallel architecture and are optimized for raw throughput.

While the above doesn’t exactly explicate any differences between DDR and GDDR, the analogy is fitting. CPUs and GPUs both have access to temporary pools of memory, and just like both processors are highly specialized in how they handle data and workloads, so too is their associated memory.
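The serial-versus-parallel distinction above can be sketched in plain Python. This is purely an illustrative sketch of the two programming models (not real GPU code): the "CPU-style" function walks elements one at a time, while the "GPU-style" function splits the data into lanes that could each be handled by an independent core and applies the same kernel to every lane. The function names, the toy kernel, and the `n_lanes` parameter are all hypothetical.

```python
# Illustrative sketch: the same element-wise operation expressed two ways.
# Neither actually runs on a GPU; the point is the difference in model.

def cpu_style(data):
    # Serial: process one element at a time, as a CPU core would.
    out = []
    for x in data:
        out.append(x * 2 + 1)
    return out

def gpu_style(data, n_lanes=4):
    # Data-parallel: split the input into "lanes" (one per notional core),
    # apply the same kernel to each lane, then re-interleave the results.
    kernel = lambda x: x * 2 + 1
    lanes = [data[i::n_lanes] for i in range(n_lanes)]
    results = [list(map(kernel, lane)) for lane in lanes]  # conceptually simultaneous
    out = [0] * len(data)
    for i, lane_out in enumerate(results):
        out[i::n_lanes] = lane_out
    return out

data = list(range(8))
assert cpu_style(data) == gpu_style(data)  # same answer, different model
```

On real hardware, each lane would run on its own core at the same wall-clock time, which is why throughput scales with core count even though any single element is processed no faster than on a CPU.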

We moderate comments on a ~24-48 hour cycle. There will be some delay after submitting a comment.
