AMD RX 480 Specs
| |AMD RX 480|AMD RX 470|AMD RX 460|
|---|---|---|---|
|Architecture|Polaris 10|Polaris 10|Polaris 11|
|Compute Units (CUs)|36|32|14|
|Base / Boost Clock|1120MHz / 1266MHz|? / ?|? / ?|
|COMPUTE Performance|>5 TFLOPS|>4 TFLOPS|>2 TFLOPS|
|Graphics Command Processor (GCP)|1|1|1|
|Pixels Output / Clock|32|?|16|
|VRAM Capacity|4GB GDDR5 / 8GB GDDR5|4GB GDDR5|2GB GDDR5|
|Memory Speed|7Gbps (4GB model), 8Gbps (8GB model)|?|?|
|Memory Bandwidth|224GB/s (4GB model), 256GB/s (8GB model)|?|?|
|DisplayPort|1.3 HBR / 1.4 HDR|1.3 / 1.4 HDR|1.3 / 1.4 HDR|
|Release Date|June 29|Mid-July|End of July|
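The memory bandwidth figures in the spec table follow directly from the effective data rate and the bus width. A minimal sketch, assuming the RX 480's published 256-bit memory bus (the helper function is ours, purely for illustration):

```python
# Memory bandwidth = data rate per pin (Gbps) * bus width (bits) / 8 bits-per-byte.
# The 256-bit bus width is the RX 480's published figure; the function
# name is an illustrative assumption, not from the article.

def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gbs(7, 256))  # 224.0 GB/s -- 4GB model at 7Gbps
print(bandwidth_gbs(8, 256))  # 256.0 GB/s -- 8GB model at 8Gbps
```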
Polaris 10 vs. Polaris 11 Specs & Architecture
| |Polaris 10|Polaris 11|
|---|---|---|
|Compute Units (CUs)|36|16|
|COMPUTE Performance|“>5 TFLOPS”|“>2 TFLOPS”|
|Architecture|Gen 4 GCN|Gen 4 GCN|
|Playback Support|4K encode/decode|4K encode/decode|
|Output Standard|DP1.3/1.4 HDR|DP1.3/1.4 HDR|
Note: We're Flashing VBIOS
We've got two RX 480s, but they're both 8GB models: a retail card and a press sample. The press sample has a flashable VBIOS that can “convert” the card into a 4GB model, dropping the memory clock from 2000MHz to 1750MHz and reducing the addressable VRAM capacity. In effect, we're testing the same graphics card twice – once before the VBIOS flash and once after. That deserves to be pointed out because, although the card should be indicative of “real” 4GB RX 480 performance, there is always a chance something is different, even if only slightly.
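The clock-to-speed mapping in the VBIOS flash is simple arithmetic: GDDR5 is quad-pumped, so its effective data rate is four times the memory clock. A quick illustration (the helper function is hypothetical, not AMD tooling):

```python
# GDDR5 transfers four bits per pin per clock cycle (quad-pumped),
# so effective data rate (Gbps per pin) = memory clock (MHz) * 4 / 1000.
# This shows how the 2000MHz -> 1750MHz VBIOS change maps onto the
# 8Gbps and 7Gbps figures quoted in the spec table.

def effective_gbps(mem_clock_mhz: int) -> float:
    """Effective GDDR5 data rate in Gbps for a given memory clock."""
    return mem_clock_mhz * 4 / 1000

print(effective_gbps(2000))  # 8.0 Gbps -- stock 8GB VBIOS
print(effective_gbps(1750))  # 7.0 Gbps -- flashed "4GB" VBIOS
```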
But we're confident in the procedure.
Note also that CrossFire RX 480 4GB cards vs. CrossFire RX 480 8GB cards may exaggerate some of our results by way of increasing computational ability but retaining a lower VRAM capacity. This is something we will test as more cards become available to us. For today, it's a single RX 480 GPU we're benchmarking, looking only into 4GB vs. 8GB.
Should you like to see results from other neighboring video cards, like the GTX 1070, 970, or R9 390X, please bounce over to our RX 480 review. That content contains the complete charts, whereas this article presents only two cards.
Examples of VRAM Consumption in Games
We've run several game graphics optimization guides in the past, including the below:
In these guides, and elsewhere on the site, one of our examined metrics is measured VRAM request size. The VRAM metric reported by GPU-Z and other tools is almost always a representation of VRAM requested, not actual VRAM utilization by the game. This means that the application has asked for the VRAM allocation – some games take an “as much as I can get” approach – but it doesn't mean that the allocated resources are actually consumed. To see what's consumed, we have to measure real-world performance with frametime analysis. Our data presents this as 1% and 0.1% low metrics, which reveal stuttering in gameplay that is perceptible to the user, but which doesn't always show up well on AVG FPS charts.
VRAM is most commonly known to impact texture quality. Load up Skyrim with 4K texture mods, and video memory will suddenly make a big impact on how quickly those textures are loaded into play. This is particularly true in the Skyrim mods example because little optimization occurs for high resolution texture mods, so resources are demanded more aggressively. Increasing texture quality in games will increase VRAM consumption. The loaded texture size increases to represent its higher resolution, and therefore consumes more memory; it just becomes a question of how many of these get loaded, and at what size.
Texture filtration processes also consume VRAM. Anti-aliasing is the next most common example, where multi-sample AA multiplies the number of samples taken per pixel. 2xMSAA will sample each pixel on screen twice, for instance, to determine the blended output of a pixel. By extension, 4xMSAA and 8xMSAA will sample four or eight times per pixel. At 1920x1080, that's 1920*1080 = 2,073,600 pixels, multiplied by the sample count of 4xMSAA for 2,073,600 * 4 = 8,294,400 samples per frame. Pushing 60FPS would mean roughly 497.7 million samples per second. That's a lot of data, and is precisely why reducing anti-aliasing is one of the first recommendations for recovering framerate.
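The sample-count arithmetic above can be verified in a few lines (the function is purely illustrative):

```python
# MSAA sample throughput: width * height gives pixels per frame,
# multiplied by the MSAA sample count and target framerate.

def msaa_samples_per_sec(width: int, height: int, samples: int, fps: int) -> int:
    """Total MSAA samples processed per second at a given resolution/framerate."""
    return width * height * samples * fps

per_frame = 1920 * 1080 * 4  # 4xMSAA at 1080p
print(per_frame)                                   # 8294400 samples per frame
print(msaa_samples_per_sec(1920, 1080, 4, 60))     # 497664000, ~497.7M samples/s
```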
Many forms of AA are also VRAM-intensive. Temporal anti-aliasing does a bit more work on the computational side, since it operates frame-to-frame rather than within a single frame, but we don't test TXAA or TSSAA in this analysis.
We tested using our GPU test bench, detailed in the table below. Our thanks to supporting hardware vendors for supplying some of the test components.
The latest AMD drivers (16.6.2 RX 480 press) were used for testing. The 16.7.1 drivers were used for GTA V, which resolve some stuttering issues seen in the review. Game settings were manually controlled for the DUT. All games were run at presets defined in their respective charts. We disable brand-supported technologies in games, like The Witcher 3's HairWorks and HBAO. All other game settings are defined in respective game benchmarks, which we publish separately from GPU reviews. Our test courses, in the event manual testing is executed, are also uploaded within that content. This allows others to replicate our results by studying our bench courses. In AMD Radeon Settings, we disable all AMD "optimization" of graphics settings, e.g. filtration, tessellation, and AA techniques. This is to ensure that games are compared as "apples to apples" graphics output. We leave the application in control of its graphics, rather than the IHV.
Windows 10-64 build 10586 was used for testing.
Each game was tested for 30 seconds in an identical scenario, then repeated three times for parity. Some games have multiple settings or APIs under test.
Average FPS, 1% low, and 0.1% low times are measured. We do not measure maximum or minimum FPS results as we consider these numbers to be pure outliers. Instead, we take an average of the lowest 1% of results (1% low) to show real-world, noticeable dips; we then take an average of the lowest 0.1% of results for severe spikes.
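As a sketch of how such percentile-low metrics can be derived from frametime logs – this is an illustrative approximation under our own assumptions, not necessarily the exact tooling used for these benchmarks:

```python
# Compute "1% low" / "0.1% low" style metrics from a frametime log:
# convert each frametime to an instantaneous framerate, then average
# the slowest fraction of frames. Function and variable names are
# hypothetical, chosen for this sketch.

def low_metrics(frametimes_ms, fraction):
    """Average FPS of the slowest `fraction` of frames (0.01 -> 1% low)."""
    fps = sorted(1000.0 / ft for ft in frametimes_ms)  # ascending: slowest first
    n = max(1, int(len(fps) * fraction))               # at least one frame
    return sum(fps[:n]) / n

# ~30 seconds at ~60FPS with a handful of synthetic stutter spikes.
frames = [16.7] * 1780 + [50.0] * 15 + [90.0] * 5
print(round(low_metrics(frames, 0.01), 1))   # 1% low
print(round(low_metrics(frames, 0.001), 1))  # 0.1% low
```

Note how a few long frames drag the 0.1% low far below the average framerate, which is exactly why these metrics expose stutter that AVG FPS hides.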
|GN Test Bench 2015|Name|Courtesy Of|Cost|
|---|---|---|---|
|Video Card|This is what we're testing!|-|-|
|CPU|Intel i7-5930K CPU|iBUYPOWER| |
|Memory|Corsair Dominator 32GB 3200MHz|Corsair|$210|
|Motherboard|EVGA X99 Classified|GamersNexus|$365|
|Power Supply|NZXT 1200W HALE90 V2|NZXT|$300|
|SSD|HyperX Savage SSD|Kingston Tech.|$130|
|Case|Top Deck Tech Station|GamersNexus|$250|
|CPU Cooler|NZXT Kraken X41 CLC|NZXT|$110|
For Dx12 and Vulkan API testing, we use built-in benchmark tools and rely upon log generation for our metrics. That data is reported at the engine level.
Video Cards Tested
Assassin's Creed Syndicate Benchmark – AMD RX 480 4GB vs. 8GB
Assassin's Creed Syndicate has been revived on our bench for the RX 480 comparison. Settings were configured to “Very High” (with GameWorks technologies disabled) and 4xMSAA. Note that differences still appear even with lower or no anti-aliasing, as has been a trend with the Assassin's Creed series.
At 1080p, we're seeing 53FPS AVG on the 8GB RX 480, which the 4GB card's 45.3FPS trails by 14.5%. The difference is noticeable for more reasons than the strict average, too. The 8GB card runs a 32.3FPS 0.1% low against its 53FPS average, while the 4GB card falls to just 25.7FPS 0.1% low – a 25.68% advantage for the 8GB model. This is noticeable in play, where we see significantly more frame tearing on the 4GB card, resulting in choppier perceived movement and necessitating a settings reduction to achieve better frametimes. 1% lows are also separated by a 25% difference.
There is a clear advantage for the 8GB cards in Assassin's Creed.
Shadow of Mordor Benchmark – AMD RX 480 4GB vs. 8GB
Shadow of Mordor will request all of the VRAM available, for the most part, but that doesn't mean it's actively engaging that requested VRAM.
At 1080p, we're seeing results of 78.3FPS average, 41FPS 1% low, and 36FPS 0.1% lows on the RX 480 8GB card. Dropping to 4GB, those numbers barely change – there's a 0.3FPS difference. The benchmark creates reproducible results and is accurate in the depiction of a 0.3FPS measurable difference, but it is not a perceptible difference.
Basically the same, so there's no real advantage to an extra 4GB in this game.
Metro: Last Light Benchmark – AMD RX 480 4GB vs. 8GB
Metro: Last Light is one of the most reproducible tests on our bench, but it's also older, and VRAM consumption is low across the board. The results aren't too surprising.
We're seeing a difference of 3.3% between AVG FPS at 1080p, and a difference of ~2.47% at 1440p. That'd place these results under the “measurable, but imperceptible” category.
Mirror's Edge Catalyst Benchmark – AMD RX 480 4GB vs. 8GB
Mirror's Edge Catalyst is processing and post-FX intensive, which means it'll be an interesting test case for our upcoming CrossFire benchmark – but for now, we're still looking at VRAM.
At 1080p/Ultra, we see the AVG FPS between 8GB and 4GB is about the same: 74.3FPS vs. 72.3FPS, or 2.73% difference. Even the minimums are good, with mostly identical results between resolutions. At 1440p Ultra, we're still seeing gaps of ~1FPS maximally, or a 0.64% difference between averages.
But then we look at 1080p/Hyper, which increases texture quality, mesh quality, and other graphics settings. 1080/Hyper produces dismal stuttering after a few minutes of play. This isn't something you'll see reflected in a short benchmark period, but if playing the game properly for a few moments, the VRAM begins saturating and framerates can drop hard. Just from this chart, we see our average looks like 40FPS vs. 53FPS, or a 27.96% difference, but the 0.1% lows are even worse – down to 18FPS from 31.3FPS, or a 53.96% difference. These lows are noticeable as severe stutters in framerate output.
But there's more to it than that. We ran multiple extra passes on Mirror's Edge specifically because of this issue, and found that the first few sets of data were poor in performance, but not completely unplayable. After playing for some time, though, every single time, we'd see drops to 12FPS 0.1% lows and in the 20s for average framerate:
| |AVG FPS|1% LOW|0.1% LOW|
|---|---|---|---|
By dropping to 4GB, we've gone from a relatively playable 53FPS AVG to an unplayable range of 26-47FPS. That makes the 8GB card a better solution for this particular game.
GTA V Benchmark – AMD RX 480 4GB vs. 8GB
GTA V was completely retested on the RX 480 cards for this article. After speaking with AMD's Scott Wasson, we received an unreleased, in-development driver update that should resolve the previously mentioned GTA V stuttering issues from launch day. The driver update is 16.7.1, and this is the only game in the test that runs this driver set. The rest are still on 16.6.2.
We see an average of 85.3FPS at 1080p on the 8GB card, with the 4GB card at 83FPS – a measurable but imperceptible difference. 1% lows are also close – only 3.35% apart – and 0.1% lows are 54.3FPS vs. 51FPS on the 8GB and 4GB options, respectively. Another imperceptible difference.
Ashes of the Singularity Benchmark – AMD RX 480 4GB vs. 8GB
On both Dx11 and Dx12, Ashes of the Singularity shows no perceptible difference between the RX 480 4GB and RX 480 8GB. Frametime performance is also nearly identical, with fractions of a millisecond differentiating the two devices.
The Talos Principle Benchmark – AMD RX 480 4GB vs. 8GB
The Talos Principle shows minimal differences across Dx11 and Vulkan when comparing 4GB and 8GB cards. We see the largest delta at 1080p strictly between Vulkan metrics, producing an ~8% gap between the cards. Dx11 performance is within a 2FPS (1.76%) range. 1440p shows even closer results.
No clear advantage to 8GB here.
The Division Benchmark – AMD RX 480 4GB vs. 8GB
At 1080p, The Division shows no performance difference between the 8GB and 4GB cards when comparing AVG FPS, but 0.1% lows begin to reveal a delta of 15.76%. This is perceptible at some points in gameplay as a visible slowdown in framerate. That difference mostly disintegrates at 1440p, a result of the GPU choking elsewhere in the pipeline before VRAM comes into play.
Call of Duty: Black Ops III Benchmark – AMD RX 480 4GB vs. 8GB
Note: Despite Black Ops III offering some advanced anti-aliasing options, we disabled them and ran FXAA instead, which is comparatively lightweight.
Black Ops III is another title with mixed results. Black Ops has also been an optimized title for us, and one which has generally pushed AMD a bit further up the ranks than some other games on the bench. At 1080p, we see the 8GB card running 132.3FPS average, 105FPS 1% lows, and 93FPS 0.1% lows – all very tightly timed and well-suited for 120Hz gaming. You'd get to 144Hz with some tweaks. Moving to 4GB, our average framerate drops by 4.17% to 127FPS, and 0.1% lows drop by 13.98% to 80FPS.
Increasing to 1440p, we see a bit more of an impact to those 0.1% lows. The 8GB RX 480 is now at 83.3FPS AVG, 67.3FPS 1% low, and 61.3FPS 0.1% lows. The average is only slightly faster than the 4GB card, at 80FPS, but the 0.1% lows on the 4GB card are 28.7% slower than the 8GB card. That means stutters become visible during gameplay.
4K produces similar results. We're at 41FPS AVG for the 8GB card, 34FPS 1% lows, and 32FPS 0.1% lows. On the 4GB card, that changes to similar AVG and 1% lows, but a decrease of 60.3% on the 0.1% low metrics.
Conclusion: Is the RX 480 8GB Better than 4GB? Is it Worth it?
As we've found previously – like with the 4GB vs. 2GB GTX 960 – the differences are present, but depend on the game. It's clear that the gaming world is trending toward >4GB, and with AMD's focus on promoting the RX 480 as a good CrossFire setup and overclocker, that does mean that some additional RX 480 4GB vs. 8GB configurations should be considered. We'll try to test these soon, but an example would be CF 4GB 480s vs. CF 8GB 480s, where some of these differences will likely be exaggerated in Dx11 games. Compute potential is increased but the VRAM becomes a more limiting factor. We'll test this in an upcoming article and video.
As for whether or not 8GB is worth it, it really depends. If you're playing games like Black Ops, Mirror's Edge at higher quality settings, or Assassin's Creed and similar titles, it is absolutely better to get the 8GB card. Deltas nearing 30% make a big difference to perceived fluidity of framerate.
But that's not all games. A lot of the games we tested show no perceptible difference, despite having measurable differences. They might be different by a few FPS, but not much more than that. Ashes of Singularity, Talos Principle, Metro: Last Light, and Shadow of Mordor saw minimal impact from the VRAM capacity change. Black Ops, Mirror's Edge, and Assassin's Creed had big differences that would actually be relevant. The Division was mixed.
You should definitely buy 8GB ($240) for games that are more texture heavy (or if planning to apply heavy texture mods) and if you like anti-aliasing and texture filtration, because that's where we're seeing the gains. Otherwise, 4GB is an acceptable way to save $40.
CrossFire may also be an instance where dual 8GB cards make the most sense, but we'll test that as soon as we're able to.
One thing is for certain, though – we've finally trended out of the 2GB class of card, and would no longer recommend 2GB video cards for core gamers. Maybe for casual machines and HTPCs on the cheap, but not core gaming.
Editorial: Steve “Lelldorianx” Burke
Video: Andrew “ColossalCake” Coleman