Before proceeding: This endeavor is entirely at the user's own risk, and there is a possibility of “bricking” or permanently damaging the hardware during this process.
In 4GB vs. 8GB AMD RX 480 benchmarking, our testing uncovered improvements in just a few titles – but the improvements were substantial when present. It is no secret that early press samples of the card allowed for flashing to 4GB, which dropped the memory clock to 1750MHz and locked the card to 4GB of its VRAM. This was reasonable, as media obviously wanted to test both versions of the card, but AMD wanted to limit sampling. We actually liked the way this was handled, given the option between a flashable sample and strictly an 8GB sample.
But there's more to it than that: Consumers have reported success flashing the VBIOS of retail 4GB samples, resulting in 8GB cards. Let's talk about why AMD's shipping of “locked” cards makes sense, what the risks are, and how to perform the procedure.
Multi-SKU launches of GPUs are sort of interesting. The RX 480 ships in 4GB and 8GB models, with some other less-than-obvious differences under the hood. GDDR5 speed, for instance, operates at 7Gbps on the reference 4GB model, as opposed to 8Gbps on the reference 8GB model (which we reviewed in great detail). There's potential for confusion in the marketplace with multiple SKUs, and the value proposition gets muddied between the $200 4GB RX 480 and the $240 8GB RX 480. That's not counting AIB partner cards, either, and those are rolling out.
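The memory-speed gap between the two SKUs is easy to quantify. As a quick illustrative sketch (not our benchmark code), here's the theoretical bandwidth math, using the per-pin rates above and the RX 480's published 256-bit bus width; note that GDDR5 moves four bits per clock per pin, which is how a 1750MHz memory clock maps to 7Gbps:

```python
# Theoretical GDDR5 memory bandwidth for the two reference RX 480 SKUs.
# Bus width (256-bit) is the published RX 480 spec; per-pin rates are the
# 7Gbps (4GB model) and 8Gbps (8GB model) figures discussed above.

BUS_WIDTH_BITS = 256

def gddr5_bandwidth_gbs(per_pin_gbps, bus_width_bits=BUS_WIDTH_BITS):
    """Total bandwidth in GB/s: per-pin rate times bus width, over 8 bits/byte."""
    return per_pin_gbps * bus_width_bits / 8

# GDDR5 is quad-pumped: a 1750MHz memory clock yields 7Gbps per pin.
assert 1750e6 * 4 == 7.0e9

bw_4gb = gddr5_bandwidth_gbs(7.0)  # 4GB reference model
bw_8gb = gddr5_bandwidth_gbs(8.0)  # 8GB reference model

print(f"4GB model: {bw_4gb:.0f} GB/s")  # 224 GB/s
print(f"8GB model: {bw_8gb:.0f} GB/s")  # 256 GB/s
```

That ~14% theoretical bandwidth difference is part of why the two SKUs can diverge in bandwidth-sensitive titles even before VRAM capacity comes into play.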
In this benchmark, we compare the RX 480 4GB vs. RX 480 8GB to determine if the difference is "worth it" in games. We're testing GTA V, Assassin's Creed, Call of Duty, Shadow of Mordor, Ashes, and more.
For a previous look at VRAM differences, check our (now dated) GTX 960 2GB vs. 4GB comparison.
The AMD RX 480 “Hybrid” quest we embarked upon revealed some additional overclocking headroom, but also prompted a good opportunity to demonstrate live RX 480 overclocking. We've returned to showcase that today, alongside a top-level explanation of GPU core voltage, core frequency, fan RPM, power % target, and stability.
Note that there are a few disclaimers to be made with any type of overclocking: First, it's likely that any such endeavor voids the warranty, at least when exceeding the range permitted by the AIB partner or OEM. That's because overvolting and power increases can potentially cause damage to chips long-term (or even immediately, if no restrictions are in place), and that's especially true on cards where the cooler may not adequately cool critical components governing overclocking – like the MOSFETs and other VRM components. That's not to scare anyone away, though; overclocking is fairly safe when following basic rules of small, incremental stepping and using guides (and using OEM-provided software, which often has restrictions for safety). It's just that overclocking is always an “at your own risk” venture, and it doesn't hurt to remind everyone.
This guide explains how to use WattMan to overclock the AMD Radeon RX 480 GPU, and walks through core voltage (for overvolting or undervolting), power target, and some performance metrics.
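The small, incremental stepping approach can be sketched as a simple loop. Everything below is illustrative, not a WattMan API: `is_stable` stands in for a real stability test (a benchmark loop with artifact and crash checks), and the 1340MHz ceiling is borrowed from our own sample's result – your silicon will differ.

```python
# Illustrative sketch of incremental overclock stepping -- not real tuning code.
# is_stable() is a placeholder for a manual stress test; here it pretends the
# card's ceiling is 1340MHz, the value our own RX 480 sample reached.

REFERENCE_BOOST_MHZ = 1266  # RX 480 reference boost clock
STEP_MHZ = 10               # small, incremental steps

def is_stable(core_mhz):
    """Placeholder for a real stability test (benchmark run, artifact scan)."""
    return core_mhz <= 1340  # hypothetical ceiling for this simulated sample

def find_stable_overclock(start_mhz=REFERENCE_BOOST_MHZ, step=STEP_MHZ):
    clock = start_mhz
    # Step up only while the next increment still passes the stability test,
    # so we never settle on a frequency that failed.
    while is_stable(clock + step):
        clock += step
    return clock

print(find_stable_overclock())  # 1336 on this simulated sample
```

The same back-off-one-step logic applies to voltage and power target: raise in small increments, validate, and retreat to the last known-good value on failure.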
The final part of our AMD Radeon RX 480 Hybrid build is complete. We've conducted testing on the RX 480 with liquid cooling, successfully yielding additional overclocking headroom and reducing temperatures by 59%. We also ended up hitting 1.15V to the core when overvolting and overclocking, something we talk about more below.
The first part of this AMD RX 480 liquid cooling guide tore down the video card, the second part built it back up with an Arctic Accelero Hybrid III and liquid cooler, and our new video and article explore the results. The short of it: Liquid cooling an AMD RX 480 significantly improves temperatures and noise output, and provides marginal extra overclocking room.
This video is a follow-up to our popular GTX 1080 Hybrid series, if you missed that.
We're putting the AMD RX 480 under water. Our GTX 1080 Hybrid project revealed significant improvements to overclock stability and substantially lowered the 1080's thermals, an important boost versus the Founders Edition ($700). This endeavor opened our eyes to new means of testing component limits, and makes for a fun DIY project to push new hardware to its absolute peak performance – or make it die trying.
Following our RX 480 endurance and thermal findings, we believe it's possible to improve thermals, reduce overall power consumption (by eliminating the need for a fan spinning at 4000+ RPM), and significantly cut noise output. The overclocked RX 480 was able to sustain its 1340MHz core only because we ran the fan so fast, and by switching to a liquid cooler (powered externally, not by the video card), we'll free up some power for the core and memory. This will also allow us to reduce overall fan RPMs on our mod's VRM fan, hopefully cutting noise levels to something lower than the ~55-60dB output experienced in our overclocking test. Our overclock, although reasonable in its gains, is entirely unbearable because of its high noise output and would be unacceptable for any real-world user or home.
We're fixing that.
Anyone who's already seen our exhaustive RX 480 review & benchmark is likely aware of our new noise testing and fan speed vs. time/frequency plots. The video was embedded in that review, but it's worth discussing in greater depth.
The test is a mix of subjective and objective noise analysis. The decibel testing was conducted prior to getting on camera, with a different setup than is shown, but we moved the bench for demonstration purposes (into the video set). Our noise testing methodology is detailed further below. As for the subjective testing – that's the new part.
Subjective noise analysis of cards is important, as our raw decibel output values do not tell the full story (and we don't presently have a good, data-hardened way to plot frequency spectrum analysis). Two fans that operate at 50dB may produce completely different noises. One fan might be high-pitched in nature – or maybe it's got a high-pitched whine accompanying the normal low-frequency whirring – while another fan is low-pitched. Depending on the user, the lower-pitched fan (despite being equally loud in dB output) will likely be more bearable than an incessant whine.
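To make the “equal loudness, different pitch” point concrete, here's a small stand-alone sketch (pure Python with synthetic signals – not our actual measurement pipeline): two tones with identical RMS levels, one a low-frequency whir and one a high-pitched whine, distinguished by a crude zero-crossing frequency estimate. The 120Hz and 4100Hz frequencies are arbitrary examples.

```python
import math

FS = 48000      # sample rate in Hz (typical for audio capture)
DURATION = 1.0  # seconds of signal

def tone(freq_hz, n_samples=int(FS * DURATION)):
    """Generate a pure sine tone at unit amplitude."""
    return [math.sin(2 * math.pi * freq_hz * n / FS) for n in range(n_samples)]

def rms(samples):
    """Root-mean-square level; equal RMS implies equal dB for these signals."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def estimate_freq(samples):
    """Crude pitch estimate: a sine crosses zero twice per cycle."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return crossings / 2 / DURATION

whir = tone(120)    # low-pitched whirring
whine = tone(4100)  # high-pitched whine

# Identical level, very different pitch:
print(round(rms(whir), 3), round(rms(whine), 3))
print(estimate_freq(whir), estimate_freq(whine))
```

A dB meter collapses both signals to the same number; the pitch difference – which dominates how annoying a fan sounds – only shows up in frequency-domain analysis or, as in our testing, the human ear.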
This is a test that's been put through its paces for just about every generation of PCI Express, and it's worth refreshing now that the newest line of high-end GPUs has hit the market. The curiosity is this: Will a GPU be bottlenecked by PCI-e 3.0 x8, and how much impact does PCI-e 3.0 x16 have on performance?
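For reference, the theoretical ceilings are straightforward to compute. PCI-e 3.0 runs 8 GT/s per lane with 128b/130b encoding (both per the PCIe 3.0 specification), so halving the link width from x16 to x8 exactly halves usable bandwidth – the open question our testing answers is whether real games ever approach either ceiling:

```python
# Theoretical one-directional PCI-e 3.0 bandwidth per link width.
# 8 GT/s per lane and 128b/130b line encoding are from the PCIe 3.0 spec.

GT_PER_S = 8e9        # transfers per second, per lane
ENCODING = 128 / 130  # 128b/130b encoding: 128 payload bits per 130 line bits

def pcie3_bandwidth_gbs(lanes):
    """Usable bandwidth in GB/s for a PCI-e 3.0 link of the given width."""
    return GT_PER_S * ENCODING * lanes / 8 / 1e9  # 8 bits per byte

print(f"x8:  {pcie3_bandwidth_gbs(8):.2f} GB/s")   # ~7.88 GB/s
print(f"x16: {pcie3_bandwidth_gbs(16):.2f} GB/s")  # ~15.75 GB/s
```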
We decided to test that question for internal research, but ended up putting together a small report for publication.
This is primarily a video project that revisits our popular SSD Architecture post from 2014. All of that content remains relevant to this day – SSD architecture has not substantially changed at a low level – but it's been deserving of a refresh. NAND Flash comprises the actual storage component of the SSD, and impacts more than just capacity; endurance, speed, and the cost-per-GB metric are all impacted by NAND Flash selection. The industry has slowly reached parity between TLC and MLC NAND devices for the mainstream and gaming segments, with V-NAND getting a steady push through Samsung's channels. As for how MLC and TLC actually work, though, we turn to our content.
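The core difference between the cell types reduces to one relationship: each additional bit stored per cell doubles the number of voltage states the controller must distinguish. A quick sketch of that math (SLC/MLC/TLC bit counts are standard NAND terminology):

```python
# Voltage states per NAND cell double with each additional bit stored.
# More states mean tighter voltage margins between levels, which is why TLC
# historically traded endurance and write speed for density versus MLC.

def states_per_cell(bits):
    """A cell storing n bits must reliably distinguish 2**n voltage states."""
    return 2 ** bits

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    print(f"{name}: {bits} bit(s) per cell -> {states_per_cell(bits)} voltage states")
```

Squeezing eight states into the same voltage window that SLC splits in two is what makes TLC cheaper per GB but more sensitive to cell wear – the trade-off the animation visualizes.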
With this update, we've introduced a 3D animation to help visualize the complexities of voltage states and program/erase cycles as they actively occur on the drive. The original graphics and text of our architecture article can be found on this page.
Our GTX 1070 SLI benchmarking endeavor began with an amusing challenge – one which we've captured well in our forthcoming video: The new SLI bridges are all rigid, and that means cards of various heights cannot easily be accommodated as the bridges only work well with same-height cards. After some failed attempts to hack something together, and after researching the usage of two ribbon cables (don't do this – more below), we ultimately realized that a riser cable would work. It's not ideal, but the latency impact should be minimal and the performance is more-or-less representative of real-world SLI framerates for dual GTX 1070s in SLI.
Definitely a fun challenge. Be sure to subscribe for our video later today.
The GTX 1070 SLI configuration teetered in our test rig, no screws holding the second card, but it worked. We've been told that there aren't any plans for ribbon cable versions of the new High Bandwidth Bridges (“HB Bridge”), so this new generation of Pascal GPUs – if using the HB Bridge – will likely drive users toward matched, same-model video card arrays. This step coincides with other simplifications to the multi-GPU process with the 10-series, like a reduction from triple- and quad-SLI to focus just on two-way SLI. We explain nVidia's decision to do this in our GTX 1080 review and mention it in the GTX 1070 review.
This GTX 1070 SLI benchmark tests the framerate of two GTX 1070s vs. a GTX 1080, 980 Ti, 980, 970, Fury X, R9 390X, and more. We briefly look at power requirements as well, helping to provide a guideline for power supply capacity. The joint cost of two GTX 1070s, if buying the lowest-cost GTX 1070s out there, would be roughly $760 ($380 × 2). The GTX 1070 scales up to $450 for the Founders Edition and likely for some aftermarket AIB partner cards as well.
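The value question behind this benchmark is simple arithmetic. A quick sketch using the article's own figures ($380 low-cost and $450 Founders Edition GTX 1070s, and the $700 GTX 1080 Founders Edition noted in our earlier coverage):

```python
# Quick cost arithmetic for the SLI pairing discussed above. Prices are the
# figures from the article: $380 low-cost GTX 1070, $450 Founders Edition,
# and $700 for the GTX 1080 Founders Edition from our earlier coverage.

GTX_1070_LOW = 380
GTX_1070_FE = 450
GTX_1080_FE = 700

sli_low = 2 * GTX_1070_LOW  # cheapest two-card configuration
sli_fe = 2 * GTX_1070_FE    # two Founders Edition cards

print(f"2x GTX 1070 (lowest cost): ${sli_low}")            # $760
print(f"2x GTX 1070 (Founders):    ${sli_fe}")             # $900
print(f"Premium over one 1080 FE:  ${sli_low - GTX_1080_FE}")  # $60
```

Whether that premium is justified depends on how well SLI scales per title, which is exactly what the framerate results measure.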
Rounding out our Best Of coverage from Computex 2016 – and being written from a plane over the Pacific – we're back to recap some of the major GTX 1080 AIB cards from the show. AMD's RX 480 was only just announced at Computex, and so board partner versions are not yet ready (and weren't present), and the GTX 1070 only had one card present. For that reason, we're focusing the recap on GTX 1080 GP104-400 video cards from AIB partners.
Until all of these cards have properly landed in our lab for review, we'd recommend holding off on purchases – but we're getting there. We've already looked at the GTX 1080 reference card (“Founders Edition,” by new nomenclature) and built our own GTX 1080 Hybrid. The rest will be arriving soon enough.
For now, though, here's a round-up of the EVGA, ASUS, Gigabyte, and MSI AIB GTX 1080s at Computex. You can read/watch for more individualized info at each of these links: