HBM2 Launches in 3Q16 - Up to 16GB per GPU, 256GB/s per Stack

By Chris Zele | Published July 18, 2016 at 11:22 am

New video cards are launching at a furious pace, bringing with them new manufacturing processes and better price-to-performance ratios.

One of the newest memory technologies on the market is HBM (High Bandwidth Memory), introduced on the R9 Fury X. HBM stacks four memory dies atop an interposer (packaged on the substrate) to achieve higher-density modules, while also bringing down power consumption and shortening the physical distance data must travel. HBM is not located on the GPU die itself, but on the GPU package – much closer than PCB-bound GDDR5/5X memory modules.

We’ve gotten word that SK Hynix, a big supplier in the memory market, is going to launch its second iteration of HBM in 3Q16. This aligns with AMD’s Vega and nVidia’s “Big Pascal” card releases, both around the corner. AMD's Vega architecture will use HBM2 at some level, but not all SKUs will necessarily have HBM2. nVidia’s next flagship may use HBM2 as well, or the company could stick with the GDDR5X used in the GTX 1080. The Tesla P100 Accelerator is already on track for HBM2.

(Images: nVidia GP100 Pascal HBM2 architecture diagrams)

Compared to HBM1, the new memory is a big improvement in bandwidth and performance. HBM1 offered upwards of 100GB/s per stack, while HBM2 will ship in 256GB/s and 204GB/s per-stack speed grades. SK Hynix’s HBM1 and HBM2 both feature 1.2V VDDQ, so power consumption remains the same across both HBM versions while performance increases. By comparison, GDDR5X offers 48GB/s of per-chip memory bandwidth and GDDR5 offers 28GB/s per chip. HBM2 will also allow up to 16GB of VRAM per GPU.
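For a rough sense of scale, the quick sketch below tallies board-level bandwidth and capacity for a hypothetical four-stack HBM2 card (GP100-style; the stack count and the per-chip figures are assumptions taken from the numbers above, not confirmed SKU details) and how many GDDR5X or GDDR5 chips it would take to match that bandwidth.

    # Back-of-the-envelope comparison of HBM2 vs. GDDR5X/GDDR5 board bandwidth.
    # Assumes a 4-stack HBM2 layout and the per-chip figures quoted above.
    HBM2_STACKS = 4
    HBM2_BW_PER_STACK = 256        # GB/s per stack (top HBM2 speed grade)
    HBM2_CAPACITY_PER_STACK = 4    # GB per 4Hi stack

    GDDR5X_BW_PER_CHIP = 48        # GB/s per chip
    GDDR5_BW_PER_CHIP = 28         # GB/s per chip

    hbm2_total_bw = HBM2_STACKS * HBM2_BW_PER_STACK                # 1024 GB/s
    hbm2_total_capacity = HBM2_STACKS * HBM2_CAPACITY_PER_STACK    # 16 GB

    # Chips needed to match that bandwidth (ignoring bus-width and routing limits).
    gddr5x_chips_needed = hbm2_total_bw / GDDR5X_BW_PER_CHIP       # ~21.3
    gddr5_chips_needed = hbm2_total_bw / GDDR5_BW_PER_CHIP         # ~36.6

    print(f"4-stack HBM2: {hbm2_total_bw} GB/s, {hbm2_total_capacity} GB")
    print(f"GDDR5X chips to match: {gddr5x_chips_needed:.1f}")
    print(f"GDDR5 chips to match:  {gddr5_chips_needed:.1f}")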

HBM has much better performance per watt than GDDR5 and GDDR5X. The VDD/VDDQ for HBM2 is 1.2V, which is lower than GDDR5X's VDD/VDDQ of 1.35V and GDDR5's VDD/VDDQ of 1.5V. Reducing the power spent on memory helps drive down overall power draw, and also frees up some of that power budget for elsewhere on the card. Both nVidia and AMD have pushed toward memory power draw reductions upwards of 40% by optimizing color compression technologies.
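As a rough illustration of why the lower rail matters: dynamic switching power scales roughly with the square of the supply voltage (P ≈ C·V²·f). The comparison below holds capacitance and frequency equal – a simplification that real memories don't obey – so treat the percentages as first-order ratios, not measured savings.

    # Rough dynamic-power comparison from supply voltage alone (P ~ C * V^2 * f).
    # Capacitance and frequency are held constant, so these are indicative ratios only.
    V_HBM2 = 1.2     # VDD/VDDQ for HBM1/HBM2
    V_GDDR5X = 1.35
    V_GDDR5 = 1.5

    def relative_switching_power(v, v_ref):
        """Dynamic switching power of rail `v` relative to reference rail `v_ref`."""
        return (v ** 2) / (v_ref ** 2)

    print(f"HBM2 vs GDDR5X: {relative_switching_power(V_HBM2, V_GDDR5X):.0%} of the switching power")
    print(f"HBM2 vs GDDR5:  {relative_switching_power(V_HBM2, V_GDDR5):.0%} of the switching power")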

We are interested to see what power consumption will be like on AMD’s Vega cards with HBM2 when combined with its new 14nm FinFET GPU manufacturing process. The same goes for nVidia, if the company opts to use HBM2 in one of its next big cards. nVidia moved to 16nm FinFET process nodes with Pascal.

amd-gpu-roadmap-2016

Should HBM become more mainstream down the road, it could find its way into laptop components as well. The move would be a natural one given the power constraints laptops are up against, and it pairs with globally reduced TDPs to lengthen battery life. HBM2’s lower voltage and higher bandwidth would mean better gaming and system performance. Looking further out, CPUs will begin – and already have, with some server parts – integrating HBM as an alternative or add-on to system DRAM. We will let you know if we hear any news on HBM2 in the immediate future.

HBM2 Specs (SK Hynix)

The specs of these parts are shown below:

Density | Speed             | Part Number     | PKG.   | Feature                  | Availability
4GB     | 256GB/s (2.0Gbps) | H5VR32ESM4H-20C | 5mKGSD | 4Hi stack, VDD/VDDQ=1.2V | Q3'16
4GB     | 204GB/s (1.6Gbps) | H5VR32ESM4H-12C | 5mKGSD | 4Hi stack, VDD/VDDQ=1.2V | Q3'16
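The per-stack bandwidth in the table follows directly from the per-pin data rate and HBM's 1024-bit-wide stack interface (the 1024-bit width is the standard HBM interface width, not something stated in SK Hynix's table itself). The short check below reproduces the 256GB/s and ~204GB/s figures:

    # Derive per-stack bandwidth from per-pin data rate and the 1024-bit HBM stack interface.
    HBM_INTERFACE_BITS = 1024  # bits per stack (standard HBM interface width)

    def stack_bandwidth_gbps(pin_rate_gbps):
        """Per-stack bandwidth in GB/s for a given per-pin rate in Gbps."""
        return pin_rate_gbps * HBM_INTERFACE_BITS / 8

    print(f"2.0Gbps pins -> {stack_bandwidth_gbps(2.0):.0f} GB/s per stack")   # 256 GB/s
    print(f"1.6Gbps pins -> {stack_bandwidth_gbps(1.6):.1f} GB/s per stack")   # ~204.8 GB/s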

- Chris Zele

