HBM first reached consumer graphics cards with AMD's R9 Fury line (including board-partner models such as the Sapphire R9 Fury), where it was intended to provide better bandwidth for game rendering. In practice, however, the card proved mediocre compared with the GDDR5-based RX 480.
HBM was originally conceived as a replacement for memories such as GDDR, driven primarily by AMD together with SK Hynix. The technology's evolution is now steered through the JEDEC HBM Task Group, where NVIDIA holds the chair and AMD remains one of the main contributors.
The current HBM2E standard (an extension of second-generation HBM2) supports multiple stacks per package; a system-in-package combining eight 16GB stacks could provide 128GB of capacity. In practice, however, currently available HBM2E stacks top out at 16GB.
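The capacity figures above follow from simple arithmetic: dies per stack times density per die gives stack capacity, and stacks per package gives the SiP total. A minimal sketch, using the 8-high stacks of 16Gb dies shipping today (the function names are illustrative, not from any spec):

```python
# Capacity arithmetic for stacked HBM devices.
# A stack is a vertical pile of DRAM dies; a SiP may carry several stacks.

def stack_capacity_gb(dies: int, gbit_per_die: int) -> float:
    """Capacity of one HBM stack in GB (8 bits per byte)."""
    return dies * gbit_per_die / 8

def sip_capacity_gb(stacks: int, dies: int, gbit_per_die: int) -> float:
    """Total capacity of a system-in-package carrying several stacks."""
    return stacks * stack_capacity_gb(dies, gbit_per_die)

print(stack_capacity_gb(8, 16))   # 8-high stack of 16Gb dies: 16.0 GB
print(sip_capacity_gb(8, 8, 16))  # eight such stacks: 128.0 GB
```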
AMD's consumer HBM2 cards, the RX Vega 56 ($399), RX Vega 64 ($499), and Radeon VII ($699), all used the memory with the aim of improving performance.
Originally, the graphics companies saw high-bandwidth memory as a clear evolutionary step. Then the networking and data center community realized HBM could add a new tier to the memory hierarchy, delivering the qualities that drive the data center: more bandwidth, lower latency, and lower power.
Now HBM3 will bring a 2X bump in bandwidth and capacity per stack, as well as some other benefits. What was once considered a “slow and wide” memory technology to reduce signal traffic delays in off-chip memory is becoming significantly faster and wider. In some cases, it is even being used for L4 cache.
SK Hynix has developed an HBM3 DRAM stack delivering 819GB/sec of bandwidth. That kind of bandwidth would benefit high-refresh-rate displays, and advanced server users are likely to need it as well.
While JEDEC has not released details on the yet-to-be-ratified HBM3 specification, Rambus reports its HBM3 subsystem will raise the per-pin data rate to 8.4 Gbps, compared with 3.6 Gbps for HBM2E. Products that implement HBM3 are expected to ship in early 2023.
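Those Gbps figures are per-pin data rates; per-stack bandwidth comes from multiplying by the width of HBM's 1024-bit interface and dividing by 8 bits per byte. A quick sketch of that conversion (the function name is illustrative; the rates are the figures quoted above, not spec citations):

```python
# Convert an HBM per-pin data rate (Gbps) into per-stack bandwidth (GB/s).
# Each HBM stack exposes a 1024-bit-wide interface.

HBM_BUS_WIDTH_BITS = 1024

def stack_bandwidth_gbs(pin_rate_gbps: float,
                        bus_width_bits: int = HBM_BUS_WIDTH_BITS) -> float:
    """Per-stack bandwidth in GB/s: pins * Gbps-per-pin / 8 bits-per-byte."""
    return bus_width_bits * pin_rate_gbps / 8

print(stack_bandwidth_gbs(3.6))  # HBM2E: 460.8 GB/s per stack
print(stack_bandwidth_gbs(8.4))  # HBM3 (Rambus figure): 1075.2 GB/s per stack
```

The 8.4 Gbps rate thus works out to just over 1TB/s from a single stack, consistent with the roughly 2X jump over HBM2E described above.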
Work on stacked RAM suggests that corporate demand for extreme-performance hardware will pave the way for gaming down the road. AI systems need enormous memory bandwidth, and there is big money behind them. AMD is likely to build some high-end cards with HBM3 eventually. At present, most video cards use GDDR6 or GDDR6X, which is less costly than HBM2.