This post will introduce you to a type of memory – HBM2. The focus will be on the definition and specifications of HBM2 memory.
What Is HBM2 Memory
What does HBM mean? HBM stands for High Bandwidth Memory, and HBM2 is its second generation. It is a high-speed computer memory interface used with 3D-stacked DRAM (dynamic random access memory), most visibly in AMD GPUs (graphics cards). HBM2 chips are manufactured by vendors such as Samsung and SK Hynix. Beyond graphics, it is also utilized in servers, high-performance computing and networking, as well as the client space.
HBM2 offers excellent performance per watt and low power consumption. It benefits anyone who wants maximum bandwidth, especially in a power-constrained environment.
That profile matches datacenter-focused GPUs running AI workloads or dense compute nodes in an HPC cluster. In the consumer space, however, HBM2 has not made much headway.
The Specs of HBM2 and Its Variations
High Bandwidth Memory was adopted as an industry standard by JEDEC in October 2013, while HBM2, the second generation, was adopted by the same body in January 2016. As the technology has developed, several revisions of HBM2 have been published.
At present, there are three HBM2 versions on the market: JESD235C, JESD235B, and JESD235A. The newest revision, JESD235C, supports speeds as high as 3.2 Gb/s per pin, which pushes the peak bandwidth of a whole HBM2 stack to 410 GB/s.
Manufacturers have been preparing for the upgrade for some time; Samsung, for example, has already announced its Flashbolt HBM2 memory. Taking a closer look at the latest revision, JESD235C is a relatively small, measured update to the HBM2 standard that still delivers a solid performance gain.
The most obvious change is that the standard adds support for a higher data rate, bringing 3.2 Gb/s per pin into the standard. Compared with the previous maximum of 2.4 Gb/s per pin, the new update offers an approximately 33% increase in memory bandwidth. For the detailed figures of each revision, refer to the table below.
| | JESD235C | JESD235B | JESD235A |
|---|---|---|---|
| Max Bandwidth Per Pin | 3.2 Gb/s | 2.4 Gb/s | 2 Gb/s |
| Max Die Capacity | 2 GB | 2 GB | 1 GB |
| Max Dies Per Stack | 12 | 12 | 8 |
| Max Capacity Per Stack | 24 GB | 24 GB | 8 GB |
| Max Bandwidth Per Stack | 410 GB/s | 307.2 GB/s | 256 GB/s |
| Effective Bus Width (1 Stack) | 1024-bit | 1024-bit | 1024-bit |
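The per-stack bandwidth figures in the table follow directly from the per-pin data rate and the 1024-bit bus width, so they are easy to verify yourself. Here is a minimal Python sketch (the names are ours, not part of any spec) that reproduces the table's numbers:

```python
# Sanity check of the bandwidth figures in the table above.
# Peak per-stack bandwidth = per-pin data rate x bus width, converted
# from bits to bytes.
STACK_BUS_BITS = 1024  # effective bus width of one HBM2 stack

def stack_bandwidth_gb_s(pin_rate_gb_s: float) -> float:
    """Peak per-stack bandwidth in GB/s for a pin rate given in Gb/s."""
    return pin_rate_gb_s * STACK_BUS_BITS / 8

jesd235c = stack_bandwidth_gb_s(3.2)  # 409.6 GB/s, rounded to 410 GB/s
jesd235b = stack_bandwidth_gb_s(2.4)  # 307.2 GB/s
jesd235a = stack_bandwidth_gb_s(2.0)  # 256.0 GB/s
```

The same arithmetic also explains the roughly 33% bandwidth increase: 3.2 / 2.4 ≈ 1.33.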
Even with a single stack, the latest update keeps HBM2 quite competitive on the bandwidth front. In terms of cost and capacity, however, HBM2 is still a premium memory technology.
While the latest HBM2 revision doesn’t improve memory capacities (either through denser dies or larger stacks), the maximum size of a single stack is still 24GB. That size allows a 4-stack configuration to reach up to 96GB of memory.
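To make the capacity arithmetic explicit, here is a short Python sketch using the JESD235C limits above (the helper name is ours, purely for illustration):

```python
# Capacity math for JESD235C: 2 GB per die, up to 12 dies per stack.
DIE_CAPACITY_GB = 2
MAX_DIES_PER_STACK = 12

def hbm2_capacity_gb(stacks: int, dies_per_stack: int = MAX_DIES_PER_STACK) -> int:
    """Total capacity in GB for the given number of stacks."""
    if dies_per_stack > MAX_DIES_PER_STACK:
        raise ValueError("JESD235C allows at most 12 dies per stack")
    return stacks * dies_per_stack * DIE_CAPACITY_GB

single_stack = hbm2_capacity_gb(1)  # 24 GB, the per-stack maximum
gpu_package = hbm2_capacity_gb(4)   # 96 GB in a 4-stack configuration
```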
Note that while the HBM2 standard itself doesn’t impose direct power limits, it does specify standard operating voltages. Generally speaking, HBM consumes less power yet offers higher bandwidth than DDR4 or GDDR5 memory, and it occupies a smaller footprint. That combination is very attractive to graphics card vendors.
The working principle of HBM is to stack the memory dies vertically on top of one another. These dies are connected through through-silicon vias (TSVs) and micro-bumps. In addition, the HBM memory bus is much wider than that of other types of DRAM because each die provides 128-bit channels.
These updates will not be the end of the road for HBM. Though it is not available at present, the HBM3 standard is already under discussion.
The Bottom Line
Now that the main features of HBM2 and its latest updates have been laid out, you should have a basic understanding of HBM2 after reading this post, as well as an idea of where HBM is heading in the coming years.