Intel and SoftBank, through their joint venture Saimemory, have been developing an alternative to popular high-bandwidth memory (HBM) that promises more bandwidth and capacity for memory modules paired with powerful AI accelerators. At VLSI 2026 in June, Saimemory is scheduled to present a paper on its newly developed HB3DM memory, which is based on Z-Angle Memory (ZAM) technology. The name refers to the vertical (Z-axis) stacking of dies, similar to traditional HBM; however, Intel aims to push well beyond HBM's figures using state-of-the-art manufacturing technology.

The first generation of HB3DM will feature nine layers in total, stacked with a hybrid bonding technique for 3D chip placement: a base logic layer that manages data movement within the chip, topped by eight DRAM layers for data storage. Each layer will include about 13,700 TSVs for hybrid bonding. In terms of capacity, HB3DM will offer about 1.125 GB per layer, translating to about 10 GB per memory module. Intel claims approximately 0.25 Tb/s of memory bandwidth per mm² of die area; for a 10 GB module with a 171 mm² die, that works out to roughly 42.75 Tb/s, or around 5.3 TB/s per module.

These figures could quickly overshadow competing HBM4 memory, as HB3DM offers much higher bandwidth: HBM4 provides around 2 TB/s per stack, less than half of what HB3DM is expected to deliver. HB3DM is limited on capacity, however, with only 10 GB per module, whereas HBM4 can reach up to 48 GB per stack. Intel may increase the number of layers as HB3DM progresses toward production, but for now, it is emerging as a bandwidth leader.
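As a quick sanity check of the arithmetic above, the per-module bandwidth follows directly from the claimed bandwidth density and die area. A minimal Python sketch, using only the figures quoted in this article (the variable names are illustrative):

```python
# Back-of-the-envelope check of the HB3DM bandwidth figures quoted above.
# All input numbers come from the article; nothing here is measured data.

BITS_PER_BYTE = 8

bandwidth_density_tbps_per_mm2 = 0.25  # claimed Tb/s per mm² of die area
die_area_mm2 = 171                     # die area of the 10 GB module

# Total bandwidth per module, first in terabits/s, then terabytes/s.
total_tbps = bandwidth_density_tbps_per_mm2 * die_area_mm2   # 42.75 Tb/s
total_tBps = total_tbps / BITS_PER_BYTE                      # ~5.34 TB/s

hbm4_tBps = 2.0  # quoted per-stack bandwidth for HBM4

print(f"HB3DM per-module bandwidth: {total_tbps:.2f} Tb/s = {total_tBps:.2f} TB/s")
print(f"Advantage over HBM4: {total_tBps / hbm4_tBps:.1f}x")
```

Running this confirms the article's "around 5.3 TB/s" figure (5.34 TB/s exactly) and shows HB3DM delivering roughly 2.7x the per-stack bandwidth of HBM4, consistent with the "less than half" comparison above.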