
Challenges In Stacking HBM

Source: SemiEngineering

TL;DR (AI Generated)

AI data centers are driving demand for denser high-bandwidth memory, with DRAM stacks expected to grow from 8 to 24 layers by 2030. The main challenge lies in the interconnects: aligning microbumps becomes increasingly difficult as bump pitch shrinks below 10 microns at 16 layers. Damon Tsai of Onto Innovation discusses strategies for reducing stress-induced warpage, the architectural changes HBM will require, and the implications of adopting hybrid bonding and co-packaged optics in these devices. The article outlines the complexity of stacking HBM and the advances needed to meet future demand.
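To illustrate why finer pitch is so demanding, here is a rough back-of-the-envelope sketch in Python. The pitch values and the 10-percent-of-pitch alignment rule of thumb are assumptions for the example, not figures from the article.

# Illustrative only: scaling of microbump density and alignment budget with pitch.
# The pitch values and the ~10%-of-pitch overlay rule of thumb are assumptions.
def bumps_per_mm2(pitch_um: float) -> float:
    """Approximate bump count per mm^2 for a square grid at the given pitch."""
    return (1000.0 / pitch_um) ** 2

for pitch_um in (25.0, 10.0, 5.0):
    align_budget_um = 0.1 * pitch_um  # assumed share of the pitch left for overlay error
    print(f"pitch {pitch_um:>4.0f} um -> ~{bumps_per_mm2(pitch_um):>7,.0f} bumps/mm^2, "
          f"alignment budget ~{align_budget_um:.1f} um")

At a 10 um pitch this gives roughly 10,000 bumps per mm^2 and an overlay budget of about 1 um, compared with about 2.5 um at a 25 um pitch.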


Similar Articles

Wafer-on-Wafer Hybrid Bonding: Reticle Placements To Achieve Efficient NW Topologies (ETH Zurich)

Researchers from ETH Zurich have published a technical paper, "Network Design for Wafer-Scale Systems with Wafer-on-Wafer Hybrid Bonding." They explore how wafer-on-wafer bonding can raise communication bandwidth in wafer-scale systems running large language models. By strategically placing reticles on the wafers, they derive network topologies that improve throughput by up to 250%, reduce latency by up to 36%, and cut energy per transmitted byte by up to 38%. The study offers insights into optimizing communication performance in wafer-scale systems.

SemiEngineering
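As a generic illustration of why network topology matters, the sketch below compares the average shortest-path hop count of a 64-node ring with that of an 8x8 2D torus. The topologies and sizes are assumptions chosen for the example, not the reticle placements or topologies studied in the paper.

# Illustrative only: richer topologies shorten paths between nodes.
# Compares a 64-node ring against an 8x8 2D torus; neither is taken from the paper.
from itertools import product

def avg_hops(nodes, neighbors):
    """Average shortest-path length over all node pairs, via BFS from each node."""
    total, pairs = 0, 0
    for src in nodes:
        dist, frontier = {src: 0}, [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in neighbors(u):
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

N, K = 64, 8

def ring_nbrs(u):
    return [(u - 1) % N, (u + 1) % N]

def torus_nbrs(u):
    x, y = u
    return [((x + 1) % K, y), ((x - 1) % K, y), (x, (y + 1) % K), (x, (y - 1) % K)]

print(f"64-node ring: {avg_hops(range(N), ring_nbrs):.1f} avg hops")
print(f"8x8 torus   : {avg_hops(list(product(range(K), range(K))), torus_nbrs):.1f} avg hops")

Running it gives roughly 16 average hops for the ring versus about 4 for the torus, the kind of gap that additional wafer-to-wafer links can close.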
Micron teams up with TSMC to deliver HBM4E, targeted for 2027 — collaboration could enable further customization

Micron has announced a collaboration with TSMC to produce the base logic die for its upcoming HBM4E memory, slated for production in 2027. The partnership opens the door to memory customized for AI workloads, positioning Micron at the forefront of AI system design. HBM4E will offer higher data rates and customizable options, with Micron focusing on efficiency and flexibility in its design approach. The move aligns with the industry shift toward customizable memory, which is particularly important for next-generation data center GPUs from Nvidia and AMD. Micron and TSMC aim to make HBM4E a standard memory tier for AI infrastructure in the coming years.

Tom's Hardware
Samsung earns Nvidia certification for its HBM3E memory — stock jumps 5% as company finally catches up to SK hynix and Micron in HBM3E production

Samsung has received Nvidia certification for its 12-layer HBM3E chips, sending its stock up 5% as the company catches up to SK hynix and Micron in HBM3E production. Despite earlier delays, Samsung's HBM3E chips are expected to be used in Nvidia's DGX B300 systems soon. The industry is already looking ahead to HBM4, which promises higher capacity and lower power consumption, with Samsung aiming for volume production by the first half of 2026. Investors are optimistic about Samsung's progress in the memory chip market.

Tom's Hardware
Huawei reveals long-range Ascend chip roadmap — three-year plan includes ambitious provision for in-house HBM with up to 1.6 TB/s bandwidth

Huawei has unveiled its long-term Ascend chip strategy, with plans for four new chips over the next three years, including the Ascend 950PR and 950DT in early 2026. The company aims to incorporate in-house HBM with up to 1.6 TB/s of bandwidth in its upcoming chips, challenging suppliers such as SK hynix and Samsung. Despite U.S. sanctions limiting its access to advanced nodes and packaging lines, Huawei is pushing ahead with ambitious plans for AI compute clusters such as the Atlas 950 and 960 systems. To compete with Nvidia, Huawei will need a comprehensive platform that matches it in training performance, efficiency, and model throughput.

Tom's Hardware
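For context on the 1.6 TB/s figure, per-stack HBM bandwidth is roughly the interface width multiplied by the per-pin data rate. The sketch below shows one combination that lands near that number; the 2048-bit width and 6.4 Gb/s per-pin rate are assumptions typical of HBM4-class targets, not confirmed Huawei specifications.

# Illustrative only: one way a ~1.6 TB/s per-stack figure can arise.
# The 2048-bit interface and 6.4 Gb/s per-pin rate are assumptions, not Huawei specs.
bus_width_bits = 2048
per_pin_gbps = 6.4
bandwidth_gbps = bus_width_bits * per_pin_gbps   # total gigabits per second
bandwidth_tb_s = bandwidth_gbps / 8 / 1000       # convert to terabytes per second
print(f"~{bandwidth_tb_s:.2f} TB/s per stack")   # ~1.64 TB/s

Other width/rate combinations can reach the same figure; the point is simply how interface width and per-pin speed trade off.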
