HBM roadmaps for Micron, Samsung, and SK hynix: To HBM4 and beyond

Source: Tom's Hardware

Similar Articles

Samsung and SK hynix warn AI-driven memory shortages could last until 2027 and beyond, as HBM demand explodes — customers already reserving supply years ahead, while the wider DRAM market begins to tighten

Samsung and SK hynix are warning that AI-driven memory shortages could last until 2027 and beyond as HBM demand surges. The companies are struggling to keep pace as customers reserve supply years in advance, and the crunch is spilling over into the broader DRAM market. The shortages are fueled by the need for high-speed memory in AI infrastructure, particularly HBM, which is difficult to manufacture. Despite efforts to develop alternative memory technologies, demand for existing memory remains overwhelming, prompting both companies to invest in expanded production capacity. The memory crunch is part of a larger pattern of resource shortages across the tech industry driven by the rapid build-out of AI infrastructure.

Tom's Hardware
SoftBank subsidiary working with Intel to develop radical new ZAM memory is now receiving Japanese gov't subsidies — new memory designed as a lower-power HBM for AI workloads

SAIMEMORY, a SoftBank Corp subsidiary working with Intel, has secured Japanese government subsidies for its ZAM memory project, which aims to develop a power-efficient HBM alternative for AI workloads. ZAM, a candidate next-generation AI memory, falls under NEDO's Post-5G Infrastructure Enhancement R&D Project and combines US government-backed research, Intel's R&D, and SoftBank's AI infrastructure focus. ZAM's design promises higher capacity, greater bandwidth, and 40% lower power consumption than conventional HBM, potentially challenging existing memory solutions. The technology is still at an early stage, with mass production projected for around 2029, backed by a consortium including SoftBank, Fujitsu, and RIKEN, with government support through NEDO.

Tom's Hardware
Huawei unveils new Atlas 350 AI accelerator with 1.56 PFLOPS of FP4 compute and up to 112GB of HBM — claims 2.8x more performance than Nvidia's H20

Huawei has introduced the Atlas 350 AI accelerator, featuring 1.56 PFLOPS of FP4 compute and up to 112GB of HBM, claiming 2.8 times more performance than Nvidia's H20. This new NPU is based on the Ascend 950PR chip and is optimized for FP4 precision, allowing for larger AI models on the same hardware with less memory. Despite challenges due to U.S. sanctions, Huawei's Atlas 350 showcases impressive specs and is priced competitively at around $16,000, aiming to reduce reliance on foreign hardware in China's AI landscape.

Tom's Hardware
WEBINAR: HBM4E Advances Bandwidth Performance for AI Training

Rambus is launching an HBM4E memory controller IP product tailored for AI training, addressing the growing pressure that AI applications and high-end GPU platforms place on memory technologies. The piece highlights the "memory wall" challenge and the need for memory architectures that prioritize raw bandwidth for AI training. HBM is positioned as the memory of choice for high-performance GPUs in AI training servers, offering wider buses, faster transfer rates, and taller stacks. Rambus leverages its signaling expertise to deliver high transfer speeds with its HBM4E controller, providing substantial bandwidth per memory device. The Rambus-hosted webinar covers AI use cases, HBM architecture, and the HBM4E controller's capabilities for those optimizing AI training servers and racks.

SemiWiki
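
To see why wider buses and faster per-pin rates matter, note that peak per-stack HBM bandwidth is simply bus width times per-pin data rate. A minimal sketch below; the specific figures (1024-bit HBM3E at 9.6 Gb/s per pin, 2048-bit HBM4 at 8 Gb/s per pin, per the JEDEC HBM4 standard) are illustrative assumptions, not numbers from the article.

```python
def stack_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in GB/s: (bus width in bits x per-pin rate in Gb/s) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

# Illustrative generation-over-generation comparison (assumed figures):
hbm3e = stack_bandwidth_gbps(1024, 9.6)  # 1024-bit bus, 9.6 Gb/s per pin
hbm4 = stack_bandwidth_gbps(2048, 8.0)   # 2048-bit bus, 8 Gb/s per pin

print(f"HBM3E: {hbm3e:.1f} GB/s per stack")  # 1228.8 GB/s
print(f"HBM4:  {hbm4:.1f} GB/s per stack")   # 2048.0 GB/s
```

The doubling of the bus width from 1024 to 2048 bits is what lets HBM4 raise per-stack bandwidth even at a lower per-pin rate, which is the "wider buses" lever the summary refers to.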
