
Interconnect Innovations In High Bandwidth Memory: Part 2

Source: SemiEngineering

TL;DR (AI Generated)

Interconnect technology in high bandwidth memory (HBM) is evolving along two main paths: microbump technology and hybrid bonding. Both are being adapted to meet the demands of next-generation HBM, which requires higher I/O density to deliver greater bandwidth. Microbumps are being pushed to smaller dimensions, with some memory designers achieving bump sizes below 10µm in high-volume manufacturing. Hybrid bonding goes further, enabling finer interconnect pitches and, with them, higher bandwidth and better performance. Manufacturers are exploring process control solutions to address the challenges each approach presents.
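As a back-of-envelope illustration of why pitch matters for I/O density (a geometric sketch with assumed pitch values, not figures from the article), the connection count per unit area scales with the inverse square of pitch:

```python
# Illustrative only: I/O density vs. interconnect pitch for a square grid.
# The pitch values below are assumptions for comparison, not article data.

def io_density_per_mm2(pitch_um: float) -> float:
    """Number of I/O sites per mm^2 on a square grid at the given pitch."""
    ios_per_mm = 1000.0 / pitch_um  # sites along one 1 mm edge
    return ios_per_mm ** 2

for pitch_um in (25.0, 10.0, 1.0):  # coarse microbump, fine microbump, hybrid bond
    print(f"{pitch_um:>4.0f} um pitch -> {io_density_per_mm2(pitch_um):>11,.0f} I/Os per mm^2")
```

Halving the pitch quadruples the density, which is why the move from sub-10µm microbumps toward hybrid bonding's even finer pitches translates directly into more parallel I/O and thus more bandwidth.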


Similar Articles

Samsung and SK hynix warn AI-driven memory shortages could last until 2027 and beyond, as HBM demand explodes — customers already reserving supply years ahead, while the wider DRAM market begins to tighten

Samsung and SK hynix are warning of AI-driven memory shortages potentially lasting until 2027 and beyond, with HBM demand surging. The companies are struggling to meet demand as customers reserve supply years in advance, impacting the broader DRAM market. The shortages are fueled by the need for high-speed memory in AI infrastructure, particularly HBM, which is challenging to manufacture. Despite efforts to develop alternative memory technologies, the demand for existing memory remains overwhelming, prompting companies to invest in expanding production capacity. The memory crunch is part of a larger trend of resource shortages in the tech industry due to the rapid growth of AI infrastructure.

Source: Tom's Hardware

SoftBank subsidiary working with Intel to develop radical new ZAM memory is now receiving Japanese gov't subsidies — new memory designed as a lower-power HBM for AI workloads

SAIMEMORY, a SoftBank Corp subsidiary working with Intel, has secured Japanese government subsidies for its ZAM memory technology project, aiming to develop a power-efficient HBM alternative for AI workloads. ZAM, a potential next-gen AI memory solution, is part of NEDO’s Post-5G Infrastructure Enhancement R&D Project. The project combines US government-backed research, Intel's R&D, and SoftBank's AI infrastructure focus. ZAM's unique design promises higher capacity, greater bandwidth, and 40% lower power consumption compared to traditional HBM, potentially challenging existing memory solutions in the market. The technology is still in early stages, with mass production projected for around 2029, supported by a consortium including SoftBank, Fujitsu, RIKEN, and government backing through NEDO.

Source: Tom's Hardware

Silicon Photonics Lights The Way To More Efficient Data Centers

Silicon photonics is paving the way for more efficient data centers, promising higher bandwidth density and lower power consumption, with AI workloads as the main driver. However, combining dissimilar materials in photonic interconnects raises challenges around process compatibility, thermal behavior, and mechanical stress. Integrated electro-optical I/O modules are the desired end state, but design and process complexities must first be addressed. The article covers the components involved, including light sources, modulators, waveguides, and photodetectors, as well as the challenges of integrating optical and electronic components for efficient data transmission.
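To make the efficiency trade-offs more concrete, here is a toy optical link-budget calculation; every figure in it (laser power, per-component losses, receiver sensitivity) is an assumed placeholder, not data from the article:

```python
# Toy link budget for a chip-to-chip silicon photonic link.
# All values are illustrative assumptions, not figures from the article.

laser_power_dbm = 10.0              # assumed laser output power
losses_db = [
    ("coupling into chip",   2.0),
    ("modulator insertion",  4.0),
    ("waveguide routing",    1.5),
    ("coupling out of chip", 2.0),
]
receiver_sensitivity_dbm = -12.0    # assumed photodetector sensitivity

received_dbm = laser_power_dbm - sum(loss for _, loss in losses_db)
margin_db = received_dbm - receiver_sensitivity_dbm
print(f"received power: {received_dbm:.1f} dBm, link margin: {margin_db:.1f} dB")
```

Every lossy interface eats into the margin, which is why process compatibility and integration quality matter so much: excess coupling or insertion loss forces a stronger laser and erodes the power advantage the technology is meant to deliver.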

Source: SemiEngineering

Huawei unveils new Atlas 350 AI accelerator with 1.56 PFLOPS of FP4 compute and up to 112GB of HBM — claims 2.8x more performance than Nvidia's H20

Huawei has introduced the Atlas 350 AI accelerator, featuring 1.56 PFLOPS of FP4 compute and up to 112GB of HBM, and claims 2.8 times the performance of Nvidia's H20. The NPU is based on the Ascend 950PR chip and is optimized for FP4 precision, which lets larger AI models fit in the same amount of memory. Despite the constraints of U.S. sanctions, the Atlas 350 is competitively priced at around $16,000 and aims to reduce China's reliance on foreign AI hardware.
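A quick sanity check on the FP4 memory claim (weights-only arithmetic; real deployments also need KV cache, activations, and overhead, so these are upper bounds, not vendor figures):

```python
# How many model parameters fit in 112 GB of HBM at different precisions?
# Weights-only estimate; KV cache, activations, and overhead are ignored.

HBM_BYTES = 112e9  # Atlas 350's stated capacity, assuming decimal gigabytes

for name, bits in (("FP16", 16), ("FP8", 8), ("FP4", 4)):
    bytes_per_param = bits / 8
    max_params = HBM_BYTES / bytes_per_param
    print(f"{name}: ~{max_params / 1e9:.0f}B parameters in 112 GB")
```

At 4 bits per weight, roughly four times as many parameters fit as at FP16, which is the sense in which FP4 optimization lets larger models run on the same hardware.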

Source: Tom's Hardware
