
On-Package Memory With UCIe To Improve Bandwidth Density And Power Efficiency (AMD, Intel Corp.)

Source: SemiEngineering

TL;DR (AI Generated)

Researchers from Intel Corporation and AMD have published a technical paper proposing on-package memory with Universal Chiplet Interconnect Express (UCIe) to address the memory wall in emerging computing applications such as artificial intelligence (AI). By enhancing UCIe with memory semantics, they aim to provide power-efficient bandwidth and cost-effective on-package memory. They describe two approaches: reusing existing LPDDR6 and HBM memory through a logic die that connects to the SoC over UCIe, and having the DRAM die support UCIe natively. Both promise significantly higher bandwidth density, lower latency, reduced power consumption, and lower cost than current HBM4 and LPDDR on-package memory solutions.
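
The headline metrics here, bandwidth density (throughput per mm of die shoreline) and energy per bit, reduce to simple ratios. Below is a minimal back-of-envelope sketch in Python; the lane count, data rate, shoreline, and power figures are illustrative placeholders, not numbers from the paper:

```python
# Back-of-envelope metrics for an on-package memory link. All numbers
# below are illustrative placeholders, NOT figures from the Intel/AMD
# paper; substitute real values from the publication.

def bandwidth_density(lanes: int, gbps_per_lane: float,
                      shoreline_mm: float) -> float:
    """Aggregate link bandwidth per mm of die edge, in Gb/s/mm."""
    return lanes * gbps_per_lane / shoreline_mm

def energy_per_bit_pj(link_power_w: float, bandwidth_gbps: float) -> float:
    """Interface power divided by throughput, in pJ/bit."""
    return link_power_w / bandwidth_gbps * 1e3  # W/(Gb/s) = nJ/bit

# Hypothetical UCIe module: 64 lanes at 32 GT/s over 0.5 mm of
# shoreline, burning 1 W (placeholder values).
bw = 64 * 32.0  # 2048 Gb/s aggregate
print(f"{bandwidth_density(64, 32.0, 0.5):.0f} Gb/s/mm")  # 4096
print(f"{energy_per_bit_pj(1.0, bw):.2f} pJ/bit")         # ~0.49
```

Running the same two ratios with HBM4 or LPDDR attachment parameters is how a claim like "higher bandwidth density at lower power" gets quantified.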


Similar Articles

Samsung and SK hynix warn AI-driven memory shortages could last until 2027 and beyond, as HBM demand explodes — customers already reserving supply years ahead, while the wider DRAM market begins to tighten

Samsung and SK hynix are warning of AI-driven memory shortages potentially lasting until 2027 and beyond, with HBM demand surging. The companies are struggling to meet demand as customers reserve supply years in advance, impacting the broader DRAM market. The shortages are fueled by the need for high-speed memory in AI infrastructure, particularly HBM, which is challenging to manufacture. Despite efforts to develop alternative memory technologies, the demand for existing memory remains overwhelming, prompting companies to invest in expanding production capacity. The memory crunch is part of a larger trend of resource shortages in the tech industry due to the rapid growth of AI infrastructure.

Tom's Hardware

SoftBank subsidiary working with Intel to develop radical new ZAM memory is now receiving Japanese gov't subsidies — new memory designed as a lower-power HBM for AI workloads

SAIMEMORY, a SoftBank Corp subsidiary working with Intel, has secured Japanese government subsidies for its ZAM memory technology project, aiming to develop a power-efficient HBM alternative for AI workloads. ZAM, a potential next-gen AI memory solution, is part of NEDO’s Post-5G Infrastructure Enhancement R&D Project. The project combines US government-backed research, Intel's R&D, and SoftBank's AI infrastructure focus. ZAM's unique design promises higher capacity, greater bandwidth, and 40% lower power consumption compared to traditional HBM, potentially challenging existing memory solutions in the market. The technology is still in early stages, with mass production projected for around 2029, supported by a consortium including SoftBank, Fujitsu, RIKEN, and government backing through NEDO.

Tom's Hardware

Chiplet Standards Aim For Plug-n-Play

Chiplet standards are crucial for creating a marketplace where chiplets can be easily interchanged like LEGOs. Various standards are being developed to ensure interoperability and physical composability of chiplets, including die-to-die interconnect standards like Bunch of Wires (BoW) and Universal Chiplet Interconnect Express (UCIe). These standards cover system architecture, security, power delivery, data semantics, physical placement, testing, and more. Organizations like the Open Compute Project (OCP) are leading efforts to standardize chiplet-related aspects, such as packaging descriptions and system architectures. The goal is to pave the way for a plug-and-play chiplet marketplace, although challenges related to practical and economic factors still exist.

SemiEngineering
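
A plug-and-play chiplet marketplace implies machine-readable part descriptions that tools can check for compatibility before integration. Below is a minimal sketch of what such a descriptor might capture; the schema and field names are invented for illustration and are not the actual OCP/CDX format or any published standard:

```python
# Hypothetical chiplet descriptor: the kind of metadata a plug-and-play
# marketplace would need. NOT the actual OCP/CDX schema; all field
# names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DieToDieInterface:
    standard: str        # e.g. "UCIe" or "BoW"
    lanes: int
    gt_per_s: float      # per-lane signaling rate

@dataclass
class ChipletDescriptor:
    vendor: str
    part: str
    process_node_nm: int
    bump_pitch_um: float  # physical composability constraint
    interfaces: list[DieToDieInterface] = field(default_factory=list)

    def compatible_with(self, other: "ChipletDescriptor") -> bool:
        """Crude interoperability check: do the parts share a D2D standard?"""
        return bool({i.standard for i in self.interfaces}
                    & {i.standard for i in other.interfaces})
```

A real standard must also describe power delivery, security, test access, and thermal constraints, which is why interoperability on paper does not yet mean plug-and-play in practice.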

Huawei unveils new Atlas 350 AI accelerator with 1.56 PFLOPS of FP4 compute and up to 112GB of HBM — claims 2.8x more performance than Nvidia's H20

Huawei has introduced the Atlas 350 AI accelerator, featuring 1.56 PFLOPS of FP4 compute and up to 112GB of HBM, and claims 2.8 times the performance of Nvidia's H20. The NPU is based on the Ascend 950PR chip and is optimized for FP4 precision, allowing larger AI models to fit in the same memory footprint. Despite the constraints of U.S. sanctions, the Atlas 350 is priced competitively at around $16,000, part of Huawei's push to reduce reliance on foreign hardware in China's AI landscape.

Tom's Hardware
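
The FP4 advantage is easiest to see as capacity arithmetic: at 4 bits per weight, a parameter occupies half the memory it would at FP8. A rough sketch of what the quoted 112GB of HBM could hold at each precision (weights only; activations, KV cache, and overhead are ignored, so real capacity is lower):

```python
# Capacity arithmetic for the quoted 112 GB of HBM: how many model
# parameters fit at each precision, counting weights only (activations,
# the KV cache, and framework overhead all reduce real capacity).
HBM_BYTES = 112e9  # quoted Atlas 350 capacity

for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    params = HBM_BYTES / (bits / 8)  # bytes per parameter
    print(f"{name}: ~{params / 1e9:.0f}B parameters")
# FP16: ~56B, FP8: ~112B, FP4: ~224B -- FP4 doubles the model size
# that fits relative to FP8, which is the point of the optimization.
```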
