
New 3D-stacked memory tech seeks to dethrone HBM for AI inference — d-Matrix claims 3DIMC will be 10x faster and 10x more efficient

Source: Tom's Hardware

TL;DR (AI Generated)

AI chip startup d-Matrix is introducing 3DIMC, a 3D-stacked memory technology pitched as a faster and more efficient alternative to HBM for AI inference. The company's Pavehawk 3DIMC silicon performs computations within the memory stack itself, cutting data movement to improve latency, bandwidth, and efficiency. d-Matrix is already looking ahead to its next-generation Raptor design, claiming it will outpace HBM by 10x in inference tasks while using 90% less power. The move toward specialized hardware for specific workloads such as AI inference is gaining traction across the industry, and this technology could offer a cost-effective alternative to HBM, which is produced by only a handful of companies and is expected to see rising prices.
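The efficiency claim rests on a simple observation: in inference, moving weights between memory and the processor dominates the traffic. A minimal counting model (our own illustration with hypothetical names, not d-Matrix's published architecture or figures) shows why keeping weights stationary inside the memory stack pays off:

```python
# Back-of-envelope model: bytes crossing the memory<->compute boundary
# for a matrix-vector multiply y = W @ x, comparing a conventional design
# (weights streamed to the processor) against compute-in-memory (weights
# never leave the stack, only activations cross). Illustrative only.

def traffic_bytes(rows, cols, dtype_bytes=2, in_memory=False):
    """Bytes moved across the memory interface for y = W @ x."""
    weights = rows * cols * dtype_bytes   # the weight matrix W
    vector_in = cols * dtype_bytes        # input activations x
    vector_out = rows * dtype_bytes       # output activations y
    if in_memory:
        # Multiply-accumulate happens in the stack; only x and y cross.
        return vector_in + vector_out
    return weights + vector_in + vector_out

# A 4096x4096 FP16 layer, a size typical of transformer inference:
conventional = traffic_bytes(4096, 4096)
in_memory = traffic_bytes(4096, 4096, in_memory=True)
print(conventional / in_memory)  # ~2049x less interface traffic
```

Because the weight matrix grows quadratically while activations grow linearly, the savings ratio scales with layer width, which is why inference-focused designs chase compute-in-memory despite its complexity.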


Similar Articles

Samsung and SK hynix warn AI-driven memory shortages could last until 2027 and beyond, as HBM demand explodes — customers already reserving supply years ahead, while the wider DRAM market begins to tighten

Samsung and SK hynix are warning of AI-driven memory shortages potentially lasting until 2027 and beyond, with HBM demand surging. The companies are struggling to meet demand as customers reserve supply years in advance, impacting the broader DRAM market. The shortages are fueled by the need for high-speed memory in AI infrastructure, particularly HBM, which is challenging to manufacture. Despite efforts to develop alternative memory technologies, the demand for existing memory remains overwhelming, prompting companies to invest in expanding production capacity. The memory crunch is part of a larger trend of resource shortages in the tech industry due to the rapid growth of AI infrastructure.

Source: Tom's Hardware
CEO Interview with Xianxin Guo of Lumai

Xianxin Guo, CEO of Lumai, discusses the company's optical computing technology for AI and data center acceleration, aiming to address power efficiency and scalability limitations of traditional silicon-based approaches. Lumai's hybrid optical-electronic design enhances compute efficiency by leveraging light for key operations, reducing energy consumption and breaking through AI system bottlenecks. The technology is well-suited for high-throughput AI inference workloads in data centers, offering a more cost-effective and scalable solution. By focusing on optical compute, Lumai differentiates itself from competitors and aims to redefine AI compute efficiency for long-term scalability and performance gains. The company engages with customers through collaborative discussions and partnership-driven approaches to integrate optical computing seamlessly into existing AI infrastructure.

Source: SemiWiki
SoftBank subsidiary working with Intel to develop radical new ZAM memory is now receiving Japanese gov't subsidies — new memory designed as a lower-power HBM for AI workloads

SAIMEMORY, a SoftBank Corp subsidiary working with Intel, has secured Japanese government subsidies for its ZAM memory technology, which aims to deliver a power-efficient HBM alternative for AI workloads. ZAM, a potential next-generation AI memory solution, is being developed under NEDO's Post-5G Infrastructure Enhancement R&D Project, combining Japanese government backing, Intel's R&D, and SoftBank's AI infrastructure focus. Its design promises higher capacity, greater bandwidth, and roughly 40% lower power consumption than traditional HBM, potentially challenging incumbent memory solutions. The technology is still at an early stage, with mass production projected for around 2029, supported by a consortium that includes SoftBank, Fujitsu, and RIKEN.

Source: Tom's Hardware
Intel launches Wildcat Lake as Core Series 3 for value laptops and edge systems — six consumer SKUs built on 18A promise 'all-day' battery life

Intel has unveiled its Core Series 3 mobile processors, codenamed Wildcat Lake: six consumer SKUs and one edge-only variant, all built on the 18A process node and promising 'all-day' battery life. The lineup spans configurations of P-cores, E-cores, NPUs, and Xe3 integrated GPUs, offering up to 40 platform TOPS for hybrid AI workloads. Memory support tops out at LPDDR5x-7467 or DDR5-6400 in a single-channel setup, with performance improvements over previous generations. Initial laptop designs from Acer, HP, MSI, and others are set to launch, targeting students, small businesses, and edge deployments.
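For context on what those memory speeds mean, peak bandwidth can be estimated from the transfer rate. The sketch below assumes a 64-bit (8-byte) channel, the usual width quoted for a single DDR5/LPDDR5x client channel; Wildcat Lake's actual bus width is an assumption here, not a confirmed specification:

```python
# Rough peak-bandwidth arithmetic for the listed memory options.
# Assumes an 8-byte (64-bit) channel; actual platform width may differ.

def peak_bandwidth_gbs(transfers_mt_per_s, bus_bytes=8):
    """Peak GB/s = MT/s * bytes per transfer / 1000."""
    return transfers_mt_per_s * bus_bytes / 1000

print(peak_bandwidth_gbs(7467))  # LPDDR5x-7467 -> 59.736 GB/s
print(peak_bandwidth_gbs(6400))  # DDR5-6400    -> 51.2 GB/s
```

Under that assumption, the single-channel ceiling of roughly 50-60 GB/s underscores that these are value parts: flagship dual-channel laptop platforms would land around twice that.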

Source: Tom's Hardware
