Technology

SK hynix finishes HBM4 development, ready for mass production: 10Gbps per pin, above 8Gbps spec

Source: TweakTown
TL;DR (AI Generated)

SK hynix has completed development of its HBM4 memory, reaching per-pin speeds of 10Gbps and exceeding the 8Gbps specification. The company says it is now ready to begin mass production of the high-performance memory.


Similar Articles

WEBINAR: HBM4E Advances Bandwidth Performance for AI Training


Rambus is launching an HBM4E memory controller IP product tailored for AI training, addressing the growing pressure that AI applications and high-end GPU platforms place on memory technologies. The article highlights the "memory wall" challenge, arguing that memory architectures for AI training must prioritize raw bandwidth. HBM is positioned as the answer for high-performance GPUs in AI training servers, offering wider buses, faster transfer rates, and taller stacks. Rambus leverages its signaling expertise to deliver high transfer speeds with its HBM4E controller, providing substantial bandwidth per memory device. The Rambus-hosted webinar covers AI use cases, HBM architecture, and the HBM4E controller's capabilities for those optimizing AI training servers and racks.

SemiWiki
Nvidia demonstrates Rubin Ultra tray, the world's first AI GPU with 1TB of HBM4E memory — new chips will slot into Kyber racks


Nvidia showcased its upcoming Rubin Ultra tray, the first AI GPU with 1TB of HBM4E memory, set to launch in 2027. This new GPU platform will utilize the Kyber rack design, integrating 144 GPU packages for enhanced performance compared to current models. The Rubin Ultra package features four compute chiplets and a new packaging technology, potentially stacked for efficiency. The Kyber rack will also introduce liquid cooling and a 7th Generation NVLink switch to accommodate more GPUs, promising significant performance improvements.

Tom's Hardware
Micron enters high-volume production of HBM4 for Nvidia Vera Rubin - 2.3x bandwidth improvement and 20% boost in power efficiency


Micron has begun high-volume production of its 36GB 12-Hi HBM4 memory for Nvidia's Vera Rubin GPU platform, delivering a 2.3x bandwidth increase and a 20% improvement in power efficiency over its previous HBM3E parts. The company also announced the industry's first PCIe 6.0 data center SSD and a new SOCAMM2 module, all aimed at the Vera Rubin ecosystem. Micron has shipped samples of a 48GB 16-Hi HBM4 stack to customers, a 33% capacity increase per HBM placement over the 36GB 12-Hi product. Additionally, the 9650 SSD, targeting AI workloads, has entered mass production, offering significant performance gains over PCIe 5.0 SSDs.

Tom's Hardware
HBM4E Raises The Bar For AI Memory Bandwidth


The article discusses how memory bandwidth has become a critical factor in AI innovation, with HBM4E introduced to address this bottleneck. As AI models grow larger, feeding data into accelerators quickly becomes crucial. HBM4E doubles the bandwidth of its predecessor, HBM4, while maintaining power efficiency and low latency. The evolution of HBM has combined wider interfaces with rising data rates to drive large gains in memory bandwidth. Rambus has introduced an HBM4E Controller Core IP to exploit these capabilities, targeting reliability, robustness, and flexibility in AI memory architectures.

SemiEngineering
