
Samsung earns Nvidia certification for its HBM3E memory — stock jumps 5% as company finally catches up to SK hynix and Micron in HBM3E production

Source: Tom's Hardware

TL;DR (AI Generated)

Samsung has received Nvidia certification for its 12-layer HBM3E chips, leading to a 5% stock price increase as it catches up to SK hynix and Micron in HBM3E production. Despite delays, Samsung's HBM3E chips are expected to be used in Nvidia DGX B300 cards soon. The industry is already looking ahead to HBM4, which promises higher capacity and reduced power consumption, with Samsung aiming for volume production by the first half of 2026. Investors are optimistic about Samsung's progress in the memory chip market.


Similar Articles

Micron teams up with TSMC to deliver HBM4E, targeted for 2027 — collaboration could enable further customization

Micron has announced a collaboration with TSMC to produce the base logic die for its upcoming HBM4E memory, set for production in 2027. This partnership will allow for customization of memory solutions for AI workloads, positioning Micron at the forefront of AI system design. HBM4E will offer higher data rates and customized options, with Micron focusing on efficiency and flexibility in its design approach. The move aligns with the industry trend towards customizable memory solutions, particularly crucial for next-generation data center GPUs from Nvidia and AMD. Micron's partnership with TSMC aims to make HBM4E a standard memory tier for AI infrastructure in the coming years.

Tom's Hardware

Huawei reveals long-range Ascend chip roadmap — three-year plan includes ambitious provision for in-house HBM with up to 1.6 TB/s bandwidth

Huawei has unveiled its long-term Ascend chip strategy, with plans for four new chips over the next three years, including the Ascend 950PR and 950DT in early 2026. The company aims to incorporate in-house HBM technology with up to 1.6 TB/s bandwidth in its upcoming chips, challenging competitors like SK hynix and Samsung. Despite constraints from U.S. sanctions limiting access to advanced nodes and packaging lines, Huawei is pushing forward with ambitious plans for AI compute clusters like the Atlas 950 and 960 systems. To compete with Nvidia, Huawei will need a comprehensive platform that matches it in training performance, efficiency, and model throughput.

Tom's Hardware

Chip Industry Technical Paper Roundup: Sept 16

Semiconductor Engineering has added new technical papers to its library, including topics like an analog optical computer for AI inference, a robust memristor for neuromorphic computing, and vulnerabilities in hardware mitigations like Spectre v2. Other papers cover semiconductor packaging challenges, automotive digital twins, thermal vulnerabilities in high-bandwidth memory architectures, and GPU memory wall solutions. These papers come from research organizations like Microsoft, Purdue University, ETH Zurich, NIST, and more. For more semiconductor research papers, visit Semiconductor Engineering's website.

SemiEngineering

SK hynix finishes HBM4 development, ready for mass production: 10Gbps per pin, above 8Gbps spec

SK hynix has completed development of its HBM4 memory, reaching 10 Gbps per pin and exceeding the 8 Gbps baseline specification. The company says it is now ready for mass production.
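To put the per-pin figures in perspective, per-stack bandwidth scales directly with pin speed. A minimal sketch of that arithmetic, assuming the JEDEC HBM4 interface width of 2048 bits per stack (the interface width is an assumption from the published standard, not stated in the article):

```python
def hbm_stack_bandwidth_tbps(pin_speed_gbps: float, interface_bits: int = 2048) -> float:
    """Per-stack bandwidth in TB/s: pins * Gbps per pin / 8 bits per byte / 1000 GB per TB."""
    return pin_speed_gbps * interface_bits / 8 / 1000

print(hbm_stack_bandwidth_tbps(8.0))   # 8 Gbps baseline spec -> 2.048 TB/s per stack
print(hbm_stack_bandwidth_tbps(10.0))  # SK hynix's 10 Gbps -> 2.56 TB/s per stack
```

Under these assumptions, the jump from 8 to 10 Gbps per pin adds roughly half a terabyte per second of bandwidth per stack.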

TweakTown
