Technology

Huawei reveals long-range Ascend chip roadmap — three-year plan includes ambitious provision for in-house HBM with up to 1.6 TB/s bandwidth

Source: Tom's Hardware

TL;DR (AI Generated)

Huawei has unveiled its long-term Ascend chip strategy: four new chips over the next three years, starting with the Ascend 950PR and 950DT in early 2026. The company plans to incorporate in-house HBM with up to 1.6 TB/s of bandwidth in its upcoming chips, challenging incumbent suppliers SK hynix and Samsung. Despite U.S. sanctions limiting its access to advanced nodes and packaging lines, Huawei is pushing forward with ambitious plans for AI compute clusters such as the Atlas 950 and 960 systems. To compete with Nvidia, however, Huawei will need a comprehensive platform that matches Nvidia's in training performance, efficiency, and model throughput.
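For context on the headline 1.6 TB/s figure, per-stack HBM bandwidth is simply bus width times per-pin data rate. The sketch below uses HBM4-class numbers (2048-bit interface, 6.4 Gbit/s per pin) as illustrative assumptions; Huawei has not confirmed the configuration of its in-house memory.

```python
def hbm_bandwidth_tb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Per-stack bandwidth in TB/s: bus width x per-pin rate,
    converted from gigabits/s to terabytes/s."""
    return bus_width_bits * gbps_per_pin / 8 / 1000

# Assumed HBM4-class figures (not confirmed Huawei specs): a 2048-bit
# interface at 6.4 Gbit/s per pin lands near the cited 1.6 TB/s mark.
print(round(hbm_bandwidth_tb_s(2048, 6.4), 2))  # 1.64
```

The same arithmetic shows why the figure is plausible via more than one route: a 1024-bit HBM3E-style interface would need roughly 12.8 Gbit/s per pin to reach the same total.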


Similar Articles

Huawei-powered mini-PC debuts with Huawei AI chip and 192GB of memory — Orange Pi AI Studio Pro wields Ascend 310 chip with 352 TOPS of AI performance, but relies on a single USB-C port


Orange Pi has launched the Orange Pi AI Studio, a mini-PC powered by a Huawei Ascend 310 AI chip delivering 176 TOPS of AI performance. The base model offers 48GB or 96GB of memory, while the Pro model combines two units for 352 TOPS and 96GB or 192GB of memory. The device relies on a single USB-C port for connectivity, so a hub will likely be needed for additional peripherals. Priced at over $2,350, the mini-PCs are currently available in China and on AliExpress. The product supports Ubuntu and Linux, with Windows support expected in the future.

Tom's Hardware
Micron teams up with TSMC to deliver HBM4E, targeted for 2027 — collaboration could enable further customization


Micron has announced a collaboration with TSMC to produce the base logic die for its upcoming HBM4E memory, set for production in 2027. This partnership will allow for customization of memory solutions for AI workloads, positioning Micron at the forefront of AI system design. HBM4E will offer higher data rates and customized options, with Micron focusing on efficiency and flexibility in its design approach. The move aligns with the industry trend towards customizable memory solutions, particularly crucial for next-generation data center GPUs from Nvidia and AMD. Micron's partnership with TSMC aims to make HBM4E a standard memory tier for AI infrastructure in the coming years.

Tom's Hardware
Samsung earns Nvidia certification for its HBM3E memory — stock jumps 5% as company finally catches up to SK hynix and Micron in HBM3E production


Samsung has received Nvidia certification for its 12-layer HBM3E chips, leading to a 5% stock price increase as it catches up to SK hynix and Micron in HBM3E production. Despite delays, Samsung's HBM3E chips are expected to be used in Nvidia DGX B300 cards soon. The industry is already looking ahead to HBM4, which promises higher capacity and reduced power consumption, with Samsung aiming for volume production by the first half of 2026. Investors are optimistic about Samsung's progress in the memory chip market.

Tom's Hardware
$37 billion 'Stargate of China' project takes shape — country is converting farmland into data centers to centralize AI compute power


China is embarking on a $37 billion project to convert farmland in Wuhu into data centers, aiming to centralize AI compute power and compete with the U.S. The project, dubbed the "Stargate of China," involves consolidating dispersed data centers into a unified network using Huawei's UB-Mesh interconnect technology. The data centers will serve major cities such as Shanghai and Hangzhou, powering AI applications for residents. China is strategically locating new data centers near large populations and repurposing existing facilities to focus on training LLMs, while also planning to link underutilized data centers for redundancy and surplus compute sales. The move is part of China's push to rapidly develop its AI capabilities and reduce reliance on foreign technology amid its chip-dominance battle with the U.S.

Tom's Hardware
