
Articles tagged with "AI, Accelerators, Hardware"

Microsoft's Maia 200 AI accelerator has 216GB of memory, outperforms Amazon and Google chips

Microsoft's Maia 200 AI accelerator packs 216GB of memory and, by Microsoft's account, outperforms the custom chips from Amazon and Google. The large memory capacity is aimed at demanding AI workloads, and the chip marks Microsoft's bid to compete with the other hyperscalers in AI hardware.

TweakTown
Microsoft introduces newest in-house AI chip — Maia 200 is faster than other bespoke Nvidia competitors, built on TSMC 3nm with 216GB of HBM3e

Microsoft has unveiled its latest AI accelerator, the Microsoft Azure Maia 200, which it says outperforms custom offerings from competitors like Amazon and Google. The Maia 200 is touted as Microsoft's most efficient inference system, offering 30% more performance per dollar than its predecessor, the Maia 100. Built on TSMC's 3nm process node, the chip packs 140 billion transistors and 216 GB of HBM3e memory with 7 TB/s of bandwidth. The Maia 200's raw compute, efficiency, and memory hierarchy position it as a significant player in the AI chip market. Microsoft has already deployed the Maia 200 in its US Central Azure data center, with further deployments planned; the company also stresses its commitment to environmental sustainability and community welfare amid the AI boom.
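The stated specs invite a quick back-of-the-envelope check on what 7 TB/s of HBM3e bandwidth implies for memory-bound LLM inference. This sketch uses only the capacity and bandwidth figures from the article; the model size is a hypothetical illustration, not a Maia 200 benchmark.

```python
# Back-of-the-envelope: per-token latency bounds implied by the article's
# stated Maia 200 memory specs (216 GB HBM3e, 7 TB/s bandwidth).
HBM_CAPACITY_GB = 216      # stated HBM3e capacity
HBM_BANDWIDTH_TBS = 7.0    # stated memory bandwidth

# Time to stream the entire 216 GB of HBM once -- a rough lower bound on
# per-token latency for a model that fills all of memory.
full_read_ms = HBM_CAPACITY_GB / (HBM_BANDWIDTH_TBS * 1000) * 1000
print(f"Full-HBM read: {full_read_ms:.1f} ms")  # ~30.9 ms

# For a hypothetical 70B-parameter model in 8-bit precision (~70 GB of
# weights), decoding one token must read every weight once, so:
weights_gb = 70
token_ms = weights_gb / (HBM_BANDWIDTH_TBS * 1000) * 1000
tokens_per_s = 1000 / token_ms
print(f"~{token_ms:.0f} ms/token, ~{tokens_per_s:.0f} tokens/s ceiling")
```

These are bandwidth ceilings at batch size 1, not measured throughput; real inference batches requests and pays attention-cache traffic on top of weight reads.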

Tom's Hardware
OpenAI and Broadcom to co-develop 10GW of custom AI chips in yet another blockbuster AI partnership — deployments start in 2026

OpenAI and Broadcom have announced a partnership to co-develop and deploy 10 gigawatts of custom AI accelerators and rack systems, with deployment starting in 2026 and targeted completion by 2029. This collaboration signifies OpenAI's move away from Nvidia GPUs towards in-house accelerators paired with Broadcom's networking and hardware IP. The companies have been working together for over 18 months, with the new systems utilizing Ethernet-based networking for scalability and vendor neutrality. OpenAI's hardware commitments now total around 26 gigawatts, including partnerships with Nvidia and AMD. The deal with Broadcom offers OpenAI expertise in ASIC design and supply chain maturity, while also contributing to a trend of major AI customers exploring custom accelerators.
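Using only the figures in the summary above (10 GW with Broadcom out of roughly 26 GW in total commitments, deployed 2026 through 2029), the implied split and build-out pace can be tallied; the inclusive four-year window is an assumption.

```python
# Tally of OpenAI's announced compute commitments, using only figures
# stated in the article.
TOTAL_GW = 26        # approximate total hardware commitments
BROADCOM_GW = 10     # the Broadcom co-development deal

other_gw = TOTAL_GW - BROADCOM_GW  # remainder across Nvidia, AMD, etc.
print(f"Non-Broadcom commitments: ~{other_gw} GW")  # ~16 GW

# Deployments start in 2026 with targeted completion by 2029; treating
# that as a four-year inclusive window (an assumption) gives:
years = 2029 - 2026 + 1
pace = BROADCOM_GW / years
print(f"Implied build-out pace: ~{pace:.1f} GW/year")  # ~2.5 GW/year
```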

Tom's Hardware
