Microsoft introduces its newest in-house AI chip — Maia 200 is faster than rival bespoke Nvidia challengers, built on TSMC 3nm with 216GB of HBM3e
TL;DR
AI-generated summary: Microsoft has unveiled its latest AI accelerator, the Azure Maia 200, which it claims outperforms custom silicon from rivals such as Amazon and Google. The Maia 200 is billed as Microsoft's most efficient inference system, delivering 30% more performance per dollar than its predecessor, the Maia 100. Built on TSMC's 3nm process node, the chip packs 140 billion transistors and 216 GB of HBM3e memory with 7 TB/s of bandwidth, pairing strong raw compute with an efficient memory hierarchy. Microsoft has already deployed the Maia 200 in its US Central Azure data center, with further rollouts planned, and says the expansion will respect environmental sustainability and community welfare amid the AI boom.
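To put the memory figures in perspective, here is a rough back-of-envelope sketch (not from the article) of what 216 GB of HBM3e at 7 TB/s implies for memory-bound inference: the time to stream the chip's entire memory once at peak bandwidth, which roughly bounds per-token latency when a model's weights fill the HBM. It assumes decimal GB/TB units and sustained peak bandwidth, both idealized.

```python
# Back-of-envelope estimate using the Maia 200 figures quoted above.
HBM_CAPACITY_GB = 216      # HBM3e capacity per chip (from the article)
HBM_BANDWIDTH_TBS = 7      # peak memory bandwidth in TB/s (from the article)

# Time for one full sweep of HBM at peak bandwidth, in milliseconds.
# Assumes decimal units: 1 GB = 1e9 bytes, 1 TB/s = 1e12 bytes/s.
full_sweep_ms = HBM_CAPACITY_GB * 1e9 / (HBM_BANDWIDTH_TBS * 1e12) * 1e3
print(f"{full_sweep_ms:.1f} ms")  # prints "30.9 ms"
```

In a memory-bound decode phase, each generated token requires reading the model weights once, so roughly 31 ms per sweep corresponds to an upper bound of about 32 tokens/s for a model that fills the HBM, before batching or any compute overlap.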