
Meta's new MTIA lineup joins hyperscalers' unified push for dedicated inferencing chips — companies diversify their AI silicon to reduce sole reliance on Nvidia

Source: Tom's Hardware

TL;DR (AI generated)

Meta has unveiled a lineup of custom chips — the Meta Training and Inference Accelerator (MTIA) 300, 400, 450, and 500 — optimized for AI inference workloads, with a focus on high-bandwidth memory (HBM). The move aligns Meta with Google, AWS, and Microsoft, all of which are developing dedicated inferencing chips to reduce reliance on Nvidia. The MTIA parts are designed to deliver substantial gains in HBM bandwidth and compute FLOPS across successive generations, with the 450 and 500 models slated for mass deployment in 2027. The chips use a modular chiplet architecture for cross-generation compatibility and rapid development cycles, challenging Nvidia's dominance in AI accelerators. The shift toward specialized inferencing silicon reflects a broader trend among tech giants to diversify their AI chip portfolios in pursuit of more efficient, cost-effective performance.