
Google deploys new Axion CPUs and seventh-gen Ironwood TPU — training and inferencing pods beat Nvidia GB300 and shape 'AI Hypercomputer' model

Source: Tom's Hardware

TL;DR (AI Generated)

Google Cloud has unveiled new AI-focused instances powered by its Axion CPUs and seventh-generation Ironwood TPUs, designed for training and low-latency inference of large-scale AI models. Ironwood pods scale up to 9,216 AI accelerators and deliver 42.5 FP8 ExaFLOPS for training and inference, outpacing Nvidia's GB300-based systems. These pods can be clustered into Google's "AI Hypercomputer," which integrates compute, storage, and networking for greater efficiency. On the CPU side, Google's Axion processors offer up to 50% higher performance and 60% better energy efficiency than comparable modern x86 CPUs, and are available in several instance configurations (C4A, N4A, and C4A Metal) for different workloads.