
Examining Nvidia's 60 exaflop Vera Rubin POD — how seven chips underpin company's 40 rack AI factory supercomputer

Source: Tom's Hardware

TL;DR (AI-generated)

At GTC 2026, Nvidia unveiled the seven chips that make up the Vera Rubin platform, the basis of its AI factory supercomputer set to ship later this year. The platform spans GPUs, CPUs, inference accelerators, networking ASICs, a data processing unit, and an Ethernet switch, all operating as a single system across several rack types. Key components include the Rubin GPU for training and inference, the Vera CPU for orchestration, and the Groq 3 LPU for low-latency inference decoding. Networking is handled by NVLink 6, ConnectX-9, and Spectrum-6 switches, while the BlueField-4 DPU offloads networking and storage tasks. The full Vera Rubin POD comprises five rack types delivering a combined 60 exaflops of compute, with products expected to launch in the second half of 2026.