
Nvidia CEO Jensen Huang explains why SRAM isn't here to eat HBM's lunch — high bandwidth memory offers more flexibility in AI deployments across a range of workloads

Source: Tom's Hardware

TL;DR (AI generated)

Nvidia CEO Jensen Huang explains that while SRAM-heavy accelerators and cheaper memory tiers are gaining traction, high bandwidth memory (HBM) offers more flexibility for AI deployments because AI workloads are constantly evolving. Huang emphasizes that although SRAM can offer speed advantages in certain scenarios, it lacks the capacity to handle modern AI models at scale. He argues that the unpredictability and variability of AI workloads demand a flexible hardware approach, which is why Nvidia continues to prioritize HBM despite its higher cost. Huang's comments underscore Nvidia's commitment to adaptability in the face of a changing AI landscape, suggesting that specialized hardware may be less effective in shared data centers, where workload diversity is key.