Articles tagged with "AI, GPUs, Nvidia"

Nvidia delivers first Vera Rubin AI GPU samples to customers — 88-core Vera CPU paired with Rubin GPUs with 288 GB of HBM4 memory apiece

Nvidia has begun shipping samples of its Vera Rubin platform, designed for next-generation AI data centers, to select customers. The platform pairs an 88-core Vera CPU with Rubin GPUs carrying 288 GB of HBM4 memory each, and partners are expected to prepare for deployment in the second half of 2026 or early 2027. Vera Rubin also includes other components, such as the Rubin CPX GPU, the NVLink 6.0 switch ASIC, and the BlueField-4 DPU, aimed at enhancing AI data center capabilities. Nvidia's partners will receive different parts of the platform so they can ready their software and hardware stacks for integration.

Tom's Hardware
TikTok owner ByteDance to reportedly purchase $14 billion worth of Nvidia AI GPUs in 2026 — Company betting on Beijing's approval following Trump admin's ease on AI export controls

ByteDance, the company behind TikTok, reportedly plans to spend $14 billion on Nvidia's H200 AI GPUs in 2026, following a significant increase in its Nvidia chip spending in 2025. The purchase hinges on Beijing's approval, and ByteDance is betting on it after the Trump administration eased AI export controls. China's push for technological self-reliance has also led ByteDance to develop custom AI GPUs with Broadcom and TSMC, but even as it works on its own chips, Nvidia's GPUs remain crucial for its AI training workloads.

Tom's Hardware
Two GTX 580s in SLI are responsible for the AI we have today — Nvidia's Huang revealed that the invention of deep learning began with two flagship Fermi GPUs in 2012

Nvidia CEO Jensen Huang revealed that the inception of deep learning, pivotal for modern AI, traces back to researchers training on two GTX 580 GPUs in SLI in 2012. The researchers at the University of Toronto built AlexNet, a groundbreaking self-learning image-recognition architecture that outperformed existing algorithms by over 70%. That breakthrough marked Nvidia's entry into AI hardware and led to subsequent products such as the original Nvidia DGX and the Volta architecture. The GTX 580s used for deep learning laid the foundation for Nvidia's major investments in AI technology and its current standing in the industry.

Tom's Hardware
