
Desktop GPU roadmap: Nvidia Rubin, AMD UDNA & Intel Xe3 Celestial

Source: Tom's Hardware

TL;DR (AI generated)

The article discusses the latest developments in desktop GPU technology from Nvidia, AMD, and Intel. Nvidia's upcoming Rubin architecture is expected to debut in late 2026, with significant improvements in transistor density and power efficiency. AMD is transitioning to its UDNA architecture, a unified design targeting both gaming and compute workloads, also expected to launch in late 2026. Intel's Xe3 Celestial architecture is progressing towards volume production and is likely to debut in early 2027. All three companies are focusing on AI acceleration, new memory technologies, and architectural improvements to push desktop GPU capabilities forward in the coming years.


Similar Articles

Nvidia CEO Huang says upcoming DGX Spark systems are powered by N1 silicon — confirms GB10 Superchip and N1/N1X SoCs are identical

Nvidia CEO Jensen Huang confirmed that the upcoming N1 SoC is essentially the same as the existing GB10 Superchip, part of the DGX Spark lineup designed for AI workloads. Speculation about the N1/N1X SoCs began after Nvidia's Project DIGITS announcement, a collaboration with MediaTek. The N1 SoC is expected to feature a GPU with 6,144 CUDA cores and a 20-core CPU built on Nvidia's Grace architecture. Huang's statement suggests the N1 and GB10 are closely linked, with the N1 possibly being a lower-binned version of the GB10. The N1 marks Nvidia's move towards mainstream CPU cores following Tegra, and its collaboration with Intel for ARM-based products is not expected to impact its roadmap.

Tom's Hardware
Nvidia Rubin CPX die shot reveals graphics-specific hardware blocks not needed for an AI GPU — Rubin CPX may form the foundation of next-gen RTX 6090


Nvidia's Rubin CPX GPU, initially designed for AI tasks, may actually contain graphics-specific hardware blocks like Raster Output Pipelines and display engines, leading to speculation that it could lay the groundwork for the next-gen RTX 6090. The die shot of Rubin CPX reveals 16 Graphics Processing Clusters and 256 Raster Output Pipelines, potentially offering significant performance gains over the RTX 5090. If repurposed for gaming, Rubin CPX could deliver around 28,672 CUDA cores and 224 ROPs, indicating a notable performance uplift. The inclusion of a 512-bit memory bus, GDDR7 support, and PCIe 6.0 hints at the possibility of Rubin CPX being a stepping stone for future GPU advancements. The release of Rubin CPX is expected at the end of 2026, with the RTX 6090 possibly being announced at CES 2027.

Tom's Hardware
NVIDIA Rubin CPX GPU to feature 128GB GDDR7 memory, launches end of 2026


NVIDIA's upcoming Rubin CPX GPU is set to launch by the end of 2026 with 128GB of GDDR7 memory. The GPU is expected to offer significant performance improvements for gaming and professional applications, pushing the limits of graphics processing power and memory capacity.

TweakTown
Nvidia Rubin CPX forms one half of new, "disaggregated" AI inference architecture — approach splits work between compute- and bandwidth-optimized chips for best performance


Nvidia introduces the Rubin CPX GPU, part of a new "disaggregated" AI inference architecture that optimizes performance by splitting work between compute- and bandwidth-optimized chips. The Rubin CPX GPU handles compute-intensive tasks, while the standard Rubin GPU handles memory-bandwidth-limited tasks in AI inference. The Rubin CPX GPU offers 30 petaFLOPs of raw compute performance and 128 GB of GDDR7 memory, while the Vera Rubin NVL144 CPX rack, which combines both types of Rubin GPU with Vera CPUs and high-speed memory, is expected to deliver 8 exaFLOPs of NVFP4 compute. Nvidia predicts significant revenue potential from AI systems built on the Rubin CPX GPU and plans to showcase the technology at GTC 2026.

Tom's Hardware
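
The disaggregated split described in the TweakTown summary can be sketched in a few lines: compute-heavy prefill (processing the input context) is routed to a compute-optimized chip, while bandwidth-bound decode (generating output tokens) goes to a bandwidth-optimized chip. The worker names and routing rule below are illustrative assumptions, not Nvidia's actual scheduler.

```python
# Minimal sketch of a disaggregated-inference router, assuming two worker
# classes: compute-optimized (CPX-like) and bandwidth-optimized (Rubin-like).
# Names and the phase-to-kind mapping are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass
class Worker:
    name: str
    kind: str  # "compute" or "bandwidth"


def route(phase: str, workers: list[Worker]) -> Worker:
    """Send prefill to a compute-optimized worker, decode to a bandwidth-optimized one."""
    want = "compute" if phase == "prefill" else "bandwidth"
    return next(w for w in workers if w.kind == want)


workers = [Worker("cpx-like", "compute"), Worker("rubin-like", "bandwidth")]
print(route("prefill", workers).name)  # prefill lands on the compute-optimized chip
print(route("decode", workers).name)   # decode lands on the bandwidth-optimized chip
```

The point of the split is that the two phases stress different resources, so each chip can drop hardware the other phase needs, rather than one design carrying both.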
