Technology

Designing CPUs for next-generation supercomputing

Source: MIT Technology Review

TL;DR

Despite the hype around GPU-powered AI breakthroughs, CPUs remain crucial for high-performance computing, supporting the majority of scientific, engineering, and research workloads. Recent innovations in CPU technology, such as high-bandwidth memory (HBM), are leading to significant performance improvements without the need for costly architectural changes. Evan Burness from Microsoft Azure estimates that CPUs still handle 80% to 90% of HPC simulation jobs. This resurgence in CPU development is highlighted in a new report by MIT Technology Review's custom content arm, emphasizing the ongoing importance of CPUs in next-generation supercomputing.


Similar Articles

China announces CPU-only exascale supercomputer with 47,000 homemade processors, record 2 Exaflops of performance without GPUs — Lingshen super said to use Huawei Kunpeng servers and no foreign-made components

China's National Supercomputing Center in Shenzhen unveiled the Lingshen supercomputer project, aiming for over 2 ExaFLOPS performance using 47,000 homemade processors without GPUs or foreign components. The system, designed to surpass the current fastest supercomputer, El Capitan, would utilize Huawei Kunpeng servers and Arm-based Taishan cores. The project includes a pilot phase with 100 servers and a full production system with 1,580 blade servers. While China's claims of achieving 2+ ExaFLOPS are ambitious, questions remain about the feasibility of surpassing existing supercomputing benchmarks without GPUs or foreign-made CPUs.

Tom's Hardware
Nvidia DGX Spark review: the GB10 Superchip powers a fast and fun AI toolbox that beats out AMD’s Ryzen AI Max+ 395

The Nvidia DGX Spark, powered by the GB10 Superchip, offers a high-performance Arm CPU and Blackwell GPU combo with full support for the CUDA ecosystem. The GB10 SoC features a MediaTek-produced Arm CPU complex and a Blackwell GPU on one package, both fabricated on a TSMC 3nm-class node. With a coherent 128GB pool of LPDDR5X memory, the Spark is suitable for various AI workloads. The mini PC design includes ports like USB-C, HDMI, 10Gb Ethernet, and QSFP for onboard ConnectX 7 NIC, allowing clustering for distributed computing experiments. Nvidia offers customization options through system partners like Dell, Acer, and HP, catering to corporate and institutional IT needs. The Spark integrates easily into existing workflows with preinstalled DGX OS and tools like Nvidia Sync for remote access and management.

Tom's Hardware
Elon Musk's xAI Colossus 2 is nowhere near 1 gigawatt capacity, satellite imagery suggests — despite claims, site only has 350 megawatts of cooling capacity

Satellite imagery suggests that Elon Musk's xAI Colossus 2 data center has not yet reached 1 gigawatt of capacity, despite Musk's claims. The facility currently has only 350 megawatts of cooling capacity, not enough for its advertised 550,000 Nvidia Blackwell AI accelerators. The supercomputer, codenamed 'Macrohard,' is expected to reach 1 GW by May, with ongoing equipment upgrades. While Musk hinted at potential future scaling to 1.5 GW or even 2 GW, the current focus is on acquiring more AI servers, power, and cooling systems. Despite the delay in reaching the 1 GW milestone, xAI's Colossus 2 is projected to surpass rival AI data centers in resources for AI training and inference.

Tom's Hardware
Elon Musk restarts Dojo3 'space' supercomputer project as AI5 chip design gets in 'good shape' — will be first Tesla-built supercomputer to feature all-in-house hardware, with no help from Nvidia

Elon Musk has announced the revival of Tesla's Dojo3 supercomputer project, driven by the success of the AI5 chip design. It will be the first Tesla supercomputer built entirely on in-house hardware, with no assistance from Nvidia. Musk is hiring more staff to work on the chips for Dojo3, which will use the AI5/AI6 or AI7 chips as part of Tesla's new nine-month chip release cycle. The Dojo3 project aligns with Musk's vision for advanced AI computing, potentially making it Tesla's most successful supercomputer yet, with applications in space-based AI computation.

Tom's Hardware
