Technology

Ultra Ethernet: The data-center interconnection of tomorrow detailed

Source

Tom's Hardware

TL;DR

AI Generated

A new data-center connectivity standard, Ultra Ethernet 1.0.1, has been developed by the Ultra Ethernet Consortium, whose members include Meta, Microsoft, and Oracle, to address the limitations of traditional Ethernet in hyperscale and exascale clusters. This next-generation standard aims to provide low-latency, high-bandwidth networking over standard Ethernet and IP infrastructure. Ultra Ethernet introduces a new architecture built around connectionless communication, improving scalability and performance for AI and HPC workloads. The specification spans Physical, Link, Transport, Storage, Management, and Software layers, enhancing data-transfer efficiency and network orchestration. While still in its early stages, Ultra Ethernet is set to revolutionize networking for large-scale data centers.
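The summary's key architectural point is that Ultra Ethernet's transport is connectionless, avoiding the per-connection state that limits scalability in hyperscale clusters. As a loose analogy only (this is plain UDP over IP, not the Ultra Ethernet Transport itself), the following sketch shows datagram exchange over standard IP with no connection handshake before sending:

```python
import socket

def connectionless_echo():
    # "Server" socket bound to an ephemeral loopback port.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))
    port = server.getsockname()[1]

    # "Client" socket: no connect() handshake is needed before sending,
    # and neither side holds per-connection state.
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(b"hello", ("127.0.0.1", port))

    data, addr = server.recvfrom(1024)
    server.sendto(data.upper(), addr)  # reply straight to the sender's address

    reply, _ = client.recvfrom(1024)
    client.close()
    server.close()
    return reply

print(connectionless_echo())  # b'HELLO'
```

The actual Ultra Ethernet Transport adds the reliability, ordering, and congestion-control semantics that raw UDP lacks; the analogy covers only the absence of connection setup.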


Similar Articles

Broadcom to supply Meta with custom silicon through 2029 — Broadcom CEO Hock Tan departs Meta's board


Broadcom and Meta have extended their partnership with a deal for Broadcom to supply Meta with custom-designed AI processors through 2029, including Meta Training and Inference Accelerator (MTIA) hardware. This agreement involves the supply of hundreds of thousands of AI processors and will consume multiple gigawatts of power. Broadcom will also provide Meta with Ethernet networking solutions. Broadcom CEO Hock Tan will step down from Meta's board to avoid a conflict of interest but will continue to guide Meta's custom silicon roadmap. The partnership aims to enhance Meta's computing capabilities for delivering personal superintelligence to billions of users.

Tom's Hardware
IPv6 usage reaches historic 50% across Google services, matching IPv4 — increased usage eases pressure on the IPv4 address market as 'new' protocol designed in 1998 finally hits its stride


IPv6 usage has reached a historic 50% across Google services, matching IPv4, which eases pressure on the IPv4 address market. The IPv6 protocol, designed in 1998, has finally gained significant traction, with 43% of the world using it. The exhaustion of IPv4 addresses due to the rapid growth of internet-connected devices has led to increased adoption of IPv6. Despite some technical misconceptions, IPv6 offers benefits like faster connectivity and simplified networking.
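The pressure on the IPv4 market comes down to address-space arithmetic: IPv4 addresses are 32-bit and IPv6 addresses are 128-bit. A short sketch with Python's standard `ipaddress` module makes the scale of the difference concrete:

```python
import ipaddress

# IPv4 is a 32-bit address space; IPv6 is 128-bit.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128
print(f"IPv4 addresses: {ipv4_space:,}")    # 4,294,967,296
print(f"IPv6 addresses: {ipv6_space:.3e}")  # roughly 3.4e+38

# Dual-stack transition mechanisms can embed an IPv4 address inside an
# IPv6 one, e.g. the IPv4-mapped form (192.0.2.1 is a documentation address):
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)  # 192.0.2.1
```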

Tom's Hardware
SemiEngineering

Silicon Photonics Lights The Way To More Efficient Data Centers

Silicon photonics is paving the way for more efficient data centers by potentially increasing bandwidth density and reducing power consumption, particularly driven by AI workloads. However, challenges such as process compatibility, thermal issues, and mechanical stress arise due to the use of various materials in photonic interconnects. Integrated electro-optical I/O modules are the desired outcome, but design and process complexities need to be addressed. The article delves into the technical aspects of silicon photonics, including the components involved, such as light sources, modulators, waveguides, and photodetectors, as well as the challenges of integrating optical and electronic components for efficient data transmission.
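The power argument for optical interconnects reduces to a simple product: link power is bits per second times energy per bit. The numbers below are purely hypothetical, illustrative assumptions, not figures from the article, but they show why a lower pJ/bit matters at data-center bandwidths:

```python
def link_power_watts(bandwidth_gbps: float, energy_pj_per_bit: float) -> float:
    """Power drawn by one link: (bits/second) * (joules/bit)."""
    bits_per_second = bandwidth_gbps * 1e9
    joules_per_bit = energy_pj_per_bit * 1e-12
    return bits_per_second * joules_per_bit

# Hypothetical comparison at 800 Gb/s: an electrical SerDes link assumed
# at 5 pJ/bit versus an integrated optical I/O target assumed at 1 pJ/bit.
electrical = link_power_watts(800, 5.0)  # 4.0 W
optical = link_power_watts(800, 1.0)     # 0.8 W
print(f"electrical: {electrical:.1f} W, optical: {optical:.1f} W")
```

Multiplied across the millions of links in a large AI cluster, even a factor-of-a-few reduction in energy per bit translates into megawatts, which is the efficiency case the article describes.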

SemiEngineering

AI Workloads Are Turning The Data Center Network Into A Combined Memory And Storage Fabric

AI inference workloads are transforming data center architecture by integrating the network into a combined memory and storage fabric. This shift is driven by the increasing dominance of inference workloads over traditional microservices and client-server interactions. The classic data center design is evolving to accommodate the structured, server-to-server communication patterns of AI training and the sustained memory and storage traffic of inference workloads. As AI inference becomes the primary workload, network performance will be crucial for efficient access to distributed memory and storage resources. The data center network is no longer just a communication layer but a critical component defining AI performance.

SemiEngineering
