
Passt – Plug a Simple Socket Transport

Source: Hacker News

TL;DR (AI Generated)

passt and pasta provide a translation layer between network interfaces and Layer-4 sockets (TCP, UDP, and ICMP/ICMPv6 echo) without requiring special privileges or capabilities, serving as replacements for Slirp. passt makes guest traffic appear to come from processes running locally on the host, while pasta connects network namespaces without creating additional interfaces on the host. Both include minimalistic implementations of an ARP proxy, a DHCP server, and an NDP proxy. They can be built from source or installed from distribution packages, and the project also highlights its continuous integration, performance, and security work.
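The core idea described above, terminating traffic at native unprivileged Layer-4 sockets rather than forwarding raw packets, can be sketched in a few lines of Python. This is a conceptual illustration only, not passt's actual code (passt is written in C and additionally translates to and from Ethernet frames on the guest or namespace side); the `relay` function and all port numbers here are hypothetical.

```python
# Conceptual sketch of socket-level "transport re-creation": traffic is
# accepted on one ordinary TCP socket and re-originated from another,
# so no raw-packet privileges are ever needed. NOT passt's implementation.
import socket
import threading

def relay(listen_port: int, target_host: str, target_port: int) -> None:
    """Accept one TCP connection and splice it to a fresh outbound socket."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    # The outbound connection is a brand-new socket owned by this process:
    # to the target, traffic appears to come from a local process.
    upstream = socket.create_connection((target_host, target_port))

    def pump(src: socket.socket, dst: socket.socket) -> None:
        # Copy bytes until the source half-closes, then propagate the close.
        while chunk := src.recv(4096):
            dst.sendall(chunk)
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass  # peer may already be fully closed

    t = threading.Thread(target=pump, args=(client, upstream), daemon=True)
    t.start()
    pump(upstream, client)
    t.join()
```

Run `relay` in a background thread and point a client at `listen_port`: bytes flow to the target as if the client had connected directly, with the connection re-originated at Layer 4 on the host side.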


Similar Articles

Broadcom to supply Meta with custom silicon through 2029 — Broadcom CEO Hock Tan departs Meta's board


Broadcom and Meta have extended their partnership with a deal for Broadcom to supply Meta with custom-designed AI processors through 2029, including Meta Training and Inference Accelerator (MTIA) hardware. The agreement covers hundreds of thousands of AI processors, which are expected to consume multiple gigawatts of power. Broadcom will also provide Meta with Ethernet networking solutions. Broadcom CEO Hock Tan will step down from Meta's board to avoid a conflict of interest but will continue to guide Meta's custom silicon roadmap. The partnership aims to expand Meta's computing capacity for delivering personal superintelligence to billions of users.

Tom's Hardware
IPv6 usage reaches historic 50% across Google services, matching IPv4 — increased usage eases pressure on the IPv4 address market as 'new' protocol designed in 1998 finally hits its stride


IPv6 usage has reached a historic 50% across Google services, matching IPv4 and easing pressure on the IPv4 address market. The protocol, designed in 1998, has finally gained significant traction, with 43% of the world using it. The exhaustion of IPv4 addresses, driven by the rapid growth of internet-connected devices, has pushed IPv6 adoption. Despite lingering technical misconceptions, IPv6 offers benefits such as faster connectivity and simplified networking.

Tom's Hardware
Intel developing two-lever retention mechanism for LGA 1954 socket, according to new leak — Premium Nova Lake-S motherboards will feature 2L-ILM sockets


Intel is reportedly working on a new two-lever retention mechanism for the LGA 1954 socket, set to be featured in the Premium Nova Lake-S motherboards. This mechanism, named "2L-ILM," aims to enhance cooling performance by ensuring better thermal contact between the CPU's integrated heat spreader (IHS) and the heatsink. The design with two levers covering the entire perimeter of the socket is intended to prevent hotspots and potential CPU bending due to uneven contact pressure. This development signifies Intel's focus on refining even minor details for its upcoming Nova Lake-S CPU lineup, emphasizing the importance of thermal management in high-performance computing.

Tom's Hardware

AI Workloads Are Turning The Data Center Network Into A Combined Memory And Storage Fabric

AI inference workloads are transforming data center architecture by integrating the network into a combined memory and storage fabric. This shift is driven by the increasing dominance of inference workloads over traditional microservices and client-server interactions. The classic data center design is evolving to accommodate the structured, server-server communication patterns of AI training and the sustained memory and storage traffic of inference workloads. As AI inference becomes the primary workload, network performance will be crucial for efficient access to distributed memory and storage resources. The data center network is no longer just a communication layer but a critical component defining AI performance.

SemiEngineering
