
Cisco ASIC Success with Synopsys SLM IPs

Source: SemiWiki

TL;DR (AI generated)

Cisco has been facing challenges in ensuring reliability, performance, and power efficiency in its high-performance networking silicon due to increasing transistor densities. To address this, Cisco has adopted Synopsys Silicon Lifecycle Management (SLM) IPs in its latest Silicon One ASICs, incorporating embedded monitors and analytics capabilities for real-time observability. The deployment of Synopsys SLM IPs, including the Process, Voltage, and Temperature Monitor subsystem, has enabled Cisco to optimize power and performance dynamically based on immediate conditions. By leveraging these IPs, Cisco has achieved enhanced observability, proactive reliability management, and improved lifecycle optimization for its ASIC designs, setting a new benchmark in the networking domain.


Similar Articles

Broadcom to supply Meta with custom silicon through 2029 — Broadcom CEO Hock Tan departs Meta's board

Broadcom and Meta have extended their partnership with a deal for Broadcom to supply Meta with custom-designed AI processors through 2029, including Meta Training and Inference Accelerator (MTIA) hardware. This agreement involves the supply of hundreds of thousands of AI processors and will consume multiple gigawatts of power. Broadcom will also provide Meta with Ethernet networking solutions. Broadcom CEO Hock Tan will step down from Meta's board to avoid a conflict of interest but will continue to guide Meta's custom silicon roadmap. The partnership aims to enhance Meta's computing capabilities for delivering personal superintelligence to billions of users.

Tom's Hardware
IPv6 usage reaches historic 50% across Google services, matching IPv4 — increased usage eases pressure on the IPv4 address market as 'new' protocol designed in 1998 finally hits its stride

IPv6 usage has reached a historic 50% across Google services, matching IPv4, which eases pressure on the IPv4 address market. The IPv6 protocol, designed in 1998, has finally gained significant traction, with 43% of the world using it. The exhaustion of IPv4 addresses due to the rapid growth of internet-connected devices has led to increased adoption of IPv6. Despite some technical misconceptions, IPv6 offers benefits like faster connectivity and simplified networking.
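To put the address-space gap behind IPv4 exhaustion in perspective, here is a quick sketch using Python's standard-library `ipaddress` module (the addresses shown are reserved documentation prefixes, not real hosts):

```python
import ipaddress

# IPv4 has 2**32 addresses (about 4.3 billion); IPv6 has 2**128.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128
print(ipv6_space // ipv4_space == 2 ** 96)  # True: 2**96 times more addresses

# The stdlib parses both families uniformly.
v4 = ipaddress.ip_address("192.0.2.1")    # RFC 5737 documentation range
v6 = ipaddress.ip_address("2001:db8::1")  # RFC 3849 documentation range
print(v4.version, v6.version)  # 4 6
```

The factor of 2**96 is why address scarcity, and the secondary market it created, simply does not exist on the IPv6 side.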

Tom's Hardware
Alchip’s Leadership in ASIC Innovation: Advancing Toward 2nm Semiconductor Technology

Alchip Technologies is making significant progress in developing advanced 2nm ASICs, positioning itself as a leader in semiconductor design for AI and HPC applications. The company's efforts focus on commercializing cutting-edge chip technologies for data centers, hyperscalers, and AI infrastructure providers. Alchip has created a dedicated 2nm design platform that supports advanced packaging and chiplet integration methods, enabling high-performance ASIC development. The transition to 2nm technology introduces nanosheet or GAA transistors, offering better performance and power efficiency for AI workloads and data centers. Alchip's successful 2nm test chip tape-out validates their design methodology and readiness for emerging packaging approaches, positioning them for future semiconductor generations.

SemiWiki

AI Workloads Are Turning The Data Center Network Into A Combined Memory And Storage Fabric

AI inference workloads are transforming data center architecture by integrating the network into a combined memory and storage fabric. This shift is driven by the increasing dominance of inference workloads over traditional microservices and client-server interactions. The classic data center design is evolving to accommodate the structured, server-server communication patterns of AI training and the sustained memory and storage traffic of inference workloads. As AI inference becomes the primary workload, network performance will be crucial for efficient access to distributed memory and storage resources. The data center network is no longer just a communication layer but a critical component defining AI performance.

SemiEngineering
