Technology

Heterogeneous Multicore System IP

Source: SemiEngineering

Published

TL;DR (AI Generated)

The blog discusses the use of heterogeneous multicore systems in embedded applications to meet performance requirements across varied workloads while reducing energy and area costs. It walks through an example system architecture built from a RISC-V host CPU, Cadence IP, Xtensa DSPs, and a Janus Network-on-Chip (NoC). The blog explains the benefits of a heterogeneous architecture and covers selecting different ISAs, optimizing power-saving features, designing the interconnect with a NoC, data sharing via shared system memory, runtime environments, the boot-up process, offload engines, dynamic kernel loading, and optimized compilation. It also covers development platforms such as SystemC simulation and FPGA emulation for architecture exploration and verification.


Similar Articles

Disaggregating LLM Inference: Inside the SambaNova Intel Heterogeneous Compute Blueprint


SambaNova Systems and Intel have introduced a blueprint for heterogeneous inference that optimizes modern large language model (LLM) workloads by utilizing specialized hardware for different phases of inference: GPUs for prefill, SambaNova RDUs for decode, and Intel Xeon 6 CPUs for agentic tools and orchestration. This approach addresses the complexity of agentic AI systems with varying compute demands. By isolating tasks onto specific hardware, the architecture improves efficiency, scalability, and cost-effectiveness. The design reflects a shift towards specialized compute fabrics and better supports the evolving landscape of AI reasoning systems.

SemiWiki
Intel and SambaNova team up on heterogeneous AI inference platform — different hardware performs different workloads

Intel and SambaNova have collaborated on a new heterogeneous inference platform that utilizes different hardware components for various AI workloads. The platform leverages AI accelerators or GPUs for prefill, SambaNova's SN50 RDU for decoding, and Xeon 6 processors for agent-related operations and workload distribution. This architecture aims to compete with Nvidia by offering a scalable solution for enterprises and cloud operators, set to be available in the second half of 2026. The collaboration emphasizes the performance benefits of Xeon 6 processors and their compatibility with existing data center infrastructures.

Tom's Hardware
Intel's upcoming Wildcat Lake low-budget CPUs leak out again — OEM confirms specs for Core 7 350, Core 5 320, & Core 3 305 in first retail product datasheet


Intel's Wildcat Lake CPUs, including Core 7 350, Core 5 320, and Core 3 305, have been leaked by an OEM, Advantech, in the first retail product datasheet. These CPUs are designed for embedded and edge use cases with a 15W TDP that can potentially reach 25W in PL2. The Wildcat Lake series is expected to feature the Cougar Cove architecture for P-cores and Darkmont for LP-E cores, aligning it with the Core 300 family. The leak suggests a launch is imminent, offering efficient low-power CPUs that could compete with products like the MacBook Neo.

Tom's Hardware
Embedded World 2026: Boards and Modules (Part 3)


The article discusses five new embedded modules showcased at Embedded World 2026 that target edge AI, industrial IoT, and embedded designs. Each module offers distinguishing features such as NPUs, connectivity options, software support, and compact form factors, serving applications like smart vision, human-machine interfaces, smart retail, robotics, and medical systems. Highlighted modules include the Variscite VAR-SMARC-MX8M-PLUS, congatec conga-SMX95/aReady.COM conga-SMX95, SECO SOM-Trizeps-X-Genio360/360P, Grinn GenioSOM-360, and Octavo OSD62x-PM. These modules aim to bring advanced AI capabilities to a range of industries through their compact, up-to-date designs.

ElectronicDesign
