Technology

Multi-Core Architecture Optimized For Time-Predictable Neural Network Inference (FZI, KIT)

Source: SemiEngineering

Published

TL;DR (AI Generated)

Researchers from the FZI Research Center for Information Technology and the Karlsruhe Institute of Technology (KIT) have published a technical paper on a new architecture called "MultiVic," optimized for neural network inference. The architecture combines a multi-core vector processor, built from time-predictable cores with local scratchpad memories, with a central core that manages all access to shared memory. Several design variants were evaluated; configurations with a larger number of smaller cores outperformed a baseline single-core vector processor in both performance and time predictability. The architecture targets real-time systems that run neural networks and therefore need high-performance hardware with predictable timing behavior.
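
The paper's code is not reproduced here, but a minimal Python sketch can illustrate the division of labor it describes: worker cores compute only on data staged into their private scratchpads, while a central core handles all shared-memory traffic. The names and the row-partitioning scheme below are illustrative assumptions, not MultiVic's actual design.

import numpy as np

# Illustrative sketch (not the paper's code): one layer's matrix-vector
# product split across N predictable cores. Shared memory is touched only
# in the explicit stage and gather phases run by the central core; each
# worker computes purely out of its private scratchpad, which is what
# makes per-core execution time analyzable.
N_CORES = 4

def central_stage(W, x):
    # Central core: partition the weight rows and copy each tile, plus the
    # input vector, into a per-core scratchpad.
    row_blocks = np.array_split(np.arange(W.shape[0]), N_CORES)
    return [{"W": W[r].copy(), "x": x.copy()} for r in row_blocks]

def worker_compute(pad):
    # Worker core: local data only, so runtime depends on the fixed tile
    # size rather than on bus or memory contention.
    return pad["W"] @ pad["x"]

def central_gather(partials, out_dim):
    # Central core: write partial results back to shared memory.
    y = np.empty(out_dim)
    offset = 0
    for p in partials:
        y[offset:offset + len(p)] = p
        offset += len(p)
    return y

rng = np.random.default_rng(0)
W, x = rng.standard_normal((16, 8)), rng.standard_normal(8)
pads = central_stage(W, x)
y = central_gather([worker_compute(p) for p in pads], W.shape[0])
assert np.allclose(y, W @ x)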


Similar Articles


Heterogeneous Multicore System IP

The blog discusses the use of heterogeneous multicore systems in embedded applications to meet performance requirements across varied workloads while reducing energy and area costs. It walks through an example system architecture built from a RISC-V host CPU, Cadence IP, Xtensa DSPs, and a Janus Network-on-Chip (NoC). The blog explains the benefits of a heterogeneous architecture and covers selecting different ISAs, optimizing power-saving features, designing interconnects with a NoC, data sharing, shared system memory, runtime environments, boot-up processes, offload engines, dynamic kernel loading, and optimized compilation. It also covers development platforms such as SystemC simulation and FPGA emulation for architecture exploration and verification.

SemiEngineering
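
The offload-engine pattern the blog describes, a host posting work to an accelerator through a command queue with results returned via shared memory, can be sketched in a few lines. The queue protocol and kernel signature below are hypothetical stand-ins, not Cadence APIs; Python threads stand in for the host and DSP cores.

import queue, threading

jobs, results = queue.Queue(), {}

def dsp_worker():
    # Stand-in for a DSP core draining its command queue.
    while True:
        job_id, kernel, args = jobs.get()
        if kernel is None:                  # shutdown sentinel
            break
        results[job_id] = kernel(*args)     # "shared memory" result slot

t = threading.Thread(target=dsp_worker)
t.start()

# Host side: offload an elementwise add, then shut the engine down.
jobs.put((1, lambda a, b: [u + v for u, v in zip(a, b)], ([1, 2], [3, 4])))
jobs.put((0, None, ()))
t.join()
print(results[1])                           # [4, 6]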
Study Of HW Acceleration for Neural Networks (Arizona State Univ.)

Researchers at Arizona State University published a technical paper titled “Hardware Acceleration for Neural Networks: A Comprehensive Survey,” highlighting the challenges faced by neural networks due to hardware bottlenecks like memory movement and communication. The paper reviews various hardware acceleration technologies for deep learning, including GPUs, TPUs, FPGAs, ASICs, and emerging accelerators like LPUs. It categorizes these technologies based on workloads, execution settings, and optimization levers, discussing architectural ideas such as systolic arrays and specialized kernels. The paper also addresses open challenges and future directions for efficient neural network acceleration.

SemiEngineering
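
One architectural idea the survey covers, the systolic array, is compact enough to demonstrate directly. Below is a cycle-by-cycle simulation of an output-stationary array computing a matrix product; it is a didactic sketch, not code from the paper.

import numpy as np

def systolic_matmul(A, B):
    # Output-stationary systolic array: PE (i, j) accumulates C[i, j].
    # Inputs are skewed so that A[i, s] and B[s, j] meet at PE (i, j)
    # at cycle t = i + j + s, as they would while streaming through
    # the grid from the left and from the top.
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for t in range(n + m + k - 2):          # total cycles needed
        for i in range(n):
            for j in range(m):
                s = t - i - j               # operand pair arriving this cycle
                if 0 <= s < k:
                    C[i, j] += A[i, s] * B[s, j]
    return C

A = np.arange(6).reshape(2, 3).astype(float)
B = np.arange(12).reshape(3, 4).astype(float)
assert np.allclose(systolic_matmul(A, B), A @ B)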
Researchers isolate memorization from reasoning in AI neural networks

Researchers from Goodfire.ai have discovered that in AI language models like GPT-5, memorization and reasoning operate through separate neural pathways. By removing memorization pathways, models lost their ability to recite training data but retained logical reasoning skills. Surprisingly, arithmetic operations share neural pathways with memorization rather than reasoning, explaining AI models' struggles with math. This finding sheds light on how AI language models handle information and highlights the distinction between logical reasoning and mathematical reasoning in AI systems.

Ars Technica
Hacker News
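
The mechanics of such an ablation experiment, editing a weight matrix and then re-testing which behaviors survive, can be illustrated with a toy example. The low-rank ablation below is a deliberately simplified stand-in; it is not Goodfire's method for locating memorization pathways.

import numpy as np

# Toy stand-in for "removing a pathway": delete the top singular directions
# of a weight matrix, then check which behaviors survive. This mirrors only
# the mechanics (edit weights, re-test); locating which directions encode
# memorization is the hard part of the actual research.
rng = np.random.default_rng(1)
W = rng.standard_normal((64, 64))

U, S, Vt = np.linalg.svd(W)
k = 8                                        # pretend these encode rote recall
W_ablated = U[:, k:] @ np.diag(S[k:]) @ Vt[k:, :]

v = Vt[0]                                    # input aligned with a removed direction
print(np.linalg.norm(W @ v))                 # large response before ablation
print(np.linalg.norm(W_ablated @ v))         # ~0: that "pathway" is gone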

VortexNet: Neural network based on fluid dynamics

The article introduces VortexNet, a neural network model inspired by fluid dynamics, with toy implementations available in a repository. These examples showcase the integration of PDE-based vortex layers and fluid-inspired mechanisms into neural architectures like autoencoders for various datasets. The provided scripts demonstrate building and training a VortexNet Autoencoder on the MNIST dataset and custom image datasets, offering features like data augmentation and latent space interpolation. Users can access the code repository, install dependencies, prepare data, and run the scripts for experimenting with VortexNet models.

Hacker News
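
The repository's layer definitions are not reproduced here, but the general idea of a PDE step embedded in a network layer can be sketched as follows. The diffusion update and layer name are illustrative assumptions, much simpler than VortexNet's vortex layers.

import numpy as np

# Minimal sketch (not the repository's code) of a fluid-inspired layer:
# one explicit Euler step of a diffusion PDE on a feature map, followed by
# a pointwise nonlinearity. VortexNet's vortex layers are richer; this only
# shows the idea of embedding a discretized PDE update in a network layer.
def pde_layer(u, dt=0.1, nu=0.5):
    # 5-point Laplacian with periodic boundary conditions
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    return np.tanh(u + dt * nu * lap)

x = np.random.default_rng(0).standard_normal((28, 28))  # e.g. one MNIST image
h = pde_layer(pde_layer(x))     # stacking layers = more PDE time steps
print(h.shape)                  # (28, 28)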
