
Optimizing In-Memory AI Accelerators Across Multiple Workloads (KAUST, Compumacy)

Source: SemiEngineering

TL;DR

Researchers from KAUST and Compumacy for Artificial Intelligence Solutions have developed a framework, "Joint Hardware-Workload Co-Optimization for In-Memory Computing Accelerators," for optimizing in-memory computing (IMC) hardware accelerators for neural networks. Rather than tailoring a specialized design to each individual model, the framework searches for generalized IMC accelerator architectures that efficiently support multiple neural network workloads at once. Using an optimized evolutionary algorithm, it narrows the performance gap between workload-specific and generalized IMC designs. Evaluated on RRAM- and SRAM-based IMC architectures, the framework achieves energy-delay-area product (EDAP) reductions of up to 76.2% and 95.5% when optimizing across different sets of workloads. The framework's source code is available for further exploration.
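The core idea described above can be illustrated with a minimal sketch of an evolutionary search over a hardware design space, where the fitness of a candidate design is its combined EDAP across all target workloads. Note that the parameter names, design-space ranges, and the EDAP model below are illustrative assumptions, not the paper's actual search space or cost model, which would query a detailed IMC simulator.

```python
import random

# Hypothetical IMC design space -- parameters and ranges are illustrative
# assumptions, not taken from the paper.
DESIGN_SPACE = {
    "crossbar_size": [64, 128, 256, 512],
    "adc_bits": [4, 6, 8],
    "num_tiles": [8, 16, 32, 64],
}

def random_design():
    return {k: random.choice(v) for k, v in DESIGN_SPACE.items()}

def edap(design, workload):
    """Toy stand-in for an energy * delay * area model.
    A real framework would evaluate this with a hardware simulator."""
    energy = workload["macs"] * design["adc_bits"] / design["crossbar_size"]
    delay = workload["macs"] / (design["num_tiles"] * design["crossbar_size"])
    area = design["num_tiles"] * design["crossbar_size"] ** 2 * 1e-6
    return energy * delay * area

def joint_fitness(design, workloads):
    # Co-optimization objective: total EDAP over all target workloads,
    # so one generalized design is scored against every workload at once.
    return sum(edap(design, w) for w in workloads)

def evolve(workloads, pop_size=20, generations=30, seed=0):
    random.seed(seed)
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest half, refill the population with mutated copies.
        pop.sort(key=lambda d: joint_fitness(d, workloads))
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            child = dict(random.choice(survivors))
            k = random.choice(list(DESIGN_SPACE))  # mutate one parameter
            child[k] = random.choice(DESIGN_SPACE[k])
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda d: joint_fitness(d, workloads))

# Two stand-in neural-network workloads, characterized only by MAC count.
workloads = [{"macs": 1e9}, {"macs": 5e8}]
best = evolve(workloads)
```

The joint objective is what distinguishes this from per-workload optimization: a design that is excellent for one network but poor for another scores worse than a balanced design, which is how the search closes the gap between specialized and generalized accelerators.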