
Study of HW Acceleration for Neural Networks (Arizona State Univ.)

Source: SemiEngineering

TL;DR

Researchers at Arizona State University published a technical paper titled “Hardware Acceleration for Neural Networks: A Comprehensive Survey,” highlighting the challenges faced by neural networks due to hardware bottlenecks like memory movement and communication. The paper reviews various hardware acceleration technologies for deep learning, including GPUs, TPUs, FPGAs, ASICs, and emerging accelerators like LPUs. It categorizes these technologies based on workloads, execution settings, and optimization levers, discussing architectural ideas such as systolic arrays and specialized kernels. The paper also addresses open challenges and future directions for efficient neural network acceleration.
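A systolic array computes matrix products by streaming operands through a grid of multiply-accumulate (MAC) cells, so each value is reused by many cells instead of being re-fetched from memory. The sketch below is a minimal, purely illustrative simulation of that output-stationary dataflow; it models only the accumulation pattern (one wavefront step per shared-dimension index), not the clocking or input skew of real hardware, and none of the names come from the surveyed paper.

```python
# Toy simulation of an output-stationary systolic array computing C = A @ B.
# At wavefront step t, cell (i, j) receives a[i][t] from the left and
# b[t][j] from above, and accumulates their product locally.
# Illustrative sketch only; not the paper's design.

def systolic_matmul(A, B):
    n, k = len(A), len(A[0])
    m = len(B[0])
    # Each cell (i, j) keeps its own running sum (output-stationary).
    C = [[0] * m for _ in range(n)]
    for t in range(k):          # one wavefront per shared-dimension index
        for i in range(n):
            for j in range(m):
                C[i][j] += A[i][t] * B[t][j]  # MAC in cell (i, j)
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```

The point of the loop ordering is that each `A[i][t]` and `B[t][j]` value is consumed by a whole row or column of cells in the same step, which is the data-reuse property that makes systolic designs like the TPU's matrix unit bandwidth-efficient.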