Study Of HW Acceleration for Neural Networks (Arizona State Univ.)
TL;DR
AI-generated summary: Researchers at Arizona State University published a technical paper titled "Hardware Acceleration for Neural Networks: A Comprehensive Survey," which examines how hardware bottlenecks such as memory movement and communication constrain neural network performance. The survey reviews hardware acceleration technologies for deep learning, including GPUs, TPUs, FPGAs, ASICs, and emerging accelerators such as LPUs, and categorizes them by workload, execution setting, and optimization lever. It discusses architectural ideas such as systolic arrays and specialized kernels, and closes with open challenges and future directions for efficient neural network acceleration.
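As an illustration of the systolic-array idea the summary mentions, the following is a minimal, hypothetical Python sketch (not from the paper): a simulation of an output-stationary systolic array computing a matrix product, where operands stream in from the array edges as skewed wavefronts and each processing element (PE) multiply-accumulates the values passing through it. The function name and structure are assumptions for illustration only.

```python
def systolic_matmul(A, B):
    """Simulate an n x m output-stationary systolic array computing A @ B.

    Rows of A stream in from the left edge (skewed by row index); columns
    of B stream in from the top edge (skewed by column index). PE (i, j)
    accumulates C[i][j] as matching operands meet at cycle t = t' + i + j.
    """
    n, k = len(A), len(A[0])
    m = len(B[0])
    acc = [[0] * m for _ in range(n)]    # one stationary accumulator per PE
    a_reg = [[0] * m for _ in range(n)]  # A operand currently held by each PE
    b_reg = [[0] * m for _ in range(n)]  # B operand currently held by each PE

    # Enough cycles for the last operand pair to reach PE (n-1, m-1).
    for t in range(k + n + m - 2):
        # Shift A operands rightward (reverse order to avoid overwriting).
        for i in range(n):
            for j in range(m - 1, 0, -1):
                a_reg[i][j] = a_reg[i][j - 1]
        # Shift B operands downward.
        for j in range(m):
            for i in range(n - 1, 0, -1):
                b_reg[i][j] = b_reg[i - 1][j]
        # Inject the skewed input wavefronts at the array edges
        # (zero outside the valid range, so stale values never accumulate).
        for i in range(n):
            a_reg[i][0] = A[i][t - i] if 0 <= t - i < k else 0
        for j in range(m):
            b_reg[0][j] = B[t - j][j] if 0 <= t - j < k else 0
        # Each PE multiply-accumulates whatever operands it holds this cycle.
        for i in range(n):
            for j in range(m):
                acc[i][j] += a_reg[i][j] * b_reg[i][j]
    return acc
```

The output-stationary dataflow shown here is one of several systolic schedules; weight-stationary designs (where one operand is pinned in the PEs) are a common alternative in deep learning accelerators.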