Free Download: Accelerators for Convolutional Neural Networks by Arslan Munir, Joonho Kong, Mahmood Azhar Qureshi
English | October 31st, 2023 | ISBN: 1394171889 | 304 pages | True EPUB | 22.53 MB
Accelerators for Convolutional Neural Networks
Comprehensive and thorough resource exploring different types of convolutional neural networks and complementary accelerators
Accelerators for Convolutional Neural Networks provides the basic deep learning background and instructive content that Internet of Things (IoT) and edge computing practitioners need to build convolutional neural network (CNN) accelerators. It elucidates compressive coding for CNNs, presents a two-step lossless compression method for input feature maps, discusses an arithmetic coding-based lossless weight compression method and the design of an associated decoding method, describes contemporary sparse CNNs that exploit sparsity in both weights and activation maps, and covers hardware/software co-design and co-scheduling techniques that lead to better optimization and utilization of the available hardware resources for CNN acceleration.
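To make the sparsity idea mentioned above concrete, here is a minimal Python/NumPy sketch of zero-skipping multiply-accumulation, where work is spent only on weight/activation pairs that are both non-zero. The function names and the roughly 75% sparsity level are illustrative assumptions; this shows the general principle only, not the book's Sparse-PE design.

```python
# Hypothetical illustration of the zero-skipping idea behind sparse CNN
# accelerators: multiply-accumulate (MAC) work is only spent on pairs where
# both the weight and the activation are non-zero. This is NOT the book's
# Sparse-PE design, just the general principle in plain Python/NumPy.
import numpy as np

def dense_dot(weights: np.ndarray, activations: np.ndarray) -> float:
    """Baseline: every weight/activation pair costs one MAC."""
    return float(np.dot(weights, activations))

def zero_skipping_dot(weights: np.ndarray, activations: np.ndarray):
    """Accumulate only where both operands are non-zero.

    Returns the dot product and the number of MACs actually performed,
    so the savings from sparsity are visible.
    """
    nz = np.nonzero((weights != 0) & (activations != 0))[0]
    acc = float(np.dot(weights[nz], activations[nz]))
    return acc, len(nz)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal(64)
    a = rng.standard_normal(64)
    # Induce ~75% sparsity in both operands (e.g. pruned weights, ReLU outputs).
    w[rng.random(64) < 0.75] = 0.0
    a[rng.random(64) < 0.75] = 0.0

    ref = dense_dot(w, a)
    out, macs = zero_skipping_dot(w, a)
    print(f"dense result={ref:.4f}, sparse result={out:.4f}, MACs used={macs}/64")
```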
The first part of the book provides an overview of CNNs along with the composition and parameters of different contemporary CNN models. Later chapters focus on compressive coding for CNNs and the design of dense CNN accelerators. The book also provides directions for future research and development for CNN accelerators.
Other sample topics covered in Accelerators for Convolutional Neural Networks include:
- How to apply arithmetic coding and decoding with range scaling for lossless compression of 5-bit CNN weights, enabling CNN deployment in extremely resource-constrained systems
- State-of-the-art research surrounding dense CNN accelerators, which are mostly based on systolic arrays or parallel multiply-accumulate (MAC) arrays
- iMAC dense CNN accelerator, which combines image-to-column (im2col) and general matrix multiplication (GEMM) hardware acceleration (a software sketch of the im2col + GEMM lowering follows this list)
- Multi-threaded, low-cost, log-based processing element (PE) core, instances of which are stacked in a spatial grid to form the NeuroMAX dense accelerator
- Sparse-PE, a multi-threaded and flexible CNN PE core that exploits sparsity in both weights and activation maps, instances of which can be stacked in a spatial grid to build sparse CNN accelerators
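As referenced in the iMAC item above, here is a minimal Python/NumPy sketch of the im2col + GEMM lowering of convolution that such accelerators implement in hardware. Stride 1, no padding, and NCHW-style layout are assumptions made for brevity; the function names are hypothetical and this is only the software view of the transformation, not the iMAC design itself.

```python
# Minimal software sketch of im2col + GEMM convolution lowering.
# Assumptions for illustration: single image, channels-first layout,
# stride 1, no padding.
import numpy as np

def im2col(x: np.ndarray, kh: int, kw: int) -> np.ndarray:
    """Unfold a (C, H, W) input into a (C*kh*kw, out_h*out_w) patch matrix."""
    c, h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((c * kh * kw, out_h * out_w), dtype=x.dtype)
    idx = 0
    for ch in range(c):
        for i in range(kh):
            for j in range(kw):
                patch = x[ch, i:i + out_h, j:j + out_w]
                cols[idx] = patch.reshape(-1)
                idx += 1
    return cols

def conv2d_gemm(x: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Convolution as one GEMM: (F, C*kh*kw) @ (C*kh*kw, out_h*out_w)."""
    f, c, kh, kw = weights.shape
    out_h, out_w = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    cols = im2col(x, kh, kw)             # im2col step
    out = weights.reshape(f, -1) @ cols  # GEMM step
    return out.reshape(f, out_h, out_w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((3, 8, 8))      # C=3, H=W=8
    w = rng.standard_normal((4, 3, 3, 3))   # F=4 filters, 3x3 kernels
    y = conv2d_gemm(x, w)
    print(y.shape)                          # (4, 6, 6)
```

Lowering convolution to a single large matrix multiplication is what allows MAC or systolic arrays to be reused directly for CNN layers.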
Buy Premium From My Links To Get Resumable Support, Max Speed & Support Me
Code:
Please log in to view the code.