Low-Power Hardware Accelerator for Sparse Matrix Convolution in Deep Neural Network / Anzalone, Erik; Capra, Maurizio; Peloso, Riccardo; Martina, Maurizio; Masera, Guido (SMART INNOVATION, SYSTEMS AND TECHNOLOGIES). - In: Progresses in Artificial Intelligence and Neural Systems [electronic]. - [s.l.] : Springer, 2020. - ISBN 978-981-15-5092-8. - pp. 79-89 [10.1007/978-981-15-5093-5_8]

Low-Power Hardware Accelerator for Sparse Matrix Convolution in Deep Neural Network

Capra, Maurizio; Peloso, Riccardo; Martina, Maurizio; Masera, Guido
2020

Abstract

Deep Neural Networks (DNNs) have reached outstanding accuracy in recent years, often surpassing human abilities. Nowadays, DNNs are widely used in many Artificial Intelligence (AI) applications such as computer vision, natural language processing and autonomous driving. However, this remarkable performance comes at a high computational cost, requiring complex hardware platforms. Hence the need for dedicated hardware accelerators that drastically speed up execution while preserving a low-power profile. This paper presents techniques that exploit the matrix sparsity arising in convolutional DNNs from non-linear activation functions. The proposed architectures skip unnecessary operations, such as multiplications by zero, improving energy efficiency without sacrificing accuracy or throughput. Such improvements could enhance the performance of embedded, battery-powered applications with limited budgets, where cost-effective hardware, accuracy and battery life are critical to expanding the deployment of AI.
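As a minimal sketch of the zero-skipping idea the abstract describes (not the paper's hardware architecture), assuming Python/NumPy and illustrative names such as conv2d_zero_skipping, the code below performs a 2D convolution that multiplies only by the non-zero activations a ReLU layer leaves behind:

import numpy as np

def relu(x):
    # Non-linear activation: the source of the zeros exploited below.
    return np.maximum(x, 0.0)

def conv2d_zero_skipping(fmap, kernel):
    # Illustrative sketch only: a real accelerator would encode non-zero
    # positions (e.g. with a bitmap or run-length scheme) rather than test
    # each value, but the arithmetic that gets skipped is the same.
    kh, kw = kernel.shape
    oh = fmap.shape[0] - kh + 1
    ow = fmap.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    # Scatter each non-zero activation into every output it contributes to,
    # so zero activations cost no multiply-accumulate at all.
    for r, c in zip(*np.nonzero(fmap)):
        a = fmap[r, c]
        for i in range(kh):
            for j in range(kw):
                orow, ocol = r - i, c - j
                if 0 <= orow < oh and 0 <= ocol < ow:
                    out[orow, ocol] += a * kernel[i, j]
    return out

# ReLU typically zeroes a large fraction of activations, so most
# multiply-accumulates are skipped; the result matches a dense convolution.
fmap = relu(np.random.randn(8, 8))
kernel = np.random.randn(3, 3)
dense = np.array([[np.sum(fmap[r:r + 3, c:c + 3] * kernel)
                   for c in range(6)] for r in range(6)])
assert np.allclose(conv2d_zero_skipping(fmap, kernel), dense)

In this scatter formulation the work is proportional to the number of non-zero inputs rather than to the feature-map size, which is what yields the energy savings the abstract claims without changing the numerical result.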
ISBN: 978-981-15-5092-8 (print)
ISBN: 978-981-15-5093-5 (online)
Progresses in Artificial Intelligence and Neural Systems
Files in this product:

conv_accel.pdf
Open Access since 11/07/2021
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 3.98 MB
Format: Adobe PDF

Wirn2019_Progresses in Artificial Intelligence and Neural Systems_Capra.pdf
Not available
Type: 2a. Post-print, publisher's version / Version of Record
License: Non-public - Private/restricted access
Size: 298.43 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2847352