
WinoTrain: Winograd-Aware Training for Accurate Full 8-bit Convolution Acceleration / Mori, Pierpaolo; Sampath, Shambhavi-Balamuthu; Frickenstein, Lukas; Vemparala, Manoj-Rohit; Fasfous, Nael; Frickenstein, Alexander; Stechele, Walter; Passerone, Claudio. - ELECTRONIC. - (2023), pp. 1-6. (Paper presented at the 60th ACM/IEEE Design Automation Conference, held in San Francisco, CA, USA, 09-13 July 2023) [10.1109/DAC56929.2023.10247805].

WinoTrain: Winograd-Aware Training for Accurate Full 8-bit Convolution Acceleration

Mori, Pierpaolo; Passerone, Claudio
2023

Abstract

Efficient inference is critical for realizing a low-power, real-time implementation of convolutional neural networks (CNNs) on compute- and memory-constrained embedded platforms. Using quantization techniques and fast convolution algorithms such as Winograd, CNN inference can achieve benefits in both latency and energy consumption. Performing Winograd convolution involves (1) transforming the weights and activations to the Winograd domain, (2) performing element-wise multiplication on the transformed tensors, and (3) transforming the results back to the conventional spatial domain. Combining Winograd with quantization of all its steps results in severe accuracy degradation due to numerical instability. In this paper, we propose a simple quantization-aware training technique that quantizes all three steps of the Winograd convolution while using a minimal number of scaling factors. Additionally, we propose an FPGA accelerator employing tiling and unrolling methods to highlight the performance benefits of the fully 8-bit quantized Winograd algorithm. We achieve a 2× reduction in inference time compared to standard convolution on ResNet-18 for the ImageNet dataset, while improving Top-1 accuracy by 55.7 p.p. compared to a standard post-training quantized Winograd variant of the network.
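To make the three steps concrete, below is a minimal NumPy sketch of a single Winograd F(2×2, 3×3) output tile using the standard Lavin-Gray transform matrices, with an illustrative symmetric per-tensor 8-bit fake quantization applied to each step. The quantizer placement and scaling-factor scheme here are assumptions for illustration only, not the exact training procedure of the paper.

```python
import numpy as np

# Standard Winograd F(2x2, 3x3) transform matrices (Lavin & Gray, 2016).
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])
Bt = np.array([[1.0, 0.0, -1.0, 0.0],
               [0.0, 1.0, 1.0, 0.0],
               [0.0, -1.0, 1.0, 0.0],
               [0.0, 1.0, 0.0, -1.0]])
At = np.array([[1.0, 1.0, 1.0, 0.0],
               [0.0, 1.0, -1.0, -1.0]])

def fake_quant_int8(x):
    # Symmetric per-tensor 8-bit fake quantization: an illustrative
    # scheme, not necessarily the scaling-factor placement of the paper.
    scale = np.abs(x).max() / 127.0 + 1e-12
    return np.clip(np.round(x / scale), -128, 127) * scale

def quantized_winograd_tile(d, g):
    # d: 4x4 input tile, g: 3x3 filter -> 2x2 output tile.
    U = fake_quant_int8(G @ g @ G.T)    # (1) weight transform, 8-bit
    V = fake_quant_int8(Bt @ d @ Bt.T)  # (1) activation transform, 8-bit
    M = fake_quant_int8(U * V)          # (2) element-wise product, 8-bit
    return At @ M @ At.T                # (3) inverse transform to spatial domain

# Sanity check against direct 3x3 correlation on one tile.
rng = np.random.default_rng(0)
d, g = rng.standard_normal((4, 4)), rng.standard_normal((3, 3))
direct = np.array([[np.sum(d[i:i + 3, j:j + 3] * g) for j in range(2)]
                   for i in range(2)])
print("max abs error vs. direct conv:",
      np.max(np.abs(quantized_winograd_tile(d, g) - direct)))
```

The fake-quantization helper keeps values in floating point while emulating 8-bit rounding and clipping, which is the usual way quantization-aware training exposes quantization error to the optimizer.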
ISBN: 979-8-3503-2348-1
Files in this record:

File: WinoTrain_Winograd-Aware_Training_for_Accurate_Full_8-bit_Convolution_Acceleration.pdf
Availability: not available
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 1.33 MB
Format: Adobe PDF

File: _Accept__WinoTrain__Winograd_Aware_Training_for_Accurate_Full_8_bit_Convolution_Acceleration.pdf
Availability: not available
Description: post-print
Type: 2. Post-print / Author's Accepted Manuscript
License: Non-public - Private/restricted access
Size: 629.36 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2982761