SwiftTron: An Efficient Hardware Accelerator for Quantized Transformers / Marchisio, Alberto; Dura, Davide; Capra, Maurizio; Martina, Maurizio; Masera, Guido; Shafique, Muhammad. - (2023), pp. 1-9. (Paper presented at the International Joint Conference on Neural Networks (IJCNN), held in Gold Coast, Australia, 18-23 June 2023) [DOI: 10.1109/ijcnn54540.2023.10191521].

SwiftTron: An Efficient Hardware Accelerator for Quantized Transformers

Martina, Maurizio; Masera, Guido
2023

Abstract

Transformers' compute-intensive operations pose enormous challenges for their deployment in resource-constrained EdgeAI / tinyML devices. As an established neural network compression technique, quantization reduces the computational and memory resources required of the hardware. In particular, fixed-point quantization is desirable because it allows the computations to be carried out with lightweight blocks of the underlying hardware, such as adders and multipliers. However, deploying fully-quantized Transformers on existing general-purpose hardware, generic AI accelerators, or specialized architectures for Transformers with floating-point units might be infeasible and/or inefficient. To address this, we propose SwiftTron, an efficient specialized hardware accelerator designed for Quantized Transformers. SwiftTron supports the execution of different types of Transformer operations (such as Attention, Softmax, GELU, and Layer Normalization) and accounts for diverse scaling factors to perform correct computations. We synthesize the complete SwiftTron architecture in a 65 nm CMOS technology using the ASIC design flow. Our accelerator executes the RoBERTa-base model in 1.83 ns, while consuming 33.64 mW of power and occupying an area of 273 mm². To ease reproducibility, the RTL of our SwiftTron architecture is released at https://github.com/albertomarchisio/SwiftTron.
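
As a concrete illustration of why scaling factors matter for correct integer-only computation, the following is a minimal Python/NumPy sketch of symmetric fixed-point quantization applied to a matrix multiply. It is a sketch under our own assumptions (all function names are illustrative), not code taken from the SwiftTron RTL, which implements these operators in hardware.

import numpy as np

def quantize(x, n_bits=8):
    # Symmetric per-tensor quantization (illustrative, not SwiftTron's scheme):
    # real value ~= scale * integer value.
    qmax = 2 ** (n_bits - 1) - 1                       # e.g. 127 for int8
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def int_matmul_dequant(qa, sa, qb, sb):
    # Integer matrix multiply with int32 accumulation: only lightweight
    # adders/multipliers are needed. The product of the two scaling factors
    # restores the accumulated result to the real-valued domain.
    acc = qa @ qb
    return acc.astype(np.float32) * (sa * sb)

# Usage: the integer path closely tracks the floating-point reference.
a = np.random.randn(4, 8).astype(np.float32)
b = np.random.randn(8, 4).astype(np.float32)
qa, sa = quantize(a)
qb, sb = quantize(b)
print(np.max(np.abs(a @ b - int_matmul_dequant(qa, sa, qb, sb))))  # small quantization error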
ISBN: 978-1-6654-8867-9
Files in this record:

SwiftTron_An_Efficient_Hardware_Accelerator_for_Quantized_Transformers.pdf
Access: restricted (copy available on request)
Description: publisher's version
Type: 2a Post-print publisher's version / Version of Record
License: Non-public - Private/restricted access
Size: 3.1 MB
Format: Adobe PDF

2304.03986.pdf
Access: open
Description: author's version
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 1.6 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2987506