Compressed Latent Replays for Lightweight Continual Learning on Spiking Neural Networks / Dequino, Alberto; Carpegna, Alessio; Nadalini, Davide; Savino, Alessandro; Benini, Luca; Di Carlo, Stefano; Conti, Francesco. - ELECTRONIC. - 1:(2024), pp. 240-245. (Paper presented at the 2024 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), held in Knoxville, TN (USA), 01-03 July 2024) [10.1109/isvlsi61997.2024.00052].

Compressed Latent Replays for Lightweight Continual Learning on Spiking Neural Networks

Dequino, Alberto; Carpegna, Alessio; Nadalini, Davide; Savino, Alessandro; Di Carlo, Stefano
2024

Abstract

Rehearsal-based Continual Learning (CL) has been intensely investigated in Deep Neural Networks (DNNs); however, its application in Spiking Neural Networks (SNNs) has not been explored in depth. In this paper, we introduce the first memory-efficient implementation of Latent Replay (LR)-based CL for SNNs, designed to integrate seamlessly with resource-constrained devices. LRs combine new samples with latent representations of previously learned data to mitigate forgetting. Experiments on the Heidelberg SHD dataset with Sample-Incremental and Class-Incremental tasks reach a Top-1 accuracy of 92.5% and 92%, respectively, without forgetting the previously learned information. Furthermore, we minimize the LRs' memory requirements by applying a time-domain compression, reducing them by two orders of magnitude with respect to a naïve rehearsal setup, with a maximum accuracy drop of 4%. On a Multi-Class-Incremental task, our SNN learns 10 new classes from an initial set of 10, reaching a Top-1 accuracy of 78.4% on the full test set. To encourage research in this field, we release the code related to our experiments as open-source: https://github.com/Dequino/Spiking-Compressed-Continual-Learning.
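The following is a minimal, hypothetical sketch of the two mechanisms the abstract refers to: a latent-replay buffer that mixes latent representations of previously learned data with new samples, and time-domain compression of the stored spike trains. All names, the binning scheme, and the buffer API here are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Illustrative sketch only: a latent-replay buffer with time-binned spike trains.
# Names and parameters (compress_time, LatentReplayBuffer, bin_size) are hypothetical.
import numpy as np

def compress_time(spikes: np.ndarray, bin_size: int) -> np.ndarray:
    """Sum binary spikes over fixed-size time bins: (T, N) -> (T // bin_size, N).

    Storing binned spike counts instead of the raw binary train is one way to
    shrink the replay-memory footprint, here by a factor of `bin_size`.
    """
    T, N = spikes.shape
    T_c = T // bin_size
    return spikes[: T_c * bin_size].reshape(T_c, bin_size, N).sum(axis=1)

class LatentReplayBuffer:
    """Stores compressed latent activations of previously learned data."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.latents, self.labels = [], []

    def add(self, latent: np.ndarray, label: int, bin_size: int = 10):
        # Keep a bounded set of compressed latent representations.
        if len(self.latents) < self.capacity:
            self.latents.append(compress_time(latent, bin_size))
            self.labels.append(label)

    def sample(self, k: int):
        # Draw stored latents to mix with new samples during rehearsal.
        idx = np.random.choice(len(self.latents), size=k, replace=False)
        return [self.latents[i] for i in idx], [self.labels[i] for i in idx]

# Toy usage: during continual training, each mini-batch of new data would be
# concatenated at the latent layer with a draw from the buffer, so old classes
# are rehearsed from their compressed latent representations only.
buffer = LatentReplayBuffer(capacity=256)
old_latent = (np.random.rand(100, 64) < 0.1).astype(np.float32)  # toy (T=100, N=64) spike train
buffer.add(old_latent, label=3)
replay_latents, replay_labels = buffer.sample(k=1)
print(replay_latents[0].shape)  # (10, 64) after binning with bin_size=10
```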
ISBN: 979-8-3503-5411-9
Files in this product:

Compressed_Latent_Replays_for_Lightweight_Continual_Learning_on_Spiking_Neural_Networks.pdf
Access: restricted
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 894.29 kB
Format: Adobe PDF

Continual_Spiking_Prosciutti_Revival__ISVLSI_.pdf
Access: open
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 2.11 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2992906