
On the Fault Tolerance of Self-Supervised Training in Convolutional Neural Networks / Milazzo, Rosario; De Marco, Vincenzo; De Sio, Corrado; Fosson, Sophie; Morra, Lia; Sterpone, Luca. - ELECTRONIC. - (2024), pp. 110-115. (Paper presented at the 27th International Symposium on Design and Diagnostics of Electronic Circuits and Systems, held in Kielce (Poland), 3-5 April 2024) [10.1109/DDECS60919.2024.10508923].

On the Fault Tolerance of Self-Supervised Training in Convolutional Neural Networks

Milazzo, Rosario; De Marco, Vincenzo; De Sio, Corrado; Fosson, Sophie; Morra, Lia; Sterpone, Luca
2024

Abstract

Deep neural networks (DNNs) are increasingly used in critical applications from healthcare to autonomous driving. However, their predictions have been shown to degrade in the presence of transient hardware faults, leading to potentially catastrophic and unpredictable errors. Consequently, several techniques have been proposed to increase the fault tolerance of DNNs by modifying network structures and/or training procedures, thereby reducing the need for costly hardware redundancy. There are, however, design or training choices whose impact on fault propagation has been overlooked in the literature. In particular, self-supervised learning (SSL), as a pre-training technique, has been shown to improve the robustness of the learned features, resulting in better performance in downstream tasks. This study investigates the fault tolerance of several SSL techniques on image classification benchmarks, including several related to Earth Observation. Experimental results suggest that SSL pre-training, alone or in combination with fault mitigation techniques, generally improves DNNs' fault tolerance, although the performance gap varies among datasets and SSL techniques.
979-8-3503-5934-3
Files in this item:

2024050425.pdf
Open access
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 228.32 kB
Format: Adobe PDF

On_the_Fault_Tolerance_of_Self-Supervised_Training_in_Convolutional_Neural_Networks.pdf
Restricted access
Description: Final version
Type: 2a. Post-print, editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 285.42 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2986869