
On the Fault Tolerance of Self-Supervised Training in Convolutional Neural Networks / Milazzo, Rosario; De Marco, Vincenzo; De Sio, Corrado; Fosson, Sophie; Morra, Lia; Sterpone, Luca. - (In press). (Paper presented at the 27th International Symposium on Design and Diagnostics of Electronic Circuits and Systems, held in Kielce (Poland), 3-5 April 2024).

On the Fault Tolerance of Self-Supervised Training in Convolutional Neural Networks

Milazzo, Rosario; De Marco, Vincenzo; De Sio, Corrado; Fosson, Sophie; Morra, Lia; Sterpone, Luca
In press

Abstract

Deep neural networks (DNNs) are increasingly used in critical applications, from healthcare to autonomous driving. However, their predictions have been shown to degrade in the presence of transient hardware faults, leading to potentially catastrophic and unpredictable errors. Consequently, several techniques have been proposed to increase the fault tolerance of DNNs by modifying network structures and/or training procedures, thereby reducing the need for costly hardware redundancy. There are, however, design or training choices whose impact on fault propagation has been overlooked in the literature. In particular, self-supervised learning (SSL), as a pre-training technique, has been shown to improve the robustness of the learned features, resulting in better performance in downstream tasks. This study investigates the fault tolerance of several SSL techniques on image classification benchmarks, including several related to Earth Observation. Experimental results suggest that SSL pretraining, alone or in combination with fault mitigation techniques, generally improves DNNs' fault tolerance, although the performance gap varies among datasets and SSL techniques.
In press
Files in this item:

File: 2024050425.pdf (not available)
Type: 2. Post-print / Author's Accepted Manuscript
License: Non-public - Private/restricted access
Size: 228.32 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2986869