Milazzo, Rosario; Fosson, Sophie M.; Morra, Lia; Sterpone, Luca. "Self-supervised pretraining and quantization for fault tolerant neural networks: friend or foe?" IEEE Access, vol. 13, 2025, pp. 75546-75562. ISSN 2169-3536. DOI: 10.1109/access.2025.3564834.
Self-supervised pretraining and quantization for fault tolerant neural networks: friend or foe?
Milazzo, Rosario; Fosson, Sophie M.; Morra, Lia; Sterpone, Luca
2025
Abstract
Deep neural networks (DNNs) are increasingly being applied in critical domains such as healthcare and autonomous driving. However, their predictive capabilities can degrade in the presence of transient hardware faults, which can lead to potentially catastrophic and unpredictable errors. Consequently, various techniques have been proposed to enhance DNN fault tolerance by modifying the network structure or training procedure, thereby reducing the need for costly hardware redundancy. However, the impact of some design and training choices on fault propagation has been overlooked in the literature. Specifically, self-supervised learning (SSL) as a pretraining technique has been shown to enhance the robustness of learned features, resulting in improved performance on downstream tasks. This study investigates the error tolerance of different DNN architectures and SSL techniques in image classification and segmentation tasks, including those relevant to Earth Observation. Experimental results suggest that SSL pretraining, whether used alone or in combination with error mitigation techniques, generally enhances DNN fault tolerance. We complement these findings with an in-depth analysis of the fault tolerance of quantized networks. In this context, the use of standard SSL techniques leads to a decrease in accuracy. However, this issue can be partially addressed by employing methods that incorporate the quantization error into the loss function during the pretraining phase.
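To make the two key ingredients mentioned in the abstract concrete, the sketch below gives a minimal, illustrative PyTorch example of (i) emulating a transient hardware fault by flipping a single bit of a float32 weight and (ii) adding a quantization-error term to a pretraining loss. This is a sketch under our own assumptions, not the paper's actual setup: the helper names (flip_random_bit, fake_quantize, quantization_error_penalty), the 8-bit symmetric quantizer, and the weighting factor lambda_q are hypothetical choices made here for illustration.

```python
import torch

def flip_random_bit(weight, generator=None):
    """Emulate a transient hardware fault by flipping one random bit of one
    randomly chosen float32 weight (sign bit excluded for simplicity).
    Assumes a contiguous weight tensor; modifies it in place."""
    flat = weight.data.view(-1)
    idx = int(torch.randint(flat.numel(), (1,), generator=generator))
    bit = int(torch.randint(0, 31, (1,), generator=generator))
    as_int = flat[idx:idx + 1].view(torch.int32)   # reinterpret the float's bits
    flat[idx:idx + 1] = (as_int ^ (1 << bit)).view(torch.float32)

def fake_quantize(w, num_bits=8):
    """Symmetric uniform quantization of a weight tensor (illustrative only)."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

def quantization_error_penalty(model, num_bits=8):
    """Mean squared difference between full-precision and quantized weights,
    intended to be added to the self-supervised pretraining objective."""
    penalty = 0.0
    for p in model.parameters():
        if p.dim() > 1:                            # weight matrices/kernels only
            penalty = penalty + torch.mean((p - fake_quantize(p, num_bits)) ** 2)
    return penalty

# Hypothetical use during SSL pretraining: combine the self-supervised loss
# (e.g. a contrastive objective) with the quantization-error term:
#   loss = ssl_loss + lambda_q * quantization_error_penalty(encoder)
# During a fault-injection campaign, corrupt a copy of the trained model:
#   with torch.no_grad():
#       flip_random_bit(model.layer.weight)
```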
File | Size | Format
---|---|---
Self-Supervised_Pretraining_and_Quantization_for_Fault_Tolerant_Neural_Networks_Friend_or_Foe.pdf (open access; type: 2a post-print, publisher's version / Version of Record; license: Creative Commons) | 5.43 MB | Adobe PDF
https://hdl.handle.net/11583/2999699