
Investigating on Gradient Regularization for Testing Neural Networks / Bellarmino, Nicolo; Bosio, Alberto; Cantoro, Riccardo; Ruospo, Annachiara; Sanchez, Ernesto; Squillero, Giovanni. - ELECTRONIC. - (2024). (Paper presented at the 10th International Conference on Machine Learning, Optimization and Data Science (LOD 2024), held at Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto), Tuscany, Italy, September 22-25, 2024.)

Investigating on Gradient Regularization for Testing Neural Networks

Bellarmino, Nicolo; Bosio, Alberto; Cantoro, Riccardo; Ruospo, Annachiara; Sanchez, Ernesto; Squillero, Giovanni
2024

Abstract

Convolutional Neural Networks (CNNs) have become ubiquitous in diverse applications, including safety-critical domains such as autonomous driving, where reliability is crucial. CNN reliability can be jeopardized by hardware faults occurring during inference, with potentially severe consequences. In recent years, gradient regularization has garnered attention as a technique that improves generalization and robustness to Gaussian noise injected into the parameters of neural networks, but its effect on fault tolerance has not yet been studied. This paper analyzes the influence of gradient regularization on the reliability of CNNs for classification tasks in the presence of random hardware faults, exploring its impact on network performance and robustness. Our experiments simulate permanent stuck-at faults through statistical fault injection and assess the reliability of CNNs trained with and without gradient regularization. The results show that regularization reduces the fault-masking ability of neural networks, paving the way for efficient in-field fault detection techniques aimed at unveiling permanent faults: specifically, it systematically reduces the percentage of masked faults by up to 15% while preserving high prediction accuracy.
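The fault model at the core of these experiments can be illustrated with a short sketch (the function name and the float32 bit-level fault model below are illustrative assumptions, not the paper's actual injection framework): a permanent stuck-at fault forces a single bit of a stored weight to a constant 0 or 1, and the fault is "masked" when the faulty value leaves the network's prediction unchanged — for instance, when the targeted bit already holds the stuck value.

```python
import struct

def inject_stuck_at(weight: float, bit: int, stuck_value: int) -> float:
    """Force one bit of a float32 weight to a constant 0 or 1,
    modelling a permanent stuck-at fault in the storage cell.
    (Hypothetical helper for illustration only.)"""
    assert 0 <= bit <= 31 and stuck_value in (0, 1)
    # Reinterpret the float32 as its 32-bit integer representation.
    (bits,) = struct.unpack("<I", struct.pack("<f", weight))
    if stuck_value == 1:
        bits |= 1 << bit                   # stuck-at-1: force the bit to 1
    else:
        bits &= ~(1 << bit) & 0xFFFFFFFF   # stuck-at-0: force the bit to 0
    (faulty,) = struct.unpack("<f", struct.pack("<I", bits))
    return faulty

# A stuck-at-0 fault on the sign bit (bit 31) flips -0.5 to 0.5.
print(inject_stuck_at(-0.5, 31, 0))  # -> 0.5
# A stuck-at-0 fault on mantissa bit 0 of 1.0 changes nothing:
# the bit is already 0, so the fault is masked at the weight level.
print(inject_stuck_at(1.0, 0, 0))  # -> 1.0
```

A statistical fault-injection campaign repeats this over a random sample of (weight, bit, stuck value) triples and compares the faulty network's predictions against the fault-free ones; the paper's finding is that networks trained with a gradient-norm penalty added to the loss mask fewer of these injected faults, making permanent faults easier to expose with in-field tests.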
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2991465
Notice: the displayed data have not been validated by the university.