Mind the Scaling Factors: Resilience Analysis of Quantized Adversarially Robust CNNs / Fasfous, Nael; Frickenstein, Lukas; Neumeier, Michael; Vemparala, Manoj Rohit; Frickenstein, Alexander; Valpreda, Emanuele; Martina, Maurizio; Stechele, Walter. - ELECTRONIC. - (2022), pp. 706-711. (Paper presented at the 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), held in Antwerp, Belgium, 14-23 March 2022) [10.23919/DATE54114.2022.9774686].

Mind the Scaling Factors: Resilience Analysis of Quantized Adversarially Robust CNNs

Valpreda, Emanuele; Martina, Maurizio
2022

Abstract

As more deep learning algorithms enter safety-critical application domains, the importance of analyzing their resilience against hardware faults cannot be overstated. Most existing works focus on bit-flips in memory, fewer focus on compute errors, and almost none study the effect of hardware faults on adversarially trained convolutional neural networks (CNNs). In this work, we show that adversarially trained CNNs are more susceptible to failure from hardware errors than vanilla-trained models. We identify large differences between the quantization scaling factors of CNNs that are resilient to hardware faults and those that are not. As adversarially trained CNNs learn robustness against input attack perturbations, their internal weight and activation distributions open a backdoor for injecting large-magnitude hardware faults. We propose a simple weight-decay remedy for adversarially trained models that maintains adversarial robustness and hardware resilience in the same CNN. We improve the fault resilience of an adversarially trained ResNet56 by 25% for large-scale bit-flip benchmarks on activation data while gaining slightly improved accuracy and adversarial robustness.
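To make the core mechanism concrete, here is a minimal, self-contained sketch (not the authors' fault-injection framework; the scaling-factor values are invented for illustration) of why a single bit-flip in a quantized activation causes a real-valued error proportional to the tensor's quantization scaling factor:

```python
# Minimal sketch: the dequantized error caused by one bit-flip in an int8
# activation is scale * 2^bit, so larger scaling factors mean larger faults.

def quantize(x, scale, bits=8):
    """Symmetric per-tensor quantization of a real value to a signed integer."""
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return max(qmin, min(qmax, round(x / scale)))

def flip_bit(q, bit, bits=8):
    """Flip one bit in the two's-complement representation of a signed integer."""
    u = (q & ((1 << bits) - 1)) ^ (1 << bit)   # view as unsigned, flip the bit
    return u - (1 << bits) if u >= (1 << (bits - 1)) else u  # back to signed

# Hypothetical scaling factors: adversarially trained CNNs tend to have wider
# weight/activation distributions, which forces larger quantization scales.
for name, scale in [("vanilla", 0.02), ("adversarial", 0.15)]:
    q = quantize(0.5, scale)                   # quantize the same activation value
    faulty = flip_bit(q, bit=6)                # inject a fault in a high-order bit
    error = abs(faulty - q) * scale            # error in the dequantized domain
    print(f"{name:11s} scale={scale:.2f}  bit-flip error={error:.2f}")
```

Under these made-up scales, the identical bit-flip perturbs the dequantized activation by 9.60 in the adversarially trained case versus 1.28 in the vanilla case, which matches the intuition behind the weight-decay remedy: tighter weight and activation distributions mean smaller scaling factors and hence smaller worst-case fault magnitudes.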
ISBN: 978-3-9819263-6-1
Files in this record:
- 2022_DATE___MindScalingFactors_Accepted_.pdf — open access. Description: accepted pre-print version. Type: 2. Post-print / Author's Accepted Manuscript. License: public, all rights reserved. Adobe PDF, 357.43 kB.
- Mind_the_Scaling_Factors_Resilience_Analysis_of_Quantized_Adversarially_Robust_CNNs.pdf — not available (private/restricted access; a copy can be requested). Description: final version published in the proceedings. Type: 2a. Post-print, editorial version / Version of Record. Adobe PDF, 203.11 kB.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2964392