Gaussian Class-Conditional Training for Secure and Robust Deep Neural Networks / Bianchi, Tiziano; Migliorati, Andrea; Magli, Enrico (CNIT TECHNICAL REPORT ..). - In: SIGNAL PROCESSING AND LEARNING FOR NEXT GENERATION MULTIMEDIA / Bernardini R., Marcenaro L., Rinaldo R., Zanuttigh P. - Print. - [s.l.]: Texmat, 2024. - ISBN 9788894982800. - pp. 55-77 [DOI: 10.57620/CNIT-Report_13]

Gaussian Class-Conditional Training for Secure and Robust Deep Neural Networks

Tiziano Bianchi; Andrea Migliorati; Enrico Magli
2024

Abstract

This chapter presents a Gaussian Class-Conditional (GCC) training strategy for deep neural networks. The approach is based on a novel loss that maps the input data onto Gaussian target distributions in the latent space, where the parameters of the target distributions can be optimized for the specific task. For multiclass classification, the mean values of the learned distributions are placed on the vertices of a simplex so that each class is at the same distance from every other class. For metric learning, the distances between similar and dissimilar instance pairs are mapped onto distributions with well-separated means. The proposed strategy offers several advantages over conventional training. First, the optimal decision surface in the latent space is always a hyperplane, yielding a simple and interpretable decision rule. Second, the regularization of the latent space enforces high inter-class separation and low intra-class spread, minimizing the presence of short paths toward neighboring decision regions. The GCC training strategy is applied to two different multimedia problems. In image classification, GCC training provides both improved accuracy and robustness against adversarial perturbations, outperforming models trained with conventional cross-entropy loss and adversarial training. In biometric verification, GCC training yields lower error rates than other state-of-the-art approaches, even on challenging and unconstrained datasets.
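
To make the idea above more concrete, the following is a minimal sketch in PyTorch-style Python, not the chapter's implementation: it places class centroids on the vertices of a regular simplex, so every class mean is equidistant from every other, and scores latent features with an isotropic-Gaussian negative log-likelihood around their class centroid. The function names, the fixed-variance loss form, and the zero-padded simplex embedding are assumptions made for illustration only.

import torch

def simplex_centroids(num_classes: int, latent_dim: int, scale: float = 1.0) -> torch.Tensor:
    """Equidistant class centroids: simplex vertices zero-padded into R^latent_dim (illustrative)."""
    # This simple zero-padding embedding assumes latent_dim >= num_classes.
    assert latent_dim >= num_classes, "latent space assumed large enough for this embedding"
    eye = torch.eye(num_classes)
    vertices = eye - eye.mean(dim=0, keepdim=True)   # all pairwise distances equal sqrt(2)
    centroids = torch.zeros(num_classes, latent_dim)
    centroids[:, :num_classes] = vertices            # pad with zeros into the latent space
    return scale * centroids

def gcc_style_loss(z: torch.Tensor, labels: torch.Tensor,
                   centroids: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Isotropic-Gaussian negative log-likelihood (up to a constant) of latent codes z
    with respect to the target centroid of each sample's class (assumed loss form)."""
    target = centroids[labels]                        # (batch, latent_dim)
    return ((z - target) ** 2).sum(dim=1).div(2.0 * sigma ** 2).mean()

# Hypothetical usage: z would normally come from the network's penultimate layer.
num_classes, latent_dim = 10, 64
mus = simplex_centroids(num_classes, latent_dim, scale=3.0)
z = torch.randn(32, latent_dim)                       # dummy latent batch
y = torch.randint(0, num_classes, (32,))
print(gcc_style_loss(z, y, mus).item())

Because the centroids are equidistant and the assumed per-class Gaussians share the same isotropic covariance, assigning a sample to its nearest centroid yields linear (hyperplane) decision boundaries, which is the property highlighted in the abstract.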
Year: 2024
ISBN: 9788894982800
Published in: SIGNAL PROCESSING AND LEARNING FOR NEXT GENERATION MULTIMEDIA

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2992022