R-CONV: An Analytical Approach for Efficient Data Reconstruction via Convolutional Gradients / Eltaras, Tamer Ahmed; Malluhi, Qutaibah; Savino, Alessandro; Di Carlo, Stefano; Qayyum, Adnan. - ELECTRONIC. - 15440:(2024), pp. 271-285. (Paper presented at the 25th International Conference on Web Information Systems Engineering, WISE 2024, held in Doha, Qatar, December 2–5, 2024) [10.1007/978-981-96-0576-7_21].
R-CONV: An Analytical Approach for Efficient Data Reconstruction via Convolutional Gradients
Eltaras, Tamer Ahmed; Savino, Alessandro; Di Carlo, Stefano
2024
Abstract
In the effort to learn from extensive collections of distributed data, federated learning has emerged as a promising approach for preserving privacy by using a gradient-sharing mechanism instead of exchanging raw data. However, recent studies show that private training data can be leaked through various gradient attacks. While previous analytical attacks have successfully reconstructed input data from fully connected layers, their effectiveness diminishes when applied to convolutional layers. This paper introduces an advanced data leakage method to efficiently exploit convolutional layers' gradients. We present a surprising finding: even with non-fully invertible activation functions, such as ReLU, we can analytically reconstruct training samples from the gradients. To the best of our knowledge, this is the first analytical approach that successfully reconstructs convolutional layer inputs directly from the gradients, bypassing the need to reconstruct layers' outputs. Prior research has mainly concentrated on the weight constraints of convolutional layers, overlooking the significance of gradient constraints. Our findings demonstrate that existing analytical methods used to estimate the risk of gradient attacks lack accuracy. In some layers, attacks can be launched with less than 5% of the reported constraints.
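The prior result the abstract contrasts with — analytical reconstruction of a fully connected layer's input from shared gradients — can be illustrated with a minimal sketch. This is not the paper's R-CONV method for convolutional layers; it shows only the classic fully connected case, using hand-computed gradients of a squared-error loss: for a layer y = Wx + b, each row satisfies dL/dW_i = (dL/db_i) · x, so any row with a nonzero bias gradient reveals x exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)            # private input the attacker wants to recover
W = rng.normal(size=(4, 8))       # layer weights (known to the server)
b = rng.normal(size=4)            # layer biases
t = rng.normal(size=4)            # regression target

# Forward pass and squared-error loss: L = sum_j (W_j . x + b_j - t_j)^2
r = W @ x + b - t                 # residual per output unit
grad_b = 2 * r                    # dL/db_j
grad_W = 2 * np.outer(r, x)       # dL/dW_j = (dL/db_j) * x

# Analytical leak: divide a weight-gradient row by its bias gradient.
i = int(np.argmax(np.abs(grad_b)))
x_rec = grad_W[i] / grad_b[i]

print(np.allclose(x_rec, x))      # True: input fully reconstructed
```

With convolutional layers, weight sharing breaks this simple row-wise division, which is why the paper's contribution of reconstructing conv-layer inputs directly from gradient constraints is nontrivial.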
File | Access | Type | License | Size | Format
---|---|---|---|---|---
R_CONV__An_Analytical_Approach_for_Efficient_Data_Reconstruction_via_Convolutional_Gradients.pdf | Embargo until 27/11/2025 | 2. Post-print / Author's Accepted Manuscript | Public - All rights reserved | 2.79 MB | Adobe PDF
978-981-96-0576-7-paper.pdf | Restricted access | 2a Post-print editorial version / Version of Record | Non-public - Private/restricted access | 2.65 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2995382