
Automatic Layer Freezing for Communication Efficiency in Cross-Device Federated Learning / Malan, Erich; Peluso, Valentino; Calimera, Andrea; Macii, Enrico; Montuschi, Paolo. - In: IEEE INTERNET OF THINGS JOURNAL. - ISSN 2327-4662. - 11:4(2024), pp. 6072-6083. [10.1109/JIOT.2023.3309691]

Automatic Layer Freezing for Communication Efficiency in Cross-Device Federated Learning

Malan, Erich; Peluso, Valentino; Calimera, Andrea; Macii, Enrico; Montuschi, Paolo
2024

Abstract

Federated learning (FL) is a collaborative machine learning paradigm in which network-edge clients train a global model under the orchestration of a central server. Unlike traditional distributed learning, each participating client keeps its data locally, ensuring privacy protection by default. However, state-of-the-art FL implementations suffer from massive information exchange between clients and the server. This issue prevents their adoption in constrained environments, typical of the Internet of Things domain, where communication bandwidth and energy budgets are severely limited. To achieve higher efficiency at scale, the future of FL calls for additional optimizations that preserve high-quality learning while lowering communication pressure. To address this challenge, we propose automatic layer freezing (ALF), an embedded mechanism that gradually drops a growing portion of the model out of the training and synchronization phases of the learning loop, reducing the volume of data exchanged with the central server. ALF monitors the evolution of model updates and identifies layers that have reached a stable representation, where further weight updates would have minimal impact on accuracy. By freezing these layers, ALF achieves substantial savings in communication bandwidth and energy consumption. The proposed implementation of the ALF mechanism is compatible with any FL strategy, requires minimal integration effort, and does not interfere with existing optimizations. Extensive experiments with a representative set of FL strategies applied to two image classification tasks show that ALF improves the communication efficiency of the baseline FL implementations, achieving data-volume savings of up to 83.91% with no or marginal losses of accuracy.
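The abstract's core idea — detecting layers whose updates have plateaued and excluding them from training and synchronization — could be sketched, hypothetically, as follows. The function name `stable_layers`, the per-round update-norm history format, and the `threshold` and `patience` parameters are illustrative assumptions, not the paper's actual stability criterion.

```python
import numpy as np

def stable_layers(update_history, threshold=1e-3, patience=3):
    """Return indices of layers whose recent update magnitudes have
    plateaued, making them candidates for freezing.

    update_history: list of per-round lists, each holding the L2 norm
    of every layer's weight update (hypothetical bookkeeping; the
    paper's actual stability metric may differ).
    """
    rounds = np.asarray(update_history)  # shape: (rounds, layers)
    if rounds.shape[0] < patience + 1:
        return []  # not enough history to judge stability
    frozen = []
    for layer in range(rounds.shape[1]):
        recent = rounds[-(patience + 1):, layer]
        # A layer is "stable" if its update norm changed by less than
        # `threshold` between every pair of the last `patience` rounds.
        if np.all(np.abs(np.diff(recent)) < threshold):
            frozen.append(layer)
    return frozen
```

In an FL loop, layers flagged by such a check would be excluded from local training and omitted from the client-to-server payload, which is where the communication savings would come from.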
Files in this item:

Automatic_Layer_Freezing_for_Communication_Efficiency_in_Cross-Device_Federated_Learning.pdf
  Access: open access
  Type: 2. Post-print / Author's Accepted Manuscript
  License: Public - All rights reserved
  Size: 1.23 MB
  Format: Adobe PDF

Automatic_Layer_Freezing_for_Communication_Efficiency_in_Cross-Device_Federated_Learning.pdf
  Access: not available
  Type: 2a. Post-print, editorial version / Version of Record
  License: Non-public - Private/restricted access
  Size: 2.03 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2981803