Gradient-Aware Participation for Energy Reduction in Federated Learning with Extreme Label Skew / Malan, Erich; Peluso, Valentino; Calimera, Andrea; Macii, Enrico. - (2025). (International Joint Conference on Neural Networks (IJCNN), Rome (ITA), June 30 - July 5, 2025) [10.1109/IJCNN64981.2025.11227886].

Gradient-Aware Participation for Energy Reduction in Federated Learning with Extreme Label Skew

Malan, Erich; Peluso, Valentino; Calimera, Andrea; Macii, Enrico
2025

Abstract

Federated Learning (FL) enables distributed clients to collaboratively train a global classification model while preserving data privacy. A major challenge in FL is ensuring efficient training under limited computing and communication resources, especially when each client's dataset contains samples from only a restricted subset of the target classes, a problem known as extreme label skew. Under such conditions, client model updates are biased toward the local data distributions, slowing convergence and increasing energy consumption because additional training rounds are needed. This paper introduces FL with Gradient-Aware Participation (FedGAP), a novel strategy that reduces energy consumption while preserving model accuracy even under extreme label skew. FedGAP dynamically adjusts the cohort size, i.e., the number of clients participating in each training round, based on the evolution of the global model's pseudo-gradient. By detecting stagnant phases in which progress toward convergence stalls, FedGAP enlarges the cohort to escape suboptimal regions and accelerate learning, thereby minimizing wasted resources. Experiments on CIFAR-10 and CIFAR-100 show that FedGAP achieves up to 2.74× higher energy efficiency than state-of-the-art methods without compromising accuracy.
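This record reproduces only the abstract, so FedGAP's concrete stagnation criterion, thresholds, and cohort-size schedule are not available here. The Python sketch below is therefore only a minimal illustration of the general idea described above: it computes the round-level pseudo-gradient as the shift that aggregation applies to the global model, flags a round as stagnant when consecutive pseudo-gradients are poorly aligned (a hypothetical proxy for stalled progress under label skew), and grows or shrinks the cohort accordingly. The function names, the cosine-similarity test, and all constants are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def pseudo_gradient(w_before, w_after):
    # Server-side pseudo-gradient of one round: the shift that aggregating
    # the client updates applies to the global model.
    return w_before - w_after

def adjust_cohort(prev_pg, curr_pg, cohort,
                  min_cohort=4, max_cohort=64, stall_cos=0.2):
    # Hypothetical stagnation test (NOT the paper's rule): if consecutive
    # pseudo-gradients are poorly aligned -- clients pulling the model in
    # conflicting directions, as happens under extreme label skew -- the
    # round is flagged as stagnant and the cohort is enlarged; otherwise
    # it is shrunk back toward the minimum to save energy.
    if prev_pg is None:
        return cohort
    cos = float(np.dot(prev_pg, curr_pg) /
                (np.linalg.norm(prev_pg) * np.linalg.norm(curr_pg) + 1e-12))
    if cos < stall_cos:                       # progress has stalled
        return min(cohort * 2, max_cohort)
    return max(cohort // 2, min_cohort)

# Tiny synthetic round loop standing in for real federated training.
rng = np.random.default_rng(0)
w_global = rng.normal(size=100)
cohort, prev_pg = 8, None
for rnd in range(20):
    # Fake aggregation: each "client" nudges the model with a noisy update;
    # larger cohorts average out more of the client-specific noise.
    updates = [-0.05 * w_global + rng.normal(scale=0.5, size=100)
               for _ in range(cohort)]
    w_new = w_global + np.mean(updates, axis=0)
    pg = pseudo_gradient(w_global, w_new)
    cohort = adjust_cohort(prev_pg, pg, cohort)
    w_global, prev_pg = w_new, pg
    print(f"round {rnd:2d}: cohort size = {cohort}")
```

The design intuition this toy loop captures is the one stated in the abstract: when client updates largely cancel each other out, enlarging the cohort averages away client-specific bias and restores progress, whereas well-aligned rounds can proceed with fewer clients to save energy.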
ISBN: 979-8-3315-1042-8
Files in this product:

IJCNN_2025___Camera_Ready___Deadline_1_Maggio.pdf
  Access: open access
  Type: 2. Post-print / Author's Accepted Manuscript
  License: Public - All rights reserved
  Size: 396.11 kB
  Format: Adobe PDF

Gradient-Aware_Participation_for_Energy_Reduction_in_Federated_Learning_with_Extreme_Label_Skew.pdf
  Access: restricted access
  Type: 2a. Post-print publisher's version / Version of Record
  License: Non-public - Private/restricted access
  Size: 1.1 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3003445