Caldarola, Debora; Caputo, Barbara; Ciccone, Marco. Window-based Model Averaging Improves Generalization in Heterogeneous Federated Learning. (2023), pp. 2255-2263. Paper presented at the International Conference on Computer Vision Workshops 2023, held in Paris (FR), 02-06 October 2023. DOI: 10.1109/ICCVW60793.2023.00240.
Window-based Model Averaging Improves Generalization in Heterogeneous Federated Learning
Caldarola, Debora; Caputo, Barbara; Ciccone, Marco
2023
Abstract
Federated Learning (FL) aims to learn a global model from distributed users while protecting their privacy. However, when data are distributed heterogeneously, the learning process becomes noisy, unstable, and biased towards the data of the most recently seen clients, slowing down convergence. To address these issues and improve the robustness and generalization capabilities of the global model, we propose WIMA (Window-based Model Averaging). WIMA aggregates the global models from different rounds using a window-based approach, effectively capturing knowledge from multiple users and reducing the bias toward the most recent ones. Thanks to its windowed view of the rounds, WIMA can be applied from the initial stages of training. Importantly, our method introduces no additional communication or client-side computation overhead. Our experiments demonstrate the robustness of WIMA against distribution shifts and bad client sampling, resulting in smoother and more stable learning trends. Additionally, WIMA can be easily integrated with state-of-the-art algorithms. We extensively evaluate our approach on standard FL benchmarks, such as CIFAR10/100 and FEMNIST, demonstrating its effectiveness.
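The abstract describes WIMA as a server-side average of the global models produced in the last few rounds. A minimal sketch of that idea, assuming a fixed window size `W`, uniform averaging of model parameters, and a toy stand-in for the per-round FedAvg aggregation (the real paper's update rule and window size may differ):

```python
from collections import deque
import numpy as np

def wima_average(window):
    """Uniformly average the global models in the window, layer by layer."""
    return [np.mean(np.stack(layers), axis=0) for layers in zip(*window)]

# Illustrative server loop (hypothetical, not the paper's code): W is an
# assumed window size, and each round's new global model is faked; in a
# real FL system it would come from aggregating client updates (FedAvg).
W = 3
window = deque(maxlen=W)        # keeps only the last W round models
global_model = [np.zeros(4)]    # a toy "model" with a single layer
for rnd in range(5):
    global_model = [global_model[0] + 1.0]      # stand-in for one FL round
    window.append([layer.copy() for layer in global_model])
    wima_model = wima_average(window)           # model served / evaluated
```

Because the window only stores already-computed global models, this adds no communication or client-side computation, matching the overhead claim in the abstract.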
File | Access | Type | License | Size | Format
---|---|---|---|---|---
Caldarola_Window-Based_Model_Averaging_Improves_Generalization_in_Heterogeneous_Federated_Learning_ICCVW_2023_paper.pdf | Open access | 2. Post-print / Author's Accepted Manuscript | Public - All rights reserved | 360.64 kB | Adobe PDF
Window-based_Model_Averaging_Improves_Generalization_in_Heterogeneous_Federated_Learning.pdf | Restricted access | 2a. Post-print, publisher's version / Version of Record | Non-public - Private/restricted access | 864.48 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2981003