Under the federated learning paradigm, a set of nodes cooperatively trains a machine learning model with the help of a centralized server. Such a server is also tasked with assigning a weight to the information received from each node, and often with dropping nodes that are too slow from the learning process. Both decisions have a major impact on the resulting learning performance and can interfere with each other in counterintuitive ways. In this article, we focus on edge networking scenarios and investigate existing and novel approaches to such model-weighting and node-dropping decisions. Leveraging a set of real-world experiments, we find that popular, straightforward decision-making approaches may yield poor performance, and that considering the quality of data in addition to its quantity can substantially improve learning.
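As a rough illustration of the two server-side decisions discussed in the abstract, the sketch below aggregates node updates with weights that account for data quality as well as quantity, and skips nodes flagged as stragglers. The `quality_scores` and `keep_mask` inputs are assumptions introduced here for clarity; they do not correspond to a specific scheme from the article.

```python
import numpy as np

def aggregate(updates, num_samples, quality_scores, keep_mask):
    """Combine node updates into a new global model (illustrative only).

    updates       : list of np.ndarray, one parameter vector per node
    num_samples   : list of int, data quantity held by each node
    quality_scores: list of float in (0, 1], a hypothetical per-node
                    data-quality estimate (not defined in the article)
    keep_mask     : list of bool, False for nodes dropped as too slow
    """
    kept = [i for i, keep in enumerate(keep_mask) if keep]
    # Weight each surviving node by quantity *and* quality, instead of
    # quantity alone as in plain federated averaging.
    w = np.array([num_samples[i] * quality_scores[i] for i in kept], dtype=float)
    w /= w.sum()
    return sum(wi * updates[i] for wi, i in zip(w, kept))

# Toy example: three nodes, the third dropped as a straggler.
updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([5.0, 5.0])]
print(aggregate(updates,
                num_samples=[100, 400, 50],
                quality_scores=[0.9, 0.5, 0.8],
                keep_mask=[True, True, False]))
```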
Federated Learning at the Network Edge: When Not All Nodes are Created Equal / Malandrino, Francesco; Chiasserini, Carla Fabiana. - In: IEEE COMMUNICATIONS MAGAZINE. - ISSN 0163-6804. - Print. - 59:7(2021), pp. 68-73.
| Title: | Federated Learning at the Network Edge: When Not All Nodes are Created Equal |
|---|---|
| Authors: | Malandrino, Francesco; Chiasserini, Carla Fabiana |
| Publication date: | 2021 |
| Journal: | IEEE COMMUNICATIONS MAGAZINE |
| Appears in types: | 1.1 Journal article |
Files in this record:
| File | Description | Type | License |
|---|---|---|---|
| cameraready.pdf | Main article | 2. Post-print / Author's Accepted Manuscript | Public - All rights reserved |
| Chiasserini-Federated.pdf | | 2a Post-print editorial version / Version of Record | Not public - Private/restricted access |
http://hdl.handle.net/11583/2869655