
Malandrino, Francesco; Chiasserini, Carla Fabiana. "Towards Node Liability in Federated Learning: Computational Cost and Network Overhead." IEEE Communications Magazine (2021), pp. 72-77. ISSN 0163-6804. DOI: 10.1109/MCOM.011.2100231

Towards Node Liability in Federated Learning: Computational Cost and Network Overhead

Francesco Malandrino;Carla Fabiana Chiasserini
2021

Abstract

Many machine learning (ML) techniques suffer from the drawback that their output (e.g., a classification decision) is not clearly and intuitively connected to their input (e.g., an image). To cope with this issue, several explainable ML techniques have been proposed to, e.g., identify which pixels of an input image had the strongest influence on its classification. However, in distributed scenarios, it is often more important to connect decisions with the information used for model training and with the nodes supplying such information. To this end, in this paper we focus on federated learning and present a new methodology, named node liability in federated learning (NL-FL), which makes it possible to identify the source of the training information that most contributed to a given decision. After discussing NL-FL's cost in terms of extra computation, storage, and network latency, we demonstrate its usefulness in an edge-based scenario. We find that NL-FL is able to swiftly identify misbehaving nodes and to exclude them from the training process, thereby improving learning accuracy.
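
The abstract describes tracing model behaviour back to the nodes whose training data most contributed to it, and excluding misbehaving nodes from subsequent training rounds. As a rough, hypothetical illustration of that general idea (not the NL-FL procedure defined in the paper), the Python sketch below runs a toy federated-averaging loop in which the server retains each node's update and applies a leave-one-out validation check to flag and exclude a label-flipping node. The synthetic data, the scoring rule, the thresholds, and all names are assumptions made for this sketch only.

```python
# Hedged illustration only: a toy federated-averaging loop with per-node
# update tracking and exclusion of a suspicious node. The leave-one-out
# scoring rule and all names are assumptions; this is NOT the NL-FL method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression task: y = X @ w_true + noise
d, n_nodes, n_rounds = 5, 4, 30
w_true = rng.normal(size=d)

def make_data(n, poisoned=False):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    if poisoned:                      # the misbehaving node flips its labels
        y = -y
    return X, y

nodes = [make_data(100, poisoned=(k == 3)) for k in range(n_nodes)]
X_val, y_val = make_data(200)         # clean validation set held by the server

def local_grad(w, X, y):
    # Gradient of the mean-squared-error loss on one node's local data
    return 2 * X.T @ (X @ w - y) / len(y)

def val_loss(w):
    return float(np.mean((X_val @ w - y_val) ** 2))

w = np.zeros(d)
lr, active = 0.05, set(range(n_nodes))
for rnd in range(n_rounds):
    # Each active node computes a local update; the server stores them per node
    updates = {k: -lr * local_grad(w, *nodes[k]) for k in active}
    round_mean = np.mean(list(updates.values()), axis=0)
    w = w + round_mean                                 # FedAvg-style aggregation

    # Leave-one-out liability check: would dropping node k's update have helped?
    base = val_loss(w)
    for k in list(active):
        others = [u for j, u in updates.items() if j != k]
        if not others:
            continue
        w_wo_k = w - round_mean + np.mean(others, axis=0)
        if val_loss(w_wo_k) < base - 1e-3 and rnd > 2:
            active.discard(k)          # exclude the node blamed for the degradation
            print(f"round {rnd}: excluded node {k}")

print("final validation loss:", round(val_loss(w), 4))
```

The sketch only mirrors the node-exclusion outcome reported in the abstract; the paper itself assesses liability with respect to individual decisions and quantifies the extra computation, storage, and network latency that this entails.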
Files in this record:

commag_fl_accountable_final_v2.pdf
  Access: open access
  Description: Main article
  Type: 2. Post-print / Author's Accepted Manuscript
  License: Public - All rights reserved
  Size: 354.61 kB
  Format: Adobe PDF

Toward_Node_Liability_in_Federated_Learning_Computational_Cost_and_Network_Overhead.pdf
  Access: restricted access
  Description: Main article
  Type: 2a. Post-print, publisher's version / Version of Record
  License: Non-public - Private/restricted access
  Size: 736.62 kB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2911072