
Sacco, Alessio; Angi, Antonino; Marchetto, Guido; Esposito, Flavio. P4FL: An Architecture for Federating Learning with In-Network Processing. IEEE Access, vol. 11 (2023), pp. 103650-103658. ISSN 2169-3536. DOI: 10.1109/ACCESS.2023.3318109

P4FL: An Architecture for Federating Learning with In-Network Processing

Alessio Sacco; Antonino Angi; Guido Marchetto; Flavio Esposito
2023

Abstract

The unceasing development of Artificial Intelligence (AI) and Machine Learning (ML) techniques is accompanied by growing privacy concerns related to the training data. A relatively recent approach to partially address such concerns is Federated Learning (FL), a technique in which only the parameters of the trained neural network models are transferred rather than the data itself. Despite the benefits that FL may provide, such an approach can lead to synchronization issues (especially when applied in the context of numerous IoT devices), the network and the server may become bottlenecks, and the load may become unsustainable for some nodes. To solve this issue and reduce the traffic on the network, in this paper we propose P4FL, a novel FL architecture that uses network programmability to program P4 switches to compute intermediate aggregations. In particular, we defined a custom in-band protocol based on MPLS to carry the model parameters and adapted the P4 switch behavior to aggregate model gradients. We then evaluated P4FL in Mininet and verified that using network nodes for in-network model caching and gradient aggregation has two advantages: first, it alleviates the bottleneck effect of the central FL server; second, it accelerates the entire training process.
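The core idea in the abstract can be illustrated with a minimal sketch: programmable switches sum the gradient vectors of their attached clients in the data plane, so the central FL server merges only a few partial aggregates instead of one update per client. The function names, the two-switch topology, and the constant per-client gradients below are purely illustrative assumptions, not the paper's actual P4 implementation.

```python
# Hypothetical sketch of hierarchical in-network gradient aggregation.
# Each "switch" aggregates a subset of clients; the server merges the
# partial sums and averages over the total number of clients.

from typing import List


def client_gradients(num_clients: int, dim: int) -> List[List[float]]:
    # Stand-in for local training: client i reports the constant vector
    # [i+1, i+1, ...] so the aggregation result is easy to verify.
    return [[float(i + 1)] * dim for i in range(num_clients)]


def switch_aggregate(grads: List[List[float]]) -> List[float]:
    # What a programmable switch would do in the data plane:
    # element-wise sum of the gradient vectors passing through it.
    return [sum(vals) for vals in zip(*grads)]


def server_average(partials: List[List[float]], total_clients: int) -> List[float]:
    # The FL server merges the partial sums and divides by the number of
    # contributing clients to obtain the globally averaged gradient.
    summed = [sum(vals) for vals in zip(*partials)]
    return [v / total_clients for v in summed]


if __name__ == "__main__":
    grads = client_gradients(num_clients=4, dim=3)
    # Two switches, each aggregating two clients in-network: the server
    # receives 2 messages instead of 4.
    partial_a = switch_aggregate(grads[:2])   # clients 1 and 2
    partial_b = switch_aggregate(grads[2:])   # clients 3 and 4
    global_grad = server_average([partial_a, partial_b], total_clients=4)
    print(global_grad)  # [2.5, 2.5, 2.5]
```

The averaged result is identical to sending every gradient directly to the server; only the traffic pattern changes, which is why in-network aggregation can relieve the server bottleneck without altering the learning outcome.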
Files in this record:

File: P4FL_An_Architecture_for_Federating_Learning_With_In-Network_Processing.pdf
Access: open access
Typology: 2a Post-print editorial version / Version of Record
License: Creative Commons
Size: 1 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2982469