
Optimizing Model Pruning in Decentralized Learning Networks with DFL-Trim / Pinto, Andrea; Masci, Alessandro; Sacco, Alessio; Marchetto, Guido; Esposito, Flavio. - ELETTRONICO. - (2025), pp. 189-193. (11th IEEE International Conference on Network Softwarization, NetSoft 2025, Budapest (HUN), 23-27 June 2025) [10.1109/netsoft64993.2025.11080578].

Optimizing Model Pruning in Decentralized Learning Networks with DFL-Trim

Sacco, Alessio; Marchetto, Guido
2025

Abstract

In recent decades, applications in environmental sustainability, education, and housekeeping have become increasingly distributed and sophisticated, leveraging a wide range of devices to perform complex tasks. While a large number of agents can reduce computation time, managing these distributed systems presents significant challenges due to resource constraints such as power consumption and storage. To address this, the literature has explored various model compression techniques, such as pruning, to optimize performance in distributed environments. In this paper, we propose DFL-Trim, a solution for trimming models in Decentralized Federated Learning (DFL) that meets network constraints while maintaining satisfactory performance. We demonstrate how pruning can be implemented in decentralized settings, analyze its effect on bandwidth usage, and discuss the trade-offs between compression and model accuracy.
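To illustrate the kind of mechanism the abstract refers to, the sketch below shows unstructured magnitude pruning of a weight tensor before it is exchanged with peers: only the surviving nonzero values (plus their indices) would need to be transmitted, which is where the bandwidth saving comes from. This is a minimal, generic example using NumPy; the `magnitude_prune` function and its threshold rule are illustrative assumptions, not DFL-Trim's actual pruning criterion, which is described in the paper itself.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 5))          # a toy 20-parameter "layer"
pruned = magnitude_prune(w, sparsity=0.6)

# Only the nonzero entries of `pruned` would be serialized and sent to
# neighboring nodes, shrinking the per-round communication payload.
kept = np.count_nonzero(pruned)
print(f"kept {kept}/{w.size} weights")
```

In a decentralized setting, each node would apply such a step locally before gossiping its update, trading some accuracy for a smaller payload; the paper's evaluation quantifies exactly this trade-off.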
2025
979-8-3315-4345-7
Files in this record:

Optimizing_Model_Pruning_in_Decentralized_Learning_Networks_with_DFL-Trim.pdf
Access: restricted
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 320.94 kB
Format: Adobe PDF

DFL_Trim___NetSoft_2025__short_.pdf
Access: open
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 277.78 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3007591