
Achieving Machine Learning Dependability Through Model Switching and Compression / Malandrino, Francesco; Di Giacomo, Giuseppe; Levorato, Marco; Chiasserini, Carla Fabiana. - In: IEEE TRANSACTIONS ON MOBILE COMPUTING. - ISSN 1536-1233. - 25(2026), pp. 3889-3904. [10.1109/TMC.2025.3619560]

Achieving Machine Learning Dependability Through Model Switching and Compression

Giuseppe Di Giacomo; Carla Fabiana Chiasserini
2026

Abstract

Machine learning (ML) can often be distributed, owing to the need to harness more resources and/or to preserve privacy. Accordingly, distributed learning has received significant attention in the literature; however, most works focus on the expected learning quality (e.g., loss) attained and do not consider its distribution. It follows that ML models are not dependable and may fall short of the required performance in many real-world cases. In this work, we tackle this challenge and propose DepL, a framework attaining dependable learning orchestration. DepL efficiently makes joint, near-optimal decisions concerning (i) which data to use for learning, (ii) the ML models to use – chosen from a set of full-size models and compressed versions thereof – and when to switch from one model to another, and (iii) the clusters of physical nodes to use for the learning. DepL improves over previous works by guaranteeing that the learning quality target (e.g., a minimum loss) is achieved with a target probability, while minimizing the learning (e.g., energy) cost. DepL has provably low polynomial computational complexity and a constant competitive ratio. Further, experimental results using the CIFAR-10 and GTSRB datasets show that it consistently matches the optimum and outperforms state-of-the-art approaches (30% faster learning and 40–80% lower cost).
Files for this item:
Dependable_ML.pdf

open access

Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 2.3 MB
Format: Adobe PDF
Achieving_Machine_Learning_Dependability_Through_Model_Switching_and_Compression.pdf

restricted access

Type: 2a. Post-print, editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 2.69 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3003642