Achieving Machine Learning Dependability Through Model Switching and Compression / Malandrino, Francesco; Di Giacomo, Giuseppe; Levorato, Marco; Chiasserini, Carla Fabiana. - In: IEEE TRANSACTIONS ON MOBILE COMPUTING. - ISSN 1536-1233. - (2025).
Achieving Machine Learning Dependability Through Model Switching and Compression
Giuseppe Di Giacomo; Carla Fabiana Chiasserini
2025
Abstract
Machine learning (ML) can often be distributed, owing to the need to harness more resources and/or to preserve privacy. Accordingly, distributed learning has received significant attention in the literature; however, most works focus on the expected learning quality (e.g., loss) attained and do not consider its distribution. It follows that ML models are not dependable and may fall short of the required performance in many real-world cases. In this work, we tackle this challenge and propose DepL, a framework attaining dependable learning orchestration. DepL efficiently makes joint, near-optimal decisions concerning (i) which data to use for learning, (ii) the ML models to use – chosen from a set of full-size models and compressed versions thereof – and when to switch from one model to another, and (iii) the clusters of physical nodes to use for the learning. DepL improves over previous works by guaranteeing that the learning quality target (e.g., a minimum loss) is achieved with a target probability, while minimizing the learning (e.g., energy) cost. DepL has provably low polynomial computational complexity and a constant competitive ratio. Further, experimental results using the CIFAR-10 and GTSRB datasets show that it consistently matches the optimum and outperforms state-of-the-art approaches (30% faster learning and 40–80% lower cost).
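The core selection problem described in the abstract can be pictured as follows: pick the configuration (model variant, data subset, node cluster) that minimizes learning cost while keeping the probability of meeting the loss target above a reliability threshold. The sketch below is a toy, brute-force illustration of that constraint only; it is not DepL's actual algorithm (which is near-optimal with a constant competitive ratio), and all names, the Gaussian loss assumption, and the numbers are made up for illustration.

```python
from dataclasses import dataclass
from statistics import NormalDist

# Hypothetical configuration: an ML model variant (full-size or compressed),
# a fraction of the available data, and a cluster of physical nodes.
@dataclass
class Config:
    model: str            # e.g., "cnn-full" or "cnn-pruned" (illustrative names)
    data_fraction: float  # share of the dataset used for learning
    cluster: str          # which nodes perform the training
    cost: float           # learning cost (e.g., energy) of this configuration
    loss_mean: float      # expected loss after training
    loss_std: float       # spread of the loss across training runs

def prob_quality_met(cfg: Config, loss_target: float) -> float:
    """Probability that the achieved loss stays below the target,
    assuming (for illustration only) a Gaussian loss distribution."""
    return NormalDist(cfg.loss_mean, cfg.loss_std).cdf(loss_target)

def cheapest_dependable(configs, loss_target: float, reliability: float):
    """Toy exhaustive version of the selection problem:
    minimize cost subject to Pr[loss <= loss_target] >= reliability."""
    feasible = [c for c in configs if prob_quality_met(c, loss_target) >= reliability]
    return min(feasible, key=lambda c: c.cost) if feasible else None

if __name__ == "__main__":
    candidates = [
        Config("cnn-full",   1.0, "edge+cloud", cost=10.0, loss_mean=0.20, loss_std=0.02),
        Config("cnn-pruned", 0.5, "edge-only",  cost=3.0,  loss_mean=0.28, loss_std=0.05),
        Config("cnn-pruned", 1.0, "edge+cloud", cost=6.0,  loss_mean=0.24, loss_std=0.03),
    ]
    # Require the 0.30 loss target to be met with probability at least 0.95.
    print(cheapest_dependable(candidates, loss_target=0.30, reliability=0.95))
```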
| File | Access | Type | License | Size | Format | |
|---|---|---|---|---|---|---|
| Dependable_ML.pdf | Open access | 2. Post-print / Author's Accepted Manuscript | Public - All rights reserved | 2.3 MB | Adobe PDF | View/Open |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/3003642