Flexible Parallel Learning in Edge Scenarios: Communication, Computational and Energy Cost / Malandrino, Francesco; Chiasserini, Carla Fabiana. - ELECTRONIC. - (2022). (Paper presented at the IEEE PerCom Workshops - PeRConAI 2022 conference, held in Pisa (Italy), 21-25 March 2022) [10.1109/PerComWorkshops53856.2022.9767275].

Flexible Parallel Learning in Edge Scenarios: Communication, Computational and Energy Cost

Francesco Malandrino; Carla Fabiana Chiasserini
2022

Abstract

Traditionally, distributed machine learning takes the guise of (i) different nodes training the same model (as in federated learning), or (ii) one model being split among multiple nodes (as in distributed stochastic gradient descent). In this work, we highlight how fog- and IoT-based scenarios often require combining both approaches, and we present a framework for flexible parallel learning (FPL), achieving both data and model parallelism. Further, we investigate how different ways of distributing and parallelizing learning tasks across the participating nodes result in different computation, communication, and energy costs. Our experiments, carried out using state-of-the-art deep-network architectures and large-scale datasets, confirm that FPL allows for an excellent trade-off among computational (hence energy) cost, communication overhead, and learning performance.
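
The combination the abstract describes can be pictured with a toy example. The following is a minimal, self-contained NumPy sketch of the general technique, not the authors' FPL implementation (whose API the abstract does not expose): each of two model-parallel pipelines splits a two-layer linear model between a "node A" and a "node B", and the two pipelines train on disjoint data shards whose gradients are averaged, as in federated-style data parallelism. All names (pipeline_grads, the node roles, the dimensions) are illustrative assumptions.

```python
# Toy sketch of combined data + model parallelism (NumPy only).
# Hypothetical layout for illustration; not the paper's FPL framework.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out, lr = 8, 16, 4, 0.05

# Model parallelism: the two layers live on different nodes.
W1 = rng.normal(scale=0.3, size=(d_h, d_in))   # held by node A
W2 = rng.normal(scale=0.3, size=(d_out, d_h))  # held by node B

def pipeline_grads(x, y):
    """One model-parallel pipeline: node A computes the first layer,
    node B the second; activations and gradients cross the node link."""
    h = W1 @ x                    # node A; h is sent to node B
    y_hat = W2 @ h                # node B
    g = (y_hat - y) / x.shape[1]  # node B: dL/dy_hat for mean-squared error
    dW2 = g @ h.T                 # node B's local gradient
    dh = W2.T @ g                 # sent back to node A
    dW1 = dh @ x.T                # node A's local gradient
    loss = 0.5 * np.sum((y_hat - y) ** 2) / x.shape[1]
    return dW1, dW2, loss

# Data parallelism: two pipelines train on disjoint shards of the data
# and their gradients are averaged before each update.
T = rng.normal(size=(d_out, d_in))  # ground-truth linear map (synthetic data)
X = rng.normal(size=(d_in, 64))
Y = T @ X
shards = [(X[:, :32], Y[:, :32]), (X[:, 32:], Y[:, 32:])]

loss0 = np.mean([pipeline_grads(x, y)[2] for x, y in shards])
for step in range(500):
    grads = [pipeline_grads(x, y) for x, y in shards]
    W1 -= lr * sum(g[0] for g in grads) / len(grads)
    W2 -= lr * sum(g[1] for g in grads) / len(grads)
loss1 = np.mean([g[2] for g in grads])
print(f"mean shard loss: {loss0:.3f} -> {loss1:.3f}")
```

In this layout, the activation/gradient exchanges inside pipeline_grads stand in for the model-parallel communication cost, while the gradient averaging across shards stands in for the data-parallel one; the paper's trade-off study concerns how such choices affect computation, communication, and energy.
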
ISBN: 978-1-6654-1647-4
Files in this record:

- percom_v0_embedded.pdf (open access)
  Description: Main article
  Type: 2. Post-print / Author's Accepted Manuscript
  License: Public - All rights reserved
  Size: 533.6 kB (Adobe PDF)

- Flexible_Parallel_Learning_in_Edge_Scenarios_Communication_Computational_and_Energy_Cost.pdf (not available)
  Description: Main article
  Type: 2a. Post-print, editorial version / Version of Record
  License: Non-public - Private/restricted access
  Size: 1.22 MB (Adobe PDF)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2950174