
Traffic-aware DNN Inference Task Offloading in the Mobile Device-Edge Continuum / Chukhno, Olga; Singh, Gurtaj; Campolo, Claudia; Chiasserini, Carla Fabiana; Molinaro, Antonella. - (2025). (Paper presented at IEEE INFOCOM 2025 ICCN, held in London (UK) on 19 May 2025).

Traffic-aware DNN Inference Task Offloading in the Mobile Device-Edge Continuum

Carla Fabiana Chiasserini
2025

Abstract

The rapid growth of mobile devices and machine learning (ML)-based applications is driving a surge in data traffic. Even when only inference tasks are considered, a huge amount of data needs to be transferred across the network, e.g., large Deep Neural Network (DNN) models retrieved for on-device inference, or streams of input data sent from the device to the edge when the task is offloaded. To address the resulting potential network congestion, we formulate a novel optimization problem that decides where to execute streams of DNN inference tasks from multiple devices across the mobile device-edge continuum, so as to minimize the amount of exchanged data traffic while satisfying accuracy, latency, and battery constraints. The formulated problem also selects the model variant (in terms of size and accuracy) that best suits the placement decision (device or edge). Results, collected under a wide variety of settings, showcase the validity of our proposal and its superiority over the considered benchmark schemes, with bandwidth savings of up to 98%.
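The core trade-off the abstract describes — downloading a model for on-device inference versus streaming input data to the edge, while picking the model variant that meets an accuracy target — can be illustrated with a minimal sketch. This is not the paper's formulation; all variant names, sizes, and accuracies below are hypothetical, and latency/battery constraints are omitted for brevity:

```python
from itertools import product

# Hypothetical model variants: (name, model size in MB, accuracy).
VARIANTS = [("small", 10, 0.80), ("medium", 50, 0.88), ("large", 200, 0.93)]

def traffic(placement, variant_size_mb, stream_mb):
    # On-device: the model must be downloaded once.
    # At the edge: the input stream must be uploaded instead.
    return variant_size_mb if placement == "device" else stream_mb

def best_choice(stream_mb, min_accuracy):
    """Exhaustive search over (placement, variant) pairs that minimizes
    exchanged traffic subject to an accuracy constraint."""
    best = None
    for placement, (name, size, acc) in product(("device", "edge"), VARIANTS):
        if acc < min_accuracy:
            continue  # variant does not meet the accuracy requirement
        t = traffic(placement, size, stream_mb)
        if best is None or t < best[0]:
            best = (t, placement, name)
    return best
```

For a large input stream (e.g., 120 MB), downloading a mid-size model and running on-device wins; for a small stream, offloading to the edge becomes cheaper — which mirrors the traffic-aware placement decision the paper optimizes jointly across multiple devices.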
Files in this record:
File: 1571107312 paper.pdf (open access)
Type: 1. Preprint / submitted version [pre-review]
License: Public - All rights reserved
Size: 378.67 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2997387