TIDE: Task-Driven DNN Training and Splitting for Efficient Inference at the Mobile Edge / Malandrino, Francesco; De Veciana, Gustavo; Chiasserini, Carla Fabiana. - (2026). (IEEE ICMLCN 2026, Abu Dhabi (UAE), 30 March - 2 April 2026).

TIDE: Task-Driven DNN Training and Splitting for Efficient Inference at the Mobile Edge

Carla Fabiana Chiasserini
2026

Abstract

The growing demands of DNN-based inference at the mobile edge are driving the need for increasingly efficient execution. Such applications often require fast and high-quality outputs, which are hard to realize given the limited computational and communication capabilities at the edge. This paper tackles these issues by focusing on a DNN for the execution of tasks that are homogeneous in nature but heterogeneous in their domains. The key idea is to start with a parent DNN of interconnected computational elements (atoms) and strategically form a collection of task-specific DNNs suitable for distributed deployment. Such task-specific DNNs may include atoms of the parent DNN that are shared across tasks as well as atoms used by a single task. Ultimately, the aim is that they be smaller in size, hence a better match for edge resources, and achieve low-cost inference. We solve the problem of determining the best collection of task-specific DNNs through an algorithmic framework named TIDE. Experimental results show that TIDE decreases inference cost and time by 90% and 80%, respectively, relative to centralized approaches, and by over 60% and 70%, respectively, compared to the best benchmark.
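To make the notion of shared and task-unique atoms concrete, the following is a minimal illustrative sketch, not the TIDE algorithm itself: a parent DNN is modeled as a dependency graph of atoms, and each task-specific DNN is the transitive closure of atoms needed for that task's output. All atom and task names below are hypothetical.

```python
# Parent DNN as a dependency graph: each atom lists the atoms
# whose outputs it consumes. (Hypothetical structure for illustration.)
PARENT = {
    "stem":    [],
    "block_a": ["stem"],
    "block_b": ["stem"],
    "head_1":  ["block_a"],
    "head_2":  ["block_a", "block_b"],
}

def task_subnetwork(parent, output_atom):
    """Return the set of atoms needed to compute `output_atom`,
    i.e. its transitive dependency closure in the parent graph."""
    needed, stack = set(), [output_atom]
    while stack:
        atom = stack.pop()
        if atom not in needed:
            needed.add(atom)
            stack.extend(parent[atom])
    return needed

# Two tasks, homogeneous in nature but with different domains:
# each task-specific DNN is smaller than the parent, and the two
# share common atoms ("stem", "block_a") while "head_1", "block_b",
# and "head_2" are uniquely used.
task1 = task_subnetwork(PARENT, "head_1")  # {"stem", "block_a", "head_1"}
task2 = task_subnetwork(PARENT, "head_2")  # adds "block_b" and "head_2"
```

In this toy view, choosing which atoms each task-specific DNN reuses versus replicates is precisely the kind of trade-off the paper's framework optimizes.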
Files in this item:

TIDE__task_driven_multi_task_learning__infocom26_and_beyond__carla__gustavo__francesco__alessia_-3.pdf

Access: open access
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 379.37 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3007450