Distributed Context-Aware Resource Allocation for Dynamic Sensor Fusion in Edge Inference / Wu, Yashuo; Chiasserini, Carla Fabiana; Levorato, Marco. - (2025). (Paper presented at The 22nd IEEE International Conference on Mobile Ad-Hoc and Smart Systems (MASS 2025), held in Chicago (USA), October 6-8, 2025).

Distributed Context-Aware Resource Allocation for Dynamic Sensor Fusion in Edge Inference

Yashuo Wu; Carla Fabiana Chiasserini; Marco Levorato
2025

Abstract

Fusing multi-modal information, such as images and LiDAR scans, is instrumental in maximizing the performance of many computer vision tasks in next-generation systems and applications. However, supporting fusion demands considerable computing and communication resources, which are scarce in edge systems. This work addresses this challenge by maximizing resource efficiency in systems where mobile devices collect multi-modal sensor data and use dynamic multi-branched DNN models to adapt inference to the operating context. To tune the overall system response to the context (e.g., weather conditions), we propose a dual-scale control approach: centralized orchestration of spectrum resources, and distributed device-level control of the execution path of the dynamic DNN fusion models. The control agents are driven by a novel context-aware, game-theoretic decision-making method, named Context-Aware Network Slicing Auction (CANSA), which jointly optimizes DNN inference performance, network slicing, and energy consumption. The method performs this optimization by: (i) selecting the data and features that best fit the current context; (ii) deciding on the appropriate DNN model complexity, including the use of multi-modal sensor fusion for better data integration; and (iii) deploying these models on the most suitable nodes (local devices or edge servers). Results obtained with real-world multi-modal data show that CANSA surpasses conventional allocation methods by up to 52.3% in inference task success rate.
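The dual-scale control described in the abstract — a centralized auction over spectrum slices plus per-device selection of a DNN execution branch — can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the actual CANSA algorithm: the greedy utility-per-slice winner determination, the `Bid` fields, and the branch list of `(min_slices, accuracy)` pairs are all hypothetical stand-ins for the paper's context-aware utilities and game-theoretic mechanism.

```python
# Hedged sketch of auction-based slice allocation plus device-level branch
# selection. All names and rules here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Bid:
    device: str      # bidding mobile device (hypothetical identifier)
    slices: int      # spectrum slices requested
    utility: float   # context-dependent value of winning (assumed scalar)


def allocate_slices(bids, total_slices):
    """Centralized orchestrator: greedily grant bids by utility per slice
    until the spectrum budget runs out (a stand-in for the auction)."""
    allocation, remaining = {}, total_slices
    for bid in sorted(bids, key=lambda b: b.utility / b.slices, reverse=True):
        if bid.slices <= remaining:
            allocation[bid.device] = bid.slices
            remaining -= bid.slices
    return allocation


def choose_branch(granted_slices, branches):
    """Device-level control: pick the most accurate DNN branch whose
    bandwidth requirement fits the granted slices.
    branches: list of (min_slices, accuracy) pairs, assumed known."""
    feasible = [b for b in branches if b[0] <= granted_slices]
    return max(feasible, key=lambda b: b[1]) if feasible else None


bids = [Bid("car_A", 3, 0.9), Bid("car_B", 2, 0.8), Bid("drone_C", 4, 0.7)]
alloc = allocate_slices(bids, total_slices=6)
# car_B (0.40/slice) and car_A (0.30/slice) win; drone_C (0.175/slice) is skipped
branch = choose_branch(alloc.get("car_A", 0), [(1, 0.70), (3, 0.85), (5, 0.92)])
```

Here `car_A` receives 3 slices and therefore runs the mid-complexity branch `(3, 0.85)` rather than the full-fusion branch, mirroring (at toy scale) how context and granted bandwidth would steer the execution path of a dynamic fusion model.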
Files in this record:
Optimizing_Resource_Allocation_in_Multi_Modal_Systems-12.pdf
Open access
Type: 1. Preprint / submitted version [pre-review]
License: Public - All rights reserved
Size: 1.39 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3001847