Sacco, Alessio; Esposito, Flavio; Marchetto, Guido; Montuschi, Paolo. "A Self-Learning Strategy for Task Offloading in UAV Networks." IEEE Transactions on Vehicular Technology, vol. 71, no. 4, 2022, pp. 4301-4311. ISSN 1939-9359. DOI: 10.1109/TVT.2022.3144654
A Self-Learning Strategy for Task Offloading in UAV Networks
Alessio Sacco; Flavio Esposito; Guido Marchetto; Paolo Montuschi
2022
Abstract
The edge computing paradigm has opened new opportunities for IoT devices, which can now support novel applications involving heavy data processing. Typical examples of IoT devices are Unmanned Aerial Vehicles (UAVs), which are deployed for surveillance and environmental monitoring and are attracting increasing attention because of their ease of deployment. However, their limited capacity, e.g., battery, forces the design of edge-assisted solutions, where heavy tasks are offloaded to the edge cloud. To solve the problem of offloading tasks from a UAV to the closest edge node, many proposals have appeared, mainly based on a Reinforcement Learning (RL) formulation. While these solutions successfully learn how to reduce task completion time in the UAV context, some limitations emerge when these models are applied in real scenarios, given the memory-hungry nature of RL. To this end, we propose a simple yet effective formalization that still enables a learning process but reduces the required information and the training time. Our evaluation results confirm our hypothesis, showing a marked improvement over other RL-based strategies and deep learning-based solutions.

File | Description | Type | License | Size | Format
---|---|---|---|---|---
FINAL VERSION.pdf | Main article | 2. Post-print / Author's Accepted Manuscript | Open access (Public - all rights reserved) | 1.38 MB | Adobe PDF
A_Self-Learning_Strategy_for_Task_Offloading_in_UAV_Networks.pdf | Main article | 2a. Post-print, publisher's version / Version of Record | Restricted access (Non-public - private/restricted; copy on request) | 1.31 MB | Adobe PDF
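The abstract describes the general idea of learning an offloading policy that trades on-board execution against offloading to an edge server. As a purely illustrative sketch (not the paper's method), the following toy tabular Q-learning agent chooses between running a task locally on the UAV or offloading it; the state space, latency model, and all parameters are hypothetical.

```python
import random

# Toy illustration of an RL-style offloading policy (NOT the paper's
# algorithm): a tabular Q-learning agent chooses between executing a
# task locally on the UAV or offloading it to an edge server.
# The latency model, states, and hyperparameters are hypothetical.

ACTIONS = ("local", "offload")

def simulate_latency(state, action):
    """Hypothetical task-completion latency (lower is better)."""
    task_size, link_quality = state
    if action == "local":
        return task_size * 2.0          # slow on-board CPU
    # offloading pays a transfer cost that shrinks as link quality grows
    return task_size * 0.5 + (1.0 - link_quality) * 3.0

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {}                              # (state, action) -> estimated value
    for _ in range(episodes):
        # discretized state: (task size, link quality)
        state = (rng.choice([1, 2, 3]), rng.choice([0.2, 0.5, 0.9]))
        if rng.random() < epsilon:      # epsilon-greedy exploration
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        reward = -simulate_latency(state, action)   # minimize latency
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward - old)  # one-step update
    return q

def best_action(q, state):
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

q = train()
# Large task over a good link: offloading should be learned as best.
print(best_action(q, (3, 0.9)))
```

The sketch only conveys the shape of an RL formulation for offloading; the paper's contribution is precisely a lighter formalization that avoids the memory and training cost of such learned tables and deep policies.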
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2955120