RL-based Path Planning for Autonomous Aerial Vehicles in Unknown Environments / Battocletti, Gianpetro; Urban, Riccardo; Godio, Simone; Guglieri, Giorgio. - ELECTRONIC. - (2021). (Paper presented at the AIAA Aviation Forum, held as a virtual event, August 2-6, 2021) [10.2514/6.2021-3016].
RL-based Path Planning for Autonomous Aerial Vehicles in Unknown Environments
Simone Godio;Giorgio Guglieri
2021
Abstract
Unmanned Aerial Systems (UASs) have become a relevant sector of the aerospace industry. Over the last decade, the growing capabilities of Unmanned Aerial Vehicles (UAVs), paired with a drop in their price, have led to their use in many applications that exploit their versatility and efficiency. One challenge being addressed in this field is that of autonomous UAV fleets, i.e., the coordinated use of multiple UAVs to perform a common task. A particularly interesting application of UAV fleets is the exploration and mapping of unknown or critical environments. This topic raises a significant number of challenges, from the design of the policy used to coordinate the fleet to the path planning algorithm each UAV uses to move through the environment while exploring it. In this paper, a Reinforcement Learning (RL)-based approach for the cooperative exploration of unknown environments by a fleet of UAVs is presented. Two RL agents are trained to address this problem: the first coordinates the exploration, optimizing how the UAVs spread across the unknown area by assigning waypoints to them. The waypoints are placed so as to optimize the distribution of the fleet and maximize the efficiency of the exploration process. The second RL agent is a path planning algorithm used by each UAV to move through the environment toward the region indicated by the first agent. The combined use of the two agents allows the fleet to coordinate in executing the exploration task.
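The abstract describes a hierarchical two-agent architecture: a coordinator that assigns waypoints to the fleet, and a per-UAV path planner that drives each vehicle toward its waypoint. The sketch below is an illustration only, mimicking that loop on a toy grid world; `coordinator_policy` and `path_planner_step` are hypothetical stand-ins for the trained RL policies (here replaced by simple heuristics), not the authors' implementation.

```python
# Toy sketch of the two-agent exploration loop from the abstract.
# All names and parameters (GRID, N_UAVS, the two policy functions)
# are hypothetical; the trained RL policies are replaced by heuristics.
import numpy as np

GRID = 20    # hypothetical side length of the square exploration grid
N_UAVS = 3   # hypothetical fleet size

rng = np.random.default_rng(0)
explored = np.zeros((GRID, GRID), dtype=bool)  # shared exploration map
positions = [rng.integers(0, GRID, size=2) for _ in range(N_UAVS)]

def coordinator_policy(explored, positions):
    """Stand-in for the first RL agent: assign each UAV a waypoint.
    Here each UAV is simply sent to its nearest unexplored cell."""
    unexplored = np.argwhere(~explored)
    waypoints = []
    for pos in positions:
        if len(unexplored) == 0:
            waypoints.append(pos)
            continue
        dists = np.abs(unexplored - pos).sum(axis=1)  # Manhattan distance
        waypoints.append(unexplored[np.argmin(dists)])
    return waypoints

def path_planner_step(pos, waypoint):
    """Stand-in for the second RL agent: move one cell toward the waypoint."""
    step = np.sign(waypoint - pos)
    return np.clip(pos + step, 0, GRID - 1)

# Exploration loop: the coordinator re-assigns waypoints, each UAV follows
# its own path planner, and visited cells are marked as explored.
for t in range(500):
    waypoints = coordinator_policy(explored, positions)
    for i in range(N_UAVS):
        positions[i] = path_planner_step(positions[i], waypoints[i])
        explored[tuple(positions[i])] = True
    if explored.all():
        print(f"Fully explored after {t + 1} steps")
        break
```

In the paper's setting both functions would instead query trained RL policies; the key design point the sketch preserves is the separation of concerns, with the coordinator reasoning over the shared map while each path planner acts locally.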
| File | Type | License | Size | Format |
|---|---|---|---|---|
| 6.2021-3016.pdf (restricted access) | Post-print / Version of Record | Non-public - Private/restricted access | 8.97 MB | Adobe PDF |
https://hdl.handle.net/11583/2918600
