
An ML-aided Reinforcement Learning Approach for Challenging Vehicle Maneuvers / Selvaraj, Dinesh Cyril; Hegde, Shailesh; Amati, Nicola; Deflorio, Francesco; Chiasserini, Carla Fabiana. - In: IEEE TRANSACTIONS ON INTELLIGENT VEHICLES. - ISSN 2379-8858. - ELECTRONIC. - 8:2(2023), pp. 1686-1698. [10.1109/TIV.2022.3224656]

An ML-aided Reinforcement Learning Approach for Challenging Vehicle Maneuvers

Selvaraj, Dinesh Cyril; Hegde, Shailesh; Amati, Nicola; Deflorio, Francesco; Chiasserini, Carla Fabiana
2023

Abstract

The richness of information generated by today's vehicles fosters the development of data-driven decision-making models, with the additional capability to account for the context in which vehicles operate. In this work, we focus on Adaptive Cruise Control (ACC) in the case of challenging vehicle maneuvers such as cut-in and cut-out, and leverage Deep Reinforcement Learning (DRL) and vehicle connectivity to develop a data-driven cooperative ACC application. Our DRL framework accounts for all the relevant factors, namely, passengers' safety and comfort as well as efficient road capacity usage, and it properly weights them through a two-layer learning approach. We evaluate and compare the performance of the proposed scheme against existing alternatives through the CoMoVe framework, which realistically represents vehicle dynamics, communication, and traffic. The results, obtained in different real-world scenarios, show that our solution provides excellent vehicle stability, passengers' comfort, and traffic efficiency, and they highlight the crucial role that vehicle connectivity can play in ACC. Notably, our DRL scheme improves road usage efficiency by keeping the headway within the desired range for 69% and 78% of the time in cut-out and cut-in scenarios, respectively, whereas the alternatives respect the desired range only 15% and 45% of the time, respectively. We also validate the proposed solution through a hardware-in-the-loop implementation and demonstrate that it achieves performance similar to that obtained through the CoMoVe framework.
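The abstract names the three factors the DRL reward balances (safety, comfort, and headway-based road usage) but the record does not reproduce the framework's equations. As a purely illustrative aid, the Python sketch below shows one way a per-step reward could combine such terms; the function name, weights, and thresholds are assumptions introduced for illustration and are not the authors' actual two-layer formulation.

# Hypothetical sketch of a multi-objective ACC reward. It only illustrates how
# safety, comfort, and road-usage terms could be weighted in one DRL step;
# all weights and thresholds below are assumptions, not values from the paper.

def acc_reward(headway_s, ttc_s, jerk_ms3,
               w_safety=1.0, w_comfort=0.5, w_efficiency=0.5,
               desired_headway=(1.0, 2.0)):
    """Return a scalar reward for one control step.

    headway_s -- time headway to the lead vehicle [s]
    ttc_s     -- time to collision [s] (use float('inf') if diverging)
    jerk_ms3  -- longitudinal jerk [m/s^3], used as a comfort proxy
    """
    # Safety: penalize dangerously small time-to-collision values.
    r_safety = -1.0 if ttc_s < 3.0 else 0.0

    # Comfort: penalize large jerk magnitudes, capped at -1.
    r_comfort = -min(abs(jerk_ms3) / 10.0, 1.0)

    # Road-usage efficiency: reward staying inside the desired headway range,
    # penalize proportionally to the distance from its midpoint otherwise.
    lo, hi = desired_headway
    r_eff = 1.0 if lo <= headway_s <= hi else -abs(headway_s - (lo + hi) / 2)

    return w_safety * r_safety + w_comfort * r_comfort + w_efficiency * r_eff

For example, acc_reward(headway_s=1.5, ttc_s=8.0, jerk_ms3=0.4) yields a positive reward, since the headway sits inside the assumed desired range, the jerk is small, and no collision is imminent.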
Files in this record:

File: ML_DRL_Extension-5.pdf
Access: open access
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 1.05 MB
Format: Adobe PDF

File: An_ML-Aided_Reinforcement_Learning_Approach_for_Challenging_Vehicle_Maneuvers.pdf
Access: open access
Type: 2a. Post-print, publisher's version / Version of Record
License: Creative Commons
Size: 2.46 MB
Format: Adobe PDF


Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2973294