Adaptive Autopilot: Constrained DRL for Diverse Driving Behaviors / Selvaraj, Dinesh Cyril; Vitale, Christian; Panayiotou, Tania; Kolios, Panayiotis; Chiasserini, Carla Fabiana; Ellinas, Georgios. - Print. - (2024). (Paper presented at IEEE ITSC 2024, held in Edmonton, Canada, Sept. 2024).

Adaptive Autopilot: Constrained DRL for Diverse Driving Behaviors

Dinesh Cyril Selvaraj; Carla Fabiana Chiasserini
2024

Abstract

In the pursuit of autonomous vehicles, achieving human-like driving behavior is vital. This study introduces Adaptive Autopilot (AA), a unique framework utilizing constrained deep reinforcement learning (C-DRL). AA aims to safely emulate human driving and thus reduce the need for driver intervention. Focusing on the car-following scenario, the process involves: (1) extracting data from the highD naturalistic driving dataset and categorizing it into three driving styles with a rule-based classifier; (2) employing deep neural network (DNN) regressors to predict human-like acceleration across styles; (3) using C-DRL, specifically the soft actor-critic Lagrangian technique, to learn human-like safe driving policies. Results demonstrate the effectiveness of each step: the rule-based classifier distinguishes driving styles, the regressor models accurately predict acceleration and outperform traditional car-following models, and the C-DRL agents learn optimal policies for human-like driving across styles.
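The soft actor-critic Lagrangian technique named in step (3) solves a constrained policy-optimization problem via its Lagrangian relaxation; a standard textbook formulation is sketched below (the specific reward, cost, and budget symbols are illustrative, not taken from this paper):

```latex
% Constrained RL objective: maximize return subject to a cost budget d
\max_{\pi} \; J_r(\pi) \quad \text{s.t.} \quad J_c(\pi) \le d
% Lagrangian relaxation solved as a min-max problem
\max_{\pi} \min_{\lambda \ge 0} \; \mathcal{L}(\pi, \lambda)
  = J_r(\pi) - \lambda \bigl( J_c(\pi) - d \bigr)
```

Here $J_r(\pi)$ is the expected (entropy-regularized) return, $J_c(\pi)$ the expected safety cost, $d$ the allowed cost budget, and $\lambda$ a Lagrange multiplier that is typically updated by gradient ascent on the constraint violation while the policy is updated with soft actor-critic.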
Files in this product:

File: Dinesh_KIOS_VNC.pdf (open access)
Type: 1. Preprint / submitted version [pre-review]
License: Public - All rights reserved
Size: 905.4 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2990667