In pursuit of autonomous vehicles, achieving human-like driving behavior is vital. This study introduces adaptive autopilot (AA), a unique framework utilizing constrained deep reinforcement learning (C-DRL). AA aims to safely emulate human driving to reduce the necessity for driver intervention. Focusing on the car-following scenario, the process involves: (1) extracting data from the highD natural driving study and categorizing it into three driving styles using a rule-based classifier; (2) employing deep neural network (DNN) regressors to predict human-like acceleration across styles; (3) using C-DRL, specifically the soft actor-critic Lagrangian technique, to learn human-like safe driving policies. Results indicate effectiveness at each step: the rule-based classifier distinguishes driving styles, the regressor model accurately predicts acceleration and outperforms traditional car-following models, and the C-DRL agents learn optimal policies for human-like driving across styles.
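The pipeline above can be illustrated with a minimal sketch of two of its ingredients: a rule-based driving-style classifier (step 1) and the Lagrange-multiplier update that makes the soft actor-critic safety-constrained (step 3). All thresholds, feature choices, and function names here are illustrative assumptions, not the paper's actual values.

```python
def classify_style(mean_time_headway_s: float, accel_std_ms2: float) -> str:
    """Rule-based labeling of a car-following trajectory.

    Features and thresholds are hypothetical placeholders for whatever
    rules the paper's classifier applies to the highD trajectories.
    """
    # Short headways or erratic accelerations suggest an aggressive driver.
    if mean_time_headway_s < 1.0 or accel_std_ms2 > 1.5:
        return "aggressive"
    # Long, smooth following suggests a cautious driver.
    if mean_time_headway_s > 2.5 and accel_std_ms2 < 0.5:
        return "cautious"
    return "normal"


def update_lagrange_multiplier(lmbda: float, cost_return: float,
                               cost_limit: float, lr: float = 0.01) -> float:
    """One dual-ascent step of a SAC-Lagrangian safety multiplier.

    The multiplier grows when the expected safety cost exceeds its budget
    (penalizing unsafe actions more) and shrinks toward zero otherwise.
    """
    return max(0.0, lmbda + lr * (cost_return - cost_limit))
```

In a SAC-Lagrangian agent, the multiplier returned here would weight the safety-cost critic against the reward critic in the policy loss, so the learned policy stays human-like while respecting the safety constraint.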
Adaptive Autopilot: Constrained DRL for Diverse Driving Behaviors / Selvaraj, Dinesh Cyril; Vitale, Christian; Panayiotou, Tania; Kolios, Panayiotis; Chiasserini, Carla Fabiana; Ellinas, Georgios. - PRINT. - (2024), pp. 3383-3390. (Paper presented at IEEE ITSC 2024, held in Edmonton, Canada, 24-27 September 2024) [10.1109/ITSC58415.2024.10920172].
Adaptive Autopilot: Constrained DRL for Diverse Driving Behaviors
Dinesh Cyril Selvaraj; Carla Fabiana Chiasserini
2024
File: Adaptive_Autopilot_Constrained_Drl_for_Diverse_Driving_Behaviors.pdf
Type: 2a Post-print / Version of Record
License: Non-public - Private/restricted access
Size: 1.41 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2990667