
Online vs. Offline Adaptive Domain Randomization Benchmark / Tiboni, Gabriele; Arndt, Karol; Averta, Giuseppe; Kyrki, Ville; Tommasi, Tatiana. - ELECTRONIC. - (2023), pp. 158-173. (Contribution presented at Human-Friendly Robotics 2022 (HFR), 15th International Workshop on Human-Friendly Robotics, held in Delft (The Netherlands), 22-23 September 2022) [10.1007/978-3-031-22731-8_12].

Online vs. Offline Adaptive Domain Randomization Benchmark

Gabriele Tiboni; Karol Arndt; Giuseppe Averta; Ville Kyrki; Tatiana Tommasi
2023

Abstract

Physics simulators have shown great promise for conveniently training reinforcement learning policies in safe, unconstrained environments. However, transferring the acquired knowledge to the real world can be challenging due to the reality gap. To this end, several methods have recently been proposed to automatically tune simulator parameters with posterior distributions given real data, for use with domain randomization at training time. These approaches have been shown to work for various robotic tasks under different settings and assumptions. Nevertheless, the existing literature lacks a thorough comparison of adaptive domain randomization methods with respect to transfer performance and real-data efficiency. In this work, we present an open benchmark for both offline and online methods (SimOpt, BayRn, DROID, DROPO), to shed light on which are most suitable for each setting and task at hand. We found that online methods are limited by the quality of the policy learned so far, which is used to collect data for the next iteration, while offline methods may sometimes fail when replaying trajectories in simulation with open-loop commands. The code used will be released at https://github.com/gabrieletiboni/adr-benchmark.
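The abstract refers to adaptive domain randomization, where a distribution over simulator dynamics parameters (often a posterior inferred from real data) is resampled at every training episode. The following is a minimal, self-contained Python sketch of that idea; the toy environment, the policy update, and all names (ToySimulator, train_with_domain_randomization, posterior_mean, posterior_std) are hypothetical illustrations, not code from the benchmark repository or from SimOpt, BayRn, DROID, or DROPO.

```python
import numpy as np

# Minimal sketch (hypothetical, not the benchmark's code): domain randomization
# with a Gaussian distribution over simulator dynamics parameters, such as the
# posterior an adaptive method would estimate from real data.

class ToySimulator:
    """Toy 1-D point-mass environment whose dynamics depend on mass and friction."""
    def __init__(self, mass, friction, dt=0.05):
        self.mass, self.friction, self.dt = mass, friction, dt
        self.pos, self.vel = 0.0, 0.0

    def reset(self):
        self.pos, self.vel = 0.0, 0.0
        return np.array([self.pos, self.vel])

    def step(self, force):
        # Semi-implicit Euler integration of a damped point mass.
        acc = (force - self.friction * self.vel) / self.mass
        self.vel += acc * self.dt
        self.pos += self.vel * self.dt
        reward = -abs(self.pos - 1.0)            # reach the target at x = 1
        return np.array([self.pos, self.vel]), reward


def train_with_domain_randomization(posterior_mean, posterior_std,
                                    episodes=100, horizon=50, seed=0):
    """Train a trivial linear policy while randomizing dynamics each episode."""
    rng = np.random.default_rng(seed)
    policy = np.zeros(2)                         # feedback gains on [pos, vel] error
    for _ in range(episodes):
        # Sample dynamics parameters from the (learned) distribution, clipped to stay physical.
        mass, friction = np.maximum(rng.normal(posterior_mean, posterior_std), 1e-3)
        env = ToySimulator(mass, friction)
        obs = env.reset()
        noise = rng.normal(0.0, 0.1, size=2)     # simple parameter-perturbation search
        ret = 0.0
        for _ in range(horizon):
            action = float((policy + noise) @ (np.array([1.0, 0.0]) - obs))
            obs, reward = env.step(action)
            ret += reward
        policy += 1e-3 * ret * noise             # score-function style update
    return policy


if __name__ == "__main__":
    # Distribution over [mass, friction] as an adaptive DR method might provide (values made up).
    trained = train_with_domain_randomization(posterior_mean=np.array([1.0, 0.3]),
                                              posterior_std=np.array([0.2, 0.1]))
    print("Trained policy gains:", trained)
```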
ISBN: 978-3-031-22730-1
ISBN: 978-3-031-22733-2
ISBN: 978-3-031-22731-8
Files in this item:
Delft_HFR_Workshop_paper_submission_FINAL.pdf

Open Access since 03/01/2024

Type: 2. Post-print / Author's Accepted Manuscript
License: PUBLIC - All rights reserved
Size: 3.94 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2971677