
DA4Event: Towards Bridging the Sim-to-Real Gap for Event Cameras Using Domain Adaptation / Planamente, Mirco; Plizzari, Chiara; Cannici, Marco; Ciccone, Marco; Strada, Francesco; Bottino, Andrea; Matteucci, Matteo; Caputo, Barbara. - In: IEEE ROBOTICS AND AUTOMATION LETTERS. - ISSN 2377-3766. - 6:4 (2021), pp. 6616-6623. [10.1109/LRA.2021.3093870]

DA4Event: Towards Bridging the Sim-to-Real Gap for Event Cameras Using Domain Adaptation

Planamente, Mirco; Plizzari, Chiara; Ciccone, Marco; Strada, Francesco; Bottino, Andrea; Caputo, Barbara
2021

Abstract

Event cameras are novel bio-inspired sensors that asynchronously capture pixel-level intensity changes in the form of "events". The innovative way they acquire data offers several advantages over standard devices, especially under poor lighting and high-speed motion. However, because these sensors are so new, large-scale training data capable of fully unlocking their potential is still lacking. The most common approach researchers adopt to address this issue is to leverage simulated event data. Yet, this approach raises an open research question: how well does simulated data generalize to real data? To answer this, we propose to exploit, in the event-based context, recent Domain Adaptation (DA) advances from traditional computer vision, showing that DA techniques applied to event data help reduce the sim-to-real gap. To this end, we propose a novel architecture, which we call Multi-View DA4E (MV-DA4E), that better exploits the peculiarities of frame-based event representations while also promoting domain-invariant features. Through extensive experiments, we demonstrate the effectiveness of DA methods and MV-DA4E on N-Caltech101. Moreover, we validate their soundness in a real-world scenario through a cross-domain analysis on the popular RGB-D Object Dataset (ROD), which we extend to the event modality (RGB-E).
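Since the abstract refers to frame-based event representations, the following is a minimal sketch of one common such representation (a stack of temporally binned frames, sometimes called a voxel grid). The function name, number of bins, and signed accumulation scheme are illustrative assumptions and do not necessarily match the representation used in the paper.

# Minimal sketch (assumption): aggregate an event stream (x, y, t, polarity)
# into a fixed number of temporal frames. Illustrative only.
import numpy as np

def events_to_frames(x, y, t, p, height, width, num_bins=9):
    """Accumulate events into num_bins frames of shape (height, width)."""
    frames = np.zeros((num_bins, height, width), dtype=np.float32)
    x = np.asarray(x, dtype=int)
    y = np.asarray(y, dtype=int)
    t = np.asarray(t, dtype=np.float64)
    # Normalize timestamps to [0, num_bins) and assign each event to a bin.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1e-6)
    bins = t_norm.astype(int)
    # Signed accumulation: +1 for positive polarity, -1 for negative.
    values = np.where(np.asarray(p) > 0, 1.0, -1.0).astype(np.float32)
    np.add.at(frames, (bins, y, x), values)
    return frames

The resulting frame stack can then be fed to a standard convolutional backbone, on top of which domain adaptation techniques of the kind discussed in the abstract can be applied.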
Files in this item:

IROS21.pdf
  Access: open access
  Type: 2. Post-print / Author's Accepted Manuscript
  License: Public - All rights reserved
  Size: 4.63 MB
  Format: Adobe PDF

DA4Event_Towards_Bridging_the_Sim-to-Real_Gap_for_Event_Cameras_Using_Domain_Adaptation.pdf
  Access: restricted
  Type: 2a. Post-print, publisher's version / Version of Record
  License: Non-public - Private/restricted access
  Size: 1.84 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11583/2905292