Unsupervised Domain Adaptation through Inter-Modal Rotation for RGB-D Object Recognition / Loghmani, M. R.; Robbiano, L.; Planamente, M.; Park, K.; Caputo, B.; Vincze, M. - In: IEEE ROBOTICS AND AUTOMATION LETTERS. - ISSN 2377-3766. - 5:4(2020), pp. 6631-6638. [10.1109/LRA.2020.3007092]
Unsupervised Domain Adaptation through Inter-Modal Rotation for RGB-D Object Recognition
Loghmani M. R.; Robbiano L.; Planamente M.; Park K.; Caputo B.; Vincze M.
2020
Abstract
Unsupervised Domain Adaptation (DA) exploits the supervision of a label-rich source dataset to make predictions on an unlabeled target dataset by aligning the two data distributions. In robotics, DA is used to take advantage of automatically generated synthetic data, which come with 'free' annotation, to make effective predictions on real data. However, existing DA methods are not designed to cope with the multi-modal nature of RGB-D data, which are widely used in robotic vision. We propose a novel RGB-D DA method that reduces the synthetic-to-real domain shift by exploiting the inter-modal relation between the RGB and depth image. Our method consists of training a convolutional neural network to solve, in addition to the main recognition task, the pretext task of predicting the relative rotation between the RGB and depth image. To evaluate our method and encourage further research in this area, we define two benchmark datasets for object categorization and instance recognition. With extensive experiments, we show the benefits of leveraging the inter-modal relations for RGB-D DA. The code is available at: https://github.com/MRLoghmani/relative-rotation
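The abstract describes a multi-task setup: a CNN jointly learns the main object-recognition task and a self-supervised pretext task that predicts the relative rotation between the RGB and depth views. The sketch below is a minimal, illustrative reading of that idea in PyTorch, not the authors' implementation (see the linked repository for the official code); the encoder sizes, head layout, and helper names are assumptions made for brevity.

```python
# Hedged sketch of a relative-rotation pretext task for RGB-D inputs.
# All module and variable names are illustrative, not from the paper's code.
import torch
import torch.nn as nn

class RelativeRotationNet(nn.Module):
    """Two-stream CNN with a main recognition head and a pretext head
    that classifies the relative rotation between RGB and depth."""

    def __init__(self, num_classes: int, feat_dim: int = 256):
        super().__init__()

        # Tiny stand-in encoders; a real system would use deeper backbones.
        # Depth is assumed to be colorized to 3 channels.
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(64, feat_dim),
            )

        self.rgb_encoder = encoder()
        self.depth_encoder = encoder()
        self.cls_head = nn.Linear(2 * feat_dim, num_classes)  # main task
        self.rot_head = nn.Linear(2 * feat_dim, 4)  # pretext: 0/90/180/270 deg

    def forward(self, rgb, depth):
        feats = torch.cat(
            [self.rgb_encoder(rgb), self.depth_encoder(depth)], dim=1
        )
        return self.cls_head(feats), self.rot_head(feats)

def make_relative_rotation_batch(rgb, depth):
    """Rotate the two modalities by independent multiples of 90 degrees
    (one rotation per batch, for brevity) and return the relative
    rotation (depth minus RGB, mod 4) as the pretext label."""
    k_rgb = int(torch.randint(0, 4, (1,)))
    k_depth = int(torch.randint(0, 4, (1,)))
    rgb_rot = torch.rot90(rgb, k_rgb, dims=(2, 3))  # rotate H, W dims
    depth_rot = torch.rot90(depth, k_depth, dims=(2, 3))
    labels = torch.full((rgb.size(0),), (k_depth - k_rgb) % 4, dtype=torch.long)
    return rgb_rot, depth_rot, labels

# Toy usage: the rotation labels are generated from the data itself,
# so the pretext loss can be computed on unlabeled target batches too.
model = RelativeRotationNet(num_classes=10)  # arbitrary class count
rgb, depth = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
rgb_r, depth_r, rot_labels = make_relative_rotation_batch(rgb, depth)
_, rot_logits = model(rgb_r, depth_r)
pretext_loss = nn.functional.cross_entropy(rot_logits, rot_labels)
```

Because the pretext labels require no human annotation, the auxiliary loss can be optimized on both synthetic (source) and real (target) data, which is what lets the pretext task help reduce the synthetic-to-real shift described in the abstract.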
| File | Description | Type | License | Size | Format |
|---|---|---|---|---|---|
| 09133308.pdf (restricted access) | Main article | 2a Post-print editorial version / Version of Record | Non-public - Private/restricted access | 1.78 MB | Adobe PDF |
| IROS2020___Relative_Rotation.pdf (open access) | | 2. Post-print / Author's Accepted Manuscript | Public - All rights reserved | 3.08 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2846215