Mancini, Massimiliano; Rota Bulò, Samuel; Ricci, Elisa; Caputo, Barbara. Learning Deep NBNN Representations for Robust Place Categorization. In: IEEE Robotics and Automation Letters, vol. 2, no. 3 (2017), pp. 1794-1801. ISSN 2377-3766. DOI: 10.1109/LRA.2017.2705282.

Learning Deep NBNN Representations for Robust Place Categorization

Barbara Caputo
2017

Abstract

This letter presents an approach for semantic place categorization using data obtained from RGB cameras. Previous studies on visual place recognition and classification have shown that features derived from pretrained convolutional neural networks (CNNs), combined with part-based classification models, achieve high recognition accuracy even in the presence of occlusions and severe viewpoint changes. Inspired by these works, we propose to exploit local deep representations, representing images as sets of regions and applying a Naïve Bayes nearest neighbor (NBNN) model for image classification. In contrast to previous methods, where CNNs are merely used as feature extractors, our approach seamlessly integrates the NBNN model into a fully convolutional network. Experimental results show that the proposed algorithm outperforms previous methods based on pretrained CNN models and that, when employed in challenging robot place recognition tasks, it is robust to occlusions and to environmental and sensor changes.
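The NBNN decision rule referenced in the abstract can be sketched as follows. This is a minimal NumPy illustration of classic NBNN over generic local descriptors, not the paper's fully convolutional integration; all names and data are illustrative:

```python
import numpy as np

def nbnn_classify(query_descriptors, class_descriptors):
    """Classic Naive Bayes Nearest Neighbor (NBNN) classification.

    query_descriptors: (n, d) array of local descriptors from one image.
    class_descriptors: dict mapping class label -> (m_c, d) array of
        descriptors pooled from that class's training images.
    Returns the label minimizing the image-to-class distance, i.e. the
    sum over query descriptors of the squared distance to the nearest
    descriptor of that class.
    """
    scores = {}
    for label, descs in class_descriptors.items():
        # Pairwise squared Euclidean distances, shape (n, m_c).
        d2 = ((query_descriptors[:, None, :] - descs[None, :, :]) ** 2).sum(-1)
        # Nearest-neighbor distance per query descriptor, summed over the image.
        scores[label] = d2.min(axis=1).sum()
    return min(scores, key=scores.get)
```

Note that NBNN matches each local descriptor to its nearest neighbor in the pooled class set rather than to a single training image, which is what gives the model its robustness to occlusions and viewpoint changes.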
Files in this record:
File: 07930504.pdf (not available)
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 581.53 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2785986