Are Local Features All You Need for Cross-Domain Visual Place Recognition? / Barbarani, Giovanni; Mostafa, Mohamad; Bayramov, Hajali; Trivigno, Gabriele; Berton, Gabriele; Masone, Carlo; Caputo, Barbara. - (2023), pp. 6155-6165. (Paper presented at the Conference on Computer Vision and Pattern Recognition (CVPR 2023), held in Vancouver (CAN), 18-22 June 2023) [10.1109/CVPRW59228.2023.00655].
Are Local Features All You Need for Cross-Domain Visual Place Recognition?
Trivigno, Gabriele; Berton, Gabriele; Masone, Carlo; Caputo, Barbara
2023
Abstract
Visual Place Recognition is a task that aims to predict the coordinates of an image (called the query) based solely on visual cues. Most commonly, a retrieval approach is adopted, where the query is matched to the most similar images in a large database of geotagged photos, using learned global descriptors. Despite recent advances, recognizing the same place when the query comes from a significantly different distribution is still a major hurdle for state-of-the-art retrieval methods. Examples are heavy illumination changes (e.g. night-time images) or substantial occlusions (e.g. transient objects). In this work we explore whether re-ranking methods based on spatial verification can tackle these challenges, following the intuition that local descriptors are inherently more robust than global features to domain shifts. To this end, we provide a new, comprehensive benchmark of current state-of-the-art models. We also introduce two new demanding datasets with night and occluded queries, to be matched against a citywide database. Code and datasets are available at https://github.com/gbarbarani/re-ranking-for-VPR.
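The retrieval-then-re-ranking pipeline described in the abstract can be illustrated with the minimal sketch below. It is not the paper's implementation (that is in the linked repository): the choice of SIFT local features, brute-force matching, a plain L2 nearest-neighbour search, and the placeholder `global_descriptor` function are all illustrative assumptions; the idea shown is only the two-stage structure of global-descriptor retrieval followed by spatial verification of the top candidates.

```python
# Minimal sketch of retrieval + spatial-verification re-ranking.
# Assumptions (not from the paper): SIFT local features, brute-force matching,
# L2 nearest-neighbour search, and a placeholder global descriptor.
import cv2
import numpy as np

def global_descriptor(image: np.ndarray) -> np.ndarray:
    """Placeholder for a learned global descriptor (e.g. a CNN embedding)."""
    raise NotImplementedError

def retrieve_top_k(query_desc: np.ndarray, db_descs: np.ndarray, k: int = 100) -> np.ndarray:
    """Rank database images by L2 distance between global descriptors."""
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    return np.argsort(dists)[:k]

def spatial_verification_score(query_img: np.ndarray, db_img: np.ndarray) -> int:
    """Count RANSAC inliers among local-feature matches; more inliers = stronger match."""
    sift = cv2.SIFT_create()
    kp_q, des_q = sift.detectAndCompute(query_img, None)
    kp_d, des_d = sift.detectAndCompute(db_img, None)
    if des_q is None or des_d is None:
        return 0
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_q, des_d, k=2)
    # Lowe's ratio test keeps only discriminative matches.
    good = [m[0] for m in matches if len(m) == 2 and m[0].distance < 0.8 * m[1].distance]
    if len(good) < 4:
        return 0
    pts_q = np.float32([kp_q[m.queryIdx].pt for m in good])
    pts_d = np.float32([kp_d[m.trainIdx].pt for m in good])
    _, mask = cv2.findHomography(pts_q, pts_d, cv2.RANSAC, ransacReprojThreshold=5.0)
    return int(mask.sum()) if mask is not None else 0

def rerank(query_img, db_imgs, query_desc, db_descs, k=100):
    """Stage 1: global-descriptor retrieval. Stage 2: re-rank the top-k
    candidates by the number of geometrically verified local matches."""
    candidates = retrieve_top_k(query_desc, db_descs, k)
    scores = [spatial_verification_score(query_img, db_imgs[i]) for i in candidates]
    return candidates[np.argsort(scores)[::-1]]
```

The re-ranking stage only touches the top-k shortlist, which is why it is affordable even against a citywide database, and its reliance on local correspondences is the property the paper probes under night-time and occluded queries.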
File | Access | Type | License | Size | Format
---|---|---|---|---|---
2023_CVPRW_Visual_Geolocalization_at_night.pdf | Open access | 2. Post-print / Author's Accepted Manuscript | Public - All rights reserved | 3.18 MB | Adobe PDF
Are_Local_Features_All_You_Need_for_Cross-Domain_Visual_Place_Recognition.pdf | Restricted access | 2a. Post-print, publisher's version / Version of Record | Non-public - Private/restricted access | 3.86 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2979101