
PanNote: An Automatic Tool for Panoramic Image Annotation of People's Positions / Bacchin, A.; Barcellona, L.; Shamsizadeh, S.; Olivastri, E.; Pretto, A.; Menegatti, E. - (2024), pp. 17006-17012. (Paper presented at the 2024 IEEE International Conference on Robotics and Automation, ICRA 2024, held in Yokohama, Japan, 13-17 May 2024) [10.1109/ICRA57147.2024.10610347].

PanNote: An Automatic Tool for Panoramic Image Annotation of People's Positions

Barcellona, L.
2024

Abstract

Panoramic cameras offer a 4π steradian field of view, which is desirable for tasks like people detection and tracking since no one can exit the field of view. Despite the recent proliferation of low-cost panoramic cameras, their usage in robotics remains constrained by the limited availability of datasets featuring annotations in the robot space, including people's 2D or 3D positions. To tackle this issue, we introduce PanNote, an automatic annotation tool for people's positions in panoramic videos. Our tool is designed to be cost-effective and straightforward to use: it requires no human intervention during the labeling process and enables the training of machine learning models with little effort. The proposed method introduces a calibration model and a data association algorithm to fuse data from panoramic images and 2D LiDAR readings. We validate the capabilities of PanNote by collecting a real-world dataset, on which we compare manual labels, automatic labels, and the predictions of a baseline deep neural network. Results clearly show the advantage of our method, with a 15-fold speed-up in labeling time and a considerable gain in performance when training deep neural models on automatically labeled data.
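To make the fusion step described in the abstract concrete, the sketch below illustrates one plausible reading of such a pipeline: projecting 2D LiDAR cluster centroids onto the horizontal axis of an equirectangular panorama and greedily matching them to image detections by angular distance. This is a minimal illustration under assumed conventions (equirectangular projection, a single yaw offset as the calibration parameter, nearest-neighbor association); the function names lidar_point_to_pixel and associate, and the thresholds, are hypothetical and do not reproduce the paper's actual calibration model or association algorithm.

```python
import math

def lidar_point_to_pixel(x, y, image_width, yaw_offset=0.0):
    """Project a 2D LiDAR point (x, y) in the sensor frame onto the
    horizontal pixel column of an equirectangular panorama.

    Assumes the camera and LiDAR share a vertical axis and that the
    panorama spans 360 degrees horizontally; yaw_offset (radians) is a
    hypothetical stand-in for the rotation between the two sensors that
    a calibration step would estimate.
    """
    azimuth = math.atan2(y, x) + yaw_offset           # angle around the vertical axis
    # Map azimuth to a column index, wrapping at the image seam.
    u = (0.5 - azimuth / (2.0 * math.pi)) * image_width
    return u % image_width

def associate(detections, lidar_clusters, image_width, max_px=40.0):
    """Greedily match image detections (bounding-box center columns, in
    pixels) to projected LiDAR cluster centroids by horizontal pixel
    distance, accounting for the 360-degree wrap-around."""
    pairs, used = [], set()
    for det_u, det_id in detections:
        best, best_d = None, max_px
        for i, (cx, cy) in enumerate(lidar_clusters):
            if i in used:
                continue
            u = lidar_point_to_pixel(cx, cy, image_width)
            d = abs(det_u - u)
            d = min(d, image_width - d)               # wrap-around distance
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            pairs.append((det_id, lidar_clusters[best]))
    return pairs

# Example: two detections in a 1920-pixel-wide panorama and two
# LiDAR cluster centroids (meters, sensor frame).
dets = [(480.0, "person_0"), (1500.0, "person_1")]
clusters = [(0.0, 2.0), (-0.4, -2.0)]
print(associate(dets, clusters, 1920))
```

Each matched pair yields an image bounding box paired with a metric position, which is exactly the kind of robot-space annotation the tool produces; a real system would additionally need the calibration parameters estimated rather than assumed, and a person detector and LiDAR leg/cluster extractor to supply the inputs.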
ISBN: 979-8-3503-8457-4
Files in this record:
File: PanNote_an_Automatic_Tool_for_Panoramic_Image_Annotation_of_Peoples_Positions.pdf (not available)
Type: 2a Post-print / Version of Record
License: Non-public - Private/restricted access
Size: 6.68 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2992254