
Descriptor: Context-Aware Collaborative Perception in Autonomous Driving Dataset (ConVeX) / Palena, Marco; Selvaraj, Dinesh Cyril; Chiasserini, Carla Fabiana; Cerquitelli, Tania. - In: IEEE DATA DESCRIPTIONS. - ISSN 2995-4274. - (2026).


Abstract

Collaborative perception (CP), which uses vehicle-to-everything (V2X) communication to share sensor data among connected vehicles and between vehicles and the network infrastructure, has emerged as a prominent solution to extend the view of individual autonomous vehicles. The effectiveness of this paradigm, however, may be hindered by adverse weather conditions and changes in lighting, which often affect real-world scenarios. Assessing the robustness of collaborative perception to such environmental contingencies thus remains an open issue. Importantly, although some large-scale CP datasets, comprising both real-world and simulated data, are now publicly available, most lack diversity in the environmental conditions of the driving scenarios they cover, making it difficult for researchers to assess how such conditions affect perception performance. We therefore introduce ConVeX, an extensive multi-agent synthetic dataset for collaborative perception that reproduces different realistic driving scenarios (urban, rural, highway), road layouts, and weather and lighting conditions. Remarkably, ConVeX includes multi-modal data (i.e., images from RGB (Red-Green-Blue) cameras, LiDAR (Light Detection and Ranging) point clouds, and GNSS (Global Navigation Satellite System) coordinates) collected by different vehicles, along with ground-truth annotations for object detection.
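To make the multi-modal, multi-agent structure described above concrete, the sketch below models one synchronized frame shared by several connected vehicles. All names, field layouts, and shapes here are illustrative assumptions for a ConVeX-like dataset, not the dataset's actual schema or API.

```python
# Hypothetical sketch of a multi-agent, multi-modal frame for a
# ConVeX-like collaborative perception dataset. Field names and
# shapes are assumptions, not the real schema.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class AgentFrame:
    agent_id: str                    # connected vehicle identifier
    rgb_shape: Tuple[int, int, int]  # camera image dimensions, H x W x 3
    num_lidar_points: int            # size of the LiDAR point cloud
    gnss: Tuple[float, float, float]  # latitude, longitude, altitude
    # ground-truth 3D boxes (x, y, z, l, w, h, yaw) for object detection
    boxes: List[Tuple[float, ...]] = field(default_factory=list)


@dataclass
class CollaborativeFrame:
    timestamp: float                 # shared capture time across agents
    agents: List[AgentFrame] = field(default_factory=list)

    def modalities(self) -> List[str]:
        # The three sensor modalities the abstract lists per agent.
        return ["rgb", "lidar", "gnss"]


# Two cooperating vehicles observing the same scene at one timestamp.
frame = CollaborativeFrame(
    timestamp=0.1,
    agents=[
        AgentFrame("ego", (720, 1280, 3), 65536, (45.06, 7.66, 240.0)),
        AgentFrame("cav_1", (720, 1280, 3), 65536, (45.06, 7.67, 241.0)),
    ],
)
print(len(frame.agents))   # 2
print(frame.modalities())  # ['rgb', 'lidar', 'gnss']
```

The design choice here is simply that each agent carries its own sensor readings and annotations, while the enclosing frame groups agents by timestamp, which is the grouping a collaborative object detector would consume.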
Files in this record:
IEEE_Data_Description_Autonomous_Driving-6_mini.pdf (open access)
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 410.19 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3010047