FMARS: Annotating Remote Sensing Images for Disaster Management using Foundation Models / Arnaudo, Edoardo; Lungo Vaschetti, Jacopo; Innocenti, Lorenzo; Barco, Luca; Lisi, Davide; Fissore, Vanina; Rossi, Claudio. - (2024), pp. 3920-3924. (Paper presented at the IEEE International Symposium on Geoscience and Remote Sensing (IGARSS) 2024, held in Athens (GR), 7-12 July 2024) [10.1109/IGARSS53475.2024.10641130].
FMARS: Annotating Remote Sensing Images for Disaster Management using Foundation Models
Arnaudo, Edoardo; Barco, Luca
2024
Abstract
Very-High Resolution (VHR) remote sensing imagery is increasingly accessible, but often lacks annotations for effective machine learning applications. Recent foundation models like GroundingDINO and Segment Anything (SAM) provide opportunities to automatically generate annotations. This study introduces FMARS (Foundation Model Annotations in Remote Sensing), a methodology leveraging VHR imagery and foundation models for fast and robust annotation. We focus on disaster management and provide a large-scale dataset with labels obtained from pre-event imagery over 19 disaster events, derived from the Maxar Open Data initiative. We train segmentation models on the generated labels, using Unsupervised Domain Adaptation (UDA) techniques to increase transferability to real-world scenarios. Our results demonstrate the effectiveness of leveraging foundation models to automatically annotate remote sensing data at scale, enabling robust downstream models for critical applications.
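The annotation pipeline summarized in the abstract, prompting an open-vocabulary detector and refining its boxes into masks, can be sketched with off-the-shelf components. The snippet below is a minimal illustration assuming the Hugging Face transformers ports of GroundingDINO and SAM; the checkpoints (grounding-dino-tiny, sam-vit-base), the "building." prompt, the thresholds, and the input file name are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of the box-then-mask annotation idea: GroundingDINO proposes
# boxes from a text prompt, SAM turns each box into a segmentation mask.
# Checkpoints, prompt, thresholds, and file name are illustrative assumptions,
# not the paper's actual configuration.
import torch
from PIL import Image
from transformers import (
    AutoModelForZeroShotObjectDetection,
    AutoProcessor,
    SamModel,
    SamProcessor,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Open-vocabulary detector: text prompt -> bounding boxes.
dino_proc = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
dino = AutoModelForZeroShotObjectDetection.from_pretrained(
    "IDEA-Research/grounding-dino-tiny"
).to(device)

# Promptable segmenter: bounding boxes -> masks.
sam_proc = SamProcessor.from_pretrained("facebook/sam-vit-base")
sam = SamModel.from_pretrained("facebook/sam-vit-base").to(device)

image = Image.open("pre_event_tile.png").convert("RGB")  # hypothetical VHR tile

# 1) Detect candidate objects; GroundingDINO expects a lowercase,
#    dot-terminated text query.
inputs = dino_proc(images=image, text="building.", return_tensors="pt").to(device)
with torch.no_grad():
    outputs = dino(**inputs)
detections = dino_proc.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.35,
    text_threshold=0.25,
    target_sizes=[image.size[::-1]],
)[0]

# 2) Refine every detected box into a pixel mask with SAM.
boxes = detections["boxes"].tolist()  # [[x0, y0, x1, y1], ...]
sam_inputs = sam_proc(image, input_boxes=[boxes], return_tensors="pt").to(device)
with torch.no_grad():
    sam_outputs = sam(**sam_inputs)
masks = sam_proc.image_processor.post_process_masks(
    sam_outputs.pred_masks.cpu(),
    sam_inputs["original_sizes"].cpu(),
    sam_inputs["reshaped_input_sizes"].cpu(),
)[0]  # shape: (num_boxes, num_mask_candidates, height, width)
```

In the full pipeline described by the abstract, masks produced this way would serve as pseudo-labels for training the downstream segmentation models with UDA techniques.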
File | Access | Type | License | Size | Format
---|---|---|---|---|---
FMARS_Annotating_Remote_Sensing_Images_for_Disaster_Management_Using_Foundation_Models.pdf | Not available (copy on request) | 2a Post-print editorial version / Version of Record | Non-public - Private/restricted access | 376.17 kB | Adobe PDF
IGARSS_2024_Annotating_Remote_Sensing_Images_using_Foundation_Models.pdf | Open access | 2. Post-print / Author's Accepted Manuscript | Public - All rights reserved | 359.64 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2989784