FMARS: Annotating Remote Sensing Images for Disaster Management using Foundation Models / Arnaudo, Edoardo; Lungo Vaschetti, Jacopo; Innocenti, Lorenzo; Barco, Luca; Lisi, Davide; Fissore, Vanina; Rossi, Claudio. - (In press). (Paper presented at the IEEE International Symposium on Geoscience and Remote Sensing (IGARSS) 2024, held in Athens (GR), 7-12 July 2024).

FMARS: Annotating Remote Sensing Images for Disaster Management using Foundation Models

Arnaudo, Edoardo; Barco, Luca
In press

Abstract

Very-High Resolution (VHR) remote sensing imagery is increasingly accessible, but often lacks annotations for effective machine learning applications. Recent foundation models like GroundingDINO and Segment Anything (SAM) provide opportunities to automatically generate annotations. This study introduces FMARS (Foundation Model Annotations in Remote Sensing), a methodology leveraging VHR imagery and foundation models for fast and robust annotation. We focus on disaster management and provide a large-scale dataset with labels obtained from pre-event imagery over 19 disaster events, derived from the Maxar Open Data initiative. We train segmentation models on the generated labels, using Unsupervised Domain Adaptation (UDA) techniques to increase transferability to real-world scenarios. Our results demonstrate the effectiveness of leveraging foundation models to automatically annotate remote sensing data at scale, enabling robust downstream models for critical applications.
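As a concrete illustration of the annotation step summarized above, the following Python sketch chains a zero-shot detector and SAM so that a text prompt yields instance masks. This is illustrative only, not the authors' exact FMARS pipeline: the Hugging Face checkpoints ("IDEA-Research/grounding-dino-tiny", "facebook/sam-vit-base"), the input file name, and the "building" prompt are assumptions made for the example.

# Illustrative sketch of a text-prompted annotation step: GroundingDINO
# proposes boxes for a class prompt, SAM converts them to masks.
# NOT the exact FMARS implementation; checkpoints, file name, and
# prompt below are assumptions made for the example.
import torch
from PIL import Image
from transformers import (
    AutoModelForZeroShotObjectDetection,
    AutoProcessor,
    SamModel,
    SamProcessor,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Zero-shot detection from a text prompt (GroundingDINO).
det_processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
det_model = AutoModelForZeroShotObjectDetection.from_pretrained(
    "IDEA-Research/grounding-dino-tiny"
).to(device)

image = Image.open("pre_event_tile.png").convert("RGB")  # hypothetical VHR tile
prompt = "building."  # GroundingDINO expects lowercase, dot-terminated phrases

det_inputs = det_processor(images=image, text=prompt, return_tensors="pt").to(device)
with torch.no_grad():
    det_outputs = det_model(**det_inputs)

detections = det_processor.post_process_grounded_object_detection(
    det_outputs,
    det_inputs.input_ids,
    box_threshold=0.35,
    text_threshold=0.25,
    target_sizes=[image.size[::-1]],
)[0]
boxes = detections["boxes"].tolist()  # [x0, y0, x1, y1] per detected instance

# 2) Box-prompted segmentation (SAM) to turn detections into masks.
if boxes:
    sam_processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
    sam_model = SamModel.from_pretrained("facebook/sam-vit-base").to(device)

    sam_inputs = sam_processor(image, input_boxes=[boxes], return_tensors="pt").to(device)
    with torch.no_grad():
        sam_outputs = sam_model(**sam_inputs)

    masks = sam_processor.image_processor.post_process_masks(
        sam_outputs.pred_masks.cpu(),
        sam_inputs["original_sizes"].cpu(),
        sam_inputs["reshaped_input_sizes"].cpu(),
    )[0]  # boolean masks, one set per box, usable as pseudo-labels

Masks obtained this way can then serve as pseudo-labels for training segmentation models, with UDA used, as the abstract notes, to bridge the gap to unlabeled target imagery.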
Files in this product:

File: IGARSS_2024_Annotating_Remote_Sensing_Images_using_Foundation_Models.pdf
Open Access since 13/07/2024
Type: 2. Post-print / Author's Accepted Manuscript
License: PUBLIC - All rights reserved
Size: 17.75 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2989784