
Depth Any Canopy: Leveraging Depth Foundation Models for Canopy Height Estimation / Rege Cambrin, Daniele; Corley, Isaac; Garza, Paolo. - 15624:(2025), pp. 71-86. (Paper presented at the European Conference on Computer Vision, held in Milan, Italy, September 29–October 4, 2024) [10.1007/978-3-031-92387-6_5].

Depth Any Canopy: Leveraging Depth Foundation Models for Canopy Height Estimation

Daniele Rege Cambrin; Paolo Garza
2025

Abstract

Estimating global tree canopy height is crucial for forest conservation and climate change applications. However, capturing high-resolution ground-truth canopy height with LiDAR is expensive and not available globally. An efficient alternative is to train a canopy height estimator that operates on single-view remotely sensed imagery. The primary obstacle to this approach is that such methods require significant training data to generalize well globally and across uncommon edge cases. Recent monocular depth estimation foundation models have shown strong zero-shot performance even on complex scenes. In this paper, we transfer the representations learned by these models to the remote sensing domain for measuring canopy height. Our findings suggest that the proposed Depth Any Canopy, obtained by fine-tuning the Depth Anything v2 model for canopy height estimation, provides a performant and efficient solution, matching or surpassing the current state-of-the-art while using only a fraction of the computational resources and parameters. Furthermore, our approach requires less than $1.30 in compute and results in an estimated carbon footprint of 0.14 kgCO₂. Code, experimental results, and model checkpoints are openly available at github.com/DarthReca/depth-any-canopy.
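The fine-tuning setup described in the abstract can be sketched as a per-pixel regression loop. This is a minimal illustration, not the authors' exact recipe: `PlaceholderBackbone` is a hypothetical stand-in for the pretrained Depth Anything v2 encoder-decoder (in practice the released checkpoint would be loaded), and the L1 loss, optimizer settings, and toy data are illustrative assumptions.

```python
import torch
from torch import nn

# Hypothetical stand-in for the Depth Anything v2 model; in the actual
# pipeline the pretrained depth-estimation checkpoint would be loaded
# and fine-tuned instead of training this tiny network from scratch.
class PlaceholderBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        # Predict one scalar per pixel: (B, 3, H, W) -> (B, H, W)
        return self.net(x).squeeze(1)

model = PlaceholderBackbone()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.L1Loss()  # illustrative per-pixel regression loss

# Toy batch: aerial RGB tiles and LiDAR-derived canopy heights (metres).
images = torch.rand(2, 3, 64, 64)
canopy_height = torch.rand(2, 64, 64) * 30

model.train()
for _ in range(3):  # a few toy optimization steps
    optimizer.zero_grad()
    pred = model(images)
    loss = criterion(pred, canopy_height)
    loss.backward()
    optimizer.step()
```

The key idea the paper exploits is that relative-depth representations learned at scale transfer to metric canopy height with only light fine-tuning, so the loop above would start from pretrained weights rather than random initialization.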
ISBN: 978-3-031-92386-9
ISBN: 978-3-031-92387-6
Files in this record:

ECCV_2024_CV4E.pdf
Embargoed until 12/05/2026
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 10.12 MB
Format: Adobe PDF

978-3-031-92387-6_5.pdf
Restricted access
Type: 2a. Post-print, editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 3.79 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2992546