
Detecting window-to-wall ratio for urban-scale building simulations using deep learning with street view imagery and an automatic classification algorithm / Suppa, Anthony Robert; Aliberti, Alessandro; Bottero, Marta Carla; Corrado, Vincenzo. - In: BUILDING SIMULATION. - ISSN 1996-3599. - (2025). [10.1007/s12273-025-1301-3]

Detecting window-to-wall ratio for urban-scale building simulations using deep learning with street view imagery and an automatic classification algorithm

Suppa, Anthony Robert; Aliberti, Alessandro; Bottero, Marta Carla; Corrado, Vincenzo
2025

Abstract

Machine learning techniques can fill data gaps for urban-scale building simulations, particularly gaps around window-to-wall ratio (WWR). This study presents a comprehensive workflow to (1) automatically extract and stitch images from Google Street View (GSV); (2) label images with a custom Rhino-based tool that aids annotation of occluded glazing; (3) detect wall, garage, and glazing objects by training and validating a YOLOv9 deep learning model with three added post-processing scripts; (4) calculate WWR at façade, building, and district scales; and (5) simulate district energy consumption in an urban building energy model (UBEM). Results include a 96% image-capture rate from GSV, indicating a robust extraction and stitching algorithm. After converting model detections into WWR, 94% of façades fall within ±5% of the ground-truth WWR and 100% within ±10%. A novel automatic algorithm upscales façade detections to estimate WWR on non-street-facing sides and rears, yielding a distinct WWR for each face of each building. For a case study in Turin, Italy, detected WWRs are 5.2% and 6.9% greater when upscaling with OpenStreetMap and municipal GIS data, respectively, than when using TABULA, leading to 1.5% and 35.5% increases in heating and cooling energy need in the UBEM. The workflow is made openly available to support future research in other contexts.
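As an illustration of step (1), the sketch below fetches Street View tiles at several compass headings through Google's documented Street View Static API and concatenates them into a wider view. This is a minimal sketch, not the paper's extraction and stitching algorithm: the endpoint and its size/location/heading/fov/pitch parameters come from Google's public API documentation, while the heading choices, pitch value, and naive side-by-side stitching are assumptions made for illustration.

    import requests
    from io import BytesIO
    from PIL import Image

    API_URL = "https://maps.googleapis.com/maps/api/streetview"

    def fetch_tile(lat, lon, heading, key):
        """Request one 640x640 Street View tile at a given compass heading."""
        params = {
            "size": "640x640",           # maximum size for the Static API
            "location": f"{lat},{lon}",
            "heading": heading,          # compass bearing of the camera, degrees
            "fov": 90,                   # horizontal field of view, degrees
            "pitch": 10,                 # assumed slight upward tilt toward façades
            "key": key,
        }
        resp = requests.get(API_URL, params=params, timeout=30)
        resp.raise_for_status()
        return Image.open(BytesIO(resp.content))

    def stitch(tiles):
        """Naive horizontal concatenation of equal-height tiles."""
        canvas = Image.new("RGB", (sum(t.width for t in tiles), tiles[0].height))
        x = 0
        for tile in tiles:
            canvas.paste(tile, (x, 0))
            x += tile.width
        return canvas

    # Example: three adjacent 90-degree views around a point in Turin.
    # panorama = stitch([fetch_tile(45.0703, 7.6869, h, "YOUR_KEY") for h in (0, 90, 180)])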
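For step (4), once the detector's bounding boxes are available, façade-scale WWR reduces to a ratio of detected areas: total glazing area over gross wall area. The sketch below shows that computation. The Detection structure, the assumption that the detected wall box spans the gross façade (windows included), and the garage-as-wall flag are illustrative choices, not the paper's released post-processing scripts.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str   # "wall", "glazing", or "garage"
        x0: float    # bounding-box corners in image pixels
        y0: float
        x1: float
        y1: float

        def area(self) -> float:
            return max(0.0, self.x1 - self.x0) * max(0.0, self.y1 - self.y0)

    def facade_wwr(dets, count_garage_as_wall=True):
        """WWR = total glazing area / gross wall area for one façade.

        Whether garage doors count toward the wall area is a modelling
        choice; the flag here is illustrative, not the paper's rule.
        """
        glazing = sum(d.area() for d in dets if d.label == "glazing")
        wall = sum(d.area() for d in dets if d.label == "wall")
        if count_garage_as_wall:
            wall += sum(d.area() for d in dets if d.label == "garage")
        return glazing / wall if wall > 0 else 0.0

    # Example: one wall box spanning the façade plus two windows.
    dets = [Detection("wall", 0, 0, 1000, 600),
            Detection("glazing", 100, 200, 250, 400),
            Detection("glazing", 500, 200, 650, 400)]
    print(facade_wwr(dets))  # 0.1, i.e. a WWR of 10%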
Files in this record:
File: s12273-025-1301-3.pdf
Type: 2a post-print, publisher version / Version of Record
License: Non-public (private/restricted access)
Size: 5.43 MB
Format: Adobe PDF
Access: restricted; a copy may be requested

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3001390