
Automated classification of civil structures defects based on Convolutional Neural Network / Savino, Pierclaudio; Tondolo, Francesco. - In: FRONTIERS OF STRUCTURAL AND CIVIL ENGINEERING. - ISSN 2095-2430. - 15:(2021), pp. 305-317. [10.1007/s11709-021-0725-9]

Automated classification of civil structures defects based on Convolutional Neural Network

Savino, Pierclaudio; Tondolo, Francesco
2021

Abstract

Today, the most widely used method for civil infrastructure inspection is visual assessment performed by certified inspectors following prescribed protocols. However, increasingly aggressive environmental and load conditions, coupled with the fact that many structures are reaching the end of their life cycle, have highlighted the need to automate damage identification in order to keep pace with the number of structures that must be inspected. To address this challenge, this paper presents a method to automate concrete damage classification using a deep Convolutional Neural Network (CNN). The CNN is designed after an experimental investigation of a wide range of pretrained networks, all adapted with the transfer learning technique. Training and validation are performed on a purpose-built database of 1352 images balanced among “undamaged”, “cracked”, and “delaminated” concrete surfaces. To increase the robustness of the network to real-world conditions, images with different configurations were collected from the Internet and from on-field bridge inspections. The GoogLeNet model is selected as the most suitable network for concrete damage classification, achieving the highest validation accuracy of about 94%. The results confirm that the proposed model can correctly classify images of real concrete surfaces from bridges, tunnels, and pavements, providing an effective alternative to the current visual inspection.
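
The abstract describes transfer learning on a pretrained GoogLeNet for three-class classification of concrete surface images. The following is a minimal sketch of that general approach, not the authors' implementation: it assumes PyTorch/torchvision (the paper does not state its framework), a hypothetical "concrete_images" folder with one subfolder per class, and illustrative hyperparameters (batch size, learning rate, number of epochs).

# Minimal transfer-learning sketch (assumption: PyTorch/torchvision).
# A GoogLeNet pretrained on ImageNet is adapted to the three classes
# used in the paper: undamaged, cracked, delaminated.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing expected by the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: concrete_images/{undamaged,cracked,delaminated}/*.jpg
dataset = datasets.ImageFolder("concrete_images", transform=preprocess)
train_size = int(0.8 * len(dataset))  # illustrative 80/20 split
train_set, val_set = torch.utils.data.random_split(
    dataset, [train_size, len(dataset) - train_size])
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Load the pretrained GoogLeNet and replace the final fully connected
# layer so it outputs three classes (transfer learning).
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(10):  # illustrative number of epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

Validation accuracy would then be measured on val_set with the model in eval mode; the paper reports about 94% for GoogLeNet on its own database.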
Files in this product:

Automated classification of civil structure defects based on convolutional neural network.pdf
Availability: Not available
Type: 2a Post-print, publisher's version / Version of Record
License: Non-public - Private/restricted access
Size: 3.91 MB
Format: Adobe PDF

Automated classification of civil structures defects based on Convolutional Neural Network - Marked.pdf
Availability: Open Access since 29/04/2022
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 774.13 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2857092