
TrustNet: a lightweight network with integrated uncertainty quantification and quantitative explainable AI for ischemic stroke detection in CT images / Inamdar, Mahesh Anil; Gudigar, Anjan; Raghavendra, U.; Kaprekar, Aryaman; Salvi, Massimo; Seoni, Silvia; Menon, Girish R.; Molinari, Filippo; Acharya, U. R.. - In: SCIENTIFIC REPORTS. - ISSN 2045-2322. - 16:1(2026). [10.1038/s41598-026-37169-8]

TrustNet: a lightweight network with integrated uncertainty quantification and quantitative explainable AI for ischemic stroke detection in CT images

Salvi, Massimo; Seoni, Silvia; Molinari, Filippo
2026

Abstract

Diagnosing ischemic stroke from computed tomography (CT) images is a challenging and detailed process that requires precise, careful analysis by a medical professional. Deep learning techniques offer an effective solution to this problem because of their remarkable performance. Nevertheless, most of these methods still lack the uncertainty quantification (UQ) and eXplainable artificial intelligence (XAI) features that are essential for clinical practice and acceptance. We present TrustNet, a small but powerful convolutional neural network that combines Monte Carlo dropout with quantitative Grad-CAM. These techniques expose two independent sources of unreliability: uncertainty in the model's classification and inconsistency in recognizing the relevant visual features. The model was validated on a set of 2023 brain CT scans and compared with networks commonly used for classification. TrustNet achieved an accuracy of 94.67%, with 100% specificity, 91.6% sensitivity, and 100% precision, competing against various conventional architectures. The introduction of the UQ and XAI methods led to a consistent performance enhancement over the baseline models by limiting the number of incorrect predictions, which is crucial for stroke diagnosis. Alongside this performance, our approach can also explain its reasoning and estimate its confidence, both essential for model deployment. This method is an indispensable tool for reducing diagnostic bias and thus safeguarding the use of AI in the clinical workflow.
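The Monte Carlo dropout idea the abstract refers to can be illustrated with a minimal sketch: dropout is kept active at inference time, the network is run many times on the same input, and the spread of the resulting predictions is summarized as an uncertainty score. Everything below (the toy weight matrix `W`, the single-layer "network", the layer sizes, and the entropy-based score) is hypothetical and for illustration only; it is not the paper's TrustNet architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for a one-layer "network" with 16 inputs and 2 classes
# (hypothetical; stands in for a trained CNN's final layer).
W = rng.normal(size=(16, 2))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_dropout_predict(x, n_passes=100, p_drop=0.5):
    """Run n_passes stochastic forward passes with dropout ACTIVE at
    inference time, then summarize the predictive distribution."""
    probs = []
    for _ in range(n_passes):
        mask = rng.random(x.shape) > p_drop       # Bernoulli dropout mask
        x_dropped = x * mask / (1.0 - p_drop)     # inverted-dropout scaling
        probs.append(softmax(x_dropped @ W))
    probs = np.array(probs)
    mean_prob = probs.mean(axis=0)                # averaged class probabilities
    # Predictive entropy of the averaged distribution as an uncertainty score
    uncertainty = -np.sum(mean_prob * np.log(mean_prob + 1e-12))
    return mean_prob, uncertainty

x = rng.normal(size=16)                           # one toy input "scan feature"
mean_prob, unc = mc_dropout_predict(x)
print(mean_prob, unc)
```

In a clinical setting, cases whose uncertainty score exceeds a chosen threshold would be flagged for review by a radiologist rather than auto-classified, which is how limiting incorrect predictions is typically operationalized.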
Files in this product:

File: (2026) paper - Trustnet XAI-UQ brain stroke.pdf
Access: open access
Type: 2a Post-print editorial version / Version of Record
License: Creative Commons
Size: 3.9 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3009288