
Salvi, Massimo; Seoni, Silvia; Campagner, Andrea; Gertych, Arkadiusz; Acharya, U. Rajendra; Molinari, Filippo; Cabitza, Federico. Explainability and uncertainty: Two sides of the same coin for enhancing the interpretability of deep learning models in healthcare. International Journal of Medical Informatics, ISSN 1386-5056, vol. 197 (2025). DOI: 10.1016/j.ijmedinf.2025.105846

Explainability and uncertainty: Two sides of the same coin for enhancing the interpretability of deep learning models in healthcare

Salvi, Massimo; Seoni, Silvia; Molinari, Filippo
2025

Abstract

Background: The increasing use of Deep Learning (DL) in healthcare has highlighted the critical need for improved transparency and interpretability. While Explainable Artificial Intelligence (XAI) methods provide insights into model predictions, reliability cannot be guaranteed by relying on explanations alone. Objectives: This position paper proposes integrating Uncertainty Quantification (UQ) with XAI methods to improve model reliability and trustworthiness in healthcare applications. Methods: We examine state-of-the-art XAI and UQ techniques, discuss implementation challenges, and suggest solutions for combining UQ with XAI methods. We propose a framework for estimating both aleatoric and epistemic uncertainty in the XAI context, providing illustrative examples of its potential application. Results: Our analysis indicates that integrating UQ with XAI could significantly enhance the reliability of DL models in practice. This approach has the potential to reduce interpretation biases and over-reliance, leading to more cautious and informed use of AI in healthcare.
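The abstract's core idea, attaching an uncertainty estimate to an explanation rather than to the prediction alone, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the framework from the paper: `explain_fn` stands in for any stochastic explainer (e.g. gradient saliency with dropout kept active at inference), and the per-feature spread across repeated runs serves as a simple epistemic-uncertainty proxy for the explanation.

```python
import numpy as np

def saliency_with_uncertainty(explain_fn, x, n_samples=20, seed=0):
    """Monte Carlo estimate of an explanation and its uncertainty.

    explain_fn(x, rng) -> saliency map (np.ndarray); stochasticity
    (e.g. dropout active at inference) is injected via `rng`.
    Returns (mean_map, std_map): the averaged explanation and the
    per-feature spread used as an epistemic-uncertainty proxy.
    """
    rng = np.random.default_rng(seed)
    maps = np.stack([explain_fn(x, rng) for _ in range(n_samples)])
    return maps.mean(axis=0), maps.std(axis=0)

# Toy stand-in for a stochastic explainer (purely illustrative):
def toy_explainer(x, rng):
    mask = rng.random(x.shape) > 0.2   # simulated dropout noise
    return np.abs(x) * mask

x = np.array([0.5, -2.0, 0.1, 1.5])
mean_map, std_map = saliency_with_uncertainty(toy_explainer, x, n_samples=200)
```

A clinician-facing tool could then flag features whose `std_map` value is large relative to `mean_map`, signalling that the highlighted region should not be trusted at face value.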
Files in this record:
File: (2025) paper - XAI_UQ position.pdf
Access: open access
Type: 2a Post-print editorial version / Version of Record
Licence: Creative Commons
Size: 3.31 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2997766