
Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022) / Loh, Hui Wen; Ooi, Chui Ping; Seoni, Silvia; Barua, Prabal Datta; Molinari, Filippo; Acharya, U Rajendra. - In: COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE. - ISSN 0169-2607. - ELECTRONIC. - 226:(2022). [10.1016/j.cmpb.2022.107161]

Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022)

Seoni, Silvia; Molinari, Filippo
2022

Abstract

Background and objectives: Artificial intelligence (AI) has branched out into various applications in healthcare, such as health services management, predictive medicine, clinical decision-making, and patient data and diagnostics. Although AI models have achieved human-like performance, their use is still limited because they are seen as black boxes. This lack of trust remains the main reason for their low adoption in practice, especially in healthcare. Hence, explainable artificial intelligence (XAI) has been introduced as a technique that can provide confidence in a model's prediction by explaining how the prediction is derived, thereby encouraging the use of AI systems in healthcare. The primary goal of this review is to identify the areas of healthcare that require more attention from the XAI research community.

Methods: Multiple journal databases were thoroughly searched following the PRISMA 2020 guidelines. Studies not published in Q1 journals, which are considered highly credible, were excluded.

Results: In this review, we surveyed 99 Q1 articles covering the following XAI techniques: SHAP, LIME, Grad-CAM, LRP, fuzzy classifiers, EBM, CBR, rule-based systems, and others.

Conclusion: We discovered that detecting abnormalities in 1D biosignals and identifying key text in clinical notes are areas that require more attention from the XAI research community. We hope this review will encourage the development of a holistic cloud system for a smart city.

(c) 2022 Elsevier B.V. All rights reserved.
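As context for the kind of post-hoc explanation the surveyed techniques produce, the sketch below is a minimal, illustrative example (not code from the reviewed article) using SHAP, one of the methods covered in the review. It assumes the Python `shap` and `scikit-learn` packages and a public clinical dataset, and attributes a single prediction of a "black box" classifier to individual input features.

```python
# Minimal SHAP sketch: attribute one prediction to its input features.
# Illustrative only; assumes the shap and scikit-learn packages.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Train a simple "black box" classifier on a public clinical dataset.
data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

# TreeExplainer computes Shapley values: each feature's contribution
# (in log-odds, for this model) to moving the prediction away from
# the dataset's base rate.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])

# Rank the five features that most influenced the first sample.
ranked = sorted(zip(data.feature_names, shap_values[0]),
                key=lambda t: abs(t[1]), reverse=True)
for name, value in ranked[:5]:
    print(f"{name}: {value:+.3f}")
```

The printed signed contributions are what a clinician would inspect to judge whether the model's reasoning is plausible, which is the trust-building role of XAI that the abstract describes.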
Files in this record:
File: 1-s2.0-S0169260722005429-main.pdf (not available)
Description: XAI 4 healthcare - post
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 2.52 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2974267