
Consensus statement on the credibility assessment of machine learning predictors / Aldieri, Alessandra; Gamage, Thiranja Prasad Babarenda; La Mattina, Antonino Amedeo; Loewe, Axel; Pappalardo, Francesco; Viceconti, Marco. - In: BRIEFINGS IN BIOINFORMATICS. - ISSN 1467-5463. - 26:2(2025). [10.1093/bib/bbaf100]

Consensus statement on the credibility assessment of machine learning predictors

Aldieri, Alessandra;
2025

Abstract

The rapid integration of machine learning (ML) predictors into in silico medicine has revolutionized the estimation of quantities of interest that are otherwise challenging to measure directly. However, the credibility of these predictors is critical, especially when they inform high-stakes healthcare decisions. This position paper presents a consensus statement developed by experts within the In Silico World Community of Practice. We outline 12 key statements forming the theoretical foundation for evaluating the credibility of ML predictors, emphasizing the necessity of causal knowledge, rigorous error quantification, and robustness to biases. By comparing ML predictors with biophysical models, we highlight unique challenges associated with implicit causal knowledge and propose strategies to ensure reliability and applicability. Our recommendations aim to guide researchers, developers, and regulators in the rigorous assessment and deployment of ML predictors in clinical and biomedical contexts.
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3001209