Shapley Values (SVs) are an established concept for explaining black-box machine learning models by quantifying each feature's contribution to a model prediction. Since their exact computation has exponential complexity in the number of features, a variety of approximate approaches to SV estimation have been proposed. The state-of-the-art neural SV estimator, FastSHAP, advances sampling-based methods by first training a supervised surrogate model to learn the conditional expectation of the original model given every feature subset, and then generating SV estimates through a separate network trained with a weighted least-squares objective. While this approach ensures fast SV inference, it requires significant training time, making it unsuitable for scenarios in which the black-box model must be frequently retrained. To address this limitation, we propose LightningSHAP, a cost-effective neural network-based estimator that jointly computes conditional expectations and SV estimates to reduce training cost. The unified network learning process minimizes feature-level estimation errors while preserving SV efficiency. Experiments on both dynamic and static tabular datasets show that LightningSHAP achieves a 25%-60% speedup in overall computation time (i.e., the sum of training and inference times) on medium-to-large datasets compared to existing neural SV estimators, with lower or comparable estimation errors. Furthermore, results on image datasets indicate that LightningSHAP yields a 30%-55% speedup while preserving explanation quality.
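To make the exponential-cost baseline concrete, the sketch below computes exact Shapley values for a toy set function via the classic enumeration formula. This is an illustration of the general SV definition, not of the paper's method: the toy value function stands in for the conditional expectation E[f(x) | x_S] that the surrogate network learns, and the triple loop over subsets is precisely the cost that neural estimators such as FastSHAP and LightningSHAP avoid.

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values of a set function `value` over n players.

    Enumerates every subset S not containing player i and weights the
    marginal contribution value(S ∪ {i}) - value(S) by |S|!(n-|S|-1)!/n!.
    Runtime is exponential in n, which motivates approximate estimators.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy linear game: v(S) = sum of per-feature weights in S. For a linear
# game the Shapley value of feature i is exactly its weight.
w = [3.0, 1.0, -2.0]
v = lambda S: sum(w[i] for i in S)
print(shapley_values(v, 3))  # → [3.0, 1.0, -2.0]
```

For the linear game the marginal contribution of feature i is w[i] in every subset and the subset weights sum to one, so the exact values recover the weights; real models require estimating E[f(x) | x_S] for each subset, which is what makes the problem hard.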
LightningSHAP: A Cost-Effective Approach to Local Shapley Values Estimation / Napolitano, Davide; Cagliero, Luca. - In: IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE. - ISSN 2691-4581. - (In press), pp. 1-10. [10.1109/tai.2025.3645716]
LightningSHAP: A Cost-Effective Approach to Local Shapley Values Estimation
Napolitano, Davide; Cagliero, Luca
In press
| File | Type | License | Size | Format |
|---|---|---|---|---|
| IEEE_TAI___EfficientFastSHAP.pdf | 2. Post-print / Author's Accepted Manuscript | Non-public (private/restricted access) | 2.5 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/3007147
