
Ventura, Francesco; Cerquitelli, Tania. What's in the box? Explaining the black-box model through an evaluation of its interpretable features. arXiv:1908.04348 (2019), pp. 1-5.

What's in the box? Explaining the black-box model through an evaluation of its interpretable features

Francesco Ventura; Tania Cerquitelli
2019

Abstract

Algorithms are powerful and necessary tools behind much of the information we use every day. However, they may introduce new sources of bias, discrimination, and other unfair practices that affect people who are unaware of them. Greater algorithmic transparency is indispensable for providing more credible and reliable services. Moreover, requiring developers to design transparent algorithm-driven applications keeps models accessible and understandable to humans, increasing the trust of end users. In this paper we present EBAnO, a new engine that produces prediction-local explanations for a black-box model by exploiting interpretable feature perturbations. EBAnO combines the hypercolumn representation with cluster analysis to identify a set of interpretable features in images. Furthermore, two indices are proposed to measure the influence of input features on the final prediction made by a CNN model. EBAnO has been preliminarily tested on a set of heterogeneous images. The results highlight the effectiveness of EBAnO in explaining CNN classifications through the evaluation of the influence of interpretable features.
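
The abstract describes the pipeline only at a high level: extract pixel-wise hypercolumns from a CNN, cluster them into interpretable image features, then perturb each feature and measure how the prediction changes. The sketch below illustrates one possible realization of that idea; the choice of a Keras VGG16 backbone, the specific layers, k-means clustering, and the occlusion-based influence score are all assumptions for illustration, not the authors' exact implementation or indices.

import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

# Assumed backbone and layers; the paper does not fix these choices.
model = tf.keras.applications.VGG16(weights="imagenet")
layer_names = ["block3_conv3", "block4_conv3"]
extractor = tf.keras.Model(
    model.input,
    [model.get_layer(n).output for n in layer_names],
)

def hypercolumns(image):
    """Stack upsampled activations so every pixel gets one feature vector.
    `image`: float32 (224, 224, 3), already passed through
    tf.keras.applications.vgg16.preprocess_input."""
    maps = extractor(image[np.newaxis])
    h, w = image.shape[:2]
    cols = [tf.image.resize(m, (h, w))[0].numpy() for m in maps]
    return np.concatenate(cols, axis=-1).reshape(h * w, -1)

def interpretable_features(image, k=4):
    """Cluster pixel-wise hypercolumns into k candidate features."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(hypercolumns(image))
    return labels.reshape(image.shape[:2])

def feature_influence(image, segments, class_idx):
    """One simple influence measure (an assumption, not the paper's indices):
    drop in the target class probability when a feature's pixels are
    blacked out."""
    base = model(image[np.newaxis]).numpy()[0, class_idx]
    scores = {}
    for seg in np.unique(segments):
        perturbed = image.copy()
        perturbed[segments == seg] = 0.0  # occlusion perturbation
        p = model(perturbed[np.newaxis]).numpy()[0, class_idx]
        scores[seg] = base - p
    return scores

In use, one would segment an image with interpretable_features, take class_idx as the model's top predicted class, and rank the resulting segments by their influence scores to obtain a prediction-local explanation.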
Files for this record:
1908.04348.pdf
Open access
Description: Main article
Type: 1. Preprint / submitted version (pre-review)
License: PUBLIC - All rights reserved
Size: 1.19 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2749927