Identifying Imbalance Thresholds in Input Data to Achieve Desired Levels of Algorithmic Fairness / Mecati, Mariachiara; Adrignola, Andrea; Vetro, Antonio; Torchiano, Marco. - (2022), pp. 4700-4709. (Paper presented at the 2022 IEEE International Conference on Big Data (IEEE BigData 2022) - Second International Workshop on Data Science for equality, inclusion and well-being challenges (DS4EIW 2022), held in Osaka (Japan) on 17-20 December 2022) [10.1109/BigData55660.2022.10021078].

Identifying Imbalance Thresholds in Input Data to Achieve Desired Levels of Algorithmic Fairness

Mecati, Mariachiara; Vetro, Antonio; Torchiano, Marco
2022

Abstract

Software bias has emerged as a relevant issue in recent years, in conjunction with the increasing adoption of software automation in a variety of organizational and production processes of our society, especially in decision-making. Among the causes of software bias, data imbalance is one of the most significant. In this paper, we treat imbalance in datasets as a risk factor for software bias. Specifically, we define a methodology to identify thresholds for balance measures that serve as meaningful risk indicators of unfair classification output. We apply the methodology to a large number of data mutations with different classification tasks, testing all possible combinations of balance measure, unfairness measure, and algorithm. The results show that, on average, the thresholds accurately identify the risk of unfair output. In certain cases they tend to overestimate the risk: although such behavior could be instrumental to a prudential approach towards software discrimination, further work will be devoted to better assessing the reliability of the thresholds. The proposed methodology is generic and can be applied to different datasets, algorithms, and context-specific thresholds.
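For illustration only, the minimal Python sketch below shows the general idea of using a balance measure as a threshold-based risk indicator: the balance of a protected attribute in the input data is computed and compared against a threshold, and the attribute is flagged as a risk factor for unfair classification output when its balance is too low. The Shannon-entropy-based balance measure, the 0.8 threshold, and the sample attribute are illustrative assumptions, not taken from the paper; the actual measures and thresholds are defined in the full text.

import math
from collections import Counter

def shannon_balance(values):
    # Normalized Shannon entropy of a categorical column:
    # 1.0 means perfectly balanced classes, values near 0 mean strong imbalance.
    counts = Counter(values)
    n = sum(counts.values())
    k = len(counts)
    if k <= 1:
        return 0.0  # a single class is maximally imbalanced
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return entropy / math.log(k)  # divide by the maximum entropy log(k)

def imbalance_risk(values, threshold=0.8):
    # Flag the column as a bias risk when its balance falls below the threshold.
    # The 0.8 threshold is purely illustrative; the paper derives thresholds empirically.
    balance = shannon_balance(values)
    return balance, balance < threshold

# Hypothetical protected attribute (e.g., gender) in a training set of 1000 rows.
sample = ["F"] * 120 + ["M"] * 880
balance, at_risk = imbalance_risk(sample)
print(f"balance = {balance:.3f}, risk of unfair output: {at_risk}")

In practice, such a check would be run for each protected attribute and for each balance measure under study, with thresholds calibrated to the application context, in line with the methodology proposed in the paper.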
Files in this item:

S34205.pdf
Access: restricted
Description: Manuscript
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 1.11 MB
Format: Adobe PDF
Paper_DS4EIW2022.pdf
Access: open
Description: Manuscript
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 1.08 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2974777