
Koudounas, Alkis; Pastor, Eliana; Mazzia, Vittorio; Giollo, Manuel; Gueudre, Thomas; Reale, Elisa; Cagliero, Luca; Cumani, Sandro; De Alfaro, Luca; Baralis, Elena; Amberti, Daniele. "Privacy Preserving Data Selection for Bias Mitigation in Speech Models." In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025), Volume 6: Industry Track, pp. 738-748. Paper presented at ACL 2025, Vienna, Austria, 27 July - 1 August 2025.

Privacy Preserving Data Selection for Bias Mitigation in Speech Models

Koudounas, Alkis; Pastor, Eliana; Cagliero, Luca; Cumani, Sandro; Baralis, Elena
2025

Abstract

Effectively selecting data from subgroups on which a model performs poorly is crucial for improving its performance. Traditional methods for identifying these subgroups often rely on sensitive information, raising privacy issues; moreover, gathering such information at runtime may be impractical. This paper introduces a cost-effective strategy that addresses both concerns. We identify underperforming subgroups and train a model to predict whether an utterance belongs to them, without requiring sensitive information. This model supports bias mitigation by selecting new data predicted to be challenging, which is then labeled and added to the re-training set of the speech model. Experimental results on intent classification and automatic speech recognition tasks show that our approach reduces bias and enhances performance, lowering error rates by up to 39% on FSC, 16% on ITALIC, and 22% on LibriSpeech.
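The abstract's pipeline, train a lightweight classifier to flag "challenging" utterances from non-sensitive signals, then select new data it scores as challenging for re-training, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the features, classifier choice, and selection budget are all assumptions.

```python
# Hypothetical sketch of privacy-preserving data selection: a classifier
# trained on NON-sensitive features predicts whether an utterance belongs
# to an underperforming ("challenging") subgroup; top-scoring candidates
# from an unlabeled pool are selected for labeling and re-training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy non-sensitive acoustic features (e.g., duration or SNR-like stats)
# for utterances on which the speech model's errors were already measured.
X_train = rng.normal(size=(200, 4))
# 1 = utterance belongs to an underperforming subgroup (synthetic rule
# here; in practice this comes from subgroup error analysis).
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

clf = LogisticRegression().fit(X_train, y_train)

# Score a pool of unlabeled candidate utterances and keep those most
# likely to be challenging; these would then be labeled and added to
# the speech model's re-training set.
X_pool = rng.normal(size=(100, 4))
scores = clf.predict_proba(X_pool)[:, 1]
selected = np.argsort(scores)[::-1][:20]  # indices of top-20 candidates
```

Because the selector never sees demographic or other sensitive attributes, the selection step itself does not require collecting that information at runtime.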
979-8-89176-288-6
Files in this record:

File: 2025.acl-industry.52.pdf
Access: open access
Type: 2a Post-print, publisher's version / Version of Record
License: Creative Commons
Size: 344.82 kB (Adobe PDF)

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3002215