User Characteristics in Explainable AI: The Rabbit Hole of Personalization? / Nimmo, Robert; Constantinides, Marios; Zhou, Ke; Quercia, Daniele; Stumpf, Simone. - (2024), pp. 1-13. (Paper presented at CHI '24: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 2024) [10.1145/3613904.3642352].

User Characteristics in Explainable AI: The Rabbit Hole of Personalization?

Abstract

As Artificial Intelligence (AI) becomes ubiquitous, the need for Explainable AI (XAI) has become critical for transparency and trust among users. A significant challenge in XAI is catering to diverse users, such as data scientists, domain experts, and end-users. Recent research has started to investigate how users' characteristics impact interactions with and user experience of explanations, with a view to personalizing XAI. However, are we heading down a rabbit hole by focusing on unimportant details? Our research aimed to investigate how user characteristics are related to using, understanding, and trusting an AI system that provides explanations. Our empirical study with 149 participants who interacted with an XAI system that flagged inappropriate comments showed that very few user characteristics mattered; only age and the personality trait openness influenced actual understanding. Our work provides evidence to reorient user-focused XAI research and question the pursuit of personalized XAI based on fine-grained user characteristics.
Year: 2024
ISBN: 979-8-4007-0330-0
Files in this record:

user24_chi.pdf (restricted access)
Type: 1. Preprint / submitted version (pre-review)
License: Non-public - Private/restricted access
Size: 1.24 MB
Format: Adobe PDF

3613904.3642352.pdf (restricted access)
Type: 2a. Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 1.52 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2996084