In recent years, generative artificial intelligence (GenAI) systems have assumed increasingly crucial roles in personnel recruitment and candidate profile analysis. However, using large language models introduces the risk of perpetuating and exacerbating existing gender stereotypes in the labour market. This research evaluates this phenomenon, analysing how a state-of-the-art generative model (GPT-5) suggests occupations and represents ideal candidates based on their gender, focusing on Italian graduates under 35 years old. The study consists of two complementary experiments. In the Candidate-driven experiment, the model is prompted to provide job suggestions for 24 synthetic candidate profiles, balanced by gender, age, experience, and professional field. Results show that, although no significant differences emerged in job titles, gendered linguistic patterns appear in the adjectives attributed to female and male candidates: the model tends to associate women with emotional and empathetic traits and men with strategic and analytical ones. The Job-driven experiment used 114 LinkedIn job advertisements as prompts to generate textual and visual representations of ideal candidates. The analysis of the outputs revealed a clear gender polarisation: the model assigned 71% of profiles to the male gender and 29% to the female gender. The strongest associations emerged in HR & People Operations occupations, assigned exclusively to female candidates, and in Operations, Technical & Manufacturing jobs, assigned exclusively to male candidates. The visual analysis confirms the perpetuation of gender stereotypes, depicting women in more approachable postures and men in assertive roles. These results suggest that, in the recruitment domain and under the experimental settings of this study, GenAI models do not merely reflect the gender biases of their training data but amplify them.
The research raises ethical questions about the use of these models in HR decision support, highlighting the need for transparency and bias mitigation strategies to ensure fairness and inclusive representation.
Gender bias and propagation of stereotypes in GenAI-assisted recruitment / Ullasci, Martina; Rondina, Marco; Coppola, Riccardo; Vetro', Antonio. - (In press). (ACM International Conference on the Foundations of Software Engineering (FSE) - 2nd Intersectionality and Software Engineering Workshop, Montreal (CA), 05-09/07/2026).
Gender bias and propagation of stereotypes in GenAI-assisted recruitment
Martina Ullasci; Marco Rondina; Riccardo Coppola; Antonio Vetro'
In press
| File | Size | Format |
|---|---|---|
| Gender_bias_x_FSE doi.pdf (open access; type: 2. Post-print / Author's Accepted Manuscript; licence: Creative Commons) | 6.43 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/3009913
