Are AI systems biased against the poor?: a machine learning analysis using Word2Vec and GloVe embeddings

dc.contributor.author: Curto Rex, Georgina
dc.contributor.author: Jojoa Acosta, Mario Fernando
dc.contributor.author: Comim, Flavio
dc.contributor.author: García-Zapirain, Begoña
dc.date.accessioned: 2025-05-09T12:03:55Z
dc.date.available: 2025-05-09T12:03:55Z
dc.date.issued: 2024-04
dc.date.updated: 2025-05-09T12:03:55Z
dc.description.abstract: Among the myriad of technical approaches and abstract guidelines proposed on the topic of AI bias, there has been an urgent call to translate the principle of fairness into operational AI reality with the involvement of social sciences specialists to analyse the context of specific types of bias, since no generalizable solution exists. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular against the poor, providing a conceptual framework of the issue and a tailor-made model from which meaningful data are obtained using Natural Language Processing word vectors in pretrained Google Word2Vec, Twitter and Wikipedia GloVe word embeddings. The results of the study offer the first set of data that evidences the existence of bias against the poor and suggest that Google Word2Vec shows a higher degree of bias when the terms are related to beliefs, whereas bias is higher in Twitter GloVe when the terms express behaviour. This article contributes to the body of work on bias, both from an AI and a social sciences perspective, by providing evidence of a transversal aggravating factor for historical types of discrimination. The evidence of bias against the poor also has important consequences in terms of human development, since it often leads to discrimination, which constitutes an obstacle to the effectiveness of poverty reduction policies.
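The abstract describes quantifying bias against the poor through word-vector associations in pretrained Word2Vec and GloVe embeddings. The paper's exact metric is not reproduced in this record, but this style of analysis is commonly done with WEAT-style association scores (differences of mean cosine similarities between target and attribute words). The sketch below is illustrative only: the toy 3-dimensional vectors and word lists stand in for real pretrained embeddings and are not the paper's data.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attrs_neg, attrs_pos, vecs):
    """Mean similarity of `word` to the negative attribute set minus the
    positive one; a larger value means a stronger negative association."""
    neg = np.mean([cosine(vecs[word], vecs[a]) for a in attrs_neg])
    pos = np.mean([cosine(vecs[word], vecs[a]) for a in attrs_pos])
    return neg - pos

# Toy vectors standing in for pretrained Word2Vec/GloVe embeddings
# (hypothetical values chosen only to make the example self-contained).
vecs = {
    "poor":     np.array([0.9, 0.1, 0.0]),
    "rich":     np.array([0.1, 0.9, 0.0]),
    "lazy":     np.array([0.8, 0.2, 0.1]),   # negative attribute term
    "diligent": np.array([0.2, 0.8, 0.1]),   # positive attribute term
}

bias_poor = association("poor", ["lazy"], ["diligent"], vecs)
bias_rich = association("rich", ["lazy"], ["diligent"], vecs)
# bias_poor > bias_rich here: in these toy vectors, "poor" sits closer
# to the negative attribute than "rich" does.
```

With real embeddings (e.g. loaded via gensim's `KeyedVectors`) the same score can be computed over the belief- and behaviour-related term lists the abstract mentions, allowing the per-embedding comparison reported in the study.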
dc.description.sponsorship: The research leading to these results received partial funding support from the Aristos Campus Mundus Project, promoted by the Universities of Ramon Llull, Deusto and Comillas, with the aim of fostering academic excellence.
dc.identifier.citation: Curto, G., Jojoa Acosta, M. F., Comim, F., & Garcia-Zapirain, B. (2024). Are AI systems biased against the poor?: a machine learning analysis using Word2Vec and GloVe embeddings. AI and Society, 39(2), 617-632. https://doi.org/10.1007/s00146-022-01494-z
dc.identifier.doi: 10.1007/s00146-022-01494-z
dc.identifier.eissn: 1435-5655
dc.identifier.issn: 0951-5666
dc.identifier.uri: http://hdl.handle.net/20.500.14454/2706
dc.language.iso: eng
dc.publisher: Springer Science and Business Media Deutschland GmbH
dc.rights: © The Author(s) 2022, corrected publication 2022
dc.subject.other: Artificial intelligence
dc.subject.other: Bias
dc.subject.other: Embeddings
dc.subject.other: Poverty
dc.title: Are AI systems biased against the poor?: a machine learning analysis using Word2Vec and GloVe embeddings
dc.type: journal article
dcterms.accessRights: open access
oaire.citation.endPage: 632
oaire.citation.issue: 2
oaire.citation.startPage: 617
oaire.citation.title: AI and Society
oaire.citation.volume: 39
oaire.licenseCondition: https://creativecommons.org/licenses/by/4.0/
oaire.version: CVoR
Files
Original bundle
Name: curto_areAI_2024.pdf
Size: 1.21 MB
Format: Adobe Portable Document Format