The impact of AI errors in a human-in-the-loop process

Date
2024-01-07
Journal Title
Cognitive Research: Principles and Implications
Journal ISSN
Volume Title
Publisher
Springer Science and Business Media Deutschland GmbH
Abstract
Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human–computer interaction may influence the final decision. In two experiments, we simulate an automated decision-making process in which participants judge multiple defendants in relation to various crimes, and we manipulate when participants receive support from a supposed Artificial Intelligence system (before or after they make their own judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/b6p4z/. Experiment 2 was preregistered.
Keywords
AI
Artificial intelligence
Automation bias
Compliance
Decision-making
Human-in-the-loop
Human–computer interaction
Description
Subjects
Citation
Agudo, U., Liberal, K. G., Arrese, M., & Matute, H. (2024). The impact of AI errors in a human-in-the-loop process. Cognitive Research: Principles and Implications, 9(1). https://doi.org/10.1186/S41235-023-00529-3
Collections