Browsing by Author "Ser Lorente, Javier del"
Showing 1 - 2 of 2
Item
Managing the unknown in machine learning: definitions, related areas, recent advances, and prospects (Elsevier B.V., 2024-06-14) Barcina Blanco, Marcos; López Lobo, Jesús; García Bringas, Pablo; Ser Lorente, Javier del

In the rapidly evolving domain of machine learning, the ability to adapt to unforeseen circumstances and novel data types is of paramount importance. The deployment of Artificial Intelligence is progressively aimed at more realistic and open scenarios where data, tasks, and conditions are variable and not fully predetermined, and therefore where a closed-set assumption cannot hold. In such evolving environments, machine learning must be autonomous, continuous, and adaptive, requiring effective management of uncertainty and the unknown. In response, there is a vigorous effort to develop a new generation of models characterized by enhanced autonomy and a broad capacity to generalize, enabling them to perform effectively across a wide range of tasks. Machine learning in open-set environments poses many challenges and brings together different paradigms, some traditional and others emerging, whose overlap and mutual confusion make it difficult to distinguish them or give each the relevance it deserves. This work delves into the frontiers of methodologies that thrive in these open-set environments, identifying common practices, limitations, and connections between the paradigms Open-Ended Learning, Open-World Learning, Open Set Recognition, and other related areas such as Continual Learning, Out-of-Distribution Detection, Novelty Detection, and Active Learning. We seek to ease the understanding of these fields and their common roots, uncover open problems, and suggest several research directions that may motivate and articulate future efforts towards more robust and autonomous systems.

Item
On the black-box explainability of object detection models for safe and trustworthy industrial applications (Elsevier B.V., 2024-12) Andrés Fernández, Alain; Martínez Seras, Aitor; Laña Aurrecoechea, Ibai; Ser Lorente, Javier del

In the realm of human-machine interaction, artificial intelligence has become a powerful tool for accelerating data modeling tasks. Object detection methods have achieved outstanding results and are widely used in critical domains like autonomous driving and video surveillance. However, their adoption in high-risk applications, where errors may cause severe consequences, remains limited. Explainable Artificial Intelligence methods aim to address this issue, but many existing techniques are model-specific and designed for classification tasks, making them less effective for object detection and difficult for non-specialists to interpret. In this work we focus on model-agnostic explainability methods for object detection models and propose D-MFPP, an extension of the Morphological Fragmental Perturbation Pyramid (MFPP) technique that uses segmentation-based masks to generate explanations. Additionally, we introduce D-Deletion, a novel metric combining faithfulness and localization, adapted specifically to the unique demands of object detectors. We evaluate these methods on real-world industrial and robotic datasets, examining the influence of parameters such as the number of masks, model size, and image resolution on the quality of explanations.
Our experiments use single-stage object detection models applied to two safety-critical robotic environments: i) a shared human-robot workspace where safety is of paramount importance, and ii) an assembly area for battery kits, where safety is critical due to the potential for damage to high-risk components. Our findings show that D-Deletion effectively gauges the performance of explanations when multiple elements of the same class appear in a scene, while D-MFPP provides a promising alternative to D-RISE when fewer masks are used.
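The abstract does not specify how D-Deletion is computed. As background only, the classic deletion metric from the saliency-evaluation literature (which D-Deletion adapts to object detectors) progressively removes the most salient pixels and measures how quickly the model's score drops: a lower area under the score curve indicates a more faithful explanation. Below is a minimal, hypothetical sketch of that generic deletion metric; the toy `score_fn`, array shapes, and function name are illustrative assumptions, not the paper's actual D-Deletion implementation.

```python
import numpy as np

def deletion_auc(image, saliency, score_fn, step=0.1, baseline=0.0):
    """Generic deletion metric: zero out pixels from most to least salient,
    recording the model score at each step. Returns the area under the
    score-vs-fraction-deleted curve (lower = more faithful saliency)."""
    order = np.argsort(saliency.flatten())[::-1]  # most salient pixels first
    n = order.size
    perturbed = image.copy().reshape(-1)
    scores = [score_fn(image)]
    for frac in np.arange(step, 1.0 + 1e-9, step):
        k = int(round(frac * n))
        perturbed[order[:k]] = baseline            # delete top-k pixels
        scores.append(score_fn(perturbed.reshape(image.shape)))
    s = np.asarray(scores)
    # trapezoidal rule on a uniform grid over [0, 1]
    return float((s[:-1] + s[1:]).sum() / (2 * (len(s) - 1)))

# Toy sanity check with a stand-in "model" whose score depends only on a
# 3x3 evidence region: a saliency map aligned with that region should
# drive the score down faster (lower AUC) than its complement.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
target = np.zeros((8, 8))
target[2:5, 2:5] = 1.0
score_fn = lambda img: float((img * target).sum() / target.sum())
auc_faithful = deletion_auc(image, target, score_fn)
auc_unfaithful = deletion_auc(image, 1.0 - target, score_fn)
```

In the faithful case the curve collapses within the first couple of deletion steps, while the unfaithful saliency leaves the score untouched until almost all pixels are gone, so `auc_faithful < auc_unfaithful`. The paper's D-Deletion additionally accounts for localization, which this single-score sketch does not capture.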