Out-of-distribution detection in open-world machine learning: new algorithms, learning scenarios, and pathways towards safe artificial intelligence

Date
2024-12-16
Publisher
Universidad de Deusto
Abstract
As we witness the rapid ascent of artificial intelligence, concerns regarding the safety of AI models have grown significantly. AI safety encompasses alignment, robustness, and resilience: robustness is the ability to withstand unknown scenarios, while resilience is the ability to adapt to such new scenarios. All of these attributes are becoming essential pillars for future models, which must be trustworthy to gain acceptance in industry and among the general public. In fact, it is increasingly likely that models will not only need to be inherently trustworthy but may also soon be legally required to demonstrate this characteristic. Despite the critical importance of safety, the current research landscape shows a notable deficiency in studies focused on it. However, since 2016, the Out-of-Distribution (OoD) detection paradigm has been developed to enhance model robustness. Since its inception, this framework has gradually captured the attention of the scientific community for its applicability in real-world settings. Today, there is a substantial body of work in this field, with official benchmarks established for testing the various proposed OoD detectors. Yet the literature on OoD detection predominantly focuses on a single task, supervised image classification, often using image datasets that are not particularly complex. This Thesis investigates robustness by expanding the OoD detection framework: it explores other types of models such as Spiking Neural Networks, different tasks such as Object Detection, and alternative learning paradigms such as Reinforcement Learning. Through this research, we seek to broaden the scope of the OoD detection framework beyond its traditional applications, whether by exploring uncharted approaches or by tackling tasks that are closer to practical real-world applications. This dissertation demonstrates that OoD detection is a highly useful paradigm for Open World Machine Learning, bringing us closer to models capable of learning, recognizing their knowledge gaps, consolidating new knowledge from those gaps, and integrating that knowledge into the model.
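To illustrate what an OoD detector does in the supervised image classification setting mentioned above, the following is a minimal sketch of the classic maximum-softmax-probability (MSP) baseline, which flags an input as out-of-distribution when the classifier's top softmax score falls below a threshold. This sketch is not taken from the Thesis itself: the PyTorch framing, the function names, and the threshold value 0.5 are illustrative assumptions, not the detectors or benchmarks studied in the dissertation.

    # Minimal sketch of a score-based OoD detector (MSP baseline).
    # Assumes a trained PyTorch classifier that maps a batch of inputs
    # to class logits; the threshold below is a placeholder.
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def msp_scores(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
        """Return the maximum softmax probability for each input in the batch."""
        model.eval()
        logits = model(x)                    # shape: (batch, num_classes)
        probs = F.softmax(logits, dim=-1)
        return probs.max(dim=-1).values      # shape: (batch,)

    def flag_ood(model: torch.nn.Module, x: torch.Tensor,
                 threshold: float = 0.5) -> torch.Tensor:
        """Boolean mask: True where the input is treated as out-of-distribution."""
        return msp_scores(model, x) < threshold

In practice the threshold is chosen on held-out in-distribution data (for example, to fix a target false-positive rate), and benchmark suites compare detectors by threshold-free metrics such as AUROC rather than a single cut-off.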
Subjects
Mathematics
Computer science
Artificial intelligence