Towards more interpretable graphs and Knowledge Graph algorithms

dc.contributor.advisor: López de Ipiña González de Artaza, Diego
dc.contributor.author: Zulaika Zurimendi, Unai
dc.contributor.other: Facultad de Ingeniería
dc.contributor.other: Programa de Doctorado en Ingeniería para la Sociedad de la Información y Desarrollo Sostenible por la Universidad de Deusto
dc.date.accessioned: 2024-02-21T12:33:04Z
dc.date.available: 2024-02-21T12:33:04Z
dc.date.issued: 2022-12-13
dc.description.abstract: The increase in the amount of data generated by today's technologies has led to the creation of large graphs and Knowledge Graphs that contain millions of facts about people, things and places in the world. Grounded on those large data stores, many Machine Learning models have been proposed to achieve different tasks, such as predicting new links or weights. Nevertheless, one of the main challenges of those models is their lack of interpretability. Commonly known as "black boxes", Machine Learning models are usually not understandable to humans. This lack of interpretability becomes an even more severe problem for Knowledge Graph-related applications, including healthcare systems, chatbots, or public service management tools, where end-users require an understanding of the feedback given by the models. In this thesis, we present methods to increase the interpretability of Machine Learning models based on graphs and Knowledge Graphs. We follow a taxonomy grounded on the output result obtained by the proposed methods. Each of the different methods is suitable for particular use cases and scenarios, and can help end-users in different manners. Specifically, we provide an interpretable link weight prediction method based on the Weisfeiler-Lehman graph colouring technique. Additionally, we present an adaptation of the Regularized Dual Averaging optimization method for Knowledge Graphs to obtain interpretable representations in link prediction models. Lastly, we introduce the use of Influence Functions for Knowledge Graph link prediction models to acquire the most important training facts for a given prediction. Through experiments in link weight prediction and link prediction, we show that our methods can successfully increase the interpretability of the Machine Learning models of graphs and Knowledge Graphs while remaining competitive with state-of-the-art methods in terms of performance.
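The Weisfeiler-Lehman graph colouring technique named in the abstract can be illustrated with a minimal sketch of 1-dimensional colour refinement; this is a generic illustration of the algorithm, not the thesis's own implementation, and the graph and function names here are hypothetical:

```python
def wl_colouring(adj, iterations=3):
    """1-dimensional Weisfeiler-Lehman colour refinement on an undirected
    graph given as an adjacency dict {node: set of neighbours}.
    Returns a dict mapping each node to its final colour id."""
    # Start with a uniform colour for every node.
    colours = {v: 0 for v in adj}
    for _ in range(iterations):
        # Signature = (own colour, sorted multiset of neighbour colours).
        signatures = {
            v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
            for v in adj
        }
        # Compress signatures into consecutive integer colour ids.
        palette, new_colours = {}, {}
        for v, sig in signatures.items():
            if sig not in palette:
                palette[sig] = len(palette)
            new_colours[v] = palette[sig]
        if new_colours == colours:  # refinement reached a stable colouring
            break
        colours = new_colours
    return colours

# A path on 4 nodes: the two endpoints receive one colour,
# the two inner nodes another.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(wl_colouring(path))
```

The resulting colour classes group structurally indistinguishable nodes, which is the property such methods can exploit to make link weight predictions traceable to node roles in the graph.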
dc.identifier.uri: http://hdl.handle.net/20.500.14454/1287
dc.language.iso: eng
dc.publisher: Universidad de Deusto
dc.subject: Mathematics
dc.subject: Computer science
dc.title: Towards more interpretable graphs and Knowledge Graph algorithms
dc.type: doctoral thesis
Files in the item
Name: 1708518451_434922.pdf
Size: 3.37 MB
Format: Adobe Portable Document Format