Authors: Zulaika Zurimendi, Unai; Almeida, Aitor; López de Ipiña González de Artaza, Diego
Date accessioned: 2024-11-07
Date available: 2024-11-07
Date issued: 2023
Citation: Zulaika, U., Almeida, A., & López-de-Ipiña, D. (2023). Regularized online tensor factorization for sparse knowledge graph embeddings. Neural Computing and Applications, 35(1), 787-797. https://doi.org/10.1007/S00521-022-07796-Z
ISSN: 1433-3058
DOI: 10.1007/S00521-022-07796-Z
URI: http://hdl.handle.net/20.500.14454/1687
Abstract: Knowledge Graphs represent real-world facts and are used in several applications; however, they are often incomplete and have many missing facts. Link prediction is the task of completing these missing facts from existing ones. Embedding models based on tensor factorization attain state-of-the-art results in link prediction. Nevertheless, the embeddings they produce cannot be easily interpreted. Inspired by previous work on word embeddings, we propose inducing sparsity in the bilinear tensor factorization model, RESCAL, to build interpretable Knowledge Graph embeddings. To overcome the difficulties that stochastic gradient descent has when producing sparse solutions, we add l1 regularization to the learning objective by using the generalized Regularized Dual Averaging online optimization algorithm. The proposed method substantially improves the interpretability of the learned embeddings while maintaining competitive performance in the standard metrics.
Language: eng
Rights: © The Author(s) 2022
Keywords: Interpretable embeddings; Knowledge graph embedding; Sparse learning
Title: Regularized online tensor factorization for sparse knowledge graph embeddings
Type: journal article
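
Note: the following is a minimal illustrative sketch, not the authors' implementation, of the two ingredients named in the abstract: the bilinear RESCAL score f(s, r, o) = e_s^T R_r e_o, and a standard l1-Regularized Dual Averaging step (Xiao, 2010) that zeroes out coordinates whose running-average gradient stays below the regularization threshold. The function names, the NumPy dependency, and the step-size schedule are assumptions; the paper's generalized RDA variant may use a different schedule.

import numpy as np

def rescal_score(e_s, R_r, e_o):
    # Bilinear RESCAL score: f(s, r, o) = e_s^T R_r e_o,
    # with entity embeddings e_s, e_o and relation matrix R_r.
    return e_s @ R_r @ e_o

def l1_rda_update(g_bar, t, lam, gamma):
    # Standard l1-RDA step: soft-threshold the average gradient g_bar
    # by lam, then scale by -sqrt(t)/gamma. Coordinates with
    # |g_bar| <= lam become exactly zero, which induces sparsity
    # during online learning (unlike plain SGD with l1).
    shrunk = np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
    return -(np.sqrt(t) / gamma) * shrunk

In use, g_bar would be the running mean of the stochastic gradients seen so far, e.g. g_bar = ((t - 1) * g_bar + g_t) / t at step t, after which the parameters are reset via l1_rda_update rather than incrementally moved as in SGD.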