ESLM: Improving entity summarization by leveraging language models
Entity summarizers for knowledge graphs are crucial components of many knowledge-graph-based applications; achieving high performance on the entity summarization task is hence critical for such applications. The current best-performing approaches combine knowledge graphs with text embeddings to encode entity-related triples. However, these approaches still rely on static word embeddings, which cannot capture the multiple contexts in which a word may occur. We hypothesize that incorporating contextual language models into entity summarizers can further improve their performance. We hence propose ESLM (Entity Summarization using Language Models), an approach that enhances entity summarization by integrating contextual language models with knowledge graph embeddings. We evaluate our models on the DBpedia and LinkedMDB datasets from ESBM version 1.2, as well as on the FACES dataset. In our experiments, ESLM achieves an F-measure of up to 0.591 and outperforms state-of-the-art approaches in four out of six experimental settings with respect to the F-measure. Moreover, ESLM outperforms state-of-the-art models in all experimental settings when evaluated with the NDCG metric. Contextual language models markedly enhance the performance of our entity summarization model, especially when combined with knowledge graph embeddings; we observed particularly strong gains on DBpedia and FACES. Our approach and the code to rerun our experiments are available at https://github.com/dice-group/ESLM.