Contextualization Distillation from Large Language Model for Knowledge Graph Completion
Conference of the European Chapter of the Association for Computational Linguistics (2024)
Abstract
While textual information significantly enhances the performance of
pre-trained language models (PLMs) in knowledge graph completion (KGC), the
static and noisy nature of existing corpora collected from Wikipedia articles
or synset definitions often limits the potential of PLM-based KGC models. To
surmount these challenges, we introduce the Contextualization Distillation
strategy, a versatile plug-and-play approach compatible with both
discriminative and generative KGC frameworks. Our method begins by instructing
large language models (LLMs) to transform compact, structural triplets into
context-rich segments. Subsequently, we introduce two tailored auxiliary tasks,
reconstruction and contextualization, allowing smaller KGC models to assimilate
insights from these enriched triplets. Comprehensive evaluations across diverse
datasets and KGC techniques highlight the efficacy and adaptability of our
approach, revealing consistent performance enhancements irrespective of
underlying pipelines or architectures. Moreover, our analysis makes our method
more explainable and provides insight into the selection of generation paths,
as well as the choice of suitable distillation tasks. All the code and data in this
work will be released at
https://github.com/David-Li0406/Contextulization-Distillation
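
As a rough illustration of the first step the abstract describes, the sketch below prompts an LLM to transform a compact triplet into a context-rich passage. This is a minimal sketch assuming the OpenAI Python client; the function name, model choice, and prompt wording are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of the triplet-contextualization step, assuming the OpenAI
# Python client (openai>=1.0). The model and prompt wording are illustrative
# assumptions, not the paper's exact configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def contextualize_triplet(head: str, relation: str, tail: str) -> str:
    """Expand a compact KG triplet into a context-rich descriptive passage."""
    prompt = (
        f"Given the knowledge graph triplet ({head}, {relation}, {tail}), "
        "write a short paragraph that describes this fact in natural language."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed LLM; any instruction-following model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example: enrich one triplet before it is used to train a smaller KGC model.
print(contextualize_triplet("Barack Obama", "born_in", "Honolulu"))
```

The generated passages would then serve as supervision for the two auxiliary tasks the abstract mentions, reconstruction and contextualization, when training the smaller KGC model.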