Source-Aware Training Enables Knowledge Attribution in Language Models
CoRR (2024)
Abstract
Large language models (LLMs) learn a vast amount of knowledge during
pretraining, but they are often oblivious to the source(s) of such knowledge.
We investigate the problem of intrinsic source citation, where LLMs are
required to cite the pretraining source supporting a generated response.
Intrinsic source citation can enhance LLM transparency, interpretability, and
verifiability. To give LLMs this ability, we explore source-aware training, a
post-pretraining recipe that involves (i) training the LLM to associate unique
source document identifiers with the knowledge in each document, followed by
(ii) an instruction-tuning stage that teaches the LLM to cite a supporting
pretraining source when prompted. Source-aware training can be readily applied
to off-the-shelf pretrained LLMs and diverges minimally from existing
pretraining/fine-tuning frameworks. Through experiments on carefully curated
data, we demonstrate that our training recipe can enable faithful attribution
to the pretraining data without a substantial impact on the model's quality
compared to standard pretraining. Our results also highlight the importance of
data augmentation in achieving attribution.
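To make the two-stage recipe concrete, the following minimal Python sketch shows one way the training examples for each stage might be constructed. The function names, the identifier format, and the toy data are illustrative assumptions for exposition, not the paper's actual implementation.

from typing import Dict, List

def doc_id_token(index: int) -> str:
    """Return a unique identifier string for pretraining document `index`
    (the exact identifier format is an assumption)."""
    return f"<doc_{index}>"

def make_pretrain_example(doc_text: str, doc_index: int) -> str:
    """Stage (i): associate a document's knowledge with its identifier,
    here by appending the ID token to the document text."""
    return f"{doc_text} {doc_id_token(doc_index)}"

def make_citation_example(question: str, answer: str,
                          doc_index: int) -> Dict[str, str]:
    """Stage (ii): an instruction-tuning pair that asks the model to
    answer and cite the supporting pretraining source."""
    return {
        "prompt": f"{question}\nAnswer and cite the supporting source.",
        "completion": f"{answer} Source: {doc_id_token(doc_index)}",
    }

# Toy usage with a one-document corpus.
corpus: List[str] = ["The Eiffel Tower is 330 metres tall."]
pretrain_data = [make_pretrain_example(d, i) for i, d in enumerate(corpus)]
tune_data = [make_citation_example(
    "How tall is the Eiffel Tower?", "It is 330 metres tall.", 0)]

In this sketch, data augmentation (which the abstract identifies as important for attribution) would correspond to generating multiple variants of each stage-(i) example, e.g. paraphrases or sentence-level chunks of the document, each paired with the same identifier.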