Collapse of Self-trained Language Models
arXiv (2024)
Abstract
In various fields of knowledge creation, including science, new ideas often
build on pre-existing information. In this work, we explore this concept within
the context of language models. Specifically, we explore the potential of
self-training models on their own outputs, akin to how humans learn and build
on their previous thoughts and actions. While this approach is intuitively
appealing, our research reveals its practical limitations. We find that
extended self-training of the GPT-2 model leads to a significant degradation in
performance, resulting in repetitive and collapsed token output.
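Below is a minimal sketch of the kind of iterative self-training loop the abstract describes: the model samples text, is fine-tuned on its own samples, and the cycle repeats. The prompt, number of rounds, sampling settings, and single-gradient-step update are illustrative assumptions; the paper's actual training protocol and hyperparameters are not given in this abstract.

```python
# Hypothetical self-training loop for GPT-2 (assumed setup, not the authors' exact method).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Assumed prompt; the real study's prompts/corpus are not specified in the abstract.
prompt_ids = tokenizer("In recent work,", return_tensors="pt").input_ids.to(device)

for round_idx in range(10):  # number of self-training rounds is an assumption
    # 1. Sample a continuation from the current model.
    model.eval()
    with torch.no_grad():
        sampled = model.generate(
            prompt_ids,
            do_sample=True,
            max_length=128,
            pad_token_id=tokenizer.eos_token_id,
        )
    # 2. Fine-tune the model on its own sampled output with the standard LM loss.
    model.train()
    outputs = model(sampled, labels=sampled)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"round {round_idx}: loss on own output = {outputs.loss.item():.3f}")
```

Repeating this loop many times is the setting in which the abstract reports degradation into repetitive, collapsed token output.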