Markovian Agents for Informative Language Modeling
arXiv (2024)
Abstract
Chain-of-Thought (CoT) reasoning could in principle enable a deeper
understanding of a language model's (LM) internal reasoning. However, prior
work suggests that LMs can answer questions similarly despite changes in their
CoT, indicating that those models are not truly using the CoT. We propose a
reinforcement learning technique to produce CoTs that are sufficient on their
own for predicting future text, independent of other context. This methodology
ensures that if the LM can predict future tokens, then it must have used the
CoT to understand its context. We formalize the informativeness of a sender to
a receiver LM as the degree to which the sender helps the receiver predict
their future observations, and we define a "Markovian" LM as one which predicts
future text given only a CoT as context. We derive a "Markovian training"
procedure by applying our definition of informativeness to a Markovian LM and
optimizing via policy gradient and Proximal Policy Optimization (PPO). We
demonstrate our training algorithm's effectiveness on fifteen-term arithmetic
problems, show that the model utilizes the CoT, and externally validate that
the generated CoT is meaningful and usable by another model.
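The central constraint can be illustrated with a toy sketch (not the paper's implementation): the reward for a CoT is the receiver's log-probability of the future text given only the CoT, with the original question deliberately excluded from the context. The `receiver_logprob` scorer and its probabilities below are stand-in assumptions for illustration, not the actual receiver LM.

```python
import math

def receiver_logprob(future: str, context: str) -> float:
    # Stand-in for a receiver LM: a toy character-level scorer that assigns
    # higher probability to future characters that also appear in the context.
    vocab = 128
    hits = sum(1 for ch in future if ch in context)
    # Matching characters get probability 1/8, others 1/vocab (toy numbers).
    return hits * math.log(1 / 8) + (len(future) - hits) * math.log(1 / vocab)

def markovian_reward(cot: str, future: str) -> float:
    # Key property: the original question/context is NOT passed to the
    # receiver; the CoT alone must carry the information.
    return receiver_logprob(future, context=cot)

question = "3 + 4 + 5 = ?"
future = "12"
informative_cot = "3 + 4 = 7; 7 + 5 = 12"
uninformative_cot = "let me think about this"

# A CoT that actually carries the answer earns a higher Markovian reward.
assert markovian_reward(informative_cot, future) > markovian_reward(uninformative_cot, future)
```

In the paper's setup this reward would then be maximized over CoT-generating policies via policy gradient or PPO; the toy scorer here only demonstrates why an uninformative CoT scores poorly when the question itself is withheld.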