Towards an Understanding of Stepwise Inference in Transformers: A Synthetic Graph Navigation Model
CoRR (2024)
Abstract
Stepwise inference protocols, such as scratchpads and chain-of-thought, help
language models solve complex problems by decomposing them into a sequence of
simpler subproblems. Despite the significant gain in performance achieved via
these protocols, the underlying mechanisms of stepwise inference have remained
elusive. To address this, we propose to study autoregressive Transformer models
on a synthetic task that embodies the multi-step nature of problems where
stepwise inference is generally most useful. Specifically, we define a graph
navigation problem wherein a model is tasked with traversing a path from a
start to a goal node on the graph. Despite its simplicity, we find we can
empirically reproduce and analyze several phenomena observed at scale: (i) the
stepwise inference reasoning gap, the cause of which we find in the structure
of the training data; (ii) a diversity-accuracy tradeoff in model generations
as sampling temperature varies; (iii) a simplicity bias in the model's output;
and (iv) compositional generalization and a primacy bias with in-context
exemplars. Overall, our work introduces a grounded, synthetic framework for
studying stepwise inference and offers mechanistic hypotheses that can lay the
foundation for a deeper understanding of this phenomenon.