diff History for Neural Language Agents
CoRR (2023)
Abstract
Neural Language Models (LMs) offer an exciting solution for general-purpose
embodied control. However, a key technical issue arises when using an LM-based
controller: environment observations must be converted to text, which, coupled
with history, results in long and verbose textual prompts. As a result, prior
work in LM agents is limited to restricted domains with small observation size
as well as minimal needs for interaction history or instruction tuning. In this
paper, we introduce diff history, a simple and highly effective solution to
these issues. By applying the Unix diff command on consecutive text
observations in the interaction histories used to prompt LM policies, we can
both abstract away redundant information and focus the content of textual
inputs on the salient changes in the environment. On NetHack, an unsolved video
game that requires long-horizon reasoning for decision-making, LMs tuned with
diff history match state-of-the-art performance for neural agents while needing
1800x fewer training examples compared to prior work. Even on the simpler
BabyAI-Text environment with concise text observations, we find that although
diff history increases the length of prompts, the representation it provides
offers a 25% improvement in the efficiency of low-sample instruction tuning.
Further, we show that diff history scales favorably across different tuning
dataset sizes. We open-source our code and data at
https://diffhistory.github.io.
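
To make the mechanism concrete, here is a minimal sketch of how consecutive text observations can be turned into diff-style inputs. It uses Python's `difflib.unified_diff` as a stand-in for the Unix diff command named in the abstract; the helper `diff_observation`, the `n=0` setting, and the toy grid observations are illustrative assumptions, not the paper's implementation.

```python
import difflib

def diff_observation(prev_obs: str, new_obs: str) -> str:
    """Return a unified diff between two consecutive text observations.

    A stand-in for piping observations through the Unix diff command;
    n=0 drops context lines so only the salient changes remain (assumption).
    """
    diff_lines = difflib.unified_diff(
        prev_obs.splitlines(),
        new_obs.splitlines(),
        lineterm="",
        n=0,
    )
    # Drop the "---"/"+++" file headers, which carry no information here.
    return "\n".join(
        line for line in diff_lines if not line.startswith(("---", "+++"))
    )

# Toy example: the agent moves one cell between consecutive observations.
obs_t0 = "agent at (1,1)\ndoor closed\nkey at (3,2)"
obs_t1 = "agent at (1,2)\ndoor closed\nkey at (3,2)"
print(diff_observation(obs_t0, obs_t1))
# @@ -1 +1 @@
# -agent at (1,1)
# +agent at (1,2)
```

Prompts built from such diffs keep only what changed between steps, which is the property the abstract credits for abstracting away redundant information in long interaction histories.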