Uncertainty-Driven Exploration for Generalization in Reinforcement Learning

ICLR 2023

Abstract
Value-based methods tend to outperform policy optimization methods when trained and tested in single environments; however, they significantly underperform when trained on multiple environments with similar characteristics and tested on new ones from the same distribution. We investigate the potential reasons behind the poor generalization performance of value-based methods and discover that exploration plays a crucial role in these settings. Exploration is helpful not only for finding optimal solutions to the training environments, but also for acquiring knowledge that helps generalization to other unseen environments. We show how to make value-based methods competitive with policy optimization methods in these settings by using uncertainty-driven exploration and distributional RL. Our algorithm is the first value-based method to achieve state-of-the-art performance on both Procgen and Crafter, two challenging benchmarks for generalization in RL.
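The abstract does not spell out how uncertainty-driven exploration interacts with the distributional critic, so the following is only a minimal illustrative sketch, not the authors' algorithm: it assumes an ensemble of quantile-based value heads, estimates epistemic uncertainty from the disagreement of the heads' Q-values, and selects actions with a UCB-style bonus. All names, shapes, and the coefficient `c` are hypothetical.

```python
# Minimal sketch (assumed, not the paper's exact method): uncertainty-driven
# action selection on top of distributional (quantile-based) value estimates.
import numpy as np

rng = np.random.default_rng(0)

n_actions = 4      # hypothetical discrete action space
n_quantiles = 8    # quantiles per head (distributional RL)
n_heads = 5        # ensemble size used to estimate epistemic uncertainty

# Stand-in for the output of an ensemble of quantile critics at one state:
# shape (n_heads, n_actions, n_quantiles). In practice these would come from
# a neural network; here they are random numbers for illustration only.
quantiles = rng.normal(loc=1.0, scale=0.5, size=(n_heads, n_actions, n_quantiles))

# Averaging over quantiles gives each head's Q-value estimate per action.
q_per_head = quantiles.mean(axis=-1)        # (n_heads, n_actions)

# Epistemic uncertainty: disagreement (std) of Q-values across ensemble heads.
q_mean = q_per_head.mean(axis=0)            # (n_actions,)
q_epistemic_std = q_per_head.std(axis=0)    # (n_actions,)

# UCB-style exploratory action selection with an assumed coefficient c.
c = 1.0
action = int(np.argmax(q_mean + c * q_epistemic_std))
print("selected action:", action)
```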
Keywords
Deep reinforcement learning, exploration, generalization, Procgen, Crafter