Learning Interpretable Policies in Hindsight-Observable POMDPs through Partially Supervised Reinforcement Learning
CoRR (2024)
Abstract
Deep reinforcement learning has demonstrated remarkable achievements across
diverse domains such as video games, robotic control, autonomous driving, and
drug discovery. Common methodologies in partially-observable domains largely
lean on end-to-end learning from high-dimensional observations, such as images,
without explicitly reasoning about the true state. We suggest an alternative
direction, introducing the Partially Supervised Reinforcement Learning (PSRL)
framework. At the heart of PSRL is the fusion of both supervised and
unsupervised learning. The approach leverages a state estimator to distill
supervised semantic state information from high-dimensional observations,
exploiting the true state that is often fully observable at training time.
This yields more interpretable
policies that compose state predictions with control. In parallel, it captures
an unsupervised latent representation. These two, the semantic state and the
latent state, are then fused and used as inputs to a policy network. This
juxtaposition offers practitioners a flexible and dynamic spectrum: from
emphasizing supervised state information to integrating richer, latent
insights. Extensive experimental results indicate that by merging these dual
representations, PSRL offers a potent balance, enhancing model interpretability
while matching, and often significantly outperforming, traditional methods
in terms of reward and convergence speed.
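The abstract describes a two-branch architecture: a supervised state estimator, an unsupervised latent encoder, and a policy network operating on their fused output. The following is a minimal sketch of such an architecture, not the authors' implementation; all names (PSRLPolicy, semantic_dim, latent_dim, the network sizes, and the training-step shapes) are illustrative assumptions.

import torch
import torch.nn as nn

class PSRLPolicy(nn.Module):
    def __init__(self, obs_channels=3, semantic_dim=8, latent_dim=32, n_actions=4):
        super().__init__()
        # Shared convolutional encoder over high-dimensional (image) observations.
        self.encoder = nn.Sequential(
            nn.Conv2d(obs_channels, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Supervised head: predicts the semantic state, trained against the
        # true state that is observable in hindsight at training time.
        self.state_head = nn.Linear(32, semantic_dim)
        # Unsupervised head: produces a latent representation (its auxiliary
        # objective, e.g. reconstruction or contrastive, is omitted here).
        self.latent_head = nn.Linear(32, latent_dim)
        # Policy network consumes the fused semantic + latent representation.
        self.policy = nn.Sequential(
            nn.Linear(semantic_dim + latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs):
        features = self.encoder(obs)
        semantic_state = self.state_head(features)   # interpretable state prediction
        latent_state = self.latent_head(features)    # unsupervised latent state
        fused = torch.cat([semantic_state, latent_state], dim=-1)
        return self.policy(fused), semantic_state

# Illustrative training step: a standard RL policy loss (not shown) would be
# combined with a supervised state-prediction loss on the hindsight-observed state.
model = PSRLPolicy()
obs = torch.randn(2, 3, 64, 64)        # dummy batch of image observations
true_state = torch.randn(2, 8)         # hindsight-observable true state labels
logits, pred_state = model(obs)
state_loss = nn.functional.mse_loss(pred_state, true_state)

In this sketch, weighting the supervised state loss against the unsupervised and RL objectives is what gives practitioners the spectrum described above, from relying mostly on the semantic state to leaning on the richer latent representation.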