Information-Theoretic State Variable Selection for Reinforcement Learning
CoRR (2024)
Abstract
Identifying the most suitable variables to represent the state is a
fundamental challenge in Reinforcement Learning (RL). These variables must
efficiently capture the information necessary for making optimal decisions. To
address this problem, we introduce the Transfer Entropy Redundancy Criterion
(TERC), an information-theoretic criterion that determines whether entropy is
transferred from state variables to actions during training. We define an
algorithm based on TERC that provably
excludes variables from the state that have no effect on the final performance
of the agent, resulting in more sample-efficient learning. Experimental results
show that this speed-up is present across three different algorithm classes
(represented by tabular Q-learning, Actor-Critic, and Proximal Policy
Optimization (PPO)) in a variety of environments. Furthermore, to highlight the
differences between the proposed methodology and the current state-of-the-art
feature selection approaches, we present a series of controlled experiments on
synthetic data, before generalizing to real-world decision-making tasks. We
also introduce a Bayesian-network representation of the problem that compactly
captures the transfer of information from state variables to actions.
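The abstract's core quantity is the transfer entropy from a state variable to the agent's actions. The sketch below is not the paper's TERC algorithm, but a minimal illustration of how transfer entropy TE(X → Y) = H(Y_t | Y_{t-1}) − H(Y_t | Y_{t-1}, X_{t-1}) can be estimated for discrete variables with plug-in entropy estimates; the function and variable names are this example's own, and a relevant state variable yields a clearly higher estimate than an irrelevant one.

```python
import math
import random
from collections import Counter


def entropy(counts):
    """Plug-in Shannon entropy (bits) from a Counter of observations."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def cond_entropy(pairs):
    """H(Y | X) = H(X, Y) - H(X), estimated from (x, y) observation pairs."""
    joint = Counter(pairs)
    marginal = Counter(x for x, _ in pairs)
    return entropy(joint) - entropy(marginal)


def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y) with history length one:
    H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X_{t-1})."""
    y_prev, y_curr, x_prev = y[:-1], y[1:], x[:-1]
    h_y = cond_entropy(list(zip(y_prev, y_curr)))
    h_yx = cond_entropy(list(zip(zip(y_prev, x_prev), y_curr)))
    return h_y - h_yx


# Toy trajectory: the action copies the relevant variable's previous value,
# while the irrelevant variable is independent noise.
random.seed(0)
relevant = [random.randint(0, 1) for _ in range(2000)]
irrelevant = [random.randint(0, 1) for _ in range(2000)]
action = [0] + relevant[:-1]

te_relevant = transfer_entropy(relevant, action)      # close to 1 bit
te_irrelevant = transfer_entropy(irrelevant, action)  # close to 0 bits
```

In a TERC-style selection loop, variables whose estimated transfer toward the actions stays near zero would be candidates for exclusion from the state; the paper's criterion adds the formal test and guarantees that this sketch omits.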