Physics-Regulated Deep Reinforcement Learning: Invariant Embeddings

ICLR 2024 (2024)

Abstract
This paper proposes Phy-DRL: a physics-regulated deep reinforcement learning (DRL) framework for safety-critical autonomous systems. The design of Phy-DRL rests on three invariant-embedding principles: i) a residual action policy (integrating a data-driven DRL action policy with a physics-model-based action policy), ii) a safety-embedded reward, and iii) physics-model-guided neural network (NN) editing, comprising link editing and activation editing. Theoretically, Phy-DRL offers 1) a mathematically provable safety guarantee and 2) strict compliance of the critic and actor networks with physics knowledge about the action-value function and action policy. Finally, we evaluate Phy-DRL on a cart-pole system and a quadruped robot. The experiments validate our theoretical results and demonstrate that, compared with purely data-driven DRL and solely model-based designs, Phy-DRL features guaranteed safety while requiring remarkably fewer learning parameters and achieving fast, stable training.
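The residual action policy named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feedback gain `F`, the cart-pole-like state layout, and the stand-in `drl_policy` are all assumptions introduced here for clarity; in the paper the model-based part would come from a controller derived from the physics model (e.g. an LQR-style gain) and the residual from a trained actor network.

```python
import numpy as np

# Assumed LQR-style gain for a 4-dimensional cart-pole-like state
# [position, velocity, angle, angular velocity]; values are illustrative.
F = np.array([[1.0, 2.0, 15.0, 2.0]])

def drl_policy(state):
    # Stand-in for a trained DRL actor producing a residual action;
    # a real actor network would replace this placeholder.
    return np.tanh(state.sum(keepdims=True)) * 0.1

def residual_action(state):
    a_phy = F @ state          # physics-model-based action
    a_drl = drl_policy(state)  # data-driven residual action
    return a_phy + a_drl       # residual action policy: sum of the two

state = np.array([0.1, 0.0, 0.05, 0.0])
action = residual_action(state)
```

The model-based term supplies a baseline action grounded in the physics model, while the learned residual corrects for model mismatch; summing them is what the abstract calls integrating the two policies.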
Keywords
Physics-informed deep reinforcement learning, Safety-critical autonomous systems