Robust multi-agent reinforcement learning via Bayesian distributional value estimation

PATTERN RECOGNITION (2024)

Abstract
Reinforcement learning in multi-agent scenarios is essential for real-world applications, as it captures agents' collaborative and competitive behaviors in settings close to reality. However, most existing studies suffer from poor robustness, which keeps multi-agent reinforcement learning from practical deployment, where robustness is the core indicator of system security and stability. To address this, we propose a novel Bayesian Multi-Agent Reinforcement Learning method, named BMARL, which leverages a distributional value function computed by Bayesian inference to improve the robustness of the model. Specifically, Bayesian linear regression is adopted to estimate a posterior distribution over the value-function parameters, rather than approximating the expected Q-value by point estimation. In this way, the value function generalizes better than one obtained by point estimation, which benefits the robustness of our model. Meanwhile, we adopt a Gaussian prior to incorporate prior knowledge into the value-function estimate, which improves learning efficiency. Extensive experiments on three benchmark multi-agent environments against seven state-of-the-art methods demonstrate the superiority of BMARL in terms of both robustness and efficiency.
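The abstract's key mechanism, replacing a point estimate of the Q-value with a Bayesian-linear-regression posterior over value-function parameters under a Gaussian prior, can be illustrated with a minimal sketch. The feature map `phi`, the Bellman targets `y`, and the hyperparameters `prior_var` and `noise_var` below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def blr_posterior(phi, y, prior_var=1.0, noise_var=1.0):
    """Posterior over linear value-function weights w, with Q(s, a) ~= phi(s, a) @ w.

    Assumes a Gaussian prior w ~ N(0, prior_var * I) and Gaussian
    observation noise with variance noise_var on the Bellman targets y.
    Both hyperparameters are hypothetical; the paper's choices are not
    specified in this abstract.

    phi: (n, d) feature matrix for n observed state-action pairs.
    y:   (n,)   Bellman targets for those pairs.
    """
    d = phi.shape[1]
    # Posterior precision combines the data term and the Gaussian prior.
    precision = phi.T @ phi / noise_var + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)           # posterior covariance of w
    mean = cov @ phi.T @ y / noise_var       # posterior mean of w
    return mean, cov

# Sampling weights from the posterior yields a distribution over Q-values
# instead of a single point estimate.
rng = np.random.default_rng(0)
phi = rng.normal(size=(128, 8))              # synthetic features (illustrative)
y = phi @ rng.normal(size=8) + 0.1 * rng.normal(size=128)
mean, cov = blr_posterior(phi, y)
w_sample = rng.multivariate_normal(mean, cov)
q_sample = phi[0] @ w_sample                 # one distributional Q estimate
```

This distribution over Q-values, rather than a single point estimate, is what the abstract credits for the improved robustness.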
Keywords
Multi-agent reinforcement learning, Bayesian inference, Distributional value function, Deep reinforcement learning