Graph Neural Network-based Multi-agent Reinforcement Learning for Resilient Distributed Coordination of Multi-Robot Systems
CoRR (2024)
Abstract
Existing multi-agent coordination techniques are often fragile and vulnerable
to anomalies such as agent attrition and communication disturbances, which are
quite common in the real-world deployment of systems like field robotics. To
better prepare these systems for the real world, we present a graph neural
network (GNN)-based multi-agent reinforcement learning (MARL) method for
resilient distributed coordination of a multi-robot system. Our method,
Multi-Agent Graph Embedding-based Coordination (MAGEC), is trained using
multi-agent proximal policy optimization (PPO) and enables distributed
coordination around global objectives under agent attrition, partial
observability, and limited or disturbed communications. We use a multi-robot
patrolling scenario to demonstrate our MAGEC method in a ROS 2-based simulator
and then compare its performance with prior coordination approaches. Results
demonstrate that MAGEC outperforms existing methods in several experiments
involving agent attrition and communication disturbance, and provides
competitive results in scenarios without such anomalies.
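The abstract describes coordination via graph neural network embeddings over a robot communication graph. As a rough illustration of the underlying idea (not the authors' MAGEC architecture), the sketch below shows one round of mean-aggregation message passing in plain Python: each agent mixes its own feature vector with those of currently reachable neighbors, so an agent that loses all links (as under attrition or communication disturbance) simply falls back on its own embedding. All names and shapes here are hypothetical.

```python
# Illustrative sketch only: one GNN-style message-passing round over a
# communication graph, with mean aggregation. Hypothetical example, not
# the MAGEC implementation from the paper.

def message_pass(features, adjacency):
    """One layer: each node averages its own feature vector with those
    of its current neighbors. Degrades gracefully when neighbors are
    missing (agent attrition, dropped links)."""
    new_features = []
    for i, feat in enumerate(features):
        neighbors = [features[j] for j in adjacency.get(i, [])]
        group = [feat] + neighbors  # self plus reachable neighbors
        dim = len(feat)
        new_features.append(
            [sum(v[d] for v in group) / len(group) for d in range(dim)]
        )
    return new_features

# Three robots; robot 2 has lost all communication links.
feats = [[1.0, 0.0], [0.0, 1.0], [4.0, 4.0]]
adj = {0: [1], 1: [0], 2: []}
print(message_pass(feats, adj))
# → [[0.5, 0.5], [0.5, 0.5], [4.0, 4.0]]  (robot 2 keeps its own embedding)
```

In a full MARL setup such embeddings would feed a policy network trained with multi-agent PPO; here the point is only that the aggregation is local to each agent's current neighborhood, which is what makes the coordination distributed and tolerant of missing agents.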