SeqDQN: Multi-Agent Deep Reinforcement Learning for Uplink URLLC with Strict Deadlines.

EuCNC/6G Summit (2023)

Abstract
Recent studies suggest that Multi-Agent Reinforcement Learning (MARL) can be a promising approach to tackle wireless telecommunication problems, and Multiple Access (MA) in particular. The most relevant MARL algorithms for distributed MA are those with "decentralized execution", where each agent's actions are a function only of its own local observation history and agents cannot exchange any information. Centralized-Training-Decentralized-Execution (CTDE) and Independent Learning (IL) are the two main families in this category. However, while the former suffers from high communication overhead during centralized training, the latter suffers from various theoretical shortcomings. In this paper, we first study the performance of these two MARL frameworks in the context of Ultra Reliable Low Latency Communication (URLLC), where MA is constrained by strict deadlines. Second, we propose a new distributed MARL framework, namely SeqDQN, which leverages the constraints of our URLLC problem to train agents more efficiently. We demonstrate that our solution not only outperforms traditional random access baselines but also outperforms state-of-the-art MARL algorithms in both performance and convergence time.
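To make the Independent Learning (IL) setting described above concrete, the sketch below shows two independent tabular Q-learning agents sharing a toy collision channel: each agent conditions only on its own local observation (here, simply its previous action) and never exchanges information with the other. This is a hypothetical minimal illustration of decentralized execution, not the paper's SeqDQN algorithm; the environment, reward values, and class names (`IndependentQAgent`, `channel_step`) are assumptions for the example.

```python
import random

random.seed(0)


class IndependentQAgent:
    """One tabular Q-learning agent per device (IL-style sketch).

    The agent acts only on its own local observation and never
    communicates with other agents -- decentralized execution.
    """

    def __init__(self, n_actions=2, eps=0.1, alpha=0.1, gamma=0.9):
        self.n_actions, self.eps = n_actions, eps
        self.alpha, self.gamma = alpha, gamma
        self.q = {}  # maps local observation -> list of action values

    def act(self, obs):
        # Epsilon-greedy over this agent's own Q-table only.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        vals = self.q.get(obs, [0.0] * self.n_actions)
        return max(range(self.n_actions), key=lambda a: vals[a])

    def update(self, obs, action, reward, next_obs):
        # Standard Q-learning update on the agent's private table.
        vals = self.q.setdefault(obs, [0.0] * self.n_actions)
        nxt = max(self.q.get(next_obs, [0.0] * self.n_actions))
        vals[action] += self.alpha * (reward + self.gamma * nxt - vals[action])


def channel_step(actions):
    """Toy collision channel: a packet is delivered iff exactly one agent transmits."""
    tx = [i for i, a in enumerate(actions) if a == 1]
    rewards = [0.0] * len(actions)
    if len(tx) == 1:
        rewards[tx[0]] = 1.0       # successful delivery
    else:
        for i in tx:
            rewards[i] = -1.0      # collision penalty
    return rewards


agents = [IndependentQAgent() for _ in range(2)]
obs = [0, 0]  # each agent's local observation: its own previous action
for _ in range(2000):
    actions = [ag.act(o) for ag, o in zip(agents, obs)]
    rewards = channel_step(actions)
    for ag, o, a, r, nxt in zip(agents, obs, actions, rewards, actions):
        ag.update(o, a, r, nxt)
    obs = actions  # no information exchange: each agent sees only its own action
```

Because each agent learns against a non-stationary opponent, IL of this kind carries the theoretical shortcomings the abstract mentions; CTDE and the proposed SeqDQN address the training side differently while keeping execution decentralized.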
Keywords
Distributed Multiple Access, Deep Multi-Agent Reinforcement Learning, Internet of Things, Wireless sensor networks, URLLC