A Multi-Agent Reinforcement Learning Approach for Massive Access in NOMA-URLLC Networks
Document Type
Article
Publication Title
IEEE Transactions on Vehicular Technology
Abstract
Ultra-reliable low-latency communication (URLLC) enables diverse applications with rigorous latency and reliability requirements. To provide a wide range of services, future beyond-fifth-generation (B5G) systems are expected to support a large number of URLLC users. In this paper, we propose a joint sub-channel allocation and power control method to support massive access for non-orthogonal multiple access-aided URLLC (NOMA-URLLC) networks. We model the problem of maximizing the number of successfully accessed users as a multi-agent reinforcement learning problem. A deep Q-network-based multi-agent reinforcement learning (DQN-MARL) algorithm is proposed to tackle the problem while guaranteeing the reliability and latency requirements of URLLC services. Simulation results show that the proposed DQN-MARL algorithm significantly improves the successful access probability in massive access scenarios compared with existing schemes.
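This record does not reproduce the paper's algorithm details. As a rough illustration of the DQN-MARL idea described in the abstract, the sketch below shows a single agent (one URLLC user) whose discrete action space is the set of (sub-channel, power level) pairs, trained with a standard DQN update. All names, dimensions, and the reward/state design here are hypothetical assumptions, not the paper's actual formulation.

```python
# Minimal single-agent sketch of the DQN-MARL idea, assuming each URLLC user
# runs its own DQN over a flattened (sub-channel, power level) action space.
# NUM_SUBCHANNELS, NUM_POWER_LEVELS, and STATE_DIM are hypothetical.
import random
from collections import deque

import torch
import torch.nn as nn

NUM_SUBCHANNELS = 4      # hypothetical number of NOMA sub-channels
NUM_POWER_LEVELS = 5     # hypothetical discretized transmit-power levels
STATE_DIM = 8            # hypothetical per-agent local observation size
NUM_ACTIONS = NUM_SUBCHANNELS * NUM_POWER_LEVELS


class QNetwork(nn.Module):
    """Maps a local observation to Q-values over (sub-channel, power) actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, NUM_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


class DQNAgent:
    def __init__(self, gamma=0.99, lr=1e-3, eps=0.1):
        self.q = QNetwork()
        self.target_q = QNetwork()
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=10_000)  # replay memory of (s, a, r, s')
        self.gamma, self.eps = gamma, eps

    def act(self, state):
        # Epsilon-greedy choice over the flat action index.
        if random.random() < self.eps:
            return random.randrange(NUM_ACTIONS)
        with torch.no_grad():
            q = self.q(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax())

    @staticmethod
    def decode(action):
        # Recover (sub-channel index, power-level index) from the flat action.
        return divmod(action, NUM_POWER_LEVELS)

    def learn(self, batch_size=32):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2 = (torch.as_tensor(x, dtype=torch.float32) for x in zip(*batch))
        # Q(s, a) for the actions actually taken.
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        # Bootstrapped target from the (periodically synced) target network.
        with torch.no_grad():
            target = r + self.gamma * self.target_q(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
```

In a multi-agent deployment, each user would hold such an agent, interact with a shared channel environment, and receive a reward reflecting whether its access attempt met the URLLC reliability and latency targets; the reward shaping and coordination scheme would follow the paper, which this sketch does not attempt to reproduce.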
First Page
16799
Last Page
16804
DOI
10.1109/TVT.2023.3292423
Publication Date
12-1-2023
Keywords
Massive access, Multi-agent reinforcement learning, NOMA, Power control, Reinforcement learning, Resource management, Ultra-reliable low-latency communication, Uplink, URLLC
Recommended Citation
H. Han et al., "A Multi-Agent Reinforcement Learning Approach for Massive Access in NOMA-URLLC Networks," in IEEE Transactions on Vehicular Technology, vol. 72, no. 12, pp. 16799-16804, Dec. 2023, doi: 10.1109/TVT.2023.3292423.
Additional Links
DOI Link: https://doi.org/10.1109/TVT.2023.3292423
Comments
IR Deposit conditions:
OA version (pathway a): Accepted version
No embargo
When accepted for publication, set statement to accompany deposit (see policy)
Must link to publisher version with DOI
Publisher copyright and source must be acknowledged