Computation Offloading and Resource Allocation in NOMA-MEC: A Deep Reinforcement Learning Approach

Publication Title

IEEE Internet of Things Journal


Abstract

Multi-access edge computing (MEC) has emerged as a powerful paradigm for improving the computation performance of mobile devices (MDs). Applying non-orthogonal multiple access (NOMA) to MEC can further improve spectrum efficiency and reduce the offloading delays caused by upload congestion. In this paper, we examine the joint computation offloading and resource allocation problem in a NOMA-MEC system, which benefits from the combination of NOMA and MEC. Our objective is to minimize the computational overhead (the weighted sum of execution delay and energy consumption) in dynamic environments with time-varying wireless fading channels. The problem is formulated as a mixed-integer program (MIP) that jointly optimizes the task offloading decisions, channel assignment, and transmit power allocation. To solve this optimization problem, we formulate the task offloading and resource allocation as a Markov decision process (MDP). We then propose a deep reinforcement learning (DRL) based approach that combines multiple deep neural networks (DNNs) to directly approximate separate models for continuous and discrete control. Simulation results demonstrate that the proposed approach converges rapidly and reduces the total computational overhead more effectively than baseline approaches across different scenarios.
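The "computational overhead" objective described in the abstract can be sketched as follows. This is a minimal illustration only: the weight `w`, the sample delay/energy values, and the local-vs-offload comparison are assumptions for exposition, not the paper's exact system model.

```python
# Illustrative sketch (assumption, not the paper's exact formulation):
# the computational overhead of a mobile device is the weighted sum of
# its task execution delay and its energy consumption.
def overhead(delay_s: float, energy_j: float, w: float = 0.5) -> float:
    """Weighted sum of delay (s) and energy (J); w in [0, 1] trades them off."""
    assert 0.0 <= w <= 1.0
    return w * delay_s + (1.0 - w) * energy_j

# A device would compare local execution against offloading to the MEC
# server and pick whichever option incurs the lower overhead.
local = overhead(delay_s=0.8, energy_j=2.0)    # execute on the device
remote = overhead(delay_s=0.3, energy_j=0.5)   # offload over a NOMA uplink
decision = "offload" if remote < local else "local"
```

In the paper's setting this per-device trade-off is optimized jointly with channel assignment and transmit power across all devices, which is what makes the problem a mixed-integer program rather than a simple per-device comparison.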

Keywords

computation offloading, computational modeling, deep reinforcement learning (DRL), energy consumption, Internet of Things, mobile edge computing (MEC), non-orthogonal multiple access (NOMA), optimization, resource allocation, resource management, task analysis


IR Deposit conditions:

OA version (pathway a): Accepted version

No embargo

When accepted for publication, set statement to accompany deposit (see policy)

Must link to publisher version with DOI

Publisher copyright and source must be acknowledged