Achieving Fast Environment Adaptation of DRL-Based Computation Offloading in Mobile Edge Computing

Document Type

Article

Publication Title

IEEE Transactions on Mobile Computing

Abstract

One of the key issues in mobile edge computing (MEC) is computation offloading, for which most policies are developed using mathematical programming (MP). Due to the high computational complexity of the iterative procedures in MP-based policies, recent years have seen a trend toward developing offloading policies based on deep reinforcement learning (DRL). However, because DRL models generalize poorly across MEC environments with different network sizes and settings, DRL-based offloading policies are difficult to apply directly in unseen MEC environments. Motivated by this, we propose a DRL-based environment-adaptive offloading framework (DEAT), comprising a size-adaptive scheme (SIED) and a setting-adaptive component (SEAL). SIED leverages the idea of 'time division multiplexing' to adapt to varying MEC network sizes and employs order-unaware feature extraction to mitigate the impact of different size-changing orders. SEAL adopts system dynamics embedding and offloading policy embedding, which guide the search for the closest pre-trained MEC environment and offloading policy, respectively, enabling fast setting adaptation with only a few exploratory interactions in unseen MEC environments. Extensive simulation and testbed experiments demonstrate the adaptation advantages of DEAT in unseen MEC environments over state-of-the-art offloading approaches.
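As a rough illustration of the two mechanisms named in the abstract, the minimal sketch below shows (i) order-unaware feature extraction via permutation-invariant pooling and (ii) selecting the closest pre-trained environment by embedding similarity. All function names, the mean/max pooling choice, and the cosine-similarity metric are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def order_unaware_features(server_states: np.ndarray) -> np.ndarray:
    """Pool per-server state vectors so the result is invariant to server order.

    server_states: (num_servers, feature_dim) array. Mean/max pooling yields
    the same policy input regardless of the order in which servers were added
    or removed, which is one way to make feature extraction order-unaware.
    """
    return np.concatenate([server_states.mean(axis=0), server_states.max(axis=0)])

def closest_pretrained_index(env_embedding: np.ndarray,
                             pretrained_embeddings: np.ndarray) -> int:
    """Pick the pre-trained environment whose system-dynamics embedding has
    the highest cosine similarity to the unseen environment's embedding.

    env_embedding: (embed_dim,); pretrained_embeddings: (num_envs, embed_dim).
    """
    sims = (pretrained_embeddings @ env_embedding) / (
        np.linalg.norm(pretrained_embeddings, axis=1) * np.linalg.norm(env_embedding)
    )
    return int(np.argmax(sims))

# Example: 5 servers with 4 state features each; 3 candidate pre-trained
# environments with 16-dimensional embeddings (random placeholders here;
# in the paper the embeddings are learned).
feats = order_unaware_features(np.random.rand(5, 4))       # shape (8,)
idx = closest_pretrained_index(np.random.rand(16), np.random.rand(3, 16))
```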

First Page

6347

Last Page

6362

DOI

10.1109/TMC.2023.3320253

Publication Date

May 2024

Keywords

Task analysis, Resource management, Feature extraction, Costs, Seals, Decoding, Computational modeling
