Multi-User QoE Enhancement: Federated Multi-Agent Reinforcement Learning for Cooperative Edge Intelligence

Document Type


Publication Title

IEEE Network


Abstract

Federated learning (FL), as an emerging decentralized learning and computing technique, offers potential advantages for edge intelligence, such as accelerating the processing of computation tasks and protecting user privacy. However, owing to the limited computing and caching capacities of network edges and the dynamic arrival of computation tasks, edge intelligence with FL cannot appropriately offload and effectively process computation tasks, which degrades multi-user quality of experience (QoE). To address these challenges, it is critical to enhance cooperation among network edges and to quantify the multi-user QoE. In this article, we investigate cooperative edge intelligence, applying federated multi-agent reinforcement learning to enhance multi-user QoE. In particular, we present a cooperative edge intelligence architecture with vertical-horizontal cooperation that supports computation offloading. We model a comprehensive system cost to quantify the multi-user QoE and formulate the optimization problem as minimizing the expected long-term system cost. We further propose a decentralized intelligent offloading framework based on the soft actor-critic algorithm and FL with an attention mechanism. Evaluation results demonstrate that the proposed scheme outperforms existing offloading schemes in terms of convergence and multi-user QoE. Finally, we discuss several open issues and opportunities for edge intelligence with FL.
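The attention-weighted federated aggregation mentioned in the abstract could be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' implementation: the function name, the similarity measure (negative L2 distance to the current global model), and the `temperature` parameter are all hypothetical choices for how attention weights over client updates might be computed.

```python
import numpy as np

def attention_federated_avg(global_params, client_params, temperature=1.0):
    """Aggregate client parameter vectors with softmax attention weights.

    Hypothetical sketch: each client's update is weighted by its
    similarity (negative L2 distance) to the current global model,
    so that outlier updates contribute less to the aggregate.
    """
    dists = np.array([np.linalg.norm(global_params - c) for c in client_params])
    logits = -dists / temperature
    # numerically stable softmax over client similarities
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    aggregated = sum(w * c for w, c in zip(weights, client_params))
    return aggregated, weights

# Example: the client closest to the global model gets nearly all the weight.
global_params = np.zeros(2)
clients = [np.array([0.1, 0.0]), np.array([5.0, 5.0])]
agg, weights = attention_federated_avg(global_params, clients)
```

In a federated RL setting along the lines described in the abstract, `client_params` would be the flattened actor (or critic) parameters uploaded by each edge agent after local soft actor-critic updates, and `aggregated` would be redistributed as the new global model.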

First Page


Last Page




Publication Date



Keywords

Multi-agent systems, Quality of service, Reinforcement learning


IR Deposit conditions:

OA version (pathway a): Accepted version

No embargo

When accepted for publication, set statement to accompany deposit (see policy)

Must link to publisher version with DOI

Publisher copyright and source must be acknowledged