Semi-Asynchronous Model Design for Federated Learning in Mobile Edge Networks

Document Type

Article

Publication Title

IEEE Transactions on Vehicular Technology

Abstract

Federated learning (FL) is a distributed machine learning (ML) paradigm. Distributed clients train locally and only need to upload model parameters to collaboratively learn the global model under the coordination of an aggregation server. Although client privacy is protected, multiple rounds of parameter uploads between the clients and the server are required to ensure the accuracy of the global model. Inevitably, this results in latency and energy consumption issues due to limited communication resources. Therefore, mobile edge computing (MEC) has been proposed to mitigate the communication delay and energy consumption of federated learning. In this paper, we first analyze how to select the gradient values that help the global model converge quickly and establish a theoretical analysis of the relationship between the convergence rate and the gradient direction. To efficiently reduce the energy consumption of clients during training, while ensuring the local training accuracy and the convergence rate of the global model, we adopt the deep deterministic policy gradient (DDPG) algorithm, which adaptively allocates resources according to different clients' requests to minimize energy consumption. To improve flexibility and scalability, we propose a new semi-asynchronous federated update model, which allows clients' updates to be aggregated asynchronously at the server and accelerates the convergence of the global model. Empirical results show that the proposed Algorithm 1 not only accelerates the convergence of the global model but also reduces the size of the parameters that need to be uploaded. In addition, the proposed Algorithm 2 reduces the time differences caused by client heterogeneity. Finally, the semi-asynchronous update model outperforms the synchronous update model in communication time.
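
To illustrate the semi-asynchronous idea described in the abstract, the following is a minimal sketch, not the paper's actual algorithm: the server aggregates as soon as a buffer of K client updates is available instead of waiting for every client, and weights each buffered update by its staleness. The buffer size K, the staleness discount, the least-squares local objective, and all function names here are illustrative assumptions.

import numpy as np

def local_update(global_model, data, lr=0.1):
    # Hypothetical client step: one gradient-descent update on a least-squares loss.
    X, y = data
    grad = X.T @ (X @ global_model - y) / len(y)
    return global_model - lr * grad

def semi_async_aggregate(buffer, current_round):
    # Staleness-aware weighted average: older buffered updates get smaller weights.
    weights = np.array([1.0 / (1 + current_round - r) for _, r in buffer])
    weights /= weights.sum()
    updates = np.stack([m for m, _ in buffer])
    return weights @ updates

# Toy driver: clients finish local training in arbitrary order; the server
# aggregates whenever K updates are buffered rather than synchronizing all clients.
rng = np.random.default_rng(0)
d, n_clients, K = 5, 6, 3
true_w = rng.normal(size=d)
datasets = []
for _ in range(n_clients):
    X = rng.normal(size=(50, d))
    datasets.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

global_model = np.zeros(d)
buffer = []
for t in range(60):
    k = rng.integers(n_clients)                       # a client reports back
    buffer.append((local_update(global_model, datasets[k]), t))
    if len(buffer) >= K:                              # aggregate without full synchronization
        global_model = semi_async_aggregate(buffer, t)
        buffer = []

print("distance to true model:", np.linalg.norm(global_model - true_w))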

First Page

16280

Last Page

16292

DOI

10.1109/TVT.2023.3298787

Publication Date

7-25-2023

Keywords

Adaptation models, Convergence, deep deterministic policy gradient, Delays, Energy consumption, energy efficient, Federated learning, mobile edge networks, Optimization, semi-asynchronous update model, Servers, Training

Comments

IR Deposit conditions:

OA version (pathway a) Accepted version

No embargo

When accepted for publication, set statement to accompany deposit (see policy)

Must link to publisher version with DOI

Publisher copyright and source must be acknowledged
