Multi-UAV-Assisted Federated Learning for Energy-Aware Distributed Edge Training

Document Type


Publication Title

IEEE Transactions on Network and Service Management


Abstract

Unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) has greatly extended the reach and capacity of the artificial intelligence of things (AIoT) by providing flexible distributed data collection, additional computing capacity, and high mobility. To preserve data privacy in AIoT applications, federated learning (FL) has emerged as a promising solution that performs training tasks locally on distributed IoT devices. However, given the limited onboard resources and battery capacity of each UAV node, careful optimization is required to realize a large-scale, high-precision FL scheme. In this work, an optimized multi-UAV-assisted FL framework is designed, in which regular IoT devices perform the training tasks while multiple UAVs execute the local and global aggregation tasks. An online resource allocation (ORA) algorithm is proposed to minimize the training latency by jointly optimizing the client selection and the choice of the global aggregation server. Leveraging the Lyapunov optimization technique, virtual energy queues are constructed to capture the energy deficit. Building on the actor-critic learning framework, a deep reinforcement learning (DRL) scheme is designed to improve per-round training performance: a deep neural network (DNN)-based actor module derives the client selection decisions, and a critic module based on a conventional optimization method evaluates the obtained decisions. Moreover, a greedy scheme is developed to find the optimal global aggregation server. Finally, extensive simulation results demonstrate that the proposed ORA algorithm can achieve optimal training latency and energy consumption under various system settings.
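The virtual energy queues mentioned in the abstract follow the standard Lyapunov-optimization pattern. A minimal illustrative sketch of that recursion is shown below; the function name and the per-round energy budget are assumptions for illustration, not the paper's actual implementation.

```python
def update_energy_queue(q: float, e_t: float, e_budget: float) -> float:
    """Standard Lyapunov virtual-queue recursion: Q(t+1) = max(Q(t) + e(t) - B, 0).

    q        -- current queue backlog (accumulated energy deficit)
    e_t      -- energy consumed in round t (hypothetical value)
    e_budget -- assumed per-round average energy budget B
    """
    return max(q + e_t - e_budget, 0.0)

# A persistently positive backlog signals an energy deficit, which can steer
# the per-round client-selection decision toward energy-frugal choices.
q = 0.0
for e_t in [1.2, 0.8, 1.5, 0.9]:  # hypothetical per-round energy draws
    q = update_energy_queue(q, e_t, e_budget=1.0)
```

In this sketch the queue grows whenever a round's consumption exceeds the budget and drains otherwise, so keeping the queue stable enforces the long-term average energy constraint without solving the full horizon problem up front.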



Publication Date



Keywords

autonomous aerial vehicles, client selection, deep reinforcement learning (DRL), federated learning, Internet of Things, performance evaluation, resource allocation, resource management, servers, task analysis, training, UAV


IR Deposit conditions:

OA version (pathway a): Accepted version

No embargo

When accepted for publication, set statement to accompany deposit (see policy)

Must link to publisher version with DOI

Publisher copyright and source must be acknowledged