Resource Optimized Hierarchical Split Federated Learning for Wireless Networks

Document Type

Conference Proceeding

Publication Title

ACM International Conference Proceeding Series

Abstract

Federated learning (FL) trains models in a distributed fashion: devices compute local models (e.g., convolutional neural networks) that are then aggregated centrally at the edge or in the cloud. Such distributed training demands significant computational resources (i.e., CPU cycles per second) that Internet of Things (IoT) sensors can hardly provide. To address this challenge, split FL (SFL) was recently proposed, in which part of the model is computed at the devices and the remainder at edge/cloud servers. Although SFL relieves the devices' computing-resource constraints, it still suffers from fairness issues and slow convergence. To overcome these limitations, we propose a novel hierarchical SFL (HSFL) architecture that combines SFL with hierarchical learning. To avoid a single point of failure and fairness issues, HSFL is truly distributed in nature (i.e., it uses distributed aggregations). We also define a cost function that is minimized with respect to relative local accuracy, transmit power, resource allocation, and association. Because the resulting problem is non-convex, we propose a solution based on block successive upper bound minimization (BSUM). Finally, numerical results are presented.
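The abstract names block successive upper bound minimization (BSUM) as the solution method: the non-convex cost is minimized by cycling over variable blocks and, for each block, minimizing a convex surrogate that upper-bounds the cost at the current iterate. The following is a minimal illustrative sketch of that pattern; the two-block toy cost, the block names, and the quadratic surrogate (whose per-block minimizer is a gradient step of size 1/L) are assumptions for illustration, not the paper's actual model.

```python
def bsum(f, grad, blocks, x, lipschitz, iters=200):
    """BSUM sketch: cyclically minimize a quadratic upper bound per block.

    For block i, the surrogate u(z) = f(x) + g_i*(z - x_i) + (L_i/2)(z - x_i)^2
    majorizes f along that block when L_i dominates the block's curvature;
    its closed-form minimizer is a gradient step of length 1/L_i.
    """
    for _ in range(iters):
        for i in blocks:                  # Gauss-Seidel sweep over blocks
            x[i] -= grad(x, i) / lipschitz[i]
    return x

# Hypothetical non-convex toy cost in two scalar blocks,
# loosely labeled after two of the paper's variables.
def f(x):
    p, a = x                             # p: "transmit power", a: "association"
    return (p * a - 1.0) ** 2 + 0.1 * p ** 2

def grad(x, i):
    p, a = x
    if i == 0:                           # partial derivative w.r.t. p
        return 2.0 * a * (p * a - 1.0) + 0.2 * p
    return 2.0 * p * (p * a - 1.0)       # partial derivative w.r.t. a

x0 = [0.5, 0.5]
x = bsum(f, grad, blocks=[0, 1], x=list(x0), lipschitz={0: 20.0, 1: 20.0})
print(f(x))  # cost decreases toward a stationary value
```

Each sweep decreases the cost as long as each `lipschitz[i]` upper-bounds the curvature of the corresponding block, which is what makes the quadratic a valid upper bound.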

First Page

254

Last Page

259

DOI

10.1145/3576914.3590148

Publication Date

5-9-2023

Keywords

Federated learning, hierarchical federated learning, Internet of Things, split learning
