Resource Optimized Hierarchical Split Federated Learning for Wireless Networks
ACM International Conference Proceeding Series
Federated learning (FL) trains models in a distributed fashion: devices compute local models (e.g., convolutional neural networks), which are then aggregated centrally at the edge or cloud. Such distributed training consumes a significant amount of computational resources (i.e., CPU cycles/sec), which Internet of Things (IoT) sensors can rarely provide. To address this challenge, split FL (SFL) was recently proposed: part of the model is computed at the devices and the remainder at edge/cloud servers. Although SFL relaxes the devices' computing-resource constraints, it still suffers from fairness issues and slow convergence. To address these limitations, we propose a novel hierarchical SFL (HSFL) architecture that combines SFL with a hierarchical fashion of learning. To avoid a single point of failure and fairness issues, HSFL is truly distributed in nature (i.e., it uses distributed aggregations). We also define a cost function that is minimized with respect to relative local accuracy, transmit power, resource allocation, and association. Because the resulting problem is non-convex, we propose a block successive upper bound minimization (BSUM) based solution. Finally, numerical results are presented.
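The core split-computation idea described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of a split forward pass (not the paper's HSFL algorithm or its optimization): a model is cut at an intermediate layer, the device computes the first segment, and only the cut-layer activations travel over the wireless link to the edge/cloud server, which completes the computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-segment linear model; the cut layer sits between them.
W_dev = rng.standard_normal((8, 4))   # device-side weights: input dim 8 -> hidden dim 4
W_srv = rng.standard_normal((4, 2))   # server-side weights: hidden dim 4 -> output dim 2

def device_forward(x):
    """Run the first model segment on the IoT device; the output is the
    cut-layer activation transmitted uplink (raw data never leaves the device)."""
    return np.maximum(x @ W_dev, 0.0)  # ReLU activation

def server_forward(h):
    """Complete the forward pass at the edge/cloud server."""
    return h @ W_srv

x = rng.standard_normal((1, 8))       # one local data sample
smashed = device_forward(x)           # transmitted activations (4 values, not 8 raw inputs)
y = server_forward(smashed)           # server finishes the computation

print(smashed.shape, y.shape)
```

In training, the server would also backpropagate gradients to the cut layer and return them to the device, so each side updates only its own segment; this is what keeps the device-side compute load small.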
Federated learning, hierarchical federated learning, Internet of Things, split learning
L. U. Khan, M. Guizani, and C. S. Hong, "Resource Optimized Hierarchical Split Federated Learning for Wireless Networks," In Proceedings of Cyber-Physical Systems and Internet of Things Week 2023 (CPS-IoT Week '23), ACM, pp. 254–259, May 2023. doi:10.1145/3576914.3590148