Federated Learning Stability Under Byzantine Attacks
Document Type
Conference Proceeding
Publication Title
IEEE Wireless Communications and Networking Conference, WCNC
Abstract
Federated Learning (FL) is a machine learning approach that enables private and decentralized model training. Although FL has been shown to be very useful in several applications, its privacy constraints limit the transparency of model updates, which makes it vulnerable to several types of attacks. In particular, based on detailed convergence analyses, we show in this paper that when the traditional model-combining scheme is used, even a single Byzantine node that keeps sending random reports will cause the whole FL model to diverge to non-useful solutions. A low-complexity model-combining approach is also proposed to stabilize the FL system and make it converge to a suboptimal solution simply by controlling the model norm. The Physikalisch-Technische Bundesanstalt extra-large electrocardiogram (PTB-XL ECG) dataset is used to validate the findings of this paper and to show the efficiency of the proposed approach in identifying heart anomalies. © 2022 IEEE.
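The abstract makes two technical claims: plain averaging of client updates can be driven to divergence by a single Byzantine client submitting arbitrary random reports, and bounding the norm of what gets combined restores stability. As a rough illustration only (the paper's actual combining rule is not reproduced here, and `max_norm` is a hypothetical hyperparameter, not a value from the paper), the following Python sketch clips each client update to a norm bound before averaging, so one oversized random report cannot dominate the aggregate.

```python
import numpy as np

def aggregate(updates, max_norm=5.0):
    """Average client updates after clipping each to a norm bound.

    Plain averaging (no clipping) lets a single Byzantine client with an
    arbitrarily large random update dominate the result; bounding each
    update's norm caps that client's influence. Illustrative sketch only,
    not the paper's exact scheme.
    """
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        scale = min(1.0, max_norm / (norm + 1e-12))  # shrink oversized updates
        clipped.append(u * scale)
    return np.mean(clipped, axis=0)

# Toy example: nine honest clients plus one Byzantine client sending a huge
# random report. With clipping, the aggregate stays near the honest updates;
# without it, the random report would swamp the average.
rng = np.random.default_rng(0)
honest = [np.ones(5) + 0.01 * rng.standard_normal(5) for _ in range(9)]
byzantine = 1e6 * rng.standard_normal(5)
print(aggregate(honest + [byzantine]))
```

With clipping, the Byzantine contribution is scaled down to the same norm budget as any honest update, which matches the abstract's intuition that controlling the model norm is enough to keep the combined model bounded, at the cost of converging to a suboptimal rather than optimal solution.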
First Page
572
Last Page
577
DOI
10.1109/WCNC51071.2022.9771594
Publication Date
5-16-2022
Keywords
Large dataset, Learning systems, Byzantine attacks, Convergence analysis, Decentralized models, Distributed learning, E-health, Federated learning, Learning stability, Machine learning approaches, Model training, Privacy constraints, Electrocardiography
Recommended Citation
A. Gouissem, K. Abualsaud, E. Yaacoub, T. Khattab and M. Guizani, "Federated Learning Stability Under Byzantine Attacks," 2022 IEEE Wireless Communications and Networking Conference (WCNC), Apr 10-13, 2022, pp. 572-577, doi: 10.1109/WCNC51071.2022.9771594.
Comments
IR Deposit conditions:
OA version (pathway a): Accepted version
No embargo
When accepted for publication, set statement to accompany deposit (see policy)
Must link to publisher version with DOI
Publisher copyright and source must be acknowledged