Suppressing Poisoning Attacks on Federated Learning for Medical Imaging
Document Type
Conference Proceeding
Publication Title
Medical Image Computing and Computer Assisted Intervention – MICCAI 2022
Abstract
Collaboration among multiple data-owning entities (e.g., hospitals) can accelerate the training process and yield better machine learning models due to the availability and diversity of data. However, privacy concerns make it challenging for such entities to exchange raw data while preserving confidentiality. Federated Learning (FL) is a promising solution that enables collaborative training through the exchange of model parameters instead of raw data. However, most existing FL solutions work under the assumption that participating clients are honest and can thus fail against poisoning attacks from malicious parties, whose goal is to degrade the global model's performance. In this work, we propose a robust aggregation rule called Distance-based Outlier Suppression (DOS) that is resilient to Byzantine failures. The proposed method computes the distance between local parameter updates of different clients and obtains an outlier score for each client using Copula-based Outlier Detection (COPOD). The resulting outlier scores are converted into normalized weights using a softmax function, and a weighted average of the local parameters is used for updating the global model. DOS aggregation can effectively suppress parameter updates from malicious clients without the need for any hyperparameter selection, even when the data distributions are heterogeneous. Evaluation on two medical imaging datasets (CheXpert and HAM10000) demonstrates the superior robustness of the DOS method against a variety of poisoning attacks compared to other state-of-the-art methods. The code can be found at https://github.com/Naiftt/SPAFD. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
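Method Sketch
The abstract describes a three-step aggregation pipeline: pairwise distances between client updates, COPOD outlier scores, and a softmax-weighted average. Below is a minimal Python sketch of that pipeline, assuming flattened update vectors, Euclidean distances, and the pyod library's COPOD implementation; the function name dos_aggregate and these specific choices are illustrative assumptions and not the authors' released code (see the GitHub link above for that).

```python
# Minimal sketch of Distance-based Outlier Suppression (DOS) aggregation,
# following the abstract's description; not the authors' released code.
import numpy as np
from scipy.spatial.distance import cdist
from pyod.models.copod import COPOD  # Copula-based Outlier Detection

def dos_aggregate(client_updates: np.ndarray) -> np.ndarray:
    """Aggregate local updates while suppressing outlying (malicious) clients.

    client_updates: (n_clients, n_params) array; each row is one client's
    flattened local parameter update.
    """
    # Pairwise Euclidean distances: row i is client i's distance profile
    # with respect to every other client.
    dist = cdist(client_updates, client_updates, metric="euclidean")

    # COPOD assigns each client an outlier score based on its profile.
    detector = COPOD()
    detector.fit(dist)
    scores = detector.decision_scores_

    # Softmax over negated scores: strong outliers get near-zero weight,
    # so no outlier-fraction hyperparameter is needed.
    logits = -scores
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Weighted average of local updates forms the new global parameters.
    return weights @ client_updates

# Toy usage: nine honest clients near a common update, one poisoned client.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = rng.normal(loc=1.0, scale=0.1, size=(9, 100))
    poisoned = rng.normal(loc=-10.0, scale=0.1, size=(1, 100))
    updates = np.vstack([honest, poisoned])
    agg = dos_aggregate(updates)
    print(np.allclose(agg, honest.mean(axis=0), atol=0.2))  # outlier suppressed
```

In this toy run, the poisoned client's large distance profile yields a high COPOD score and hence a near-zero softmax weight, so the aggregate stays close to the honest mean without any manually tuned threshold.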
First Page
673
Last Page
683
DOI
10.1007/978-3-031-16452-1_64
Publication Date
September 16, 2022
Keywords
Federated learning, Malicious clients, Outlier suppression, Parameter aggregation, Distribution functions, Learning systems, Machine learning, Statistics
Recommended Citation
N. Alkhunaizi, D. Kamzolov, M. Takac, and K. Nandakumar, "Suppressing Poisoning Attacks on Federated Learning for Medical Imaging", Medical Image Computing and Computer Assisted Intervention (MICCAI 2022), Lecture Notes in Computer Science, vol. 13438, pp. 673-683, September 2022, doi:10.1007/978-3-031-16452-1_64
Comments
IR deposit conditions: not specified