Securing Federated Learning against FGSM Attacks with Adaptive Trust Scores and Blockchain Updates

Document Type

Conference Proceeding

Publication Title

2023 5th International Conference on Blockchain Computing and Applications, BCCA 2023

Abstract

Federated learning (FL) is a distributed machine learning (ML) approach that allows multiple clients to train a global model collaboratively while keeping their data decentralized. However, this learning paradigm is prone to various security threats, including Fast Gradient Sign Method (FGSM) attacks. This paper proposes a novel method for enhancing the security and privacy of FL by reducing the risk of FGSM attacks through a Trust Score (TS)-based approach combined with blockchain technology. The TS is computed from each client's average loss and loss variance, and only clients with a high TS are allowed to participate in the model-averaging process. This approach significantly improves the accuracy and robustness of the global model, preventing security breaches in FL while preserving the privacy of client data and model parameters. The effectiveness of the proposed method is demonstrated by experimental results on two distinct datasets: compared to training under attack without the proposed defense, the method improved global accuracy by an average of 10% while reducing the success rate of FGSM attacks by an average of 87%. The incorporation of blockchain technology creates a decentralized network that further enhances the privacy and security of the FL process.
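
The abstract does not give the exact trust-score formula or threshold, so the following minimal Python sketch only illustrates the general idea of filtering clients by loss statistics before averaging: the functions trust_score and federated_average, the inverse-loss scoring rule, and the threshold value are all assumptions made for illustration, not the paper's actual method.

```python
import numpy as np

def trust_score(avg_loss, loss_variance, eps=1e-8):
    # Illustrative trust score: lower average loss and lower variance
    # yield a higher score. The paper's exact formula is not given in
    # the abstract, so this inverse form is an assumption.
    return 1.0 / (avg_loss + loss_variance + eps)

def federated_average(client_updates, client_losses, threshold):
    # Aggregate only updates from clients whose trust score meets the
    # threshold; the remaining clients are excluded from model averaging.
    scores = [trust_score(np.mean(l), np.var(l)) for l in client_losses]
    trusted = [u for u, s in zip(client_updates, scores) if s >= threshold]
    if not trusted:
        raise ValueError("No client passed the trust-score filter")
    return np.mean(np.stack(trusted), axis=0)

# Toy round: two benign clients and one whose loss profile is anomalous,
# e.g. because its updates were trained on FGSM-perturbed inputs.
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(3)]
losses = [rng.uniform(0.1, 0.3, size=20),   # benign client
          rng.uniform(0.1, 0.3, size=20),   # benign client
          rng.uniform(2.0, 5.0, size=20)]   # suspicious client
global_update = federated_average(updates, losses, threshold=1.0)
```

In the full system described by the abstract, the accepted updates would additionally be exchanged over a blockchain-based decentralized network; that coordination layer is omitted from this sketch.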

First Page

194

Last Page

199

DOI

10.1109/BCCA58897.2023.10338861

Publication Date

12-2023

Keywords

Training, Privacy, Data privacy, Federated learning, Computational modeling, Scalability, Data models
