GBMIA: Gradient-based Membership Inference Attack in Federated Learning

Document Type

Conference Proceeding

Publication Title

IEEE International Conference on Communications

Abstract

Membership inference attacks (MIAs) have been shown to pose a serious threat to federated learning (FL). However, most existing membership inference attacks against FL rely on specific attack models built from the target model's behaviors, which makes the attacks costly and complicated. In addition, directly adopting inference attacks originally designed for machine learning models into federated scenarios leads to poor performance. We propose GBMIA, an attack-model-free membership inference method based on gradients. We take full advantage of the federated learning process by observing the target model's behavior after gradient ascent tuning, and we combine prediction correctness with a gradient-norm-based metric for membership inference. The proposed GBMIA can be conducted by both global and local attackers. Experimental evaluations on three real-world datasets demonstrate that GBMIA achieves high attack accuracy. We further apply an arbitration mechanism to increase the effectiveness of GBMIA, which leads to an attack accuracy close to 1 on all three datasets. We also conduct experiments to substantiate that clients going offline and the overlap of clients' training sets have a strong effect on membership leakage in FL.
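The core decision rule described in the abstract, combining prediction correctness with a gradient-norm metric, can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a toy logistic-regression model in place of the FL target model, and the threshold and function names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_norm(w, x, y):
    # Per-sample gradient of the logistic (cross-entropy) loss w.r.t. w:
    #   grad = (sigmoid(w . x) - y) * x
    # Members, which the model has fit, tend to have small gradient norms.
    p = sigmoid(w @ x)
    return np.linalg.norm((p - y) * x)

def infer_membership(w, x, y, threshold):
    # Hedged sketch of the combined rule: infer "member" only if the
    # model classifies the sample correctly AND its loss-gradient norm
    # falls below a (hypothetical) threshold.
    p = sigmoid(w @ x)
    correct = (p >= 0.5) == (y == 1)
    return bool(correct and gradient_norm(w, x, y) < threshold)

# Toy usage: a point the model fits well vs. one it misclassifies.
w = np.array([2.0, -1.0])
print(infer_membership(w, np.array([3.0, 0.0]), 1, threshold=0.1))   # well fit
print(infer_membership(w, np.array([-3.0, 0.0]), 1, threshold=0.1))  # misclassified
```

In the paper's federated setting the analogous quantities would come from the shared model after gradient ascent tuning rather than from a locally trained toy model.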

First Page

5066

Last Page

5071

DOI

10.1109/ICC45041.2023.10279702

Publication Date

10-23-2023

Keywords

Training, Measurement, Privacy, Differential privacy, Federated learning, Behavioral sciences, Homomorphic encryption
