Fair and Robust Federated Learning Under Byzantines

Date of Award

4-30-2024

Document Type

Thesis

Degree Name

Master of Science in Machine Learning

Department

Machine Learning

First Advisor

Dr. Karthik Nandakumar

Second Advisor

Dr. Samuel Horvath

Abstract

Federated Learning (FL) improves model training through collaboration without the direct sharing of data: model updates from multiple participants are aggregated into a single improved model. However, basic FL algorithms are neither fair nor robust, leaving them open to both adversarial attacks and free riders. Existing approaches address fairness by rewarding participants according to their contribution, measured by either data quality or gradient quality. Among methods targeting both robustness and fairness, Robust and Fair Federated Learning (RFFL) has shown promising results. However, no attacks targeting its reputation mechanism have been explored in the literature. This raises the question of how the weaknesses of the reputation mechanism can be exploited to degrade training. Answering it helps identify vulnerabilities in the training algorithm so that it can be better understood and further improved. In our work we propose both an attack that is effective against the reputation system of RFFL and an improved FL algorithm that is more robust. Our attack works by slowly shifting parts of the overall aggregated gradient to poison the training; it aims to keep all participants in the training so that the final outcome is degraded. We evaluated our attack alongside multiple attacks from the literature and found that it negatively affects training. We also show that attacking is easier when the number of participants is low, making RFFL sub-optimal in such settings. In addition, we expose another vulnerability of RFFL: reputation can be stolen from honest participants, causing them to be removed from global training. This can be achieved with simple attacks and results in the honest participants not benefiting from the FL system. For the modified FL algorithm, we show that adding robust aggregation guards against the proposed attack on RFFL. We also show that, both under no attack and under the multitude of attacks that RFFL is robust against, our algorithm achieves better accuracy while maintaining good fairness between participants. Experiments were conducted on several datasets, in both IID and non-IID settings, to demonstrate the effects of our attack and defensive algorithms. The code files for the algorithms are available at https://github.com/MEAZZ0/FairandRobustFederatedLearning.
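To make the two mechanisms in the abstract concrete, the sketch below illustrates (a) a gradual gradient-shift attack and (b) robust aggregation via a coordinate-wise median, a standard Byzantine-robust aggregator from the literature. This is a minimal illustration only, not the thesis's actual implementation: the function names, the linear shift schedule, and the parameters shift_rate and mask_fraction are all hypothetical, and the thesis does not state which robust aggregator it uses.

import numpy as np

def craft_shift_update(honest_update, round_idx, drift_direction,
                       shift_rate=0.01, mask_fraction=0.2, seed=0):
    """Hypothetical gradual gradient shift: blend a fixed subset of
    coordinates toward a chosen drift direction, growing the blend
    weight slowly over rounds so the update keeps resembling an honest
    contribution to a reputation mechanism."""
    rng = np.random.default_rng(seed)         # fixed seed -> same coordinate mask every round
    mask = rng.random(honest_update.shape) < mask_fraction
    alpha = min(1.0, shift_rate * round_idx)  # shift strength increases with the round index
    crafted = honest_update.copy()
    crafted[mask] = (1 - alpha) * honest_update[mask] + alpha * drift_direction[mask]
    return crafted

def median_aggregate(client_updates):
    """Coordinate-wise median: each coordinate of the global update is
    the median of the clients' values, bounding the influence any single
    (possibly Byzantine) client has on that coordinate."""
    return np.median(np.stack(client_updates), axis=0)

# Toy round: three honest clients plus one attacker running the shift.
dim = 10
honest = [np.random.randn(dim) for _ in range(3)]
attacker = craft_shift_update(honest[0], round_idx=50,
                              drift_direction=np.full(dim, 5.0))
global_update = median_aggregate(honest + [attacker])

Because the median ignores extreme per-coordinate values when honest clients form a majority, the attacker's shifted coordinates have bounded effect on the aggregate, which is the intuition behind pairing a reputation mechanism with robust aggregation.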

Comments

Thesis submitted to the Deanship of Graduate and Postdoctoral Studies

In partial fulfilment of the requirements for the M.Sc. degree in Machine Learning

Advisors: Karthik Nandakumar, Samuel Horvath

Online access available for MBZUAI patrons
