Title

Personalized learning with the existence of harmful workers

Document Type

Dissertation

Abstract

With the increasing demand for powerful machine-learning models, data has become a valuable resource. However, centralized servers can host only a limited amount of it. Together with privacy concerns, this has made Federated Learning (FL) the go-to framework for training large, data-hungry models without compromising either privacy or model accuracy. The success of FL is tightly linked to particular properties of the distributed data, namely its homogeneity. The participating workers, each holding their own share of the data, are assumed (optimistically) to provide unbiased gradients of the objective function that contribute to the convergence of the optimization method to a minimum. In many cases, however, this assumption is violated, either unintentionally, because the data held by each worker is inherently different, or intentionally, in what the literature refers to as Byzantine behavior. It is therefore no longer wise to blindly trust the updates sent by the workers and hope for the best, especially in large-scale applications where the participating workers are not necessarily known entities and may not be liable for misuse. To this end, we propose a corrupt-update filtering mechanism that assigns adaptive aggregation weights to the received updates, reflecting their historical usefulness throughout training. We treat α (the vector of mixing weights) as a parameter to be optimized as well, and we leverage optimization techniques (gradient descent, derivative-free optimization) to update α before applying it to update the original parameters x. We obtain a convergence rate comparable to standard minibatch SGD in the smooth and strongly convex case, O(σ²m² / (T|H|³)), where m is the number of workers, σ is the variance of the stochastic gradients, T is the iteration counter, and |H| is the cardinality of the set of helpful workers. Additionally, we empirically demonstrate the method's merit through a series of experiments involving simulated Byzantine behaviors that the method successfully overcomes.
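The aggregation scheme sketched in the abstract can be illustrated in a few lines of code. Below is a minimal, hypothetical Python sketch, not the thesis's actual implementation: worker updates are combined with softmax-normalized mixing weights derived from α, and α itself is refreshed with a derivative-free (finite-difference) step on a held-out loss before being applied to update the model parameters x. All names, the sign-flip attack, and the learning rates are illustrative assumptions, not taken from the thesis.

import numpy as np

rng = np.random.default_rng(0)

def worker_gradient(x, data, noise=0.1, byzantine=False):
    # Stochastic gradient of a least-squares objective; a Byzantine
    # worker returns a corrupted (sign-flipped, rescaled) update.
    A, b = data
    g = A.T @ (A @ x - b) / len(b) + noise * rng.standard_normal(x.shape)
    return -10.0 * g if byzantine else g

def heldout_loss(x, data):
    A, b = data
    return 0.5 * np.mean((A @ x - b) ** 2)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Toy setup: m workers holding synthetic least-squares data, one Byzantine.
m, d, n = 5, 10, 50
x_true = rng.standard_normal(d)
datasets = []
for _ in range(m):
    A = rng.standard_normal((n, d))
    datasets.append((A, A @ x_true + 0.05 * rng.standard_normal(n)))
val_A = rng.standard_normal((n, d))
val_data = (val_A, val_A @ x_true)

x = np.zeros(d)
alpha = np.zeros(m)            # mixing-weight logits, one per worker
lr_x, lr_alpha, eps = 0.05, 0.5, 1e-3

for t in range(200):
    grads = [worker_gradient(x, datasets[i], byzantine=(i == 0))
             for i in range(m)]

    # Derivative-free update of alpha: central finite differences of the
    # held-out loss of the model that would result from the aggregated step.
    for i in range(m):
        def trial_loss(a):
            w = softmax(a)
            step = sum(w[j] * grads[j] for j in range(m))
            return heldout_loss(x - lr_x * step, val_data)
        e_i = np.zeros(m)
        e_i[i] = eps
        d_i = (trial_loss(alpha + e_i) - trial_loss(alpha - e_i)) / (2 * eps)
        alpha[i] -= lr_alpha * d_i

    # Apply the updated mixing weights to aggregate and update x.
    w = softmax(alpha)
    x -= lr_x * sum(w[i] * grads[i] for i in range(m))

print("weights per worker:", np.round(softmax(alpha), 3))
print("held-out loss:", heldout_loss(x, val_data))

Run as a plain script; under this setup the weight assigned to the corrupted worker should shrink toward zero over the iterations, which is the filtering effect the abstract describes.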

Publication Date

6-2023

Comments

Thesis submitted to the Deanship of Graduate and Postdoctoral Studies

In partial fulfillment of the requirements for the M.Sc. degree in Machine Learning

Advisors: Dr. Martin Takac, Dr. Bin Gu

Online access available for MBZUAI patrons
