Quasi-Newton Methods for Federated Learning with Error Feedback

Date of Award

4-30-2024

Document Type

Thesis

Degree Name

Master of Science in Machine Learning

Department

Machine Learning

First Advisor

Dr. Martin Takac

Second Advisor

Dr. Samuel Horvath

Abstract

"Federated learning (FL) facilitates collaborative model training across distributed devices without sharing raw data, preserving privacy of each participant. While in FL setting communication cost is fairly high compared with the computation cost, compression aims to address such bottleneck, promising improved efficiency and scalability. The integration of error feedback mechanisms is essential to ensure convergence when employing compression. In this paper, we introduce novel Quasi-Newton methods tailored for federated learning, integrating them with the error feedback framework, with a particular emphasis on the EF21 mechanism. EF21 offers a comprehensive theoretical understanding and demonstrates superior practical performance, effectively overcoming previous limitations associated with heavy reliance on strong assumptions and increased communication costs. Leveraging the efficiency of Quasi-Newton methods, especially the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, our proposed EF21+LBFGS method achieves a convergence rate of O( 1/T ) in nonconvex regimes and exhibits a linear convergence rate under the Polyak-Lojasiewicz condition. Through theoretical analysis and empirical evaluations, we demonstrate the efficacy of our approach, showcasing accelerated convergence rates and improved model performance compared to existing methods. Our findings suggest promising prospects for enhancing the effectiveness and scalability of federated learning in practical settings."

Comments

Thesis submitted to the Deanship of Graduate and Postdoctoral Studies

In partial fulfilment of the requirements for the M.Sc. degree in Machine Learning

Advisors: Dr. Martin Takac and Dr. Samuel Horvath

Online access available for MBZUAI patrons
