Byzantine-Robust Loopless Stochastic Variance-Reduced Gradient

Document Type

Conference Proceeding

Publication Title

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract

Distributed optimization with open collaboration is a popular field, since it gives small groups, companies, universities, and individuals the opportunity to jointly solve huge-scale problems. However, standard optimization algorithms are fragile in such settings due to the possible presence of so-called Byzantine workers: participants that can send (intentionally or not) incorrect information instead of what the protocol prescribes (e.g., an anti-gradient instead of a stochastic gradient). Thus, the problem of designing distributed methods with provable robustness to Byzantine workers has received a lot of attention recently. In particular, several works consider a promising way to achieve Byzantine tolerance by combining variance reduction with robust aggregation. The existing approaches use SAGA- and SARAH-type variance-reduced estimators, while another popular estimator, SVRG, has not been studied in the context of Byzantine-robustness. In this work, we close this gap in the literature and propose a new method, Byzantine-Robust Loopless Stochastic Variance-Reduced Gradient (BR-LSVRG). We derive non-asymptotic convergence guarantees for the new method in the strongly convex case and compare its performance with existing approaches in numerical experiments.
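For intuition, the following is a minimal, hypothetical Python sketch of the general recipe the abstract describes: each worker reports a loopless-SVRG gradient estimate, and the server aggregates the reports with a robust rule before taking a step. The coordinate-wise median, the function names, and all parameters (`gamma`, `p`, `steps`) are illustrative assumptions, not the paper's BR-LSVRG algorithm, notation, or recommended aggregator.

```python
# Hypothetical sketch: loopless-SVRG estimates + robust aggregation.
# Not the paper's BR-LSVRG; names and the median aggregator are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def lsvrg_estimator(x, w, grads, full_grad_w, rng):
    """Loopless SVRG estimate g = grad_i(x) - grad_i(w) + full_grad(w) for random i."""
    i = rng.integers(len(grads))
    return grads[i](x) - grads[i](w) + full_grad_w

def robust_lsvrg(x0, worker_grads, full_grads, gamma=0.1, p=0.05,
                 steps=100, rng=rng):
    """Gradient steps on the coordinate-wise median of workers' LSVRG estimates.

    worker_grads: list over workers; each entry is a list of per-sample gradient fns.
    full_grads:   list over workers of full-gradient fns (evaluated at the anchor w).
    """
    x = x0.copy()
    w = x0.copy()                       # anchor point of the loopless SVRG scheme
    anchors = [fg(w) for fg in full_grads]
    for _ in range(steps):
        # Each (possibly Byzantine) worker reports an LSVRG gradient estimate.
        reports = np.array([lsvrg_estimator(x, w, g, a, rng)
                            for g, a in zip(worker_grads, anchors)])
        g_hat = np.median(reports, axis=0)   # robust aggregation (one simple choice)
        x -= gamma * g_hat
        if rng.random() < p:                 # loopless: refresh anchor with prob. p
            w = x.copy()
            anchors = [fg(w) for fg in full_grads]
    return x
```

In this sketch, a Byzantine worker would simply contribute a corrupted vector to `reports`; the coordinate-wise median keeps each step bounded as long as honest workers form a majority, which is the intuition behind pairing variance reduction with robust aggregation.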

First Page

39

Last Page

53

DOI

10.1007/978-3-031-35305-5_3

Publication Date

6-26-2023

Keywords

Byzantine-robustness, Distributed optimization, Stochastic optimization, Variance reduction
