Preconditioning meets biased compression for efficient distributed optimization

Document Type

Article

Publication Title

Computational Management Science

Abstract

Methods with preconditioned updates perform well on badly scaled and/or ill-conditioned convex optimization problems. However, a theoretical analysis of these methods in the distributed setting has not yet been provided. We close this gap by studying a preconditioned version of the Error Feedback (EF) method, a popular convergence-stabilization mechanism for distributed learning with biased compression. We combine the EF and EF21 algorithms with a preconditioner based on Hutchinson's approximation to the diagonal of the Hessian. An experimental comparison of the algorithms on the ResNet computer vision model is provided.
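To make the combination concrete, the sketch below shows an EF21-style update with a biased Top-K compressor and a diagonal preconditioner maintained from Hutchinson probes, run on a toy ill-conditioned quadratic. This is a minimal illustration under stated assumptions, not the paper's implementation; the names (top_k, hutchinson_diag), the toy objective, and all constants (step size, EMA decay, floor) are invented for the example.

import numpy as np

rng = np.random.default_rng(0)

def top_k(v, k):
    # Biased Top-K compressor: keep only the k largest-magnitude entries.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def hutchinson_diag(hvp, dim, n_samples=8):
    # Hutchinson's estimator: diag(H) ~ E[z * (H z)] for Rademacher z.
    est = np.zeros(dim)
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=dim)
        est += z * hvp(z)
    return est / n_samples

# Toy ill-conditioned quadratics split across workers: f_i(x) = 0.5 x^T A_i x - b_i^T x.
dim, n_workers, k = 50, 4, 5
A = [np.diag(np.logspace(0, 3, dim)) * (1.0 + 0.1 * i) for i in range(n_workers)]
b = [rng.standard_normal(dim) for _ in range(n_workers)]
hvp = lambda z: sum(A_i @ z for A_i in A) / n_workers  # exact Hessian-vector product

x = np.zeros(dim)
g = [A[i] @ x - b[i] for i in range(n_workers)]  # EF21 gradient estimates g_i
gamma, beta, alpha = 0.1, 0.95, 1e-3            # step size, EMA decay, safeguard floor
D = hutchinson_diag(hvp, dim)                    # warm-started diagonal preconditioner

for t in range(500):
    g_avg = np.mean(g, axis=0)
    # Preconditioned step: x <- x - gamma * D^{-1} g, with |D| floored at alpha.
    x = x - gamma * g_avg / np.maximum(np.abs(D), alpha)
    # Refresh the diagonal estimate with an exponential moving average of probes.
    D = beta * D + (1 - beta) * hutchinson_diag(hvp, dim)
    # EF21: each worker sends only a compressed correction to its estimate.
    for i in range(n_workers):
        g[i] = g[i] + top_k((A[i] @ x - b[i]) - g[i], k)

print("final gradient norm:", np.linalg.norm(hvp(x) - np.mean(b, axis=0)))

Because the toy Hessians are diagonal, the Hutchinson estimate is exact in expectation, so the preconditioned step roughly equalizes curvature across coordinates; each worker still communicates only the k nonzero entries of its Top-K-compressed correction per round.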

DOI

10.1007/s10287-023-00496-6

Publication Date

12-24-2023

Keywords

Accelerated methods, First-order methods, Non-convex minimization
