Document Type

Article

Publication Title

EURO Journal on Computational Optimization

Abstract

This paper considers the problem of decentralized, personalized federated learning. In centralized personalized federated learning, a penalty measuring the deviation of each local model from their average is often added to the objective function. In a decentralized setting, however, this penalty is expensive in terms of communication costs, so a different penalty, one built to respect the structure of the underlying computational network, is used instead. We present lower bounds on the communication and local computation costs for this problem formulation, along with provably optimal methods for decentralized personalized federated learning. Numerical experiments demonstrate the practical performance of our methods. © 2022 The Authors
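To make the centralized formulation concrete, the following is a minimal sketch, not the paper's exact formulation: each client i keeps its own model x_i, and a penalty ties each x_i to the average model x̄. All names here (personalized_objective, lam, the toy losses) are illustrative assumptions, not taken from the paper.

```python
def personalized_objective(local_losses, models, lam):
    """Sum of per-client losses plus a penalty tying each model to the mean.

    Sketch of the centralized personalized-FL objective:
        sum_i f_i(x_i) + (lam / 2) * sum_i ||x_i - x_bar||^2
    """
    n, d = len(models), len(models[0])
    # Average model x_bar, computed coordinate-wise over all clients.
    x_bar = [sum(x[j] for x in models) / n for j in range(d)]
    loss = sum(f(x) for f, x in zip(local_losses, models))
    penalty = 0.5 * lam * sum(
        sum((x[j] - x_bar[j]) ** 2 for j in range(d)) for x in models
    )
    return loss + penalty

# Toy usage: two clients with quadratic losses pulling toward +1 and -1.
losses = [
    lambda x: sum((v - 1.0) ** 2 for v in x),
    lambda x: sum((v + 1.0) ** 2 for v in x),
]
models = [[1.0, 1.0], [-1.0, -1.0]]
print(personalized_objective(losses, models, lam=0.1))  # -> 0.2
```

The penalty weight `lam` interpolates between fully independent local models (lam = 0) and a single consensus model (lam large); the decentralized variant studied in the paper replaces this average-based penalty with one respecting the network topology.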

DOI

10.1016/j.ejco.2022.100041

Publication Date

September 13, 2022

Keywords

Accelerated algorithms, Decentralized optimization, Distributed optimization, Federated learning, Lower and upper bounds

Comments

Archived with thanks to Elsevier ScienceDirect

License: CC BY-NC-ND 4.0

Uploaded 01 February 2023
