Title

Decentralized Personalized Federated Learning: Lower Bounds and Optimal Algorithm For All Personalization Modes

Document Type

Article

Publication Title

arXiv

Abstract

This paper considers the problem of decentralized, personalized federated learning. In centralized personalized federated learning, a penalty that measures the deviation of each local model from the average model is often added to the objective function. In a decentralized setting, however, this penalty is expensive in terms of communication costs, so a different penalty - one built to respect the structure of the underlying computational network - is used instead. We present lower bounds on the communication and local computation costs for this problem formulation, together with provably optimal methods for decentralized personalized federated learning. Numerical experiments demonstrate the practical performance of our methods. Copyright © 2021, The Authors. All rights reserved.
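To make the contrast concrete, the two penalties described in the abstract can be sketched as follows. This is an illustrative formulation only; the symbols $f_i$, $\lambda$, and the edge set $E$ are assumptions, not taken from the paper itself. The centralized personalization penalty couples each local model to the average:

$$\min_{x_1,\dots,x_n} \; \sum_{i=1}^n f_i(x_i) \;+\; \frac{\lambda}{2} \sum_{i=1}^n \big\| x_i - \bar{x} \big\|^2, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i,$$

where $f_i$ is the local loss at node $i$ and $\lambda \ge 0$ controls the degree of personalization. Computing $\bar{x}$ requires global averaging, which is costly in a decentralized network. A network-respecting alternative, of the kind the abstract alludes to, penalizes disagreement only across edges $(i,j) \in E$ of the communication graph:

$$\min_{x_1,\dots,x_n} \; \sum_{i=1}^n f_i(x_i) \;+\; \frac{\lambda}{2} \sum_{(i,j)\in E} \big\| x_i - x_j \big\|^2,$$

which can be evaluated with neighbor-to-neighbor communication only.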

DOI

10.48550/arXiv.2107.07190

Publication Date

2-4-2022

Keywords

Accelerated algorithm; Bound algorithms; Decentralised; Decentralized optimization; Distributed optimization; Federated learning; Lower bound; Lower and upper bounds; Optimal algorithm; Personalization

Comments

Preprint: arXiv
