Decentralized Personalized Federated Min-Max Problems

Document Type

Article

Publication Title

arXiv

Abstract

Personalized Federated Learning (PFL) has recently seen tremendous progress, enabling the design of novel machine learning applications that preserve the privacy of the training data. Existing theoretical results in this field mainly focus on distributed optimization for minimization problems. This paper is the first to study PFL for saddle point problems, which cover a broader class of optimization problems and thus allow for a richer class of applications requiring more than just solving minimization problems. In this work, we consider a recently proposed PFL setting with a mixing objective function, an approach that combines learning a global model with locally distributed learners. Unlike most previous work, which considered only the centralized setting, we work in a more general, decentralized setup that allows us to design and analyze more practical and federated ways to connect devices to the network. We propose new algorithms to address this problem and provide a theoretical analysis for smooth (strongly-)convex-(strongly-)concave saddle point problems in the stochastic and deterministic cases. Numerical experiments on bilinear problems and neural networks with adversarial noise demonstrate the effectiveness of the proposed methods. Copyright © 2021, The Authors. All rights reserved.
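To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of decentralized gradient descent-ascent on a personalized min-max problem with a mixing penalty. Each device holds a strongly-convex-strongly-concave local saddle point objective, and a quadratic penalty couples neighboring devices on a ring graph toward consensus; the parameters `a`, `b`, `lam`, and `lr` are illustrative assumptions, not values from the paper.

```python
def run_decentralized_gda(a, b, lam=1.0, lr=0.05, iters=2000):
    """Decentralized gradient descent-ascent (GDA) sketch.

    Device i holds the local saddle point objective
        f_i(x_i, y_i) = 0.5*(x_i - a_i)**2 + x_i*y_i - 0.5*(y_i - b_i)**2,
    strongly convex in x_i and strongly concave in y_i. A mixing penalty
    (lam/2) * sum_j (x_i - x_j)**2 over ring neighbours pulls the local
    models toward consensus (the "global model" component); the max
    player carries the penalty with the opposite sign.
    """
    m = len(a)
    x = [0.0] * m  # minimization variables, one per device
    y = [0.0] * m  # maximization variables, one per device
    for _ in range(iters):
        gx, gy = [0.0] * m, [0.0] * m
        for i in range(m):
            # local gradients of f_i
            gx[i] = (x[i] - a[i]) + y[i]
            gy[i] = x[i] - (y[i] - b[i])
            # mixing penalty couples device i with its ring neighbours
            for j in ((i - 1) % m, (i + 1) % m):
                gx[i] += lam * (x[i] - x[j])
                gy[i] -= lam * (y[i] - y[j])
        # simultaneous descent in x, ascent in y
        x = [xi - lr * g for xi, g in zip(x, gx)]
        y = [yi + lr * g for yi, g in zip(y, gy)]
    return x, y

if __name__ == "__main__":
    a, b = [1.0, 2.0, 3.0, 4.0], [0.5, -0.5, 0.5, -0.5]
    x, y = run_decentralized_gda(a, b)
    print("x:", x)
    print("y:", y)
```

Because the coupled operator is strongly monotone here, plain GDA with a small step size converges to the (penalized) saddle point; on purely bilinear problems, as studied in the paper's experiments, GDA alone can cycle and extragradient-type corrections are needed.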

DOI

10.48550/arXiv.2106.07289

Publication Date

2-1-2022

Keywords

Distributed, Parallel, and Cluster Computing; Machine Learning; Optimization and Control

Comments

Preprint: arXiv
