Document Type

Article

Publication Title

Computational Management Science

Abstract

We consider a decentralized unconstrained convex optimization problem in which the cost function decomposes into a sum of strongly convex and smooth functions, each associated with an individual agent, and the agents interact over a static or time-varying network. Our main concern is the convergence rate of first-order optimization algorithms as a function of the network's graph, more specifically, of the condition numbers of the gossip matrices. We are interested in the case when the network is time-varying but the rate of change is restricted. We study two cases: a randomly changing network satisfying the Markov property and a network changing in a deterministic manner. For the random case, we propose a decentralized optimization algorithm with accelerated consensus. For the deterministic scenario, we show that if the graph changes in a worst-case way, accelerated consensus is not possible even if only two edges are changed at each iteration. The fact that such a low rate of network changes suffices to make accelerated consensus impossible is novel and improves on previous results in the literature.
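
For context, the setting described in the abstract is usually written as follows; this is a minimal sketch of the standard gossip-based formulation, and the notation (f_i, x_i, W^k, step size \gamma) is illustrative rather than taken from the paper:

% Illustrative only: a generic decentralized formulation with a gossip step,
% not the accelerated algorithm proposed in the paper.
\begin{align*}
  &\min_{x \in \mathbb{R}^d} \; f(x) = \sum_{i=1}^{n} f_i(x),
    && \text{each } f_i \text{ strongly convex and smooth, held by agent } i, \\
  &x_i^{k+1} = \sum_{j=1}^{n} W_{ij}^{k}\, x_j^{k} - \gamma \nabla f_i\bigl(x_i^{k}\bigr),
    && \text{gossip (consensus) averaging with the mixing matrix } W^k \text{ of the current graph.}
\end{align*}

In this generic scheme, each agent mixes its iterate with those of its current neighbors and takes a local gradient step; the conditioning of the gossip matrices W^k governs how quickly disagreement between agents decays, which is the graph-dependent quantity the abstract refers to.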

DOI

10.1007/s10287-023-00489-5

Publication Date

6-2024

Keywords

Consensus, Convergence rate, Convex optimization, Decentralized optimization, Time-varying network

Comments

Preprint version from arXiv

Uploaded on June 21, 2024
