Document Type

Conference Proceeding

Publication Title

Proceedings of Machine Learning Research

Abstract

Algorithms for min-max optimization and variational inequalities are often studied under monotonicity assumptions. Motivated by non-monotone machine learning applications, we follow the line of works (Diakonikolas et al., 2021; Lee & Kim, 2021; Pethick et al., 2022; Böhm, 2022) aiming at going beyond monotonicity by considering the weaker negative comonotonicity assumption. In this work, we provide tight complexity analyses for the Proximal Point (PP), Extragradient (EG), and Optimistic Gradient (OG) methods in this setup, closing several questions on their working guarantees beyond monotonicity. In particular, we derive the first non-asymptotic convergence rates for PP under negative comonotonicity and star-negative comonotonicity and show their tightness via constructing worst-case examples; we also relax the assumptions for the last-iterate convergence guarantees for EG and OG and prove the tightness of the existing best-iterate guarantees for EG and OG via constructing counter-examples.
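
For reference, a minimal sketch of the three updates studied, written for an operator F with stepsize gamma > 0; the notation below uses standard textbook forms and is not taken from this record:

\[
\text{PP (implicit):}\quad x_{k+1} = x_k - \gamma F(x_{k+1})
\]
\[
\text{EG:}\quad x_{k+1/2} = x_k - \gamma F(x_k), \qquad x_{k+1} = x_k - \gamma F(x_{k+1/2})
\]
\[
\text{OG:}\quad x_{k+1} = x_k - \gamma F(x_k) - \gamma\bigl(F(x_k) - F(x_{k-1})\bigr)
\]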

First Page

11614

Last Page

11641

Publication Date

7-2023

Keywords

Comonotonicity; Complexity analysis; Extragradient; Gradient methods; Machine learning applications; Min-max optimization; Monotonicity; Optimistic gradient; Proximal point; Variational inequalities

Comments

Open Access version from PMLR

Uploaded on May 29, 2024
