Document Type

Article

Publication Title

arXiv

Abstract

Exploiting higher-order derivatives in convex optimization has been known at least since the 1970s. In each iteration, higher-order (also called tensor) methods minimize a regularized Taylor expansion of the objective function, which leads to faster convergence rates if the corresponding higher-order derivative is Lipschitz-continuous. Recently, a series of lower iteration-complexity bounds for such methods was proved, and a gap between the upper and lower complexity bounds was revealed. Moreover, it was shown that such methods can be implementable, since the appropriately regularized Taylor expansion of a convex function is also convex and can therefore be minimized in polynomial time. Only very recently, an algorithm with the optimal convergence rate 1/k^((3p+1)/2) was proposed for minimizing convex functions with Lipschitz p-th derivative. For convex functions with Lipschitz third derivative, these developments made it possible to propose a second-order method with convergence rate 1/k^5, which is faster than the rate 1/k^3.5 of existing second-order methods. © 2022, CC BY.
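For context, the following is a minimal sketch of the generic p-th order tensor step the abstract refers to, written in standard notation rather than taken verbatim from the paper; the regularization parameter M and the Lipschitz constant L_p of the p-th derivative are labels introduced here for illustration.

% Regularized Taylor (tensor) step, a generic sketch (not the paper's exact scheme).
% Omega_p is the p-th order Taylor expansion of f at x, evaluated at y.
\[
\Omega_p(f, x; y) \;=\; f(x) + \sum_{i=1}^{p} \frac{1}{i!}\, D^i f(x)[y - x]^{i}
\]
% One iteration minimizes the Taylor model plus a (p+1)-st power regularizer;
% M is a regularization constant tied to L_p, the Lipschitz constant of the p-th derivative.
\[
x_{k+1} \in \operatorname*{arg\,min}_{y} \left\{ \Omega_p(f, x_k; y) + \frac{M}{(p+1)!}\, \|y - x_k\|^{p+1} \right\}
\]
% For M chosen sufficiently large relative to L_p, the regularized model of a convex f
% is itself convex, so the subproblem can be solved in polynomial time, as the abstract notes.

The case p = 1 with a squared-norm regularizer recovers a gradient-type step, and p = 2 gives a cubically regularized Newton-type step; the rates quoted in the abstract concern accelerated variants of such methods.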

DOI

10.48550/arXiv.2208.13190

Publication Date

8-28-2022

Keywords

Optimization and Control (math.OC)

Comments

Preprint: arXiv

Archived with thanks to arXiv

Preprint License: CC BY 4.0

Uploaded 22 September 2022
