Stochastic Gradient Descent with Preconditioned Polyak Step-Size
Document Type
Article
Publication Title
Computational Mathematics and Mathematical Physics
Abstract
Stochastic Gradient Descent (SGD) is one of the iterative optimization methods widely used for solving machine learning problems. These methods have valuable properties and attract researchers and industrial machine learning engineers with their simplicity. However, one weakness of methods of this type is the need to tune the learning rate (step-size) for every combination of loss function and dataset in order to solve an optimization problem efficiently within a given time budget. Stochastic Gradient Descent with Polyak Step-size (SPS) offers an update rule that alleviates the need to fine-tune the learning rate of an optimizer. In this paper, we propose an extension of SPS that employs preconditioning techniques, such as Hutchinson's method, Adam, and AdaGrad, to improve its performance on badly scaled and/or ill-conditioned datasets.
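The abstract does not give the update rule itself; the sketch below is only an illustration of the general idea, assuming the preconditioned Polyak step has the form w ← w − γ D⁻¹∇f_i(w) with γ = (f_i(w) − f_i*) / ‖∇f_i(w)‖²_{D⁻¹} and a diagonal preconditioner D (here an AdaGrad-style accumulator). The function and variable names (psps_step, diag_precond) are illustrative, not taken from the paper.

```python
# Minimal sketch (not the authors' code) of SGD with a preconditioned
# Polyak step-size on a stochastic least-squares problem.
import numpy as np

def psps_step(w, loss_i, grad_i, diag_precond, f_star=0.0, eps=1e-8):
    """One preconditioned Polyak step on a sampled loss f_i (assumed form)."""
    g = grad_i(w)
    d_inv = 1.0 / (diag_precond + eps)           # inverse of diagonal preconditioner D
    g_norm_sq = np.dot(g, d_inv * g)             # ||g||^2 in the D^{-1} norm
    gamma = max(loss_i(w) - f_star, 0.0) / (g_norm_sq + eps)  # Polyak step-size
    return w - gamma * d_inv * g

# Toy usage: AdaGrad-style diagonal preconditioner accumulated over iterations.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)
w = np.zeros(5)
G = np.zeros(5)                                   # running sum of squared gradients
for t in range(200):
    i = rng.integers(len(b))
    loss_i = lambda w, i=i: 0.5 * (A[i] @ w - b[i]) ** 2
    grad_i = lambda w, i=i: (A[i] @ w - b[i]) * A[i]
    G += grad_i(w) ** 2
    w = psps_step(w, loss_i, grad_i, diag_precond=np.sqrt(G))
```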
First Page
621
Last Page
634
DOI
10.1134/S0965542524700052
Publication Date
4-1-2024
Keywords
adaptive step-size, machine learning, optimization, Polyak step-size, preconditioning
Recommended Citation
F. Abdukhakimov et al., "Stochastic Gradient Descent with Preconditioned Polyak Step-Size," Computational Mathematics and Mathematical Physics, vol. 64, no. 4, pp. 621–634, Apr. 2024.
The definitive version is available at https://doi.org/10.1134/S0965542524700052