Part of the Computer Sciences Commons
2024
A Damped Newton Method Achieves Global O(1/K²) and Local Quadratic Convergence Rate, Slavomír Hanzely, Dmitry Kamzolov, Dmitry Pasechnyuk, Alexander Gasnikov, Peter Richtárik, Martin Takáč
2022
Exploiting Higher-Order Derivatives in Convex Optimization Methods, Dmitry Kamzolov, Alexander Gasnikov, Pavel Dvurechensky, Artem Agafonov, Martin Takáč
Stochastic Gradient Methods with Preconditioned Updates, Abdurakhmon Sadiev, Aleksandr Beznosikov, Abdulla Jasem Almansoori, Dmitry Kamzolov, Rachael Tappenden, Martin Takáč
Machine Learning Faculty Publications
Recent Theoretical Advances in Non-Convex Optimization, Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev
The Power of First-Order Smooth Optimization for Black-Box Non-Smooth Problems, Alexander V. Gasnikov, Anton Novitskii, Vasilii Novitskii, Farshed Abdukhakimov, Dmitry Kamzolov, Aleksandr Beznosikov, Martin Takáč, Pavel Dvurechensky, Bin Gu
2020