2024
A Damped Newton Method Achieves Global O(1/K²) and Local Quadratic Convergence Rate, Slavomír Hanzely, Dmitry Kamzolov, Dmitry Pasechnyuk, Alexander Gasnikov, Peter Richtárik, Martin Takáč
2022
Exploiting higher-order derivatives in convex optimization methods, Dmitry Kamzolov, Alexander Gasnikov, Pavel Dvurechensky, Artem Agafonov, Martin Takáč
Stochastic Gradient Methods with Preconditioned Updates, Abdurakhmon Sadiev, Aleksandr Beznosikov, Abdulla Jasem Almansoori, Dmitry Kamzolov, Rachael Tappenden, Martin Takáč
2020
Recent Theoretical Advances in Non-Convex Optimization, Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev