Date of Award

4-30-2024

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Machine Learning

Department

Machine Learning

First Advisor

Dr. Bin Gu

Second Advisor

Dr. Martin Takac

Abstract

The primary contribution of this thesis is the analysis of several new extensions of the Iterative Hard-Thresholding (IHT) algorithm. We first focus on a zeroth-order extension of IHT, zeroth-order hard-thresholding (ZOHT): in particular, we analyze the conflict between the error of the zeroth-order gradient estimator and the expansivity of the hard-thresholding operator. We prove global convergence guarantees for this algorithm in the restricted strongly convex (RSC) and restricted smooth (RSS) setting. We then analyze the convergence of variance-reduced variants of ZOHT (again in the RSC and RSS settings), and show how the conditions on the number of random directions are improved. We also propose a variant of the original convergence proof of ZOHT in the non-convex and discontinuous setting, useful for instance in reinforcement learning. Next, we analyze a generalization of IHT that can handle additional convex constraints satisfying mild assumptions, in the zeroth-order and first-order (stochastic and deterministic) settings: in doing so, we also revisit previous proofs of convergence in risk for IHT, providing simpler proofs for existing results, removing the system error present in the first proof of ZOHT, and extending the convergence results of all these algorithms to the case with extra constraints. Finally, we analyze an algorithm for sparse recovery, IRKSN (Iterative Regularization with k-Support Norm), inspired by a dual perspective on IHT, and show that it provides recovery conditions different from those of IHT and of the usual sparse recovery algorithms based on the ℓ1 norm, therefore providing a useful complement to those algorithms for sparse recovery.
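For readers unfamiliar with the base algorithm, the following is a minimal sketch of the classical first-order IHT iteration that the thesis extends; the function names and the least-squares objective are illustrative choices, not taken from the thesis, and the step size rule is one conservative option among many.

```python
import numpy as np

def hard_threshold(x, k):
    """H_k: keep the k largest-magnitude entries of x, zero out the rest."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]  # indices of the k largest |x_i|
    z[idx] = x[idx]
    return z

def iht_least_squares(A, y, k, step=None, n_iters=500):
    """Illustrative IHT for sparse least squares:
    x_{t+1} = H_k(x_t - eta * A^T (A x_t - y))."""
    n = A.shape[1]
    if step is None:
        # conservative step size: 1 / ||A||_2^2 (spectral norm squared)
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)          # gradient of 0.5 * ||Ax - y||^2
        x = hard_threshold(x - step * grad, k)  # gradient step, then projection
    return x
```

The zeroth-order variants studied in the thesis replace the exact gradient `grad` with a finite-difference estimate built from random directions, which is precisely where the estimator error interacts with the expansivity of `hard_threshold`.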

Comments

Thesis submitted to the Deanship of Graduate and Postdoctoral Studies

In partial fulfilment of the requirements for the PhD degree in Machine Learning

Advisors: Dr. Bin Gu, Dr. Martin Takac

Online access available

Copyright by the author
