Limited Memory Online Gradient Descent for Kernelized Pairwise Learning with Dynamic Averaging
Document Type
Conference Proceeding
Publication Title
Proceedings of the AAAI Conference on Artificial Intelligence
Abstract
Pairwise learning, an important domain within machine learning, addresses loss functions defined on pairs of training examples, including those in metric learning and AUC maximization. Because the computational complexity of pairwise losses grows quadratically with the sample size, researchers have turned to online gradient descent (OGD) methods for better scalability. Recently, an OGD algorithm was proposed that computes the gradient from the most recent example together with prior examples, reducing the algorithmic complexity to O(T), where T is the number of received examples. This approach, however, is confined to linear models and assumes that examples arrive independently. We introduce a lightweight OGD algorithm that does not require the independence of examples and generalizes to kernelized pairwise learning. Our algorithm builds the gradient from a random example and a moving average that represents the past data, which yields a sub-linear regret bound with a complexity of O(T). Furthermore, by integrating O(√T log T) random Fourier features, the cost of kernel computations is effectively reduced. Several experiments with real-world datasets show that the proposed technique outperforms kernel and linear algorithms in both offline and online scenarios.
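The abstract describes the update only at a high level. The sketch below is a minimal illustration, not the authors' reference implementation, of how a pairwise online update against one randomly buffered past example and a moving average of past features might look in random Fourier feature space; the AUC-style pairwise hinge loss, all function names, and all hyperparameters are assumptions made here for illustration.

import numpy as np

rng = np.random.default_rng(0)

def rff_map(x, W, b):
    # Random Fourier feature map approximating an RBF (Gaussian) kernel.
    return np.sqrt(2.0 / W.shape[1]) * np.cos(x @ W + b)

def pairwise_ogd_auc(X, y, n_rff=200, gamma=1.0, eta=0.1, buffer_size=1):
    # Online pairwise (AUC-style) hinge-loss updates in RFF space.
    # Labels are assumed to be in {-1, +1}. Each new example is paired with
    # (a) one randomly buffered past example of the opposite class and
    # (b) a moving average of past opposite-class features, so the per-step
    # cost stays independent of t.
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_rff))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_rff)
    w = np.zeros(n_rff)                                # model in RFF space
    avg = {1: np.zeros(n_rff), -1: np.zeros(n_rff)}    # running class means
    cnt = {1: 0, -1: 0}
    buf = {1: [], -1: []}                              # tiny per-class buffers
    for t in range(len(y)):
        z, yt = rff_map(X[t], W, b), int(y[t])
        if cnt[-yt] > 0:
            # Gradient built from one random past example and the moving average.
            z_rand = buf[-yt][rng.integers(len(buf[-yt]))]
            for z_past in (z_rand, avg[-yt]):
                diff = z - z_past
                if 1.0 - yt * (w @ diff) > 0.0:        # active pairwise hinge
                    w += eta * yt * diff               # OGD step
        # Update the limited-memory summaries of the past data.
        cnt[yt] += 1
        avg[yt] += (z - avg[yt]) / cnt[yt]
        if len(buf[yt]) < buffer_size:
            buf[yt].append(z)
        else:
            buf[yt][rng.integers(buffer_size)] = z     # random replacement
    return w, (W, b)

# Example usage on synthetic data (labels in {-1, +1}):
# X = rng.normal(size=(1000, 10))
# y = np.where(X[:, 0] + 0.1 * rng.normal(size=1000) > 0, 1, -1)
# w, feature_map = pairwise_ogd_auc(X, y)
# scores = rff_map(X, *feature_map) @ w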
First Page
10821
Last Page
10828
DOI
10.1609/aaai.v38i10.28955
Publication Date
3-25-2024
Recommended Citation
H. AlQuabeh et al., "Limited Memory Online Gradient Descent for Kernelized Pairwise Learning with Dynamic Averaging," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 10, pp. 10821-10828, Mar 2024.
The definitive version is available at https://doi.org/10.1609/aaai.v38i10.28955