Triply Stochastic Gradient Method For Large-Scale Nonlinear Similar Unlabeled Classification

Document Type

Article

Publication Title

Machine Learning

Abstract

Similar unlabeled (SU) classification is pervasive in many real-world applications, where only similar data pairs (pairs of points known to share the same label) and unlabeled data points are available to train a classifier. Recent work has identified a practical SU formulation and derived the corresponding estimation error bound, evaluating SU learning with linear classifiers on medium-sized datasets. In practice, however, we often need to learn nonlinear classifiers on large-scale datasets for superior predictive performance, and how to do so efficiently remains an open problem for SU classification. In this paper, we propose a scalable kernel learning algorithm for SU classification based on a triply stochastic optimization framework, called TSGSU. Specifically, in each iteration, our method randomly samples an instance from the similar-pairs set, an instance from the unlabeled set, and their random features to compute the stochastic functional gradient used for the model update. Theoretically, we prove that our method converges to a stationary point at the rate of O(1/T) after T iterations. Experiments on various benchmark datasets and high-dimensional datasets not only demonstrate the scalability of TSGSU but also show its efficiency compared with existing SU learning algorithms, while retaining comparable generalization performance.
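The three sources of randomness described in the abstract can be sketched as a simple training loop. This is a minimal illustration, not the authors' TSGSU algorithm: the toy surrogate loss, step-size schedule, data, and all variable names below are invented for demonstration; only the triple sampling pattern (one similar pair, one unlabeled point, and fresh random features each iteration) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative only): similar pairs and unlabeled points.
d = 5                                # input dimension
S = rng.normal(size=(100, 2, d))     # similar pairs: both points share a label
U = rng.normal(size=(200, d))        # unlabeled points

D = 50                               # random Fourier features per draw
gamma = 1.0                          # RBF bandwidth parameter (assumed)
w = rng.normal(scale=0.1, size=2 * D)  # coefficients in random-feature space
eta = 0.1                            # base step size

def random_features(x, W):
    """Random Fourier feature map z(x) approximating an RBF kernel."""
    proj = x @ W.T
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(W.shape[0])

for t in range(1, 501):
    # Triple sampling: a similar pair, an unlabeled point, fresh features.
    xs, xs2 = S[rng.integers(len(S))]
    xu = U[rng.integers(len(U))]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))

    zs, zs2, zu = (random_features(x, W) for x in (xs, xs2, xu))
    fs, fs2, fu = w @ zs, w @ zs2, w @ zu

    # Toy surrogate gradient: pull predictions on a similar pair together,
    # and keep the unlabeled prediction small (ridge-style term). The real
    # paper uses an unbiased SU risk estimator instead.
    grad = (fs - fs2) * (zs - zs2) + 0.1 * fu * zu
    w -= (eta / np.sqrt(t)) * grad
```

Drawing a new random-feature matrix `W` every iteration is what makes the scheme "triply" stochastic: the functional gradient is estimated from random data *and* a random finite-dimensional kernel approximation.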

First Page

2005

Last Page

2033

DOI

10.1007/s10994-021-05980-1

Publication Date

August 1, 2021

Keywords

Kernel method, Large-scale optimization, SU classification, Weakly-supervised learning

Comments

IR deposit conditions:

  • OA (accepted version) - pathway b
  • 12 months embargo
  • Published source must be acknowledged
  • Must link to publisher version with DOI
