Document Type

Conference Proceeding

Publication Title

Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023

Abstract

Recently, several hybrid algorithms combining pointwise and pairwise learning (PPL) have been formulated by employing the composite error metric "pointwise loss + pairwise loss," and they have shown empirical effectiveness on feature selection, ranking, and recommendation tasks. However, to the best of our knowledge, the learning-theory foundations of PPL have not been addressed in existing work. In this paper, we fill this theoretical gap by investigating the generalization properties of PPL. After extending the definitions of algorithmic stability to the PPL setting, we establish high-probability generalization bounds for uniformly stable PPL algorithms. Moreover, we derive explicit convergence rates of stochastic gradient descent (SGD) and regularized risk minimization (RRM) for PPL by extending the stability analysis techniques of pairwise learning. In addition, we obtain refined generalization bounds for PPL by replacing uniform stability with on-average stability.
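
For context, a minimal sketch of the hybrid objective the abstract refers to, stated under assumed notation (the sample z_i, the pointwise and pairwise losses \ell_{pt} and \ell_{pr}, and the trade-off weight \lambda are illustrative placeholders, not necessarily the paper's exact formulation):

\[
\widehat{R}_{\mathrm{PPL}}(f)
\;=\;
\underbrace{\frac{1}{n}\sum_{i=1}^{n} \ell_{\mathrm{pt}}\bigl(f; z_i\bigr)}_{\text{pointwise loss}}
\;+\;
\underbrace{\frac{\lambda}{n(n-1)}\sum_{i \neq j} \ell_{\mathrm{pr}}\bigl(f; z_i, z_j\bigr)}_{\text{pairwise loss}}
\]

An algorithm such as SGD or RRM is then run on this combined empirical risk; the stability analysis must handle both the single-sample and the sample-pair terms, which is why the pairwise stability techniques mentioned in the abstract need to be extended.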

First Page

10113

Last Page

10121

DOI

10.1609/aaai.v37i8.26205

Publication Date

June 26, 2023

Keywords

Artificial intelligence, Gradient methods, Learning systems, Stability, Stochastic systems

Comments

Copyright by AAAI; published using an open-source publishing system

Archived thanks to AAAI

Uploaded 15 Jan 2024
