Cross-modal propagation network for generalized zero-shot learning

Document Type

Article

Publication Title

Pattern Recognition Letters

Abstract

Zero-shot learning (ZSL) aims to recognize unseen classes by transferring semantic knowledge from seen classes to unseen ones. Since only seen classes are available during training, the domain bias problem, i.e., the trained model being biased toward seen classes, is the key challenge for ZSL. To alleviate this bias, generation-based approaches build generative models that synthesize fake visual features of unseen classes from semantic vectors. However, most existing generative methods still suffer from some degree of domain bias caused by the ambiguous generation of fake features. In this paper, we propose a cross-modal propagation network (CMPN), which adopts an episode-based meta-learning strategy. CMPN incorporates adaptive graph construction and label propagation into the generative ZSL framework to guarantee unambiguous and discriminative fake feature generation. By further leveraging the manifold structure of different modalities in the latent space, CMPN can implicitly ensure intra-class compactness and inter-class separation through label propagation classification in the latent space. Extensive experiments on four datasets validate the effectiveness of CMPN under both ZSL and generalized ZSL (GZSL) settings.
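The abstract's "label propagation classification" refers to the standard graph-based technique of spreading labels over a similarity graph. A minimal sketch of closed-form label propagation (in the style of Zhou et al.) is shown below; it is illustrative only, and all parameter names and values (`alpha`, `sigma`, the Gaussian affinity) are assumptions, not details taken from the paper:

```python
import numpy as np

def label_propagation(features, labels, n_classes, alpha=0.99, sigma=1.0):
    """Closed-form label propagation over a similarity graph.
    labels: per-sample class index, or -1 for unlabeled samples.
    Returns the predicted class index for every sample."""
    n = features.shape[0]
    # Gaussian affinity matrix W with a zeroed diagonal
    d2 = np.sum((features[:, None] - features[None, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization: S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One-hot seed matrix Y; unlabeled rows stay all-zero
    Y = np.zeros((n, n_classes))
    for i, y in enumerate(labels):
        if y >= 0:
            Y[i, y] = 1.0
    # Closed-form fixed point of F <- alpha*S*F + (1-alpha)*Y,
    # up to a positive scaling that does not change the argmax
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F.argmax(axis=1)
```

In a graph-based classifier of this kind, labels diffuse along edges of the affinity graph, so tightly clustered points end up sharing a label, which is one way to encourage the intra-class compactness the abstract mentions.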

First Page

125

Last Page

131

DOI

10.1016/j.patrec.2022.05.009

Publication Date

7-1-2022

Keywords

Generative adversarial network, Label propagation, Meta-learning, Zero-shot learning

Comments

IR Deposit conditions:

OA version (pathway b) Accepted version

24 month embargo

License: CC BY-NC-ND

Must link to publisher version with DOI
