Document Type
Article
Publication Title
arXiv
Abstract
Multi-label zero-shot learning (ZSL) is a more realistic counterpart of standard single-label ZSL since several objects can co-exist in a natural image. However, the occurrence of multiple objects complicates the reasoning and requires region-specific processing of visual features to preserve their contextual cues. We note that the best existing multi-label ZSL method takes a shared approach towards attending to region features with a common set of attention maps for all the classes. Such shared maps lead to diffused attention, which does not discriminatively focus on relevant locations when the number of classes is large. Moreover, mapping spatially-pooled visual features to the class semantics leads to inter-class feature entanglement, thus hampering the classification. Here, we propose an alternate approach towards region-based discriminability-preserving multi-label zero-shot classification. Our approach maintains the spatial resolution to preserve region-level characteristics and utilizes a bi-level attention module (BiAM) to enrich the features by incorporating both region and scene context information. The enriched region-level features are then mapped to the class semantics and only their class predictions are spatially pooled to obtain image-level predictions, thereby keeping the multi-class features disentangled. Our approach sets a new state of the art on two large-scale multi-label zero-shot benchmarks: NUS-WIDE and Open Images. On NUS-WIDE, our approach achieves an absolute gain of 6.9% mAP for ZSL, compared to the best published results. Source code is available at https://github.com/akshitac8/BiAM. © 2021, CC BY-NC-SA.
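The abstract's key design choice is to map region-level features to class semantics first and pool only the resulting class predictions, rather than pooling features before classification. The following is a minimal illustrative sketch of that pipeline in PyTorch; the module name, dimensions, the 1x1 projection, and the use of max-pooling over regions are assumptions for illustration, not the authors' exact BiAM implementation (see the linked repository for the real code).

```python
# Hedged sketch: per-region class scoring followed by spatial pooling of
# predictions, as described in the abstract. Names and dimensions are assumed.
import torch
import torch.nn as nn


class RegionZSLHead(nn.Module):
    def __init__(self, feat_dim, sem_dim, class_embeddings):
        super().__init__()
        # Project region features into the class-semantic (e.g., word-vector) space.
        self.to_semantic = nn.Conv2d(feat_dim, sem_dim, kernel_size=1)
        # Fixed class semantic vectors, shape (num_classes, sem_dim); unseen
        # classes can be scored by swapping in their embeddings at test time.
        self.register_buffer("class_embeddings", class_embeddings)

    def forward(self, region_feats):
        # region_feats: (B, feat_dim, H, W), assumed already enriched by an
        # attention module that mixes region and scene context.
        sem = self.to_semantic(region_feats)                # (B, sem_dim, H, W)
        sem = sem.flatten(2).transpose(1, 2)                # (B, H*W, sem_dim)
        # Per-region compatibility scores with every class embedding.
        region_scores = sem @ self.class_embeddings.t()     # (B, H*W, num_classes)
        # Pool *predictions* (not features) over regions, keeping classes disentangled.
        image_scores = region_scores.max(dim=1).values      # (B, num_classes)
        return image_scores


if __name__ == "__main__":
    num_classes, sem_dim, feat_dim = 81, 300, 512  # hypothetical sizes
    head = RegionZSLHead(feat_dim, sem_dim, torch.randn(num_classes, sem_dim))
    print(head(torch.randn(2, feat_dim, 14, 14)).shape)  # torch.Size([2, 81])
```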
DOI
doi.org/10.48550/arXiv.2108.09301
Publication Date
8-20-2021
Keywords
Computer Vision and Pattern Recognition (cs.CV)
Recommended Citation
S. Narayan, A. Gupta, S. Khan, F.S. Khan, L. Shao, and M. Shah, "Discriminative region-based multi-label zero-shot learning", 2021, arXiv:2108.09301
Comments
Preprint: arXiv
Archived with thanks to arXiv
Preprint License: CC BY-NC-SA 4.0
Uploaded 24 March 2022