Polarity Loss: Improving Visual-Semantic Alignment for Zero-Shot Detection
Document Type
Article
Publication Title
IEEE Transactions on Neural Networks and Learning Systems
Abstract
Conventional object detection models require large amounts of training data. In comparison, humans can recognize previously unseen objects by merely knowing their semantic description. To mimic similar behavior, zero-shot object detection (ZSD) aims to recognize and localize “unseen” object instances by using only their semantic information. The model is first trained to learn the relationships between visual and semantic domains for seen objects, later transferring the acquired knowledge to totally unseen objects. This setting gives rise to the need for correct alignment between visual and semantic concepts so that the unseen objects can be identified using only their semantic attributes. In this article, we propose a novel loss function called “polarity loss” that promotes correct visual-semantic alignment for an improved ZSD. On the one hand, it refines the noisy semantic embeddings via metric learning on a “semantic vocabulary” of related concepts to establish a better synergy between visual and semantic domains. On the other hand, it explicitly maximizes the gap between positive and negative predictions to achieve better discrimination between seen, unseen, and background objects. Our approach is inspired by embodiment theories in cognitive science that claim human semantic understanding to be grounded in past experiences (seen objects), related linguistic concepts (word vocabulary), and visual perception (seen/unseen object images). We conduct extensive evaluations on the Microsoft Common Objects in Context (MS-COCO) and Pascal Visual Object Classes (VOC) datasets, showing significant improvements over the state of the art. Our code and evaluation protocols are available at: https://github.com/salman-h-khan/PL-ZSD_Release.
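The abstract's core mechanism, re-weighting a focal-style detection loss so that the gap between the positive-class prediction and the negative-class predictions is explicitly widened, can be illustrated with the minimal sketch below. It assumes a sigmoid (multi-label) classification head of a RetinaNet-style detector; the function name, the sigmoid-of-the-gap weighting, and the parameter values are illustrative assumptions for exposition, not the authors' released implementation (see the linked repository for that).

```python
import torch


def polarity_loss_sketch(logits, targets, beta=1.0, gamma=2.0, eps=1e-8):
    """Illustrative focal-style loss with a 'polarity' re-weighting term.

    logits:  (N, C) raw per-anchor class scores.
    targets: (N, C) one-hot float labels; background anchors are all-zero rows.
    beta:    sharpness of the polarity weighting (assumed hyperparameter).
    gamma:   focal-loss focusing parameter.
    """
    p = torch.sigmoid(logits)

    # Standard focal term: binary cross-entropy modulated by (1 - p_t)^gamma.
    p_t = targets * p + (1.0 - targets) * (1.0 - p)
    focal = -((1.0 - p_t) ** gamma) * torch.log(p_t.clamp_min(eps))

    # Probability assigned to the ground-truth class of each anchor
    # (zero for background anchors).
    p_pos = (p * targets).sum(dim=1, keepdim=True)

    # Gap between every negative-class prediction and the positive prediction;
    # a small or positive gap (negative class scored close to, or above, the
    # ground-truth class) receives a larger weight, pushing the two apart.
    gap = torch.where(targets.bool(), torch.zeros_like(p), p - p_pos)
    polarity_weight = torch.sigmoid(beta * gap)

    return (polarity_weight * focal).sum(dim=1).mean()


# Toy usage: 4 anchors, 80 classes, first anchor matched to class 5.
logits = torch.randn(4, 80)
targets = torch.zeros(4, 80)
targets[0, 5] = 1.0
print(polarity_loss_sketch(logits, targets))
```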
First Page
1
Last Page
13
DOI
10.1109/TNNLS.2022.3184821
Publication Date
6-30-2022
Keywords
Computer vision, Deep neural networks, Function evaluation, Object detection, Object recognition, Zero-shot learning, Loss functions, Noise measurements, Semantic alignments, Visual semantics, Vocabulary, Zero-shot object detection, Semantics
Recommended Citation
S. Rahman, S. Khan and N. Barnes, "Polarity Loss: Improving Visual-Semantic Alignment for Zero-Shot Detection," in IEEE Transactions on Neural Networks and Learning Systems, 2022, doi: 10.1109/TNNLS.2022.3184821.
Comments
IR Deposit conditions:
OA version (pathway a): Accepted version
No embargo
When accepted for publication, set statement to accompany deposit (see policy)
Must link to publisher version with DOI
Publisher copyright and source must be acknowledged