Document Type
Article
Publication Title
arXiv
Abstract
Previous work on novel object detection considers zero-shot or few-shot settings, where no or only a few examples of each category are available for training. In real-world scenarios, it is unrealistic to expect that all the novel classes are either entirely unseen or have only a few examples. Here, we propose a more realistic setting termed ‘Any-shot detection’, where totally unseen and few-shot categories can co-occur during inference. Any-shot detection poses unique challenges compared to conventional novel object detection, such as a high imbalance between unseen, few-shot and seen object classes, susceptibility to forgetting base training while learning novel classes, and the need to distinguish novel classes from the background. To address these challenges, we propose a unified any-shot detection model that can concurrently learn to detect both zero-shot and few-shot object classes. Our core idea is to use class semantics as prototypes for object detection, a formulation that naturally minimizes knowledge forgetting and mitigates the class imbalance in the label space. In addition, we propose a rebalanced loss function that emphasizes difficult few-shot cases but avoids overfitting on the novel classes, so that totally unseen classes can still be detected. Without bells and whistles, our framework can also be used solely for zero-shot object detection and few-shot object detection tasks. We report extensive experiments on the Pascal VOC and MS-COCO datasets, where our approach provides significant improvements.
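To make the core idea from the abstract concrete, the sketch below shows one plausible way a detector head could use fixed class-semantic word vectors as classification prototypes, so that seen, few-shot and unseen classes share a single label space. This is a minimal illustration under assumed details (a linear projection into the semantic space and cosine similarity against frozen word vectors); the class name `SemanticPrototypeHead` and all dimensions are hypothetical, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): region features are projected into the
# class-semantic space and scored by similarity to fixed word-vector prototypes,
# so adding novel classes only adds prototypes and does not overwrite base weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticPrototypeHead(nn.Module):
    """Classifies region features by similarity to frozen class word vectors."""

    def __init__(self, feat_dim: int, class_embeddings: torch.Tensor, scale: float = 10.0):
        super().__init__()
        # class_embeddings: (num_classes, sem_dim) word vectors, kept frozen.
        self.register_buffer("prototypes", F.normalize(class_embeddings, dim=1))
        self.project = nn.Linear(feat_dim, class_embeddings.size(1))
        self.scale = scale

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        # region_feats: (num_regions, feat_dim) pooled detector features.
        sem = F.normalize(self.project(region_feats), dim=1)
        # Cosine similarity to every class prototype -> (num_regions, num_classes) logits.
        return self.scale * sem @ self.prototypes.t()


if __name__ == "__main__":
    # Toy usage: 300-d word vectors for 20 classes, 1024-d region features.
    embeds = torch.randn(20, 300)
    head = SemanticPrototypeHead(feat_dim=1024, class_embeddings=embeds)
    logits = head(torch.randn(5, 1024))
    print(logits.shape)  # torch.Size([5, 20])
```

Because the prototypes are fixed semantic vectors rather than learned per-class weights, a rebalanced loss (as described in the abstract) can up-weight difficult few-shot classes during fine-tuning without discarding the geometry that supports unseen classes.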
DOI
https://doi.org/10.48550/arXiv.2003.07003
Publication Date
3-16-2020
Keywords
object detection, novel object detection, zero-shot detection, few-shot detection, Pascal VOC dataset, MS-COCO dataset
Recommended Citation
S. Rahman, S. Khan, N. Barnes and F. Khan, "Any-shot object detection", 2020, arXiv:2003.07003v1
Additional Links
Published version on Springer
Comments
Preprint: arXiv