SSAL: Synergizing between Self-Training and Adversarial Learning for Domain Adaptive Object Detection
Advances in Neural Information Processing Systems
We study adapting trained object detectors to unseen domains that manifest significant variations in object appearance, viewpoints, and backgrounds. Most current methods align domains using either image-level or instance-level adversarial feature alignment. This often suffers from the presence of unwanted background regions and therefore lacks class-specific alignment. A common remedy for promoting class-level alignment is to use high-confidence predictions on the unlabelled target domain as pseudo-labels. However, these high-confidence predictions are often erroneous, since the model is poorly calibrated under domain shift. In this paper, we propose leveraging the model's predictive uncertainty to strike the right balance between adversarial feature alignment and class-level alignment. Specifically, we measure the predictive uncertainty of both class assignments and bounding-box predictions. Predictions with low uncertainty are used to generate pseudo-labels for self-training, whereas those with higher uncertainty are used to generate tiles for an adversarial feature alignment stage. This synergy between tiling around uncertain object regions and generating pseudo-labels from highly certain object regions allows us to capture both image-level and instance-level context during adaptation. We perform extensive experiments covering various domain-shift scenarios; our approach improves upon existing state-of-the-art methods by visible margins.
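To make the uncertainty-guided split described in the abstract concrete, the following PyTorch sketch shows one way detections could be partitioned into a confident set (used as pseudo-labels for self-training) and an uncertain set (used to cut tiles for adversarial alignment). The entropy and box-variance measures, the thresholds, and all function names here are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of an uncertainty-based split between self-training and
# adversarial alignment. Thresholds and uncertainty proxies are assumptions.
import torch

def classification_entropy(probs: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of per-detection class posteriors, shape (N, C) -> (N,)."""
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

def split_by_uncertainty(probs: torch.Tensor,
                         box_vars: torch.Tensor,
                         ent_thresh: float = 0.2,
                         var_thresh: float = 0.05):
    """Split N detections into low-uncertainty ones (pseudo-labels) and
    high-uncertainty ones (regions to tile for adversarial alignment).

    probs:    (N, C) class posteriors for N detections.
    box_vars: (N,) scalar localization uncertainty per box, e.g. the variance
              of box predictions across augmented forward passes (assumed proxy).
    """
    ent = classification_entropy(probs)
    certain = (ent < ent_thresh) & (box_vars < var_thresh)
    pseudo_label_idx = certain.nonzero(as_tuple=True)[0]   # self-training
    tile_idx = (~certain).nonzero(as_tuple=True)[0]        # adversarial alignment
    return pseudo_label_idx, tile_idx

# Usage with dummy predictions: 5 detections over 3 classes.
probs = torch.softmax(torch.randn(5, 3), dim=-1)
box_vars = torch.rand(5) * 0.1
pl_idx, tile_idx = split_by_uncertainty(probs, box_vars)
print("pseudo-labels from detections:", pl_idx.tolist())
print("adversarial tiles from detections:", tile_idx.tolist())
```

Gating on both classification entropy and a box-level uncertainty proxy reflects the abstract's point that uncertainty is measured on class assignments and bounding-box predictions jointly; a detection must be confident in both to become a pseudo-label.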
Object detection, Object recognition
M. A. Munir, M. H. Khan, and M. S. Sarfraz, "SSAL: Synergizing between Self-Training and Adversarial Learning for Domain Adaptive Object Detection," in Advances in Neural Information Processing Systems, vol. 34 (NeurIPS 2021), Dec. 2021, pp. 22770-22782. Available: https://proceedings.neurips.cc/paper/2021/file/c0cccc24dd23ded67404f5e511c342b0-Paper.pdf