Semi-Supervised Domain Generalization in Visual Recognition
Deep learning (DL) models are highly effective at specific tasks such as image classification, object detection, and image segmentation on a given dataset (domain), where the distributions of the training and testing data are the same. However, these models often struggle to generalize to new (unseen) domains, where this assumption is violated. Domain generalization (DG) is a practical setting that aims to improve a model's ability to generalize across multiple domains without leveraging any data from the unseen domain during training. Formally, DG aims to enhance the generalization ability of deep learning models trained on one or more source domains, so that they perform well on unseen target domains. In the recent past, we have witnessed several promising DG approaches that perform well on challenging benchmarks. However, almost all existing DG research tackles the supervised setting, i.e., the source-domain data is fully labeled. In real-world scenarios, this is a restrictive requirement, because acquiring large amounts of labeled data is costly. Moreover, unlabeled data is abundant and should be leveraged to improve DG performance. To this end, we study the problem of Semi-Supervised Domain Generalization (SSDG). In SSDG, only a small amount of labeled source data is available, and the goal is to leverage a relatively large amount of unlabeled source data to learn a generalizable model that performs well on unseen target domains. Towards studying the SSDG problem, we make the following core contributions in this thesis. First, we investigate the performance of a confidence-based pseudo-labeling (PL) baseline (CPL). Second, we develop a new uncertainty-guided PL approach for SSDG, termed UPL, that leverages the predictive uncertainty of the model to carefully select informative pseudo-labels while avoiding noisy ones.
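The uncertainty-guided selection idea can be sketched as follows. This is a minimal illustration, not the thesis's exact formulation: the confidence/uncertainty thresholds, the use of predictive entropy as the uncertainty proxy, and the multiple-stochastic-forward-pass setup (e.g. MC-dropout) are all illustrative assumptions.

```python
import numpy as np

def select_pseudo_labels(probs, conf_thresh=0.95, unc_thresh=0.5):
    """Confidence- and uncertainty-gated pseudo-label selection (sketch).

    probs: (T, N, C) array of softmax outputs from T stochastic forward
    passes (e.g. MC-dropout) over N unlabeled samples with C classes.
    Returns candidate pseudo-labels and a boolean mask of accepted ones.
    """
    mean_probs = probs.mean(axis=0)          # (N, C): averaged prediction
    confidence = mean_probs.max(axis=1)      # peak softmax probability
    pseudo = mean_probs.argmax(axis=1)       # candidate pseudo-labels
    # Predictive entropy of the averaged prediction as an uncertainty proxy
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    # Keep only confident AND low-uncertainty pseudo-labels
    mask = (confidence >= conf_thresh) & (entropy <= unc_thresh)
    return pseudo, mask
```

A plain confidence-based baseline (CPL) corresponds to dropping the entropy gate and keeping only the `confidence >= conf_thresh` test; the extra uncertainty gate is what filters out samples the model is confidently wrong about.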
Third, we leverage a simple yet effective model averaging (MA) technique at inference time to improve SSDG performance. Finally, we integrate the uncertainty-guided PL approach and the model averaging technique into a new SSDG method, dubbed UPLM, which outperforms many existing baselines and established methods. On the PACS dataset, UPLM achieves an accuracy of 78.07%, outperforming the baseline model (73.51%). On OfficeHome, it achieves 50.61%, higher than the baseline (48.38%). On VLCS, it achieves 62.72%, significantly higher than the baseline (43.32%). We hope that our contributions will stimulate further semi-supervised domain generalization research in the community.
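One common form of model averaging is to uniformly average the parameters of several training snapshots (in the spirit of stochastic weight averaging) and run inference with the averaged weights. The sketch below assumes checkpoints are plain dictionaries mapping parameter names to arrays; the thesis's exact averaging scheme may differ.

```python
import numpy as np

def average_checkpoints(state_dicts):
    """Uniformly average parameter dictionaries from several training
    snapshots; the averaged weights are then used at inference time.

    state_dicts: list of dicts mapping parameter name -> ndarray,
    all with identical keys and shapes.
    """
    keys = state_dicts[0].keys()
    return {k: np.mean([sd[k] for sd in state_dicts], axis=0)
            for k in keys}
```

Averaging weights (rather than ensembling predictions) keeps inference cost identical to a single model, which is why it is attractive as a test-time technique.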
A. Khan, "Semi-Supervised Domain Generalization in Visual Recognition", M.S. Thesis, Computer Vision, MBZUAI, Abu Dhabi, UAE, 2023.