Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data
The data-driven nature of deep learning (DL) models for semantic segmentation requires large numbers of pixel-level annotations. However, large-scale, fully labeled medical datasets are often unavailable for practical tasks. Recently, partially supervised methods have been proposed to utilize images with incomplete labels in the medical domain. To bridge the methodological gaps in partially supervised learning (PSL) under data scarcity, we propose Vicinal Labels Under Uncertainty (VLUU), a simple yet efficient framework that exploits the similarity between human anatomical structures for partially supervised medical image segmentation. Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised one by generating vicinal labels. We systematically evaluate VLUU under the challenges of small-scale data, dataset shift, and class imbalance on two commonly used segmentation datasets for the tasks of chest organ segmentation and optic disc-and-cup segmentation. The experimental results show that VLUU consistently outperforms previous partially supervised models in these settings. Our research suggests a new direction in label-efficient deep learning with partial supervision.
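To make the vicinal-label idea concrete, here is a minimal illustrative sketch of how vicinal risk minimization can turn two partially labeled samples into one fully labeled vicinal sample, in the mixup style. All names (`make_vicinal_label`, the blending scheme, the beta-distribution parameter) are assumptions for illustration, not the authors' exact VLUU procedure:

```python
import numpy as np

def make_vicinal_label(img_a, mask_a, img_b, mask_b, alpha=0.4, rng=None):
    """Blend two partially labeled samples into one vicinal sample.

    Hypothetical sketch: mask_a annotates only structure A and mask_b
    only structure B (one-hot masks with disjoint foreground channels).
    A convex combination of the two images and their labels yields a
    'vicinal' training pair whose soft label covers both structures,
    so a standard fully supervised loss can be applied to it.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)              # mixing coefficient, as in mixup
    img = lam * img_a + (1.0 - lam) * img_b   # blended input image
    label = lam * mask_a + (1.0 - lam) * mask_b  # soft multi-structure label
    return img, label
```

Because the blended label is a convex combination of one-hot masks, its channels still sum to one at every pixel, so it can be consumed directly by a cross-entropy-style segmentation loss.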
Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Machine Learning (cs.LG)
N. Dong, M. Kampffmeyer, X. Liang, M. Xu, I. Voiculescu, and E. P. Xing, "Towards robust partially supervised multi-structure medical image segmentation on small-scale data," 2020, arXiv:2011.14164