Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data

Document Type

Article

Publication Title

arXiv

Abstract

The data-driven nature of deep learning (DL) models for semantic segmentation requires a large number of pixel-level annotations. However, large-scale, fully labeled medical datasets are often unavailable for practical tasks. Recently, partially supervised methods have been proposed to exploit images with incomplete labels in the medical domain. To bridge the methodological gaps in partially supervised learning (PSL) under data scarcity, we propose Vicinal Labels Under Uncertainty (VLUU), a simple yet efficient framework that exploits the structural similarity of human anatomy for partially supervised medical image segmentation. Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised one by generating vicinal labels. We systematically evaluate VLUU under the challenges of small-scale data, dataset shift, and class imbalance on two commonly used segmentation datasets, covering chest organ segmentation and optic disc-and-cup segmentation. The experimental results show that VLUU consistently outperforms previous partially supervised models in these settings. Our research suggests a new direction for label-efficient deep learning with partial supervision.
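The abstract does not specify how vicinal labels are constructed. As a rough, hypothetical illustration of the general mixup-style idea behind vicinal risk minimization (not the paper's actual VLUU procedure), the sketch below blends two partially labeled samples into a soft "vicinal" training pair, assuming one-hot label maps in which unannotated classes are left as all-zero channels.

```python
import numpy as np

def mixup_vicinal_pair(img_a, mask_a, img_b, mask_b, alpha=0.4, rng=None):
    """Blend two partially labeled samples into one vicinal training sample.

    mask_a / mask_b: one-hot label maps of shape (C, H, W); channels that are
    unannotated in a given image are assumed to be all-zero. The blended soft
    label keeps whatever supervision each source image provides. This is only
    a mixup-style illustration of vicinal risk minimization, not VLUU itself.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                  # mixing coefficient
    img_v = lam * img_a + (1.0 - lam) * img_b     # vicinal image
    mask_v = lam * mask_a + (1.0 - lam) * mask_b  # soft vicinal label
    return img_v, mask_v, lam
```

Under these assumptions, a sample labeled only for one structure and a sample labeled only for another can jointly supervise all structures in the blended pair, which is one way a partially supervised problem can be recast as a fully supervised one.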

Publication Date

1-1-2021

Keywords

Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Machine Learning (cs.LG)

Comments

Preprint: arXiv
