3D Semantic Segmentation in the Wild: Learning Generalized Models for Adverse-Condition Point Clouds

Document Type

Conference Proceeding

Publication Title

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

Abstract

Robust point cloud parsing under all-weather conditions is crucial to level-5 autonomy in autonomous driving. However, learning a universal 3D semantic segmentation (3DSS) model has been largely neglected, as most existing benchmarks are dominated by point clouds captured under normal weather. We introduce SemanticSTF, an adverse-weather point cloud dataset that provides dense point-level annotations and enables the study of 3DSS under various adverse weather conditions. We study all-weather 3DSS modeling under two setups: 1) domain adaptive 3DSS, which adapts from normal-weather data to adverse-weather data; 2) domain generalizable 3DSS, which learns all-weather 3DSS models from normal-weather data. Our studies reveal the challenges that existing 3DSS methods encounter on adverse-weather data, showing the great value of SemanticSTF in steering future endeavors along this very meaningful research direction. In addition, we design a domain randomization technique that alternately randomizes the geometry styles of point clouds and aggregates their embeddings, ultimately leading to a generalizable model that effectively improves 3DSS under various adverse weather conditions. SemanticSTF and the related code are available at https://github.com/xiaoaoran/SemanticSTF.
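The abstract's notion of randomizing the geometry styles of point clouds can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the paper's actual method: the function name, jitter/dropout parameters, and choice of perturbations (Gaussian jitter plus random point dropout, loosely mimicking sensor noise and sparsity under fog or snow) are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_geometry(points, jitter_std=0.02, drop_ratio=0.1):
    """Toy geometry-style randomization for an (N, 3) point cloud.

    Hypothetical sketch: randomly drops a fraction of points to mimic
    sparsity under adverse weather, then jitters the survivors with
    Gaussian noise to mimic sensor perturbation. The real SemanticSTF
    technique is described in the paper itself.
    """
    n = points.shape[0]
    # Keep each point with probability (1 - drop_ratio).
    keep_mask = rng.random(n) > drop_ratio
    kept = points[keep_mask]
    # Perturb coordinates of the surviving points.
    return kept + rng.normal(0.0, jitter_std, size=kept.shape)

# Usage: randomize a synthetic cloud of 1000 points.
cloud = rng.random((1000, 3))
augmented = randomize_geometry(cloud)
```

In a training loop, such randomized views would each be embedded by the segmentation backbone and their embeddings aggregated, per the abstract's description.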

First Page

9382

Last Page

9392

DOI

10.1109/CVPR52729.2023.00905

Publication Date

1-1-2023

Keywords

3D from multi-view and sensors
