TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation
Document Type
Conference Proceeding
Publication Title
Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
Abstract
Scene understanding plays an essential role in enabling autonomous driving and maintaining high standards of performance and safety. Cameras and laser scanners (LiDARs) have been the most commonly used sensors for this task, with radar being less popular despite being a low-cost, information-dense, and fast-sensing modality that is robust to adverse weather conditions. Although several prior works have addressed radar-based scene semantic segmentation, the nature of radar data still poses a challenge due to its inherent noise and sparsity, as well as its disproportionate foreground-to-background ratio. In this work, we propose a novel approach to the semantic segmentation of radar scenes using a multi-input fusion of radar data through a novel architecture and loss functions tailored to tackle the drawbacks of radar perception. Our architecture includes an efficient attention block that adaptively captures important feature information. Our method, TransRadar, outperforms state-of-the-art methods on the CARRADA [26] and RADIal [28] datasets while having a smaller model size. https://github.com/YahiDar/TransRadar
First Page
352
Last Page
361
DOI
10.1109/WACV57701.2024.00042
Publication Date
1-1-2024
Keywords
Algorithms, Image recognition and understanding
Recommended Citation
Y. Dalbah et al., "TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation," Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024, pp. 352-361, Jan. 2024.
The definitive version is available at https://doi.org/10.1109/WACV57701.2024.00042