Feature Shrinkage Pyramid for Camouflaged Object Detection with Transformers
Document Type
Conference Proceeding
Publication Title
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Abstract
Vision transformers have recently shown strong global context modeling capabilities in camouflaged object detection. However, they suffer from two major limitations: less effective locality modeling and insufficient feature aggregation in decoders, which are not conducive to camouflaged object detection that explores subtle cues from indistinguishable backgrounds. To address these issues, in this paper, we propose a novel transformer-based Feature Shrinkage Pyramid Network (FSPNet), which aims to hierarchically decode locality-enhanced neighboring transformer features through progressive shrinking for camouflaged object detection. Specifically, we propose a non-local token enhancement module (NL-TEM) that employs the non-local mechanism to model interactions between neighboring tokens and explores graph-based high-order relations within tokens to enhance local representations of transformers. Moreover, we design a feature shrinkage decoder (FSD) with adjacent interaction modules (AIM), which progressively aggregates adjacent transformer features through a layer-by-layer shrinkage pyramid to accumulate imperceptible but effective cues as much as possible for object information decoding. Extensive quantitative and qualitative experiments demonstrate that the proposed model significantly outperforms 24 existing competitors on three challenging COD benchmark datasets under six widely-used evaluation metrics. Our code is publicly available at https://github.com/ZhouHuang23/FSPNet.
First Page
5557
Last Page
5566
DOI
10.1109/CVPR52729.2023.00538
Publication Date
1-1-2023
Keywords
grouping and shape analysis, segmentation
Recommended Citation
Z. Huang et al., "Feature Shrinkage Pyramid for Camouflaged Object Detection with Transformers," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2023-June, pp. 5557-5566, Jan 2023.
The definitive version is available at https://doi.org/10.1109/CVPR52729.2023.00538