Learning Complementary Spatial–Temporal Transformer for Video Salient Object Detection
Document Type
Article
Publication Title
IEEE Transactions on Neural Networks and Learning Systems
Abstract
Beyond combining appearance and motion information, another crucial factor for video salient object detection (VSOD) is mining spatial–temporal (ST) knowledge, including complementary long–short temporal cues and global–local spatial context from neighboring frames. However, existing methods have explored only part of these cues and ignored their complementarity. In this article, we propose a novel complementary ST transformer (CoSTFormer) for VSOD, which has a short-global branch and a long-local branch to aggregate complementary ST contexts. The former integrates the global context from the two neighboring frames using dense pairwise attention, while the latter is designed to fuse long-term temporal information from more consecutive frames with local attention windows. In this way, we decompose the ST context into a short-global part and a long-local part and leverage the powerful transformer to model the context relationship and learn their complementarity. To resolve the contradiction between local window attention and object motion, we propose a novel flow-guided window attention (FGWA) mechanism that aligns the attention windows with object and camera movements. Furthermore, we deploy CoSTFormer on fused appearance and motion features, thus enabling the effective combination of all three VSOD factors. In addition, we present a pseudo video generation method to synthesize sufficient video clips from static images for training ST saliency models. Extensive experiments have verified the effectiveness of our method and show that we achieve new state-of-the-art results on several benchmark datasets.
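The two-branch decomposition described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: all function names are hypothetical, and it omits multi-head projections, positional encodings, the flow-guided window alignment, and learned parameters. It only shows the attention patterns of the two branches: the short-global branch attends densely over all tokens of two neighboring frames, while the long-local branch restricts each query window to the same spatial window across a longer run of frames.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dense_attention(q, k, v):
    # standard scaled dot-product attention; q: (Nq, d), k/v: (Nk, d)
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d), axis=-1) @ v

def short_global_branch(frames):
    # frames: (2, N, d) tokens of the reference frame and one neighbor;
    # every reference-frame token attends to all tokens of both frames
    # (global spatial context, short temporal range).
    q = frames[0]
    kv = frames.reshape(-1, frames.shape[-1])
    return dense_attention(q, kv, kv)

def long_local_branch(frames, win):
    # frames: (T, N, d) tokens of T consecutive frames; each window of
    # `win` reference-frame tokens attends only to the corresponding
    # window in all T frames (local spatial, long temporal range).
    T, N, d = frames.shape
    out = np.empty((N, d))
    for s in range(0, N, win):
        q = frames[0, s:s + win]                  # query window, reference frame
        kv = frames[:, s:s + win].reshape(-1, d)  # same window across all frames
        out[s:s + win] = dense_attention(q, kv, kv)
    return out
```

In the paper, the FGWA mechanism would additionally shift each key/value window according to optical flow before attention, so that a moving object stays inside its window; here the windows are static for brevity.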
First Page
1
Last Page
11
DOI
10.1109/TNNLS.2023.3243246
Publication Date
2-16-2023
Keywords
Aggregates, Attention models, Computational modeling, Context modeling, Feature extraction, Object detection, optical flow, saliency detection, transformer, video salient object detection (VSOD)
Recommended Citation
N. Liu, K. Nan, W. Zhao, X. Yao and J. Han, "Learning Complementary Spatial–Temporal Transformer for Video Salient Object Detection," in IEEE Transactions on Neural Networks and Learning Systems, pp. 1-11, February 2023, doi: 10.1109/TNNLS.2023.3243246.
Comments
IR Deposit conditions:
OA version (pathway a) Accepted version
No embargo
When accepted for publication, set statement to accompany deposit (see policy)
Must link to publisher version with DOI
Publisher copyright and source must be acknowledged