Document Type
Article
Publication Title
arXiv
Abstract
We propose a novel few-shot action recognition framework, STRM, which enhances class-specific feature discriminability while simultaneously learning higher-order temporal representations. The focus of our approach is a novel spatio-temporal enrichment module that aggregates spatial and temporal contexts with dedicated local patch-level and global frame-level feature enrichment sub-modules. Local patch-level enrichment captures the appearance-based characteristics of actions. On the other hand, global frame-level enrichment explicitly encodes the broad temporal context, thereby capturing the relevant object features over time. The resulting spatio-temporally enriched representations are then utilized to learn the relational matching between query and support action sub-sequences. We further introduce a query-class similarity classifier on the patch-level enriched features to enhance class-specific feature discriminability by reinforcing the feature learning at different stages in the proposed framework. Experiments are performed on four few-shot action recognition benchmarks: Kinetics, SSv2, HMDB51 and UCF101. Our extensive ablation study reveals the benefits of the proposed contributions. Furthermore, our approach sets a new state-of-the-art on all four benchmarks. On the challenging SSv2 benchmark, our approach achieves an absolute gain of 3.5% in classification accuracy, as compared to the best existing method in the literature. Our code and models will be publicly released. © 2021, CC BY-NC-ND.
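The abstract describes a two-stage enrichment design: local patch-level enrichment within each frame, followed by global frame-level enrichment across time. Below is a minimal, hypothetical PyTorch sketch of that idea. The class name SpatioTemporalEnrichment, the tensor shapes, and the use of self-attention for both sub-modules are assumptions made for illustration; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the two enrichment sub-modules described in the
# abstract; module names and shapes are assumptions, not the authors' code.
class SpatioTemporalEnrichment(nn.Module):
    """Enriches video features locally (per-frame patches) and globally (across frames)."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        # Patch-level enrichment: self-attention over patches within a frame
        # (appearance-based characteristics).
        self.patch_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Frame-level enrichment: self-attention over frames
        # (broad temporal context).
        self.frame_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        # x: (batch, frames, patches, dim)
        b, t, p, d = x.shape
        # Local: attend across patches independently within each frame.
        local = x.reshape(b * t, p, d)
        local, _ = self.patch_attn(local, local, local)
        local = local.reshape(b, t, p, d)
        # Global: pool patches per frame, then attend across frames.
        frames = local.mean(dim=2)                      # (b, t, d)
        frames, _ = self.frame_attn(frames, frames, frames)
        # Broadcast the temporal context back onto the patch features.
        return local + frames.unsqueeze(2)              # (b, t, p, d)


# Usage example on random features: 2 videos, 8 frames, 16 patches, 256-dim.
feats = torch.randn(2, 8, 16, 256)
module = SpatioTemporalEnrichment(dim=256)
print(module(feats).shape)  # torch.Size([2, 8, 16, 256])
```

The enriched output would then feed the relational matching between query and support sub-sequences mentioned in the abstract; that stage is omitted here.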
DOI
https://doi.org/10.48550/arXiv.2112.05132
Publication Date
December 9, 2021
Keywords
Action recognition; Class specific features; Discriminability; Enrichment modules; High-order; Higher-order; Relation models; Spatio-temporal; Spatio-temporal relations; Temporal representations; Computer Vision and Pattern Recognition (cs.CV)
Recommended Citation
A. Thatipelli, S. Narayan, S.H. Khan, R.M. Anwer, F.S. Khan, and B. Ghanem, "Spatio-temporal relation modeling for few-shot action recognition", 2021. arXiv:2112.05132
Comments
Preprint: arXiv
Archived with thanks to arXiv
Preprint License: CC BY-NC-ND 4.0
Uploaded 25 March 2022