Document Type
Article
Publication Title
arXiv
Abstract
Person search is a challenging problem with various real-world applications that aims at joint person detection and re-identification of a query person from uncropped gallery images. Although previous studies focus on rich feature information learning, it remains hard to retrieve the query person due to appearance deformations and background distractors. In this paper, we propose a novel attention-aware relation mixer (ARM) module for person search, which exploits the global relations between different local regions within the RoI of a person and makes it robust against various appearance deformations and occlusions. The proposed ARM is composed of a relation mixer block and a spatio-channel attention layer. The relation mixer block introduces spatially attended spatial mixing and channel-wise attended channel mixing for effectively capturing discriminative relation features within an RoI. These discriminative relation features are further enriched by introducing a spatio-channel attention, where foreground and background discriminability is empowered in a joint spatio-channel space. Our ARM module is generic and does not rely on fine-grained supervision or topological assumptions, and hence can be easily integrated into any Faster R-CNN based person search method. Comprehensive experiments are performed on two challenging benchmark datasets: CUHK-SYSU and PRW. Our PS-ARM achieves state-of-the-art performance on both datasets. On the challenging PRW dataset, our PS-ARM achieves an absolute gain of 5% in the mAP score over SeqNet, while operating at a comparable speed. The source code and pre-trained models are available at (this https URL). © 2022, CC BY.
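The mixing scheme described in the abstract can be illustrated with a toy sketch. This is not the authors' implementation: the function name `relation_mixer`, the weight shapes, and the attention choices (softmax over tokens, sigmoid gates) are illustrative assumptions for a mixer-style block operating on flattened RoI feature tokens.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relation_mixer(x, w_spatial, w_channel):
    """Toy mixer block (hypothetical, not the paper's code).

    x: RoI feature tokens, shape (num_tokens, channels).
    w_spatial: (num_tokens, num_tokens) token-mixing weights.
    w_channel: (channels, channels) channel-mixing weights.
    """
    # Spatially attended spatial mixing: weight each token,
    # then mix information across the token (spatial) dimension.
    spatial_attn = softmax(x.mean(axis=1))            # (num_tokens,)
    x = x + w_spatial @ (x * spatial_attn[:, None])
    # Channel-wise attended channel mixing: gate channels,
    # then mix information across the channel dimension.
    channel_attn = sigmoid(x.mean(axis=0))            # (channels,)
    x = x + (x * channel_attn[None, :]) @ w_channel
    # Joint spatio-channel attention gate over the full feature map.
    return x * sigmoid(x)

rng = np.random.default_rng(0)
tokens, channels = 49, 8   # e.g. a 7x7 RoI grid with 8 channels (toy sizes)
x = rng.standard_normal((tokens, channels))
out = relation_mixer(x,
                     rng.standard_normal((tokens, tokens)) * 0.1,
                     rng.standard_normal((channels, channels)) * 0.1)
print(out.shape)
```

The point of the sketch is only the factorization: one residual step mixes across spatial tokens under a spatial attention weighting, the next mixes across channels under a channel gate, and a final joint gate modulates the combined spatio-channel map.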
DOI
10.48550/arXiv.2210.03433
Publication Date
October 7, 2022
Keywords
Channel attention, Person search, Spatial attention, Transformer
Recommended Citation
M. Fiaz, H. Cholakkal, S. Narayan, R. M. Anwer, and F. S. Khan, "PS-ARM: An End-to-End Attention-aware Relation Mixer Network for Person Search," 2022, doi: 10.48550/arXiv.2210.03433.
Comments
Preprint: arXiv
Archived with thanks to arXiv
Preprint License: CC BY 4.0
Uploaded 31 October 2022