Context-Aware Block Net for Small Object Detection
Document Type
Article
Publication Title
IEEE Transactions on Cybernetics
Abstract
State-of-the-art object detectors usually downsample the input image progressively until it is represented by small feature maps, which loses spatial information and compromises the representation of small objects. In this article, we propose a context-aware block net (CAB Net) to improve small object detection by building high-resolution feature maps with strong semantics. To internally enhance the representation capacity of feature maps with high spatial resolution, we carefully design the context-aware block (CAB). CAB exploits pyramidal dilated convolutions to incorporate multilevel contextual information without reducing the original resolution of the feature maps. We then attach CAB to the end of a truncated backbone network (e.g., VGG16) with a relatively small downsampling factor (e.g., 8) and discard all subsequent layers. CAB Net can capture both basic visual patterns and the semantic information of small objects, thus improving the performance of small object detection. Experiments conducted on the benchmark Tsinghua-Tencent 100K and Airport datasets show that CAB Net outperforms other top-performing detectors by a large margin while maintaining real-time speed, demonstrating the effectiveness of CAB Net for small object detection.
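The record does not include code, so the following PyTorch sketch is only an illustration of the design the abstract describes: parallel dilated 3x3 convolutions that enlarge the receptive field without changing spatial resolution, appended to a VGG16 backbone truncated at an overall stride of 8. The class names, the dilation rates (1, 2, 4, 6), the branch width, the residual fusion, and the exact truncation point are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of a context-aware block on a
# stride-8 VGG16 trunk. All hyperparameters below are assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class ContextAwareBlock(nn.Module):
    """Parallel dilated convolutions that add multilevel context while
    preserving the spatial resolution of the input feature map."""

    def __init__(self, in_channels=512, branch_channels=128, dilations=(1, 2, 4, 6)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                # padding == dilation keeps the output the same size as the input
                nn.Conv2d(in_channels, branch_channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(branch_channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # 1x1 convolution fuses the concatenated multilevel context
        self.fuse = nn.Conv2d(branch_channels * len(dilations), in_channels, kernel_size=1)

    def forward(self, x):
        context = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(context) + x  # residual connection (assumption)


class CABNetFeatures(nn.Module):
    """VGG16 truncated before pool4 (overall stride 8), followed by a CAB,
    producing high-resolution feature maps for a detection head."""

    def __init__(self):
        super().__init__()
        backbone = vgg16()                      # pretrained weights omitted here
        self.backbone = backbone.features[:23]  # up to conv4_3 + ReLU, stride 8
        self.cab = ContextAwareBlock(in_channels=512)

    def forward(self, x):
        return self.cab(self.backbone(x))


if __name__ == "__main__":
    net = CABNetFeatures()
    feats = net(torch.randn(1, 3, 512, 512))
    print(feats.shape)  # torch.Size([1, 512, 64, 64]): input size / 8
```

The key point the sketch makes concrete is that dilation (with matching padding) grows the receptive field for context without any further downsampling, so small objects remain represented at stride 8 rather than stride 32.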
First Page
2300
Last Page
2313
DOI
10.1109/TCYB.2020.3004636
Publication Date
April 1, 2022
Keywords
Contextual information, Convolutional neural network, Pyramidal dilated convolutions, Small object detection, Spatial information
Recommended Citation
L. Cui et al., "Context-Aware Block Net for Small Object Detection," in IEEE Transactions on Cybernetics, vol. 52, no. 4, pp. 2300-2313, April 2022, doi: 10.1109/TCYB.2020.3004636.
Comments
IR Deposit conditions:
OA version (pathway a): Accepted version
No embargo
When accepted for publication, set statement to accompany deposit (see policy)
Must link to publisher version with DOI
Publisher copyright and source must be acknowledged