Background/foreground separation: guided attention based adversarial modeling (GAAM) versus robust subspace learning methods
Document Type
Conference Proceeding
Publication Title
Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
Abstract
Background-foreground separation and appearance generation are fundamental steps in many computer vision applications. Existing methods such as Robust Subspace Learning (RSL) suffer performance degradation in the presence of challenges like bad weather, illumination variations, occlusion, dynamic backgrounds, and intermittent object motion. In the current work, we propose a more accurate deep neural network based model for background-foreground separation and complete appearance generation of the foreground objects. Our proposed model, Guided Attention based Adversarial Model (GAAM), can efficiently extract pixel-level boundaries of the foreground objects for improved appearance generation. Unlike RSL methods, our model extracts the binary information of foreground objects, labeled as an attention map, which guides our generator network to segment the foreground objects from the complex background information. A wide range of experiments performed on the benchmark CDnet2014 dataset demonstrates the excellent performance of our proposed model.
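The attention-guided generation described in the abstract can be illustrated with a minimal sketch, assuming (hypothetically) that the attention map is a per-pixel foreground probability used as an element-wise gate on the generator input; the actual GAAM architecture, layer sizes, losses, and adversarial training procedure are not given in this record and may differ from this illustration.

```python
# Minimal sketch (not the authors' exact architecture): a small network predicts
# a soft foreground attention map, which gates the input of an encoder-decoder
# generator so appearance generation focuses on foreground regions.
# All layer sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionNet(nn.Module):
    """Predicts a soft foreground attention map in [0, 1] from an RGB frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, frame):
        return self.net(frame)  # (B, 1, H, W) attention map

class GuidedGenerator(nn.Module):
    """Encoder-decoder generator whose input is gated by the attention map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, frame, attention):
        guided = frame * attention  # suppress background pixels before encoding
        return self.decoder(self.encoder(guided))

if __name__ == "__main__":
    frame = torch.randn(1, 3, 64, 64)      # dummy input frame
    attn = AttentionNet()(frame)            # soft foreground mask
    fg = GuidedGenerator()(frame, attn)     # generated foreground appearance
    print(attn.shape, fg.shape)             # (1, 1, 64, 64) and (1, 3, 64, 64)
```

In a full adversarial setup, a discriminator would additionally score the generated foreground appearance against ground truth; that component is omitted from this sketch.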
First Page
181
Last Page
188
DOI
10.1109/ICCVW54120.2021.00025
Publication Date
11-24-2021
Keywords
Deep learning, Learning systems, Computer vision, Computational modeling, Dynamics, Lighting, Benchmark testing
Recommended Citation
M. Sultana, A. Mahmood, T. Bouwmans, M. H. Khan and S. Ki Jung, "Background/foreground separation: guided attention based adversarial modeling (GAAM) versus robust subspace learning methods," in 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021, pp. 181-188, doi: 10.1109/ICCVW54120.2021.00025.
Comments
IR Deposit conditions: not described