Moving objects segmentation using generative adversarial modeling

Document Type

Article

Publication Title

Neurocomputing

Abstract

Moving Objects Segmentation (MOS) is a crucial step in various computer vision applications, such as visual object tracking, autonomous vehicles, human activity analysis, surveillance, and security. Existing MOS approaches suffer from performance degradation under extremely challenging conditions in complex real-world environments, such as varying illumination, camouflaged objects, dynamic backgrounds, shadows, bad weather, and camera jitter. To address these problems, we propose a novel generative adversarial framework for moving objects segmentation. Our framework comprises a classifier discriminator, a representation learning network, and a generator, jointly trained to perform MOS in various challenging scenarios. During training, the discriminator network acts as a decision maker between real and fake training samples using a conditional least squares loss, while the representation learning network measures the difference between the deep features of real and fake training samples through a content loss formulation. A further loss term used to train our generator network is the reconstruction loss, which minimizes the difference between the spatial information of real and fake training samples. Moreover, we propose a novel modified U-net architecture for our generator network that shows improved performance over the vanilla U-net model. Experimental evaluations of our proposed method on four benchmark datasets, in comparison with thirty-two existing methods, demonstrate the strength of our proposed model. © 2022 Elsevier B.V.
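
To illustrate how the three loss terms named in the abstract (conditional least squares adversarial loss, content loss on deep features, and reconstruction loss) might combine into a single generator objective, here is a minimal PyTorch sketch. It is an assumption-laden illustration, not the authors' code: the weighting coefficients, the conditioning scheme (input frame concatenated with the mask), and the names generator_loss, discriminator, and repr_net are all hypothetical.

import torch
import torch.nn as nn

# Hypothetical weighting coefficients; the actual values are not
# given in this record.
LAMBDA_CONTENT = 1.0
LAMBDA_RECON = 10.0

mse = nn.MSELoss()  # least squares objectives (adversarial, content)
l1 = nn.L1Loss()    # reconstruction (spatial) loss

def generator_loss(discriminator, repr_net, frame, fake_mask, real_mask):
    """Combined generator objective: conditional least squares loss,
    content loss on deep features, and L1 reconstruction loss."""
    # Conditional least squares loss: the discriminator sees the input
    # frame concatenated with the generated mask and should output 1.
    d_fake = discriminator(torch.cat([frame, fake_mask], dim=1))
    adv_loss = mse(d_fake, torch.ones_like(d_fake))

    # Content loss: distance between deep features of real and fake
    # masks, extracted by the (frozen) representation learning network.
    content_loss = mse(repr_net(fake_mask), repr_net(real_mask))

    # Reconstruction loss: pixel-wise difference between real and fake masks.
    recon_loss = l1(fake_mask, real_mask)

    return adv_loss + LAMBDA_CONTENT * content_loss + LAMBDA_RECON * recon_loss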

First Page

240

Last Page

251

DOI

10.1016/j.neucom.2022.07.081

Publication Date

9-28-2022

Keywords

Background modelling, Generative adversarial network, Moving objects segmentation, Decision making, Fake detection, Learning systems, Sampling

Comments

IR Deposit conditions:

OA version (pathway b): Accepted version

24-month embargo

License: CC BY-NC-ND

Must link to publisher version with DOI
