Stabilizing Adversarially Learned One-Class Novelty Detection Using Pseudo Anomalies

Document Type

Article

Publication Title

IEEE Transactions on Image Processing

Abstract

Recently, anomaly scores have been formulated using the reconstruction loss of adversarially learned generators and/or the classification loss of discriminators. The unavailability of anomaly examples in the training data makes optimizing such networks challenging. Owing to the adversarial training, the performance of such models fluctuates drastically with each training step, making it difficult to halt training at an optimal point. In this study, we propose a robust anomaly detection framework that overcomes this instability by transforming the fundamental role of the discriminator from identifying real vs. fake data to distinguishing good-quality vs. bad-quality reconstructions. To this end, we propose a method that utilizes the current state as well as an old state of the same generator to create good- and bad-quality reconstruction examples. The discriminator is trained on these examples to detect the subtle distortions that are often present in the reconstructions of anomalous data. In addition, we propose an efficient, generic criterion for stopping the training of our model, ensuring elevated performance. Extensive experiments on six datasets across multiple domains, including image- and video-based anomaly detection, medical diagnosis, and network security, demonstrate the excellent performance of our approach.
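
The training idea outlined in the abstract, pairing reconstructions from the current generator ("good quality") with reconstructions from an older, frozen copy of the same generator ("bad quality") to train the discriminator on reconstruction quality, can be sketched as below. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the module architectures, loss weights, and the interval for refreshing the old generator are placeholder assumptions.

```python
# Sketch only: a toy autoencoder generator G, a frozen older copy G_old,
# and a discriminator D trained to separate good vs. bad reconstructions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAE(nn.Module):
    """Toy convolutional autoencoder standing in for the generator G."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

class QualityCritic(nn.Module):
    """Toy discriminator D scoring reconstruction quality (logit: high = good)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 3, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x)

G, D = ConvAE(), QualityCritic()
G_old = copy.deepcopy(G).eval()      # frozen older state of the same generator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x, step, copy_every=500, adv_weight=0.1):
    ones = torch.ones(x.size(0), 1)
    zeros = torch.zeros(x.size(0), 1)

    # Discriminator update: good (current G) vs. bad (old G) reconstructions.
    with torch.no_grad():
        good = G(x)        # good-quality reconstruction
        bad = G_old(x)     # pseudo-anomalous, lower-quality reconstruction
    d_loss = bce(D(good), ones) + bce(D(bad), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: reconstruct well and be rated "good quality" by D.
    recon = G(x)
    g_loss = F.mse_loss(recon, x) + adv_weight * bce(D(recon), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Periodically refresh the frozen old generator (interval is an assumption).
    if step > 0 and step % copy_every == 0:
        G_old.load_state_dict(G.state_dict())
    return d_loss.item(), g_loss.item()

# Illustrative usage on random data:
# losses = train_step(torch.rand(8, 3, 32, 32), step=1)
```

At test time, consistent with the abstract, the anomaly score would combine the reconstruction error with the discriminator's quality score; the exact combination and the stopping criterion proposed by the authors are not reproduced in this sketch.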

First Page

5963

Last Page

5975

DOI

10.1109/TIP.2022.3204217

Publication Date

9-12-2022

Keywords

adversarial learning, anomaly detection, novelty detection, one-class classification, outlier detection, stabilizing adversarial models

Comments

IR Deposit conditions:

OA version (pathway a): Accepted version

No embargo

When accepted for publication, set statement to accompany deposit (see policy)

Must link to publisher version with DOI

Publisher copyright and source must be acknowledged
