Self-supervised predictive convolutional attentive block for anomaly detection

Document Type

Article

Publication Title

arXiv

Abstract

Anomaly detection is commonly pursued as a one-class classification problem, where models can only learn from normal training samples while being evaluated on both normal and abnormal test samples. Among the successful approaches for anomaly detection, a distinguished category of methods relies on predicting masked information (e.g., patches, future frames) and leveraging the reconstruction error with respect to the masked information as an abnormality score. Unlike related methods, we propose to integrate the reconstruction-based functionality into a novel self-supervised predictive architectural building block. The proposed self-supervised block is generic and can easily be incorporated into various state-of-the-art anomaly detection methods. Our block starts with a convolutional layer with dilated filters, where the center area of the receptive field is masked. The resulting activation maps are passed through a channel attention module. Our block is equipped with a loss that minimizes the reconstruction error with respect to the masked area in the receptive field. We demonstrate the generality of our block by integrating it into several state-of-the-art frameworks for image and video anomaly detection, providing empirical evidence of considerable performance improvements on MVTec AD, Avenue, and ShanghaiTech. Copyright © 2021, The Authors. All rights reserved.
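
The following is a minimal PyTorch-style sketch, not the authors' official implementation, of the block described in the abstract: a dilated convolution whose receptive-field center is masked, followed by squeeze-and-excitation channel attention, with an auxiliary loss that asks the block to reconstruct its own input at the masked locations. The exact kernel layout (here, simply zeroing the kernel center), the attention reduction factor, and all class and parameter names are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedCenterConv2d(nn.Module):
    """Convolution whose kernel center is zeroed, so each output value is
    predicted only from the surrounding (dilated) context."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        assert kernel_size % 2 == 1
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=dilation * (kernel_size // 2),
                              dilation=dilation)
        mask = torch.ones(1, 1, kernel_size, kernel_size)
        mask[..., kernel_size // 2, kernel_size // 2] = 0.0  # mask the center tap
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Apply the convolution with the masked weights.
        return F.conv2d(x, self.conv.weight * self.mask, self.conv.bias,
                        padding=self.conv.padding, dilation=self.conv.dilation)

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (reduction is assumed)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))           # global average pool -> channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)  # rescale channels

class SelfSupervisedPredictiveBlock(nn.Module):
    """Masked dilated convolution + channel attention; the auxiliary loss is the
    reconstruction error between the block's output and its (masked) input."""
    def __init__(self, channels, dilation=1):
        super().__init__()
        self.masked_conv = MaskedCenterConv2d(channels, dilation=dilation)
        self.attention = ChannelAttention(channels)

    def forward(self, x):
        out = self.attention(F.relu(self.masked_conv(x)))
        self_supervised_loss = F.mse_loss(out, x)  # reconstruct the masked area
        return out, self_supervised_loss

In practice, as the abstract suggests, such a block would be dropped into an existing anomaly detection network and its reconstruction loss added to the host model's objective; the relative weighting of the two terms is an assumption left to the integrating framework.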

DOI

https://doi.org/10.48550/arXiv.2111.09099

Publication Date

11-17-2021

Keywords

Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)

Comments

Preprint: arXiv
