Recurrence-based Disentanglement for Detecting Adversarial Attacks on Timeseries Classifiers

Document Type

Conference Proceeding

Publication Title

European Signal Processing Conference

Abstract

Time series classifiers based on deep neural networks (DNNs) are highly vulnerable to carefully crafted perturbations called adversarial attacks, which are capable of completely degrading their accuracy. The primary challenge in detecting such adversarial samples is the difficulty of disentangling the underlying signal from the added perturbations. In this work, we propose a novel technique for detecting adversarial attacks against deep time series classifiers. Firstly, we show that a recurrence plot (RP) representation can effectively disentangle adversarial perturbations in time series data as local artifacts in the image domain. Secondly, we demonstrate that these artifacts can be easily amplified or suppressed using image morphological operations, without impacting the true signal information. Consequently, the distributions of RP features (before and after morphological operations) do not change for benign samples, while they begin to diverge for adversarial samples. Finally, we train a normalcy model to encode the distribution of RP features of benign samples and employ outlier detection in the parameter space to detect adversarial samples. Evaluations based on four adversarial attacks (FGSM, BIM, MIM and PGD) and on all 85 datasets in the 2015 UCR TS archive show that the proposed method outperforms the state-of-the-art and is 3.65× faster on average.
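The core idea in the abstract (thresholded recurrence plots plus a morphological operation that leaves benign RPs nearly unchanged but alters perturbed ones) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold `eps`, the 2×2 structuring element, and the use of Gaussian noise as a stand-in for a real attack such as FGSM are all assumptions made for the sketch.

```python
import numpy as np
from scipy.ndimage import binary_opening

def recurrence_plot(x, eps=0.1):
    # Binary RP: R[i, j] = 1 when |x_i - x_j| < eps (pairwise distance
    # matrix thresholded at eps).
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(np.uint8)

# Benign signal vs. a crude additive perturbation (placeholder for a
# real adversarial attack like FGSM/BIM/MIM/PGD).
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 128)
benign = np.sin(t)
adversarial = benign + 0.05 * rng.standard_normal(t.size)

rp_benign = recurrence_plot(benign)
rp_adv = recurrence_plot(adversarial)

# Morphological opening suppresses small isolated artifacts. For benign
# inputs the smooth RP bands survive almost intact; perturbations tend to
# produce ragged, isolated pixels that the opening removes, so the RP
# typically changes more for adversarial inputs.
struct = np.ones((2, 2))
change_benign = np.mean(rp_benign != binary_opening(rp_benign, structure=struct))
change_adv = np.mean(rp_adv != binary_opening(rp_adv, structure=struct))
```

A detector along the lines of the abstract would then compare the distributions of RP features before and after such operations (e.g. via a normalcy model fit on benign samples) rather than these raw change fractions.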

First Page

625

Last Page

629

DOI

10.23919/EUSIPCO58844.2023.10290062

Publication Date

11-1-2023

Keywords

Perturbation methods, Time series analysis, Europe, Artificial neural networks, Signal processing, Feature extraction, Anomaly detection

