On the Effects of Filtering Methods on Adversarial Timeseries Data

Document Type

Conference Proceeding

Publication Title

GeoPrivacy 2023 - Proceedings of the 1st ACM SIGSPATIAL International Workshop on GeoPrivacy and Data Utility for Smart Societies

Abstract

Adversarial machine learning is well studied in image classification. Other domains, such as deep timeseries classification, have not received comparable attention, leaving them disproportionately vulnerable. In particular, adversarial defenses for deep timeseries classifiers have been investigated only in the context of attack detection, and the proposed methods perform poorly and fail to generalize across attacks, limiting their real-world applicability. In this work we investigate adversarial defense via input data purification for deep timeseries classifiers. We subject clean and adversarially perturbed univariate timeseries data to 4 simple filtering methods to establish whether such methods could serve as purification-based adversarial defenses. In experiments involving 5 publicly-available datasets, we identify and compare the benefits of various filtering techniques. Thereafter we discuss our results and provide directions for further investigation.
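The abstract does not name the 4 filtering methods used, so the following is only an illustrative sketch of what purification-by-filtering can look like on a univariate timeseries. The specific filters chosen here (moving average, median, Gaussian, and exponential smoothing) and the synthetic sine-wave data are assumptions for illustration, not the authors' experimental setup.

```python
# Sketch: smoothing an adversarially-perturbed univariate timeseries.
# The four filters below are illustrative assumptions; the paper does not
# name its filtering methods in this abstract.
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter1d


def moving_average(x, w=5):
    """Uniform moving-average filter via convolution ('same' keeps length)."""
    return np.convolve(x, np.ones(w) / w, mode="same")


def exponential_smoothing(x, alpha=0.3):
    """First-order exponential smoothing."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = alpha * x[t] + (1 - alpha) * y[t - 1]
    return y


# A clean sine wave plus a small additive perturbation standing in for
# an adversarial one.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(t)
perturbed = clean + 0.2 * rng.standard_normal(t.size)

filtered = {
    "moving_average": moving_average(perturbed),
    "median": median_filter(perturbed, size=5),
    "gaussian": gaussian_filter1d(perturbed, sigma=2.0),
    "exp_smoothing": exponential_smoothing(perturbed),
}

# Purification succeeds (in this toy sense) if filtering brings the series
# closer to the clean signal than the perturbed input was.
mse_perturbed = float(np.mean((perturbed - clean) ** 2))
mse_filtered = {k: float(np.mean((v - clean) ** 2)) for k, v in filtered.items()}
for name, mse in mse_filtered.items():
    print(f"{name}: MSE {mse:.4f} (perturbed input: {mse_perturbed:.4f})")
```

In a real evaluation the comparison would be made on classifier accuracy over clean and attacked data, not raw MSE; the MSE check here only shows that each filter removes part of the perturbation.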

First Page

5

Last Page

9

DOI

10.1145/3615889.3628509

Publication Date

November 13, 2023

Keywords

adversarial, defense, filtering, timeseries

