Self-Adaptive Perturbation Radii for Adversarial Training
Document Type
Conference Proceeding
Publication Title
Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Abstract
Adversarial training is among the most popular and effective techniques for protecting models against imperceptible adversarial examples. Despite its success, it is accompanied by significant performance degradation on clean data. To achieve good performance on both clean and adversarial samples, a principal line of work searches for an adaptive perturbation radius for each training sample. However, such methods face a conflict between the precision of the search and its computational overhead. To address this conflict, we first show the benefits of adaptive perturbation radii for accuracy and robustness, respectively. We then propose a novel self-adaptive adjustment framework that sets the perturbation radii without an expensive search, and we instantiate it for both deep neural networks (DNNs) and kernel support vector machines (SVMs). Finally, extensive experimental results show that our framework improves adversarial robustness without compromising natural generalization, while remaining competitive with existing search strategies in running time.
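The abstract does not specify the adjustment rule, so the following is only a minimal PyTorch sketch of the general idea: adversarial training in which each sample carries its own perturbation radius. The one-step FGSM attack and the radius-update heuristic (grow the radius when the adversarial example is still classified correctly, shrink it otherwise), as well as all names and hyperparameters, are illustrative assumptions, not the method from the paper.

```python
import torch
import torch.nn.functional as F

# Hedged sketch: adversarial training with per-sample perturbation radii.
# The update rule below is a plausible stand-in, NOT the paper's rule.

def fgsm_attack(model, x, y, eps):
    """One-step FGSM where eps is a per-sample radius, shape [B, 1, 1, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]  # gradient w.r.t. the input only
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def adaptive_adv_step(model, optimizer, x, y, eps,
                      eps_min=0.0, eps_max=16 / 255, gamma=1 / 255):
    """One training step; eps is a per-sample radius tensor, updated in place.

    In a full training loop, eps would be stored per dataset index so each
    sample keeps its radius across epochs (assumption for this sketch).
    """
    x_adv = fgsm_attack(model, x, y, eps.view(-1, 1, 1, 1))
    logits = model(x_adv)
    optimizer.zero_grad()
    F.cross_entropy(logits, y).backward()
    optimizer.step()
    # Hypothetical self-adaptive update: enlarge the radius for samples the
    # model still classifies correctly under attack, shrink it otherwise.
    correct = (logits.argmax(dim=1) == y).float()
    eps.add_(gamma * (2 * correct - 1)).clamp_(eps_min, eps_max)
    return eps
```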
First Page
2570
Last Page
2581
DOI
10.1145/3580305.3599495
Publication Date
August 4, 2023
Keywords
adversarial training, self-adaptive perturbation radii
Recommended Citation
H. Wu, W. Shi, C. Zhang, and B. Gu, "Self-Adaptive Perturbation Radii for Adversarial Training", In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '23), New York, NY, USA, pp. 2570–2581, August 2023. doi:10.1145/3580305.3599495
Additional Links
DOI link: https://doi.org/10.1145/3580305.3599495