Inter-feature Relationship Certifies Robust Generalization of Adversarial Training
Document Type
Article
Publication Title
International Journal of Computer Vision
Abstract
Whilst adversarial training has been shown to be a promising approach to promoting model robustness in computer vision and machine learning, adversarially trained models often suffer from poor robust generalization on unseen adversarial examples. That is, a large gap remains between performance on training and test adversarial examples. In this paper, we propose to tackle this issue from a new perspective: the inter-feature relationship. Specifically, we aim to generate adversarial examples that maximize the loss function while maintaining the inter-feature relationship of natural data, as well as penalizing the correlation distance between natural features and their adversarial counterparts. As a key contribution, we theoretically prove that training with such examples, while penalizing the distance between correlations, promotes generalization on both natural and adversarial examples. We empirically validate our method through extensive experiments on several vision datasets (CIFAR-10, CIFAR-100, and SVHN) against several competitive methods. Our method substantially outperforms baseline adversarial training by a large margin, most notably under PGD20 on CIFAR-10, CIFAR-100, and SVHN, with improvements of around 20%, 15%, and 29%, respectively.
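To make the correlation-distance penalty described in the abstract concrete, the following is a minimal NumPy sketch of one plausible formulation: compute the inter-feature (feature-dimension) correlation matrix for a batch of natural features and for the corresponding adversarial features, and penalize the Frobenius distance between the two. The function names, the choice of Pearson correlation over feature dimensions, and the Frobenius norm are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def feature_correlation(feats):
    """Pearson correlation matrix over feature dimensions.

    feats: array of shape (n_samples, n_features).
    Returns an (n_features, n_features) correlation matrix.
    """
    centered = feats - feats.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (feats.shape[0] - 1)
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)

def correlation_distance(feats_nat, feats_adv):
    """Frobenius distance between natural and adversarial
    inter-feature correlation matrices (illustrative penalty term)."""
    c_nat = feature_correlation(feats_nat)
    c_adv = feature_correlation(feats_adv)
    return np.linalg.norm(c_nat - c_adv, ord="fro")
```

In an adversarial-training loop, this distance would be added (suitably weighted) to the attack and/or training objective, so that adversarial perturbations which distort the inter-feature structure of natural data are discouraged. Note that correlation is invariant to per-feature positive rescaling and shifts, so the penalty targets relational structure rather than raw feature magnitudes.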
DOI
10.1007/s11263-024-02111-w
Publication Date
1-1-2024
Keywords
Adversarial examples, Adversarial training, Robustness
Recommended Citation
S. Zhang et al., "Inter-feature Relationship Certifies Robust Generalization of Adversarial Training," International Journal of Computer Vision, Jan 2024.
The definitive version is available at https://doi.org/10.1007/s11263-024-02111-w