Adversarial Attacks and Batch Normalization: A Batch Statistics Perspective
Document Type
Article
Publication Title
IEEE Access
Abstract
Batch Normalization (BatchNorm) is an effective architectural component in deep learning models that improves model performance and speeds up training. However, it has also been found to increase the vulnerability of models to adversarial attacks. In this study, we investigate the mechanism behind this vulnerability and take first steps toward a solution, which we call RobustNorm. We observe that adversarial inputs tend to shift the distribution of BatchNorm layer outputs, creating a mismatch with the statistics accumulated during training and, in turn, increased vulnerability. Through a series of experiments on various architectures and datasets, we confirm this hypothesis. We also demonstrate that RobustNorm improves the robustness of models under adversarial perturbation while maintaining the benefits of BatchNorm.
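To illustrate the batch-statistics perspective described in the abstract, the following minimal NumPy sketch (not taken from the paper; the feature dimension, shift magnitude, and the helper name batchnorm are illustrative assumptions) contrasts normalization with stored running statistics, as used at inference, against normalization with a batch's own statistics. A distribution shift of the kind adversarial inputs are hypothesized to cause widens the gap between the two.

import numpy as np

rng = np.random.default_rng(0)

def batchnorm(x, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize features per dimension with the supplied statistics.
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# "Train-time" running statistics, accumulated over clean data.
clean_train = rng.normal(loc=0.0, scale=1.0, size=(10_000, 64))
running_mean = clean_train.mean(axis=0)
running_var = clean_train.var(axis=0)

# A clean test batch, and a batch whose feature distribution is shifted
# to mimic the effect adversarial inputs have on BatchNorm inputs
# (the shift parameters here are arbitrary, for illustration only).
clean_batch = rng.normal(loc=0.0, scale=1.0, size=(128, 64))
shifted_batch = rng.normal(loc=0.5, scale=1.5, size=(128, 64))

for name, batch in [("clean", clean_batch), ("shifted", shifted_batch)]:
    # Inference-style normalization: uses the stored running statistics.
    with_running = batchnorm(batch, running_mean, running_var)
    # Normalization with the batch's own statistics (as during training).
    with_batch = batchnorm(batch, batch.mean(axis=0), batch.var(axis=0))
    gap = np.abs(with_running - with_batch).mean()
    print(f"{name:>7} batch: mean gap between the two normalizations = {gap:.3f}")

Running this prints a small gap for the clean batch and a noticeably larger one for the shifted batch, which is the mismatch the abstract attributes to adversarial perturbations.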
First Page
1
Last Page
1
DOI
10.1109/ACCESS.2023.3250661
Publication Date
3-1-2023
Keywords
adversarial robustness, batch normalization, feature extraction, neural networks, robustness, statistics, training, transfer learning
Recommended Citation
A. Muhammad, F. Shamshad and S.-H. Bae, "Adversarial Attacks and Batch Normalization: A Batch Statistics Perspective," in IEEE Access, pp. 1-1, March 2023, doi: 10.1109/ACCESS.2023.3250661.
Comments
IR Deposit conditions:
OA ver: Accepted version
No embargo
When accepted for publication, the set statement must accompany the deposit (see policy)
Must link to publisher version with DOI
Publisher copyright and source must be acknowledged