Adversarial Attacks and Batch Normalization: A Batch Statistics Perspective
Batch Normalization (BatchNorm) is an effective architectural component in deep learning models that improves performance and speeds up training. However, it has also been found to increase the vulnerability of models to adversarial attacks. In this study, we investigate the mechanism behind this vulnerability and take first steps toward a solution, called RobustNorm. We observe that adversarial inputs tend to shift the output distributions of BatchNorm layers, so that the statistics accumulated during training no longer describe the data, which increases vulnerability. Through a series of experiments on various architectures and datasets, we confirm this hypothesis. We also demonstrate that RobustNorm improves the robustness of models under adversarial perturbation while retaining the benefits of BatchNorm.
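The mismatch the abstract describes can be illustrated with a toy sketch (not from the paper; a simplified single-feature NumPy illustration with an assumed shift magnitude): statistics estimated on clean activations are reused at inference time, so a distribution shift induced by adversarial inputs leaves the normalized outputs no longer zero-mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Clean" training activations: BatchNorm's running statistics are
# estimated from these (simplified to a single feature dimension).
clean = rng.normal(loc=0.0, scale=1.0, size=10_000)
running_mean, running_var = clean.mean(), clean.var()

# Hypothetical adversarial activations: a systematic shift in the
# distribution reaching the normalization layer (0.5 is an arbitrary
# illustrative magnitude, not a value from the paper).
adversarial = clean + 0.5

def batchnorm_inference(x, mean, var, eps=1e-5):
    """Normalize with fixed train-time statistics, as done at test time."""
    return (x - mean) / np.sqrt(var + eps)

out_clean = batchnorm_inference(clean, running_mean, running_var)
out_adv = batchnorm_inference(adversarial, running_mean, running_var)

# Clean outputs are (by construction) zero-mean; the shifted inputs are
# not, because the stored statistics no longer match their distribution.
print(abs(out_clean.mean()))  # close to 0
print(abs(out_adv.mean()))    # close to 0.5
```

This is the train/test statistics mismatch in miniature: the normalization is only correct for the distribution the statistics were computed on.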
Adversarial robustness, batch normalization, feature extraction, neural networks, robustness, statistics, training, transfer learning
A. Muhammad, F. Shamshad and S.-H. Bae, "Adversarial Attacks and Batch Normalization: A Batch Statistics Perspective," in IEEE Access, pp. 1-1, March 2023, doi: 10.1109/ACCESS.2023.3250661.