Document Type

Conference Proceeding

Publication Title

Proceedings of Machine Learning Research

Abstract

Deep neural networks have been found to be vulnerable to adversarial noise. Recent work shows that studying the impact of adversarial noise on the intrinsic components of data can help improve adversarial robustness. However, the pattern most closely related to human perception has not been studied in depth. In this paper, inspired by cognitive science, we investigate the interference of adversarial noise from the perspective of the image phase, and find that ordinarily trained models lack sufficient robustness against phase-level perturbations. Motivated by this, we propose a joint adversarial defense method: a phase-level adversarial training mechanism to enhance adversarial robustness on the phase pattern, and an amplitude-based pre-processing operation to mitigate adversarial perturbations in the amplitude pattern. Experimental results show that the proposed method significantly improves robust accuracy against multiple attacks, and even against adaptive attacks. In addition, ablation studies demonstrate the effectiveness of our defense strategy.
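
As a minimal sketch of the amplitude/phase decomposition the abstract builds on (not the paper's implementation), the snippet below splits an image into amplitude and phase spectra with a 2-D FFT and recombines them. The function names and the toy images are illustrative assumptions.

```python
import numpy as np

def fft_decompose(x):
    """Split an image of shape (H, W) or (C, H, W) into amplitude and phase spectra."""
    spectrum = np.fft.fft2(x, axes=(-2, -1))
    return np.abs(spectrum), np.angle(spectrum)

def fft_recompose(amplitude, phase):
    """Rebuild an image from amplitude and phase; the imaginary part is numerical noise."""
    spectrum = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(spectrum, axes=(-2, -1)))

# Hypothetical illustration: stand-ins for a clean image and an adversarial one.
clean = np.random.rand(3, 32, 32)
adversarial = clean + 0.03 * np.random.randn(3, 32, 32)

amp_clean, phase_clean = fft_decompose(clean)
amp_adv, phase_adv = fft_decompose(adversarial)

# A phase-level view of the perturbation: keep the clean amplitude,
# take the adversarial phase.
phase_perturbed = fft_recompose(amp_clean, phase_adv)
```

Recombining the clean amplitude with the adversarial phase (or vice versa) isolates which spectral component carries the perturbation, which is the kind of phase-level analysis the abstract describes.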

First Page

42724

Last Page

42741

Publication Date

7-2023

Keywords

Cognitive science, Defense strategy, Human perception, Phase levels, Phase patterns, Pre-processing operations

Comments

Open Access version from PMLR

Uploaded on June 13, 2024
