Document Type
Conference Proceeding
Publication Title
Proceedings of Machine Learning Research
Abstract
Deep neural networks (DNNs) are vulnerable to adversarial noise. Defenses based on denoising models are a major protection strategy. However, denoising models may fail, and even induce negative effects, in fully white-box scenarios. In this work, we start from the inherent properties of adversarial samples to overcome these limitations. Rather than solely learning a mapping from adversarial samples to natural samples, we aim to achieve denoising by destroying the spatial characteristics of adversarial noise while preserving the robust features of natural information. Motivated by this, we propose a defense based on information discard and robust representation restoration. Our method utilizes complementary masks to disrupt adversarial noise and guided denoising models to restore robust-predictive representations from the masked samples. Experimental results show that our method achieves competitive performance against white-box attacks and effectively reverses the negative effects of denoising models.
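To make the complementary-masking idea concrete, below is a minimal illustrative sketch (not the authors' released code) of how a pair of complementary patch masks can be generated and paired with a denoiser. The patch size, mask ratio, and the `denoiser` module are hypothetical placeholders assumed for illustration.

import torch
import torch.nn.functional as F

def complementary_masks(x, patch=8, ratio=0.5):
    """Split an image batch x (N, C, H, W) into two complementary masked views.

    A random binary patch-level mask keeps `ratio` of the patches in the first
    view; the second view keeps exactly the remaining patches. Together the two
    views cover the full image, while each one individually destroys part of
    the spatial structure of any additive adversarial noise.
    """
    n, c, h, w = x.shape
    gh, gw = h // patch, w // patch
    # Random binary mask at patch resolution.
    keep = (torch.rand(n, 1, gh, gw, device=x.device) < ratio).float()
    # Upsample the patch-level mask to pixel resolution.
    m = F.interpolate(keep, scale_factor=patch, mode="nearest")
    return x * m, x * (1.0 - m)

# Usage with a hypothetical guided denoising model `denoiser`:
# v1, v2 = complementary_masks(adv_images)
# restored = 0.5 * (denoiser(v1) + denoiser(v2))  # fuse the two restorations

Averaging the two restored views is only one plausible fusion choice; the point of the sketch is the information-discard step, in which each view withholds complementary spatial regions from the denoiser.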
First Page
42517
Last Page
42530
Publication Date
7-23-2023
Keywords
Deep neural networks, Network security, Adversarial noise, Complementary masks, Denoising, Protection strategies, Spatial characteristics, White-box attacks
Recommended Citation
D. Zhou et al., "Eliminating Adversarial Noise via Information Discard and Robust Representation Restoration," Proceedings of Machine Learning Research, vol. 202, pp. 42517-42530, Jul. 2023.
Comments
Open Access version from PMLR
Uploaded on June 11, 2024