Image super-resolution as a defense against adversarial attacks

Aamir Mustafa, Inception Institute of Artificial Intelligence
Salman H. Khan, Inception Institute of Artificial Intelligence
Munawar Hayat, Inception Institute of Artificial Intelligence
Jianbing Shen, Inception Institute of Artificial Intelligence
Ling Shao, Inception Institute of Artificial Intelligence


Convolutional Neural Networks have achieved significant success across multiple computer vision tasks. However, they are vulnerable to carefully crafted, human-imperceptible adversarial noise patterns, which constrains their deployment in critical, security-sensitive systems. This paper proposes a computationally efficient image enhancement approach that provides a strong defense mechanism, effectively mitigating the effect of such adversarial perturbations. We show that deep image restoration networks learn mapping functions that can bring off-the-manifold adversarial samples back onto the natural image manifold, thus restoring correct classification. A distinguishing feature of our approach is that, in addition to providing robustness against attacks, it simultaneously enhances image quality and retains the model's performance on clean images. Furthermore, the proposed method neither modifies the classifier nor requires a separate mechanism to detect adversarial images. The effectiveness of the scheme is demonstrated through extensive experiments, where it proves to be a strong defense in gray-box settings. The proposed scheme is simple and has the following advantages: 1) it does not require any model training or parameter optimization, 2) it complements other existing defense mechanisms, 3) it is agnostic to the attacked model and attack type, and 4) it provides superior performance across all popular attack algorithms. Our code is publicly available at
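The defense sketched in the abstract is a pure preprocessing step: an input image is passed through an image restoration (super-resolution) network and resized back to the classifier's expected input size, with the classifier itself untouched. A minimal NumPy sketch of this wrapper pattern is shown below; the `defend` and `bilinear_resize` names are illustrative (not from the paper), and the bilinear upsampler is only a stand-in for the learned super-resolution network the paper actually uses:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize an H x W (x C) image with bilinear interpolation."""
    h, w = img.shape[:2]
    ys = np.linspace(0.0, h - 1, out_h)
    xs = np.linspace(0.0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :]   # horizontal interpolation weights
    if img.ndim == 3:         # broadcast weights over channels
        wy = wy[..., None]; wx = wx[..., None]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def defend(image, sr_model=None, scale=2):
    """Map a (possibly adversarial) image toward the natural image
    manifold via super-resolution, then resize back to the original
    size so any fixed classifier can consume it unchanged.

    `sr_model` would be a pretrained restoration network taking and
    returning an image array; the bilinear upscaler used when it is
    None is a placeholder, not the paper's method."""
    h, w = image.shape[:2]
    upscale = sr_model if sr_model is not None else (
        lambda x: bilinear_resize(x, h * scale, w * scale))
    restored = upscale(image)
    return bilinear_resize(restored, h, w)
```

The classifier is then simply called as `classifier(defend(x))`, which is what makes the scheme agnostic to the attacked model: no retraining or parameter changes are needed on the classification side.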