Document Type

Article

Publication Title

arXiv

Abstract

While the untargeted black-box transferability of adversarial perturbations has been extensively studied, changing an unseen model's decisions to a specific 'targeted' class remains a challenging feat. In this paper, we propose a new generative approach for highly transferable targeted perturbations (TTP). We note that existing methods are less suitable for this task due to their reliance on class-boundary information, which changes from one model to another and thus reduces transferability. In contrast, our approach matches the perturbed-image 'distribution' with that of the target class, leading to high targeted transferability rates. To this end, we propose a new objective function that not only aligns the global distributions of source and target images but also matches the local neighbourhood structure between the two domains. Based on the proposed objective, we train a generator function that can adaptively synthesize perturbations specific to a given input. Our generative approach is independent of the source or target domain labels and consistently performs well against state-of-the-art methods across a wide range of attack settings. As an example, we achieve 32.63% target transferability from (an adversarially weak) VGG19BN to (a strong) WideResNet on the ImageNet validation set, which is 4× higher than the previous best generative attack and 16× better than instance-specific iterative attacks. Code is available at: https://github.com/Muzammal-Naseer/TTP. © 2021, CC BY.
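The abstract's core idea, matching the distribution of perturbed images to that of the target class both globally and over local neighbourhood structure, can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the authors' released implementation (see the linked repository for that); the names `surrogate`, `generator`, `x_source`, `x_target`, and `eps`, and the choice of a symmetric KL divergence plus a batch-similarity term, are hypothetical stand-ins for the paper's objective.

```python
import torch
import torch.nn.functional as F

def ttp_style_loss(surrogate, generator, x_source, x_target, eps=16/255):
    # Perturb the source batch with the generator and project the result
    # into an L-infinity ball of radius eps around the clean images.
    x_adv = generator(x_source)
    x_adv = torch.min(torch.max(x_adv, x_source - eps), x_source + eps).clamp(0, 1)

    logits_adv = surrogate(x_adv)      # surrogate predictions on perturbed images
    logits_tgt = surrogate(x_target)   # surrogate predictions on target-class images

    # Global distribution matching: symmetric KL divergence between the
    # surrogate's output distributions on the two batches.
    log_p_adv = F.log_softmax(logits_adv, dim=1)
    log_p_tgt = F.log_softmax(logits_tgt, dim=1)
    global_term = (F.kl_div(log_p_adv, log_p_tgt.exp(), reduction='batchmean')
                   + F.kl_div(log_p_tgt, log_p_adv.exp(), reduction='batchmean'))

    # Local neighbourhood matching: align the pairwise similarity structure
    # within the adversarial batch with that of the target batch.
    f_adv = F.normalize(logits_adv, dim=1)
    f_tgt = F.normalize(logits_tgt, dim=1)
    sim_adv = F.log_softmax(f_adv @ f_adv.t(), dim=1)
    sim_tgt = F.softmax(f_tgt @ f_tgt.t(), dim=1)
    local_term = F.kl_div(sim_adv, sim_tgt, reduction='batchmean')

    return global_term + local_term
```

In this reading, minimizing the loss trains the generator so that, under the surrogate model, perturbed source images become statistically indistinguishable from genuine target-class images, rather than merely crossing a single model's class boundary.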

DOI

https://doi.org/10.48550/arXiv.2103.14641

Publication Date

26 March 2021

Keywords

Computer Vision and Pattern Recognition (cs.CV)

Comments

Preprint: arXiv

Archived with thanks to arXiv

Preprint License: CC BY 4.0

Uploaded 24 March 2022
