Document Type
Conference Proceeding
Publication Title
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Abstract
The success of deep learning based face recognition systems has given rise to serious privacy concerns due to their ability to enable unauthorized tracking of users in the digital world. Existing methods for enhancing privacy fail to generate 'naturalistic' images that can protect facial privacy without compromising user experience. We propose a novel two-step approach for facial privacy protection that relies on finding adversarial latent codes in the low-dimensional manifold of a pretrained generative model. The first step inverts the given face image into the latent space and finetunes the generative model to achieve an accurate reconstruction of the given image from its latent code. This step produces a good initialization, aiding the generation of high-quality faces that resemble the given identity. Subsequently, user-defined makeup text prompts and identity-preserving regularization are used to guide the search for adversarial codes in the latent space. Extensive experiments demonstrate that faces generated by our approach have stronger black-box transferability, with an absolute gain of 12.06% over the state-of-the-art facial privacy protection approach under the face verification task. Finally, we demonstrate the effectiveness of the proposed approach on commercial face recognition systems. Our code is available at https://github.com/fahadshamshad/Clip2Protect.
First Page
20595
Last Page
20605
DOI
10.1109/CVPR52729.2023.01973
Publication Date
8-22-2023
Keywords
accountability, ethics in vision, fairness, privacy, transparency
Recommended Citation
F. Shamshad et al., "CLIP2Protect: Protecting Facial Privacy Using Text-Guided Makeup via Adversarial Latent Search," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2023-June, pp. 20595-20605, Aug 2023.
The definitive version is available at https://doi.org/10.1109/CVPR52729.2023.01973
Comments
Open Access version available on CVF.
Archived thanks to CVF
Uploaded May 13, 2024