Learning Efficient GANs for Image Translation via Differentiable Masks and Co-Attention Distillation
Document Type
Article
Publication Title
IEEE Transactions on Multimedia
Abstract
Generative Adversarial Networks (GANs) have been widely used in image translation, but their high computation and storage costs impede deployment on mobile devices. Prevalent methods for CNN compression cannot be directly applied to GANs due to the peculiarities of GAN tasks and the unstable adversarial training. To address these issues, we introduce a novel GAN compression method, termed DMAD, by proposing a Differentiable Mask and a co-Attention Distillation. The former searches for a light-weight generator architecture in a training-adaptive manner. To overcome channel inconsistency when pruning the residual connections, an adaptive cross-block group sparsity is further incorporated. The latter simultaneously distills informative attention maps from both the generator and discriminator of a pre-trained model to the searched generator, effectively stabilizing the adversarial training of our light-weight model. Experiments show that DMAD can reduce the Multiply Accumulate Operations (MACs) of CycleGAN by 13× and those of Pix2Pix by 4× while retaining performance comparable to the full model.
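The abstract only sketches the method at a high level. As a rough, hedged illustration of the two ideas it names (a differentiable channel mask for pruning and attention-map distillation), the PyTorch-style sketch below shows one plausible realization. All names here (MaskedConv, attention_map, attention_distill_loss) are hypothetical, and this is not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedConv(nn.Module):
    """Convolution whose output channels are scaled by a learnable soft mask;
    channels whose gate collapses toward zero can later be pruned away."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        # One mask logit per output channel (assumed parameterization).
        self.mask_logits = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        # Soft gate in (0, 1); a sparsity penalty on the gates during training
        # would push unneeded channels toward zero.
        gate = torch.sigmoid(self.mask_logits).view(1, -1, 1, 1)
        return self.conv(x) * gate


def attention_map(feat):
    """Collapse a feature map (N, C, H, W) into a normalized spatial
    attention map (N, H*W) by summing squared activations over channels."""
    att = feat.pow(2).sum(dim=1).flatten(1)
    return F.normalize(att, dim=1)


def attention_distill_loss(student_feats, teacher_feats):
    """Match the student's attention maps to the teacher's, pair by pair;
    in spirit, teacher features could come from the pre-trained generator
    and discriminator alike."""
    return sum(
        F.mse_loss(attention_map(s), attention_map(t))
        for s, t in zip(student_feats, teacher_feats)
    )
```

This is only meant to make the abstract's terminology concrete; the paper's actual mask relaxation, cross-block group sparsity, and co-attention losses should be taken from the publication itself.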
First Page
3180
Last Page
3189
DOI
10.1109/TMM.2022.3156699
Publication Date
3-7-2022
Keywords
GAN compression, Generative adversarial networks, image translation, knowledge distillation, network pruning
Recommended Citation
S. Li, M. Lin, Y. Wang, F. Chao, L. Shao and R. Ji, "Learning Efficient GANs for Image Translation via Differentiable Masks and Co-Attention Distillation," in IEEE Transactions on Multimedia, vol. 25, pp. 3180-3189, 2023, doi: 10.1109/TMM.2022.3156699.
Additional Links
DOI link: https://doi.org/10.1109/TMM.2022.3156699
Comments
IR Deposit conditions:
OA version (pathway a) Accepted version
No embargo
When accepted for publication, set statement to accompany deposit (see policy)
Must link to publisher version with DOI
Publisher copyright and source must be acknowledged