TY - CONF
AU - Hector Laria Mantecon
AU - Yaxing Wang
AU - Joost Van de Weijer
AU - Bogdan Raducanu
A2 - CVPRW
PY - 2022//
TI - Transferring Unconditional to Conditional GANs With Hyper-Modulation
BT - IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
N2 - GANs have matured in recent years and are able to generate high-resolution, realistic images. However, the computational resources and the data required to train high-quality GANs are enormous, making the study of transfer learning for these models an urgent topic. Many of the available high-quality pretrained GANs are unconditional (like StyleGAN). For many applications, however, conditional GANs are preferable, because they provide more control over the generation process, despite often suffering more training difficulties. Therefore, in this paper, we focus on transferring from high-quality pretrained unconditional GANs to conditional GANs. This requires an architectural adaptation of the pretrained GAN to perform the conditioning. To this end, we propose hyper-modulated generative networks that allow for shared and complementary supervision. To prevent the additional weights of the hypernetwork from overfitting, with subsequent mode collapse on small target domains, we introduce a self-initialization procedure that does not require any real data to initialize the hypernetwork parameters. To further improve the sample efficiency of the transfer, we apply contrastive learning in the discriminator, which works effectively with very limited batch sizes. In extensive experiments, we validate the efficiency of the hypernetworks, self-initialization, and contrastive loss for knowledge transfer on standard benchmarks.
N1 - LAMP; 600.147; 602.200
ID - Hector Laria Mantecon2022
ER -