@Article{YaxingWang2021,
  author   = "Yaxing Wang and Abel Gonzalez-Garcia and Luis Herranz and Joost van de Weijer",
  title    = "Controlling biases and diversity in diverse image-to-image translation",
  journal  = "Computer Vision and Image Understanding",
  year     = "2021",
  volume   = "202",
  pages    = "103082",
  abstract = "The task of unpaired image-to-image translation is highly challenging due to the lack of explicit cross-domain pairs of instances. We consider here diverse image translation (DIT), an even more challenging setting in which an image can have multiple plausible translations. This is normally achieved by explicitly disentangling content and style in the latent representation and sampling different style codes while maintaining the image content. Despite the success of current DIT models, they are prone to bias. In this paper, we study the problem of bias in image-to-image translation. Biased datasets may add undesired changes (e.g. changing gender or race in face images) to the output translations as a consequence of the particular underlying visual distribution in the target domain. In order to alleviate the effects of this problem, we propose the use of semantic constraints that enforce the preservation of desired image properties. Our proposed model is a step towards unbiased diverse image-to-image translation (UDIT), and results in fewer unwanted changes in the translated images while still performing the wanted transformation. Experiments on several heavily biased datasets show the effectiveness of the proposed techniques in different domains such as faces, objects, and scenes.",
  optnote  = "LAMP; 600.141; 600.109; 600.147",
  opturl   = "https://doi.org/10.1016/j.cviu.2020.103082",
  file     = ":http://refbase.cvc.uab.es/files/WGH2021.pdf:PDF"
}