%0 Conference Proceedings
%T GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images
%A Lei Kang
%A Pau Riba
%A Yaxing Wang
%A Marçal Rusiñol
%A Alicia Fornés
%A Mauricio Villegas
%B 16th European Conference on Computer Vision
%D 2020
%F Lei Kang2020
%O DAG; 600.140; 600.121; 600.129
%O exported from refbase (http://refbase.cvc.uab.es/show.php?record=3426), last updated on Fri, 26 Feb 2021 13:55:44 +0100
%X Although current image generation methods have reached impressive quality levels, they are still unable to produce plausible yet diverse images of handwritten words. In contrast, when writing by hand, great variability is observed across different writers, and even among words scribbled by the same individual, involuntary variations are conspicuous. In this work, we take a step closer to producing realistic and varied artificially rendered handwritten words. We propose a novel method that produces credible handwritten word images by conditioning the generative process on both calligraphic style features and textual content. Our generator is guided by three complementary learning objectives: to produce realistic images, to imitate a certain handwriting style, and to convey a specific textual content. Our model is not constrained to any predefined vocabulary and can render any input word. Given samples from a writer, it is also able to mimic their calligraphic features in a few-shot setup. We significantly advance over prior art and demonstrate, through qualitative, quantitative, and human-based evaluations, the realism of our synthetically produced images.
%U http://refbase.cvc.uab.es/files/KPW2020.pdf