%0 Conference Proceedings
%T Learning Cloth Dynamics: 3D+Texture Garment Reconstruction Benchmark
%A Meysam Madadi
%A Hugo Bertiche
%A Wafa Bouzouita
%A Isabelle Guyon
%A Sergio Escalera
%B Proceedings of Machine Learning Research
%D 2021
%V 133
%F Meysam Madadi2021
%X Human avatars are important in many computer applications. Accurately tracking, capturing, reconstructing and animating the human body, face and garments in 3D is critical for human-computer interaction, gaming, special effects and virtual reality. In the past, this has required extensive manual animation. Despite advances in human body and face reconstruction, modeling, learning and analyzing human dynamics still need further attention. In this paper we aim to push research in this direction, e.g. understanding human dynamics in 2D and 3D, with special attention to garments. We provide a large-scale dataset (more than 2M frames) of animated garments with variable topology and type, called CLOTH3D++. The dataset contains RGBA video sequences paired with their corresponding 3D data. We pay special care to garment dynamics and realistic rendering of RGB data, including lighting, fabric type and texture. With this dataset, we held a competition at NeurIPS 2020. We designed three tracks so participants could compete to develop the best method for 3D garment reconstruction in a sequence from (1) 3D-to-3D garments, (2) RGB-to-3D garments, and (3) RGB-to-3D garments plus texture. We also provide a baseline method, based on graph convolutional networks, for each track. Baseline results show that there is ample room for improvement; however, due to the challenging nature of the problem, no participant outperformed the baselines.
%U https://proceedings.mlr.press/v133/madadi21a.html
%U http://refbase.cvc.uab.es/files/MBB2021.pdf
%P 57-76