%0 Conference Proceedings
%T The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes
%A German Ros
%A Laura Sellart
%A Joanna Materzynska
%A David Vazquez
%A Antonio Lopez
%B 29th IEEE Conference on Computer Vision and Pattern Recognition
%D 2016
%F German Ros2016
%O ADAS; 600.085; 600.082; 600.076
%O exported from refbase (http://refbase.cvc.uab.es/show.php?record=2739), last updated on Wed, 14 Nov 2018 13:47:20 +0100
%X Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. The advent of deep convolutional neural networks (DCNNs) makes it plausible to obtain reliable classifiers for this visual task. However, DCNNs must learn many parameters from raw images, so a sufficiently large and diversified set of images with class annotations is needed. Producing these annotations requires cumbersome manual labour, which is especially challenging for semantic segmentation because pixel-level annotations are required. In this paper, we propose to use a virtual world to automatically generate realistic synthetic images with pixel-level annotations. We then address the question of how useful such data are for semantic segmentation, in particular under a DCNN paradigm. To answer this question, we have generated a diversified collection of synthetic urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images that have manually provided annotations. We then conduct experiments in a DCNN setting showing that including SYNTHIA in the training stage significantly improves performance on the semantic segmentation task.
%K Domain Adaptation
%K Autonomous Driving
%K Virtual Data
%K Semantic Segmentation
%U http://refbase.cvc.uab.es/files/RSM2016.pdf
%U http://dx.doi.org/10.1109/CVPR.2016.352
%P 3234-3243