TY - CONF
AU - German Ros
AU - Laura Sellart
AU - Joanna Materzynska
AU - David Vazquez
AU - Antonio Lopez
A2 - CVPR
PY - 2016//
TI - The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes
BT - 29th IEEE Conference on Computer Vision and Pattern Recognition
SP - 3234
EP - 3243
KW - Domain Adaptation
KW - Autonomous Driving
KW - Virtual Data
KW - Semantic Segmentation
N2 - Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. The advent of deep convolutional neural networks (DCNNs) makes it possible to foresee reliable classifiers for this visual task. However, DCNNs must learn many parameters from raw images; thus, a sufficient amount of diversified images with per-class annotations is needed. Such annotations are obtained through cumbersome human labour, which is especially challenging for semantic segmentation, since pixel-level annotations are required. In this paper, we propose to use a virtual world to automatically generate realistic synthetic images with pixel-level annotations. We then address the question of how useful such data can be for the task of semantic segmentation, in particular under a DCNN paradigm. To answer this question, we have generated SYNTHIA, a diversified collection of synthetic urban images with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Experiments in a DCNN setting show that including SYNTHIA in the training stage significantly improves the performance of the semantic segmentation task.
L1 - http://refbase.cvc.uab.es/files/RSM2016.pdf
UR - http://dx.doi.org/10.1109/CVPR.2016.352
N1 - ADAS; 600.085; 600.082; 600.076
ID - German Ros2016
ER -