%0 Conference Proceedings
%T Semantic Road Segmentation via Multi-Scale Ensembles of Learned Features
%A Jose Manuel Alvarez
%A Y. LeCun
%A Theo Gevers
%A Antonio Lopez
%B 12th European Conference on Computer Vision – Workshops and Demonstrations
%D 2012
%V 7584
%I Springer Berlin Heidelberg
%@ 0302-9743
%@ 978-3-642-33867-0
%F Jose Manuel Alvarez2012
%O ADAS;ISE
%O exported from refbase (http://refbase.cvc.uab.es/show.php?record=2187), last updated on Tue, 18 Oct 2016 13:16:37 +0200
%X Semantic segmentation refers to the process of assigning an object label (e.g., building, road, sidewalk, car, pedestrian) to every pixel in an image. Common approaches formulate the task as a random field labeling problem, modeling the interactions between labels by combining local and contextual features such as color, depth, edges, SIFT or HoG. These models are trained to maximize the likelihood of the correct classification given a training set. However, these approaches rely on hand-designed features (e.g., texture, SIFT or HoG) and require high computational time during inference. Therefore, in this paper, we focus on estimating the unary potentials of a conditional random field via ensembles of learned features. We propose an algorithm based on convolutional neural networks to learn local features from training data at different scales and resolutions. Diversity among these features is then exploited through a weighted linear combination. Experiments on a publicly available database show the effectiveness of the proposed method for semantic road scene segmentation in still images. The algorithm outperforms appearance-based methods, and its performance is comparable to state-of-the-art methods that use additional sources of information such as depth, motion or stereo.
%K road detection
%U http://refbase.cvc.uab.es/files/alg2012a.pdf
%U http://dx.doi.org/10.1007/978-3-642-33868-7_58
%P 586-595