Records
Author Jose Manuel Alvarez; Antonio Lopez
Title Model-based road detection using shadowless features and on-line learning Type Miscellaneous
Year 2009 Publication BMVA one–day technical meeting on vision for automotive applications Abbreviated Journal
Volume Issue Pages
Keywords road detection
Abstract
Address London, UK
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number ADAS @ adas @ AlA2009 Serial 1272
 

 
Author Jose Manuel Alvarez; Antonio Lopez
Title Road Detection Based on Illuminant Invariance Type Journal Article
Year 2011 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS
Volume 12 Issue 1 Pages 184-193
Keywords road detection
Abstract By using an onboard camera, it is possible to detect the free road surface ahead of the ego-vehicle. Road detection is of high relevance for autonomous driving, road departure warning, and supporting driver-assistance systems such as vehicle and pedestrian detection. The key for vision-based road detection is the ability to classify image pixels as belonging or not to the road surface. Identifying road pixels is a major challenge due to the intraclass variability caused by lighting conditions. A particularly difficult scenario appears when the road surface has both shadowed and nonshadowed areas. Accordingly, we propose a novel approach to vision-based road detection that is robust to shadows. The novelty of our approach lies in using a shadow-invariant feature space combined with a model-based classifier. The model is built online to improve the adaptability of the algorithm to the current lighting and the presence of other vehicles in the scene. The proposed algorithm works on still images and does not depend on either road shape or temporal restrictions. Quantitative and qualitative experiments on real-world road sequences with heavy traffic and shadows show that the method is robust to shadows and lighting variations. Moreover, the proposed method provides the highest performance when compared with hue-saturation-intensity (HSI)-based algorithms.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number ADAS @ adas @ AlL2011 Serial 1456
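The shadow-invariant feature space referred to in the abstract above is commonly built from a log-chromaticity projection of the RGB image. Below is a minimal Python sketch of that construction, assuming the camera-dependent invariant angle theta has already been calibrated; the exact feature and the on-line model-based classifier used in the paper may differ.

```python
import numpy as np

def illuminant_invariant(rgb, theta_deg=45.0, eps=1e-6):
    """Grayscale image that is largely insensitive to shadows: project the
    log-chromaticity coordinates onto the direction orthogonal to the
    illuminant-variation axis (theta is camera dependent and assumed known)."""
    rgb = np.clip(rgb.astype(np.float64), eps, None)
    log_rg = np.log(rgb[..., 0] / rgb[..., 1])   # log(R/G)
    log_bg = np.log(rgb[..., 2] / rgb[..., 1])   # log(B/G)
    theta = np.deg2rad(theta_deg)
    return np.cos(theta) * log_rg + np.sin(theta) * log_bg
```

A road model could then be estimated on-line from pixels in a small region just ahead of the vehicle in this invariant image and used to classify the remaining pixels.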
 

 
Author Jose Manuel Alvarez; Antonio Lopez
Title Photometric Invariance by Machine Learning Type Book Chapter
Year 2012 Publication Color in Computer Vision: Fundamentals and Applications Abbreviated Journal
Volume 7 Issue Pages 113-134
Keywords road detection
Abstract
Address
Corporate Author Thesis
Publisher iConcept Press Ltd Place of Publication Editor Theo Gevers, Arjan Gijsenij, Joost van de Weijer, Jan-Mark Geusebroek
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-0-470-89084-4 Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number Admin @ si @ AlL2012 Serial 2186
 

 
Author Jose Manuel Alvarez; Antonio Lopez; Ramon Baldrich
Title Shadow Resistant Road Segmentation from a Mobile Monocular System Type Conference Article
Year 2007 Publication 3rd Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2007), J. Marti et al. (Eds.) LNCS 4477:9–16 Abbreviated Journal
Volume Issue Pages
Keywords road detection
Abstract
Address Gerona (Spain)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS;CIC Approved no
Call Number ADAS @ adas @ ALB2007 Serial 943
 

 
Author Jose Manuel Alvarez; Antonio Lopez; Ramon Baldrich
Title Illuminant Invariant Model-Based Road Segmentation Type Conference Article
Year 2008 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal
Volume Issue Pages 1155–1180
Keywords road detection
Abstract
Address Eindhoven (The Netherlands)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS;CIC Approved no
Call Number ADAS @ adas @ ALB2008 Serial 1045
 

 
Author Jose Manuel Alvarez; Antonio Lopez; Theo Gevers; Felipe Lumbreras
Title Combining Priors, Appearance and Context for Road Detection Type Journal Article
Year 2014 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS
Volume 15 Issue 3 Pages 1168-1178
Keywords Illuminant invariance; lane markings; road detection; road prior; road scene understanding; vanishing point; 3-D scene layout
Abstract Detecting the free road surface ahead of a moving vehicle is an important research topic in different areas of computer vision, such as autonomous driving or car collision warning.
Current vision-based road detection methods are usually based solely on low-level features. Furthermore, they generally assume structured roads, road homogeneity, and uniform lighting conditions, constraining their applicability in real-world scenarios. In this paper, road priors and contextual information are introduced for road detection. First, we propose an algorithm to estimate road priors online using geographical information, providing relevant initial information about the road location. Then, contextual cues, including horizon lines, vanishing points, lane markings, 3-D scene layout, and road geometry, are used in addition to low-level cues derived from the appearance of roads. Finally, a generative model is used to combine these cues and priors, leading to a road detection method that is, to a large degree, robust to varying imaging conditions, road types, and scenarios.
Address
Corporate Author Thesis
Publisher Place of Publication Editor IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1524-9050 ISBN Medium
Area Expedition Conference
Notes ADAS; 600.076;ISE Approved no
Call Number Admin @ si @ ALG2014 Serial 2501
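As a rough illustration of how independent per-pixel cues (a geographic road prior, an appearance likelihood, contextual cues such as the horizon or vanishing point) can be fused, here is a simple weighted product combination in Python. It is only a sketch, not the generative model described in the paper; the cue maps and weights are assumed to be supplied by the caller.

```python
import numpy as np

def fuse_cues(cue_maps, weights=None, eps=1e-6):
    """Combine per-pixel road probabilities from independent cues with a
    weighted product (naive-Bayes-style fusion). Each cue map is an (H, W)
    array of values in [0, 1]; the result is a fused road posterior."""
    cue_maps = np.stack(cue_maps, axis=0)                      # (n_cues, H, W)
    weights = np.ones(len(cue_maps)) if weights is None else np.asarray(weights)
    log_road = np.tensordot(weights, np.log(cue_maps + eps), axes=1)
    log_bg = np.tensordot(weights, np.log(1.0 - cue_maps + eps), axes=1)
    return 1.0 / (1.0 + np.exp(log_bg - log_road))             # (H, W) posterior
```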
 

 
Author Jose Manuel Alvarez; Felipe Lumbreras; Antonio Lopez; Theo Gevers
Title Understanding Road Scenes using Visual Cues Type Miscellaneous
Year 2012 Publication European Conference on Computer Vision Abbreviated Journal
Volume Issue Pages
Keywords
Abstract DEMO
Address Florence; Italy
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ ALL2012 Serial 2795
 

 
Author Jose Manuel Alvarez; Felipe Lumbreras; Theo Gevers; Antonio Lopez
Title Geographic Information for vision-based Road Detection Type Conference Article
Year 2010 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal
Volume Issue Pages 621–626
Keywords road detection
Abstract Road detection is a vital task for the development of autonomous vehicles. The knowledge of the free road surface ahead of the target vehicle can be used for autonomous driving and road departure warning, as well as to support advanced driver assistance systems like vehicle or pedestrian detection. Using vision to detect the road has several advantages over other sensors: richness of features, easy integration, low cost, and low power consumption. Common vision-based road detection approaches use low-level features (such as color or texture) as visual cues to group pixels exhibiting similar properties. However, it is difficult to devise a clustering algorithm that works perfectly, since roads are outdoor scenes imaged from a mobile platform. In this paper, we propose a novel high-level approach to vision-based road detection based on geographical information. The key idea of the algorithm is to exploit geographical information to provide a rough detection of the road. Then, this segmentation is refined at low level using color information to provide the final result. The results presented show the validity of our approach.
Address San Diego; CA; USA
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IV
Notes ADAS;ISE Approved no
Call Number ADAS @ adas @ ALG2010 Serial 1428
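A hedged sketch of the two-stage idea in the record above: a coarse road mask (for instance, rasterized from map data, not shown here) is refined by keeping pixels whose color is close to the colors observed under that prior. This is only illustrative; the paper's refinement step may be formulated differently.

```python
import numpy as np

def refine_with_color(image, prior_mask, thresh=3.0):
    """Refine a coarse boolean road mask by keeping pixels whose RGB color is
    close, in Mahalanobis distance, to the color distribution of the pixels
    covered by the prior. image: (H, W, 3), prior_mask: (H, W) boolean."""
    pix = image.reshape(-1, 3).astype(np.float64)
    road = pix[prior_mask.reshape(-1)]
    mean = road.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(road, rowvar=False) + 1e-6 * np.eye(3))
    diff = pix - mean
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # squared Mahalanobis
    return (np.sqrt(d2) < thresh).reshape(prior_mask.shape)
```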
 

 
Author Jose Manuel Alvarez; Ferran Diego; Joan Serrat; Antonio Lopez
Title Automatic Ground-truthing using video registration for on-board detection algorithms Type Conference Article
Year 2009 Publication 16th IEEE International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 4389 - 4392
Keywords
Abstract Ground-truth data is essential for the objective evaluation of object detection methods in computer vision. Many works claim their method is robust, but they support this with experiments that are not quantitatively assessed against any ground truth. This is one of the main obstacles to properly evaluating and comparing such methods. One of the main reasons is that creating an extensive and representative ground truth is very time consuming, especially in the case of video sequences, where thousands of frames have to be labelled. Could such a ground truth be generated, at least in part, automatically? Though it may seem a contradictory question, we show that this is possible for video sequences recorded from a moving camera. The key idea is transferring existing frame segmentations from a reference sequence into another video sequence recorded at a different time on the same track, possibly under different ambient lighting. We have carried out experiments on several video sequence pairs and quantitatively assessed the precision of the transformed ground truth, which proves that our approach is not only feasible but also quite accurate.
Address Cairo, Egypt
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1522-4880 ISBN 978-1-4244-5653-6 Medium
Area Expedition Conference ICIP
Notes ADAS Approved no
Call Number ADAS @ adas @ ADS2009 Serial 1201
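The core operation described in the record above, transferring a manual segmentation from a reference frame into the corresponding frame of a second sequence, can be sketched as a mask warp once the registration is known. The snippet below assumes a per-frame 3x3 homography H is already available (the paper estimates the alignment with a video registration method) and uses OpenCV for the warp.

```python
import cv2
import numpy as np

def transfer_labels(ref_mask, H, target_shape):
    """Warp a labelled mask from a reference frame into the geometry of a
    corresponding frame from another sequence, given a homography H.
    Nearest-neighbour interpolation keeps the label values intact."""
    h, w = target_shape[:2]
    return cv2.warpPerspective(ref_mask.astype(np.uint8), H, (w, h),
                               flags=cv2.INTER_NEAREST)
```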
 

 
Author Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title Learning Photometric Invariance from Diversified Color Model Ensembles Type Conference Article
Year 2009 Publication 22nd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 565–572
Keywords road detection
Abstract Color is a powerful visual cue for many computer vision applications such as image segmentation and object recognition. However, most existing color models depend on the imaging conditions, which negatively affects the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, those reflection models might be too restricted to model real-world scenes, in which different reflectance mechanisms may hold simultaneously. Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant set of color models, composed of both color variants and invariants, is taken as input. Then, the proposed method combines and weights these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, the fusion method uses a multi-view approach to minimize the estimation error. In this way, the method is robust to data uncertainty and produces properly diversified color invariant ensembles. Experiments are conducted on three different image datasets to validate the method. From the theoretical and experimental results, it is concluded that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning. Further, the method outperforms state-of-the-art detection techniques in the field of object, skin and road recognition.
Address Miami (USA)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN 978-1-4244-3992-8 Medium
Area Expedition Conference CVPR
Notes ADAS;ISE Approved no
Call Number ADAS @ adas @ AGL2009 Serial 1169
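To illustrate the ensemble idea in the record above, the sketch below fuses several color planes into a single feature plane with a normalized weighted sum. In the paper the weights are learned to balance invariance and discriminative power; here they are simply passed in, so this is an assumption-laden toy version rather than the published method.

```python
import numpy as np

def fused_invariant(planes, weights):
    """Combine several same-sized 2-D color planes into one feature plane
    with a normalized weighted sum (weights assumed given, not learned)."""
    planes = np.stack(planes, axis=0).astype(np.float64)   # (n_planes, H, W)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()
    return np.tensordot(w, planes, axes=1)                 # (H, W) fused plane
```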
 

 
Author Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title 3D Scene Priors for Road Detection Type Conference Article
Year 2010 Publication 23rd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 57–64
Keywords road detection
Abstract Vision-based road detection is important in different areas of computer vision such as autonomous driving, car collision warning and pedestrian crossing detection. However, current vision-based road detection methods are usually based on low-level features and they assume structured roads, road homogeneity, and uniform lighting conditions. Therefore, in this paper, contextual 3D information is used in addition to low-level cues. Low-level photometric invariant cues are derived from the appearance of roads. Contextual cues used include horizon lines, vanishing points, 3D scene layout and 3D road stages. Moreover, temporal road cues are included. All these cues are sensitive to different imaging conditions and hence are considered weak cues. Therefore, they are combined to improve the overall performance of the algorithm. To this end, the low-level, contextual and temporal cues are combined in a Bayesian framework to classify road sequences. Large-scale experiments on road sequences show that the road detection method is robust to varying imaging conditions, road types, and scenarios (tunnels, urban and highway). Further, using the combined cues outperforms all individual cues. Finally, the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods.
Address San Francisco; CA; USA; June 2010
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN 978-1-4244-6984-0 Medium
Area Expedition Conference CVPR
Notes ADAS;ISE Approved no
Call Number ADAS @ adas @ AGL2010a Serial 1302
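One of the weak contextual cues mentioned in the record above can be illustrated with a horizon-line prior: pixels further below an estimated horizon row are given a higher prior probability of being road. This is an illustrative cue only, not the paper's exact formulation; the horizon row and softness are assumed inputs.

```python
import numpy as np

def horizon_prior(height, width, horizon_row, softness=20.0):
    """Weak contextual cue: a per-pixel prior in [0, 1] that increases
    smoothly below the estimated horizon row."""
    rows = np.arange(height, dtype=np.float64).reshape(-1, 1)
    prior = 1.0 / (1.0 + np.exp(-(rows - horizon_row) / softness))
    return np.repeat(prior, width, axis=1)   # (height, width) prior map
```

Such weak cues would then be combined with appearance and temporal cues, for instance in a Bayesian framework as the abstract describes.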
 

 
Author Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title Learning photometric invariance for object detection Type Journal Article
Year 2010 Publication International Journal of Computer Vision Abbreviated Journal IJCV
Volume 90 Issue 1 Pages 45-61
Keywords road detection
Abstract Impact factor: 3.508 (the latest available, from the 2009 JCR Science Edition); position 4/103 in the category Computer Science, Artificial Intelligence.
Color is a powerful visual cue in many computer vision applications such as image segmentation and object recognition. However, most existing color models depend on the imaging conditions, which negatively affects the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, this approach may be too restricted to model real-world scenes in which different reflectance mechanisms can hold simultaneously.
Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant set of color models, composed of both color variants and invariants, is computed. Then, the proposed method combines these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, our fusion method uses a multi-view approach to minimize the estimation error. In this way, the proposed method is robust to data uncertainty and produces properly diversified color invariant ensembles. Further, the proposed method is extended to deal with temporal data by predicting the evolution of observations over time.
Experiments are conducted on three different image datasets to validate the proposed method. Both the theoretical and experimental results show that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning, and outperforms state-of-the-art detection techniques in the fields of object, skin and road recognition. Considering sequential data, the proposed method (extended to deal with future observations) outperforms the other methods.
Address
Corporate Author Thesis
Publisher Springer US Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0920-5691 ISBN Medium
Area Expedition Conference
Notes ADAS;ISE Approved no
Call Number ADAS @ adas @ AGL2010c Serial 1451
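The temporal extension mentioned in the abstract above can be approximated, very roughly, by letting the color model evolve over time. The sketch below uses simple exponential forgetting of the road-color statistics; the paper's prediction scheme is more elaborate, so treat this as an assumption-heavy illustration only.

```python
import numpy as np

def update_model(prev_mean, prev_cov, new_pixels, alpha=0.1):
    """Recursively update the road color model (mean and covariance of RGB
    road pixels) so it tracks gradual changes in imaging conditions.
    new_pixels: (N, 3) array of road pixels from the current frame."""
    mean = new_pixels.mean(axis=0)
    cov = np.cov(new_pixels, rowvar=False)
    return ((1.0 - alpha) * prev_mean + alpha * mean,
            (1.0 - alpha) * prev_cov + alpha * cov)
```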
 

 
Author Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title Evaluating Color Representation for Online Road Detection Type Conference Article
Year 2013 Publication ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars Abbreviated Journal
Volume Issue Pages 594-595
Keywords
Abstract Detecting traversable road areas ahead of a moving vehicle is a key process for modern autonomous driving systems. Most existing algorithms use color to classify pixels as road or background. These algorithms reduce the effect of lighting variations and weather conditions by exploiting the discriminant/invariant properties of different color representations. However, to date, no comparison between these representations has been conducted. Therefore, in this paper, we perform an evaluation of existing color representations for road detection. More specifically, we focus on color planes derived from RGB data and their most common combinations. The evaluation is done on a set of 7000 road images acquired using an on-board camera in different real-world driving situations.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVVT:E2M
Notes ADAS;ISE Approved no
Call Number Admin @ si @ AGL2013 Serial 2794
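The color planes being compared can be derived directly from RGB; the sketch below computes a few common ones and a simple Fisher-style separability score for ranking them given a road mask. The actual evaluation protocol in the paper (over 7000 annotated images) is not reproduced here, so this is only an illustration.

```python
import numpy as np

def color_planes(rgb, eps=1e-6):
    """A few color planes commonly compared for road detection: raw channels,
    normalized chromaticities and two opponent axes."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = r + g + b + eps
    return {'R': r, 'G': g, 'B': b,
            'r': r / s, 'g': g / s,                 # illumination-normalized
            'O1': (r - g) / np.sqrt(2.0),           # opponent color axes
            'O2': (r + g - 2.0 * b) / np.sqrt(6.0)}

def fisher_separability(plane, road_mask):
    """How well one plane separates road from background pixels
    (between-class over within-class variance)."""
    road, bg = plane[road_mask], plane[~road_mask]
    return (road.mean() - bg.mean()) ** 2 / (road.var() + bg.var() + 1e-12)
```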
 

 
Author Jose Manuel Alvarez; Theo Gevers; Ferran Diego; Antonio Lopez
Title Road Geometry Classification by Adaptive Shape Models Type Journal Article
Year 2013 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS
Volume 14 Issue 1 Pages 459-468
Keywords road detection
Abstract Vision-based road detection is important for different applications in transportation, such as autonomous driving, vehicle collision warning, and pedestrian crossing detection. Common approaches to road detection are based on low-level road appearance (e.g., color or texture) and neglect the scene geometry and context. Hence, using only low-level features makes these algorithms highly dependent on structured roads, road homogeneity, and lighting conditions. Therefore, the aim of this paper is to classify road geometries for road detection through the analysis of scene composition and temporal coherence. Road geometry classification is performed by building corresponding models from training images containing prototypical road geometries. We propose adaptive shape models in which spatial pyramids are steered by the inherent spatial structure of road images. To reduce the influence of lighting variations, invariant features are used. Large-scale experiments show that the proposed road geometry classifier yields a high recognition rate of 73.57% ± 13.1, clearly outperforming other state-of-the-art methods. Including road shape information improves road detection results over existing appearance-based methods. Finally, it is shown that invariant features and temporal information provide robustness against disturbing imaging conditions.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1524-9050 ISBN Medium
Area Expedition Conference
Notes ADAS;ISE Approved no
Call Number Admin @ si @ AGD2013; ADAS @ adas @ Serial 2269
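A plain (non-adaptive) spatial pyramid over a per-pixel road/background label map gives a holistic layout descriptor of the kind used for road geometry classification. The paper adapts the pyramid cells to the road structure, which this sketch does not attempt; it is a baseline illustration only.

```python
import numpy as np

def spatial_pyramid_histogram(label_map, levels=2, n_labels=2):
    """Concatenate per-cell label histograms over a spatial pyramid.
    label_map: (H, W) integer array with labels in [0, n_labels)."""
    H, W = label_map.shape
    feats = []
    for level in range(levels + 1):
        cells = 2 ** level
        ys = np.linspace(0, H, cells + 1, dtype=int)
        xs = np.linspace(0, W, cells + 1, dtype=int)
        for i in range(cells):
            for j in range(cells):
                block = label_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                hist = np.bincount(block.ravel(), minlength=n_labels)[:n_labels]
                feats.append(hist.astype(np.float64) / max(hist.sum(), 1))
    return np.concatenate(feats)
```

The resulting fixed-length vector could then be fed to any standard classifier to assign the image to one of the prototypical road geometries.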
 

 
Author Jose Manuel Alvarez; Theo Gevers; Y. LeCun; Antonio Lopez
Title Road Scene Segmentation from a Single Image Type Conference Article
Year 2012 Publication 12th European Conference on Computer Vision Abbreviated Journal
Volume 7578 Issue VII Pages 376-389
Keywords road detection
Abstract Road scene segmentation is important in computer vision for different applications such as autonomous driving and pedestrian detection. Recovering the 3D structure of road scenes provides relevant contextual information to improve their understanding.
In this paper, we use a convolutional-neural-network-based algorithm to learn features from noisy labels to recover the 3D scene layout of a road image. The novelty of the algorithm lies in generating training labels by applying an algorithm trained on a general image dataset to classify on-board images. Further, we propose a novel texture descriptor based on a learned color plane fusion to obtain maximal uniformity in road areas. Finally, acquired (off-line) and current (on-line) information are combined to detect road areas in single images.
From quantitative and qualitative experiments, conducted on publicly available datasets, it is concluded that convolutional neural networks are suitable for learning the 3D scene layout from noisy labels and provide a relative improvement of 7% compared to the baseline. Furthermore, combining color planes provides a statistical description of road areas that exhibits maximal uniformity and yields a relative improvement of 8% compared to the baseline. Finally, the improvement is even larger when acquired and current information from a single image are combined.
Address Florence, Italy
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-33785-7 Medium
Area Expedition Conference ECCV
Notes ADAS;ISE Approved no
Call Number Admin @ si @ AGL2012; ADAS @ adas @ agl2012a Serial 2022
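For illustration only, here is a tiny fully convolutional network in PyTorch that outputs a per-pixel road score. The architecture, the noisy-label training procedure, and the color plane fusion descriptor from the paper are not reproduced, so everything below is an assumption-level sketch.

```python
import torch
import torch.nn as nn

class TinyRoadNet(nn.Module):
    """Minimal fully convolutional network producing a per-pixel road
    probability map of the same spatial size as the input image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),   # per-pixel road logit
        )

    def forward(self, x):
        return torch.sigmoid(self.features(x))

# Example usage on a dummy batch: (1, 3, 240, 320) -> (1, 1, 240, 320) scores.
scores = TinyRoadNet()(torch.rand(1, 3, 240, 320))
```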