Author: Jose Manuel Alvarez; Antonio Lopez
Title: Model-based road detection using shadowless features and on-line learning
Type: Miscellaneous
Year: 2009
Publication: BMVA one-day technical meeting on vision for automotive applications
Keywords: road detection
Address: London, UK
Notes: ADAS; Approved: no
Call Number: ADAS @ adas @ AlA2009; Serial: 1272
 

 
Author: Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title: 3D Scene Priors for Road Detection
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Pages: 57-64
Keywords: road detection
Abstract: Vision-based road detection is important in different areas of computer vision such as autonomous driving, car collision warning and pedestrian crossing detection. However, current vision-based road detection methods are usually based on low-level features, and they assume structured roads, road homogeneity, and uniform lighting conditions. Therefore, in this paper, contextual 3D information is used in addition to low-level cues. Low-level photometric invariant cues are derived from the appearance of roads. The contextual cues used include horizon lines, vanishing points, 3D scene layout and 3D road stages. Moreover, temporal road cues are included. All these cues are sensitive to different imaging conditions and are hence considered weak cues; therefore, the low-level, contextual and temporal cues are combined in a Bayesian framework to classify road sequences and improve the overall performance of the algorithm. Large-scale experiments on road sequences show that the road detection method is robust to varying imaging conditions, road types, and scenarios (tunnels, urban and highway). Further, using the combined cues outperforms each individual cue. Finally, the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods.
Address: San Francisco, CA, USA; June 2010
ISSN: 1063-6919; ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS; ISE; Approved: no
Call Number: ADAS @ adas @ AGL2010a; Serial: 1302
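The abstract above combines appearance, scene-layout and temporal cues, each individually weak, in a Bayesian framework. The sketch below is only a generic illustration of that kind of probabilistic cue fusion, not the authors' implementation: per-pixel road probabilities from hypothetical cue maps are fused with a naive-Bayes log-odds sum under an assumed prior.

```python
import numpy as np

def fuse_weak_cues(cue_probs, prior_road=0.4):
    """Naive-Bayes fusion of per-pixel road probabilities coming from weak cues.

    cue_probs: list of HxW arrays, each an estimate of P(road | cue_i) per pixel,
    all computed with the same prior. Returns the fused posterior P(road | all cues)
    under a cue-independence assumption.
    """
    eps = 1e-6
    prior_logit = np.log(prior_road / (1.0 - prior_road))
    log_odds = prior_logit
    for p in cue_probs:
        p = np.clip(p, eps, 1.0 - eps)
        # each cue contributes its log-likelihood ratio: logit(P(road|cue)) - logit(prior)
        log_odds = log_odds + np.log(p / (1.0 - p)) - prior_logit
    return 1.0 / (1.0 + np.exp(-log_odds))

# Hypothetical weak cues on a tiny 2x2 image: appearance, scene-layout, temporal
appearance = np.array([[0.9, 0.2], [0.8, 0.6]])
layout = np.array([[0.7, 0.3], [0.9, 0.5]])
temporal = np.array([[0.8, 0.4], [0.7, 0.6]])

posterior = fuse_weak_cues([appearance, layout, temporal])
print(posterior.round(2))
print(posterior > 0.5)   # fused road mask
```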
 

 
Author: Jose Manuel Alvarez; Felipe Lumbreras; Theo Gevers; Antonio Lopez
Title: Geographic Information for vision-based Road Detection
Type: Conference Article
Year: 2010
Publication: IEEE Intelligent Vehicles Symposium (IV)
Pages: 621-626
Keywords: road detection
Abstract: Road detection is a vital task for the development of autonomous vehicles. The knowledge of the free road surface ahead of the target vehicle can be used for autonomous driving and road departure warning, as well as to support advanced driver assistance systems like vehicle or pedestrian detection. Using vision to detect the road has several advantages over other sensors: richness of features, easy integration, low cost and low power consumption. Common vision-based road detection approaches use low-level features (such as color or texture) as visual cues to group pixels exhibiting similar properties. However, it is difficult to foresee a perfect clustering algorithm, since roads appear in outdoor scenarios imaged from a mobile platform. In this paper, we propose a novel high-level approach to vision-based road detection based on geographical information. The key idea of the algorithm is to exploit geographical information to provide a rough detection of the road. This segmentation is then refined at low level using color information to produce the final result. The presented results show the validity of our approach.
Address: San Diego, CA, USA
Conference: IV
Notes: ADAS; ISE; Approved: no
Call Number: ADAS @ adas @ ALG2010; Serial: 1428
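The record above refines a rough, geography-derived road prior with low-level color information. As a minimal sketch of that two-stage idea, assuming a binary prior mask is already available (e.g., projected from map data), the code below builds a color histogram model from the prior pixels and relabels the image with a likelihood-ratio test; it is not the paper's algorithm.

```python
import numpy as np

def refine_with_color(image, prior_mask, bins=16, threshold=1.0):
    """Refine a coarse road prior using a color histogram built from prior pixels.

    image: HxWx3 uint8 RGB image.
    prior_mask: HxW boolean array, True where the geographic prior says 'road'.
    Returns an HxW boolean mask after color-based relabeling.
    """
    # Quantize colors into a joint histogram index per pixel
    q = (image.astype(np.int32) * bins) // 256             # HxWx3 values in [0, bins)
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]

    # Histograms of road colors (from the prior) and of the whole image (background proxy)
    road_hist = np.bincount(idx[prior_mask].ravel(), minlength=bins ** 3) + 1.0
    all_hist = np.bincount(idx.ravel(), minlength=bins ** 3) + 1.0
    road_hist /= road_hist.sum()
    all_hist /= all_hist.sum()

    # Likelihood-ratio score per pixel; keep pixels whose color looks 'road-like'
    score = road_hist[idx] / all_hist[idx]
    return score > threshold

# Toy usage with random data standing in for a frame and a map-projected prior
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
prior = np.zeros((120, 160), dtype=bool)
prior[60:, 40:120] = True                                   # assumed rough road region
mask = refine_with_color(frame, prior)
print(mask.mean())
```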
 

 
Author: Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title: Learning photometric invariance for object detection
Type: Journal Article
Year: 2010
Publication: International Journal of Computer Vision (IJCV)
Volume: 90; Issue: 1; Pages: 45-61
Keywords: road detection
Abstract: Color is a powerful visual cue in many computer vision applications such as image segmentation and object recognition. However, most of the existing color models depend on the imaging conditions, which negatively affects the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, this approach may be too restricted to model real-world scenes, in which different reflectance mechanisms can hold simultaneously.
Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant color model set is computed, composed of both color variants and invariants. Then, the proposed method combines these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, our fusion method uses a multi-view approach to minimize the estimation error. In this way, the proposed method is robust to data uncertainty and produces properly diversified color invariant ensembles. Further, the proposed method is extended to deal with temporal data by predicting the evolution of observations over time.
Experiments are conducted on three different image datasets to validate the proposed method. Both the theoretical and experimental results show that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning, and outperforms state-of-the-art detection techniques in the field of object, skin and road recognition. Considering sequential data, the proposed method (extended to deal with future observations) outperforms the other methods.
Journal metrics: Impact factor 3.508 (the last available from JCR2009SCI); position 4/103 in the category Computer Science, Artificial Intelligence; Quartile
Publisher: Springer US
ISSN: 0920-5691
Notes: ADAS; ISE; Approved: no
Call Number: ADAS @ adas @ AGL2010c; Serial: 1451
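The paper above combines photometric variant and invariant color models into a diversified ensemble. The sketch below only computes a few standard color representations with different invariance properties and blends them with fixed, hand-picked weights; the learned multi-view fusion of the paper is not reproduced, and the reference values and weights are placeholders.

```python
import numpy as np

def color_models(rgb):
    """Compute a few standard color representations from an HxWx3 float RGB image."""
    eps = 1e-6
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (R + G + B) / 3.0                       # variant to shading and intensity
    s = R + G + B + eps
    r, g = R / s, G / s                                 # normalized rg: invariant to intensity changes
    o1 = (R - G) / np.sqrt(2.0)                         # opponent channels
    o2 = (R + G - 2.0 * B) / np.sqrt(6.0)
    hue = np.arctan2(np.sqrt(3.0) * (G - B), 2.0 * R - G - B)   # hue-like angle
    return {"intensity": intensity, "r": r, "g": g, "o1": o1, "o2": o2, "hue": hue}

def ensemble_score(models, weights):
    """Weighted combination of per-channel similarity to a reference value.

    The weights and reference values are toy placeholders; the paper learns the
    combination to balance invariance and discriminative power.
    """
    ref = {k: float(np.median(v)) for k, v in models.items()}
    score = np.zeros_like(models["intensity"])
    for k, w in weights.items():
        score += w * np.exp(-np.abs(models[k] - ref[k]))
    return score / sum(weights.values())

rgb = np.random.default_rng(1).random((60, 80, 3))
m = color_models(rgb)
s = ensemble_score(m, {"r": 0.3, "g": 0.3, "hue": 0.2, "o1": 0.1, "o2": 0.1})
print(s.shape, float(s.mean()))
```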
 

 
Author: Jose Manuel Alvarez; Antonio Lopez
Title: Road Detection Based on Illuminant Invariance
Type: Journal Article
Year: 2011
Publication: IEEE Transactions on Intelligent Transportation Systems (TITS)
Volume: 12; Issue: 1; Pages: 184-193
Keywords: road detection
Abstract: By using an onboard camera, it is possible to detect the free road surface ahead of the ego-vehicle. Road detection is of high relevance for autonomous driving, road departure warning, and supporting driver-assistance systems such as vehicle and pedestrian detection. The key for vision-based road detection is the ability to classify image pixels as belonging or not to the road surface. Identifying road pixels is a major challenge due to the intraclass variability caused by lighting conditions. A particularly difficult scenario appears when the road surface has both shadowed and nonshadowed areas. Accordingly, we propose a novel approach to vision-based road detection that is robust to shadows. The novelty of our approach relies on using a shadow-invariant feature space combined with a model-based classifier. The model is built online to improve the adaptability of the algorithm to the current lighting and the presence of other vehicles in the scene. The proposed algorithm works in still images and does not depend on either road shape or temporal restrictions. Quantitative and qualitative experiments on real-world road sequences with heavy traffic and shadows show that the method is robust to shadows and lighting variations. Moreover, the proposed method provides the highest performance when compared with hue-saturation-intensity (HSI)-based algorithms.
Notes: ADAS; Approved: no
Call Number: ADAS @ adas @ AlL2011; Serial: 1456
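The method above works in a shadow-invariant feature space with a model built online from the current image. The sketch below illustrates the generic log-chromaticity route to an illuminant-invariant grayscale image plus a simple mean/variance road model fitted on a seed region; the invariant direction theta and the seed location are assumptions, and this is not the paper's exact pipeline.

```python
import numpy as np

def shadow_invariant_gray(rgb, theta):
    """Project log-chromaticities onto the camera-specific invariant direction theta (radians).

    Follows the generic log-chromaticity idea behind illuminant-invariant imaging;
    theta must be calibrated per camera and is assumed known here.
    """
    eps = 1e-6
    R, G, B = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    x = np.log(R / G)
    y = np.log(B / G)
    return x * np.cos(theta) + y * np.sin(theta)

def classify_road(invariant_img, seed_mask, k=2.5):
    """Model-based classification: fit mean/std of the invariant value on a seed road
    region (e.g., a patch just ahead of the vehicle) and keep pixels within k sigma."""
    mu = invariant_img[seed_mask].mean()
    sigma = invariant_img[seed_mask].std() + 1e-6
    return np.abs(invariant_img - mu) < k * sigma

rng = np.random.default_rng(2)
frame = rng.random((120, 160, 3))
seed = np.zeros((120, 160), dtype=bool)
seed[100:, 60:100] = True                         # assumed road seed at the bottom-center
gray = shadow_invariant_gray(frame, theta=0.7)    # theta value is a placeholder
mask = classify_road(gray, seed)
print(mask.mean())
```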
 

 
Author: Fadi Dornaika; Jose Manuel Alvarez; Angel Sappa; Antonio Lopez
Title: A New Framework for Stereo Sensor Pose through Road Segmentation and Registration
Type: Journal Article
Year: 2011
Publication: IEEE Transactions on Intelligent Transportation Systems (TITS)
Volume: 12; Issue: 4; Pages: 954-966
Keywords: road detection
Abstract: This paper proposes a new framework for real-time estimation of the onboard stereo head's position and orientation relative to the road surface, which is required for any advanced driver-assistance application. This framework can be used with all road types: highways, urban, etc. Unlike existing works that rely on feature extraction in either the image domain or 3-D space, we propose a framework that directly estimates the unknown parameters from the stream of stereo pairs' brightness. The proposed approach consists of two stages that are invoked for every stereo frame. The first stage segments the road region in one monocular view. The second stage estimates the camera pose using a featureless registration between the segmented monocular road region and the other view in the stereo pair. This paper has two main contributions. The first contribution combines a road segmentation algorithm with a registration technique to estimate the online stereo camera pose. The second contribution solves the registration using a featureless method, which is carried out using two different optimization techniques: 1) the differential evolution algorithm and 2) the Levenberg-Marquardt (LM) algorithm. We provide experiments and evaluations of performance. The results presented show the validity of our proposed framework.
ISSN: 1524-9050
Notes: ADAS; Approved: no
Call Number: Admin @ si @ DAS2011; ADAS @ adas @ das2011a; Serial: 1833
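The framework above estimates stereo camera pose by photometrically registering the segmented road region of one view against the other view. As a rough sketch under strong assumptions (rectified pair, road plane parameterized by pitch, roll and camera height, sign conventions chosen for illustration), the code below minimizes a plane-induced-homography photometric residual with Levenberg-Marquardt via scipy; it is not the authors' optimization.

```python
import numpy as np
from scipy.optimize import least_squares

def plane_homography(K, baseline, pitch, roll, height):
    """Homography induced by the road plane between rectified left/right views.

    The plane normal is parameterized by pitch and roll, at distance 'height' from the
    camera; the sign conventions for the translation and normal are illustrative choices.
    """
    n = np.array([np.sin(roll), np.cos(pitch) * np.cos(roll), np.sin(pitch) * np.cos(roll)])
    t = np.array([baseline, 0.0, 0.0])
    return K @ (np.eye(3) + np.outer(t, n) / height) @ np.linalg.inv(K)

def bilinear(img, x, y):
    h, w = img.shape
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def residuals(params, left, right, road_px, K, baseline):
    pitch, roll, height = params
    H = plane_homography(K, baseline, pitch, roll, height)
    u, v = road_px
    p = H @ np.stack([u, v, np.ones_like(u)])
    z = np.where(np.abs(p[2]) < 1e-6, 1e-6, p[2])     # guard against degenerate warps
    return bilinear(right, p[0] / z, p[1] / z) - left[v.astype(int), u.astype(int)]

# Toy usage: random images stand in for a rectified stereo pair; road_px are the pixel
# coordinates returned by the (separate) monocular road segmentation stage.
rng = np.random.default_rng(3)
left, right = rng.random((240, 320)), rng.random((240, 320))
v, u = np.mgrid[180:230, 80:240]
road_px = (u.ravel().astype(float), v.ravel().astype(float))
K = np.array([[300.0, 0, 160], [0, 300.0, 120], [0, 0, 1]])
fit = least_squares(residuals, x0=[0.02, 0.0, 1.3],
                    args=(left, right, road_px, K, 0.12), method="lm")
print(fit.x)   # estimated pitch, roll, camera height
```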
 

 
Author: Jose Manuel Alvarez; Theo Gevers; Ferran Diego; Antonio Lopez
Title: Road Geometry Classification by Adaptative Shape Models
Type: Journal Article
Year: 2013
Publication: IEEE Transactions on Intelligent Transportation Systems (TITS)
Volume: 14; Issue: 1; Pages: 459-468
Keywords: road detection
Abstract: Vision-based road detection is important for different applications in transportation, such as autonomous driving, vehicle collision warning, and pedestrian crossing detection. Common approaches to road detection are based on low-level road appearance (e.g., color or texture) and neglect the scene geometry and context. Hence, using only low-level features makes these algorithms highly dependent on structured roads, road homogeneity, and lighting conditions. Therefore, the aim of this paper is to classify road geometries for road detection through the analysis of scene composition and temporal coherence. Road geometry classification is proposed by building corresponding models from training images containing prototypical road geometries. We propose adaptive shape models where spatial pyramids are steered by the inherent spatial structure of road images. To reduce the influence of lighting variations, invariant features are used. Large-scale experiments show that the proposed road geometry classifier yields a high recognition rate of 73.57% ± 13.1, clearly outperforming other state-of-the-art methods. Including road shape information improves road detection results over existing appearance-based methods. Finally, it is shown that invariant features and temporal information provide robustness against disturbing imaging conditions.
ISSN: 1524-9050
Notes: ADAS; ISE; Approved: no
Call Number: Admin @ si @ AGD2013; ADAS @ adas @; Serial: 2269
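The paper above classifies road geometry with adaptive shape models built from spatial pyramids of invariant features. The sketch below is a strongly simplified cousin: a fixed-grid spatial pyramid descriptor and nearest-prototype classification; the adaptive, structure-steered pyramids of the paper are not implemented, and the prototypes here are random placeholders.

```python
import numpy as np

def spatial_pyramid(feature_map, levels=(1, 2, 4)):
    """Pool a per-pixel feature map into a fixed-length descriptor over a spatial pyramid.

    feature_map: HxW array (e.g., an illumination-invariant road likelihood).
    Each level splits the image into level x level cells and stores the cell means.
    """
    h, w = feature_map.shape
    desc = []
    for n in levels:
        ys = np.linspace(0, h, n + 1, dtype=int)
        xs = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                desc.append(feature_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean())
    return np.array(desc)

def classify_geometry(feature_map, prototypes):
    """Nearest-prototype road-geometry classification (straight, left curve, ...).

    prototypes: dict mapping a geometry label to the mean pyramid descriptor of its
    training images. This fixed grid is a simplification of the paper's adaptive pyramids.
    """
    d = spatial_pyramid(feature_map)
    return min(prototypes, key=lambda k: np.linalg.norm(d - prototypes[k]))

rng = np.random.default_rng(4)
protos = {"straight": rng.random(21), "left-curve": rng.random(21), "right-curve": rng.random(21)}
print(classify_geometry(rng.random((120, 160)), protos))
```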
 

 
Author: Jose Manuel Alvarez; Theo Gevers; Y. LeCun; Antonio Lopez
Title: Road Scene Segmentation from a Single Image
Type: Conference Article
Year: 2012
Publication: 12th European Conference on Computer Vision (ECCV)
Volume: 7578; Issue: VII; Pages: 376-389
Keywords: road detection
Abstract: Road scene segmentation is important in computer vision for different applications such as autonomous driving and pedestrian detection. Recovering the 3D structure of road scenes provides relevant contextual information to improve their understanding.
In this paper, we use a convolutional neural network based algorithm to learn features from noisy labels to recover the 3D scene layout of a road image. The novelty of the algorithm relies on generating training labels by applying an algorithm trained on a general image dataset to classify on-board images. Further, we propose a novel texture descriptor based on a learned color plane fusion to obtain maximal uniformity in road areas. Finally, acquired (off-line) and current (on-line) information are combined to detect road areas in single images.
From quantitative and qualitative experiments conducted on publicly available datasets, it is concluded that convolutional neural networks are suitable for learning 3D scene layout from noisy labels, providing a relative improvement of 7% compared to the baseline. Furthermore, combining color planes provides a statistical description of road areas that exhibits maximal uniformity and yields a relative improvement of 8% compared to the baseline. Finally, the improvement is even larger when acquired and current information from a single image are combined.
Address: Florence, Italy
Publisher: Springer Berlin Heidelberg
Series (abbreviated): LNCS
ISSN: 0302-9743; ISBN: 978-3-642-33785-7
Conference: ECCV
Notes: ADAS; ISE; Approved: no
Call Number: Admin @ si @ AGL2012; ADAS @ adas @ agl2012a; Serial: 2022
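The abstract above trains a convolutional network on machine-generated (noisy) labels to recover scene layout. The PyTorch snippet below only shows the shape of such a setup, with a deliberately tiny fully-convolutional net and random tensors standing in for images and noisy labels; the architecture, classes and training schedule are assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class TinySceneNet(nn.Module):
    """A minimal fully-convolutional network producing per-pixel class logits."""
    def __init__(self, n_classes=3):          # e.g., road / vertical structures / sky
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinySceneNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Noisy labels: in the paper these come from an off-the-shelf classifier applied to
# on-board images; here random tensors stand in for both images and labels.
images = torch.rand(4, 3, 64, 96)
noisy_labels = torch.randint(0, 3, (4, 64, 96))

for _ in range(5):                              # a few illustrative updates
    opt.zero_grad()
    logits = model(images)                      # (N, classes, H, W)
    loss = loss_fn(logits, noisy_labels)
    loss.backward()
    opt.step()
print(float(loss))
```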
 

 
Author: Jose Manuel Alvarez; Antonio Lopez
Title: Photometric Invariance by Machine Learning
Type: Book Chapter
Year: 2012
Publication: Color in Computer Vision: Fundamentals and Applications
Volume: 7; Pages: 113-134
Keywords: road detection
Publisher: iConcept Press Ltd
Editors: Theo Gevers, Arjan Gijsenij, Joost van de Weijer, Jan-Mark Geusebroek
ISBN: 978-0-470-89084-4
Notes: ADAS; Approved: no
Call Number: Admin @ si @ AlL2012; Serial: 2186
 

 
Author: Jose Manuel Alvarez; Y. LeCun; Theo Gevers; Antonio Lopez
Title: Semantic Road Segmentation via Multi-Scale Ensembles of Learned Features
Type: Conference Article
Year: 2012
Publication: 12th European Conference on Computer Vision - Workshops and Demonstrations (ECCVW)
Volume: 7584; Pages: 586-595
Keywords: road detection
Abstract: Semantic segmentation refers to the process of assigning an object label (e.g., building, road, sidewalk, car, pedestrian) to every pixel in an image. Common approaches formulate the task as a random field labeling problem, modeling the interactions between labels by combining local and contextual features such as color, depth, edges, SIFT or HoG. These models are trained to maximize the likelihood of the correct classification given a training set. However, these approaches rely on hand-designed features (e.g., texture, SIFT or HoG) and require a higher computational time in the inference process.
Therefore, in this paper, we focus on estimating the unary potentials of a conditional random field via ensembles of learned features. We propose an algorithm based on convolutional neural networks to learn local features from training data at different scales and resolutions. Then, diversification between these features is exploited using a weighted linear combination. Experiments on a publicly available database show the effectiveness of the proposed method for semantic road scene segmentation in still images. The algorithm outperforms appearance-based methods, and its performance is similar to state-of-the-art methods using other sources of information such as depth, motion or stereo.
Publisher: Springer Berlin Heidelberg
Series (abbreviated): LNCS
ISSN: 0302-9743; ISBN: 978-3-642-33867-0
Conference: ECCVW
Notes: ADAS; ISE; Approved: no
Call Number: Admin @ si @ ALG2012; ADAS @ adas; Serial: 2187
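The method above combines multi-scale learned features through a weighted linear combination to form CRF unary potentials. The sketch below assumes per-scale class-probability maps are already available and fits the scale weights by least squares against validation labels, which is a simple stand-in for the paper's diversification-based weighting.

```python
import numpy as np

def combine_scales(prob_maps, weights):
    """Weighted linear combination of per-pixel class probabilities from several scales.

    prob_maps: list of (H, W, n_classes) arrays, one per scale (already resized to a
    common resolution). The result can serve as a CRF unary potential.
    """
    combined = sum(w * p for w, p in zip(weights, prob_maps))
    return combined / max(float(sum(weights)), 1e-6)

def fit_weights(prob_maps, labels, n_classes):
    """Fit scale weights by least squares against one-hot validation labels
    (an assumed, simple substitute for the paper's weighting scheme)."""
    one_hot = np.eye(n_classes)[labels].ravel()
    A = np.stack([p.ravel() for p in prob_maps], axis=1)
    w, *_ = np.linalg.lstsq(A, one_hot, rcond=None)
    return np.clip(w, 0.0, None)

rng = np.random.default_rng(5)
n_classes, shape = 4, (48, 64)
maps = [rng.dirichlet(np.ones(n_classes), size=shape) for _ in range(3)]   # 3 scales
labels = rng.integers(0, n_classes, size=shape)
w = fit_weights(maps, labels, n_classes)
unary = combine_scales(maps, w)
pred = unary.argmax(axis=-1)
print(w.round(3), pred.shape)
```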
 

 
Author: Xinhang Song; Shuqiang Jiang; Luis Herranz
Title: Combining Models from Multiple Sources for RGB-D Scene Recognition
Type: Conference Article
Year: 2017
Publication: 26th International Joint Conference on Artificial Intelligence (IJCAI)
Pages: 4523-4529
Keywords: Robotics and Vision; Vision and Perception
Abstract: Depth can complement RGB with useful cues about object volumes and scene layout. However, RGB-D image datasets are still too small for directly training deep convolutional neural networks (CNNs), in contrast to the massive monomodal RGB datasets. Previous works in RGB-D recognition typically combine two separate networks for RGB and depth data, pretrained with a large RGB dataset and then fine-tuned to the respective target RGB and depth datasets. These approaches have several limitations: 1) they only use low-level filters learned from RGB data, and thus cannot properly exploit depth-specific patterns, and 2) RGB and depth features are only combined at high levels but rarely at lower levels. In this paper, we propose a framework that leverages knowledge acquired from large RGB datasets together with depth-specific cues learned from the limited depth data, obtaining more effective multi-source and multi-modal representations. We propose a multi-modal combination method that selects discriminative combinations of layers from the different source models and target modalities, capturing both high-level properties of the task and intrinsic low-level properties of both modalities.
Address: Melbourne, Australia; August 2017
Conference: IJCAI
Notes: LAMP; 600.120; Approved: no
Call Number: Admin @ si @ SJH2017b; Serial: 2966
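The framework above selects discriminative combinations of layers across RGB and depth models. Purely as an illustration, the snippet below assumes pooled activations from a few candidate layers have already been extracted and picks the layer pair whose concatenation gives the best cross-validated accuracy with a linear classifier; the selection criterion and classifier are stand-ins, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_samples, n_classes = 200, 5
labels = rng.integers(0, n_classes, size=n_samples)

# Hypothetical pooled activations per candidate layer (features extracted offline)
rgb_layers = {"conv4": rng.random((n_samples, 128)), "conv5": rng.random((n_samples, 256))}
depth_layers = {"conv4": rng.random((n_samples, 128)), "conv5": rng.random((n_samples, 256))}

best = None
for rk, rf in rgb_layers.items():
    for dk, df in depth_layers.items():
        feats = np.hstack([rf, df])                   # multi-modal concatenation
        score = cross_val_score(LogisticRegression(max_iter=1000), feats, labels, cv=3).mean()
        if best is None or score > best[0]:
            best = (score, rk, dk)
print("selected layers:", best[1], "+", best[2], "cv accuracy:", round(best[0], 3))
```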
 

 
Author: Lluis Gomez; Marçal Rusiñol; Dimosthenis Karatzas
Title: Cutting Sayre's Knot: Reading Scene Text without Segmentation. Application to Utility Meters
Type: Conference Article
Year: 2018
Publication: 13th IAPR International Workshop on Document Analysis Systems (DAS)
Pages: 97-102
Keywords: Robust Reading; End-to-end Systems; CNN; Utility Meters
Abstract: In this paper we present a segmentation-free system for reading text in natural scenes. A CNN architecture is trained in an end-to-end manner and is able to directly output readings without any explicit text localization step. In order to validate our proposal, we focus on the specific case of reading utility meters. We present our results on a large dataset of images acquired by different users and devices, so text appears in any location, with different sizes, fonts and lengths, and the images present several distortions such as dirt, illumination highlights or blur.
Address: Vienna, Austria; April 2018
Conference: DAS
Notes: DAG; 600.084; 600.121; 600.129; Approved: no
Call Number: Admin @ si @ GRK2018; Serial: 3102
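The system above reads utility meters end to end, without explicit text localization. The PyTorch sketch below shows one plausible shape for such a segmentation-free reader: a small CNN mapping the whole crop to a fixed number of digit positions, each classified over ten digits plus a blank; the architecture and output encoding are assumptions, not the published model.

```python
import torch
import torch.nn as nn

class MeterReader(nn.Module):
    """Illustrative segmentation-free reader: image in, fixed-length symbol sequence out."""
    def __init__(self, max_digits=8, n_symbols=11):        # 10 digits + blank
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 8)),
        )
        self.head = nn.Linear(32 * 4 * 8, max_digits * n_symbols)
        self.max_digits, self.n_symbols = max_digits, n_symbols

    def forward(self, x):
        f = self.backbone(x).flatten(1)
        return self.head(f).view(-1, self.max_digits, self.n_symbols)

model = MeterReader()
logits = model(torch.rand(2, 3, 64, 128))      # two toy meter crops
reading = logits.argmax(dim=-1)                # per-position symbol indices
print(reading.shape)                           # (2, 8)
```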
 

 
Author: Fernando Vilariño; Ludmila I. Kuncheva; Petia Radeva
Title: ROC curves and video analysis optimization in intestinal capsule endoscopy
Type: Journal Article
Year: 2006
Publication: Pattern Recognition Letters (PRL)
Volume: 27; Issue: 8; Pages: 875-881
Keywords: ROC curves; Classification; Classifiers ensemble; Detection of intestinal contractions; Imbalanced classes; Wireless capsule endoscopy
Abstract: Wireless capsule endoscopy involves inspection of hours of video material by a highly qualified professional. Time episodes corresponding to intestinal contractions, which are of interest to the physician, constitute about 1% of the video. The problem is to automatically label time episodes containing contractions so that only a fraction of the video needs inspection. As the classes of contraction and non-contraction images in the video are largely imbalanced, ROC curves are used to optimize the trade-off between false positive and false negative rates. Classifier ensemble methods and simple classifiers were examined. Our results reinforce the claims from recent literature that classifier ensemble methods specifically designed for imbalanced problems have substantial advantages over simple classifiers and standard classifier ensembles. By using ROC curves with the bagging ensemble method, the inspection time can be drastically reduced at the expense of a small fraction of missed contractions.
Area: 800
Notes: MILAB; MV; SIAI; Approved: no
Call Number: BCNPCL @ bcnpcl @ VKR2006; IAM @ iam @ VKR2006; Serial: 647
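The study above uses ROC curves to trade false positives against missed contractions on heavily imbalanced data. The sketch below illustrates that operating-point selection: given classifier scores, it picks the threshold that keeps an assumed minimum sensitivity while minimizing the false-positive rate, and reports the fraction of frames that would still need inspection. The data and target sensitivity are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_curve

def pick_operating_point(y_true, scores, min_sensitivity=0.9):
    """Choose a decision threshold from the ROC curve.

    Keeps at least `min_sensitivity` of the (rare) contraction frames while minimizing
    the false-positive rate, i.e. the fraction of non-contraction frames the physician
    would still have to inspect.
    """
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    ok = tpr >= min_sensitivity
    i = np.argmin(np.where(ok, fpr, np.inf))
    return thresholds[i], tpr[i], fpr[i]

# Toy data: about 1% positives, mirroring the class imbalance described above
rng = np.random.default_rng(7)
y = (rng.random(20000) < 0.01).astype(int)
scores = rng.normal(loc=y * 1.5, scale=1.0)        # imbalanced, noisy classifier scores
thr, sens, fpr = pick_operating_point(y, scores)
frac_to_inspect = (scores >= thr).mean()
print(f"threshold={thr:.2f} sensitivity={sens:.2f} fpr={fpr:.2f} inspect={frac_to_inspect:.2%}")
```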
 

 
Author: Josep Llados; Horst Bunke; Enric Marti
Title: Finding rotational symmetries by cyclic string matching
Type: Journal Article
Year: 1997
Publication: Pattern Recognition Letters (PRL)
Volume: 18; Issue: 14; Pages: 1435-1442
Keywords: Rotational symmetry; Reflectional symmetry; String matching
Abstract: Symmetry is an important shape feature. In this paper, a simple and fast method to detect perfect and distorted rotational symmetries of 2D objects is described. The boundary of a shape is polygonally approximated and represented as a string. Rotational symmetries are found by cyclic string matching between two identical copies of the shape string. The set of minimum-cost edit sequences that transform the shape string into a cyclically shifted version of itself defines the rotational symmetry and its order. Finally, a modification of the algorithm is proposed to detect reflectional symmetries. Some experimental results are presented to show the reliability of the proposed algorithm.
Publisher: Elsevier
Notes: DAG; IAM; Approved: no
Call Number: IAM @ iam @ LBM1997a; Serial: 1562
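The algorithm above finds rotational symmetries through cyclic string matching on a polygonal boundary string. The sketch below captures the core idea in a simplified form: the boundary primitives are compared against their own cyclic shifts with an edit distance, and low-cost shifts indicate the symmetry order; the paper's minimum-cost edit-sequence formulation over two copies of the string is not reproduced.

```python
import numpy as np

def edit_distance(a, b, tol=0.1):
    """Levenshtein-style edit distance between two primitive sequences; two boundary
    primitives match when their (length, angle) attributes differ by less than tol."""
    n, m = len(a), len(b)
    d = np.zeros((n + 1, m + 1))
    d[:, 0] = np.arange(n + 1)
    d[0, :] = np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            subst = 0 if np.all(np.abs(np.array(a[i - 1]) - np.array(b[j - 1])) < tol) else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + subst)
    return d[n, m]

def rotational_symmetry_order(primitives, cost_tol=0.5):
    """Estimate the order of rotational symmetry of a closed polygonal boundary.

    primitives: list of (segment_length, turn_angle) pairs describing the boundary.
    A cyclic shift k that matches the original string with low edit cost indicates a
    rotational symmetry of angle 2*pi*k/n.
    """
    n = len(primitives)
    matches = [k for k in range(1, n)
               if edit_distance(primitives, primitives[k:] + primitives[:k]) <= cost_tol]
    return len(matches) + 1 if matches else 1

# A perfect square: four equal sides and four equal 90-degree turns -> order 4
square = [(1.0, np.pi / 2)] * 4
print(rotational_symmetry_order(square))
```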
 

 
Author: Josep Llados; Horst Bunke; Enric Marti
Title: Structural Recognition of hand drawn floor plans
Type: Conference Article
Year: 1996
Publication: VI National Symposium on Pattern Recognition and Image Analysis
Keywords: Rotational Symmetry; Reflectional Symmetry; String Matching
Abstract: A system to recognize hand-drawn architectural drawings in a CAD environment has been developed. In this paper we focus on its high-level interpretation module. To interpret a floor plan, the system must identify several building elements, whose descriptions are stored in a library of patterns, as well as their spatial relationships. We propose a structural approach based on subgraph isomorphism techniques to obtain a high-level interpretation of the document. The vectorized input document and the patterns to be recognized are represented by attributed graphs. Discrete relaxation techniques (the AC4 algorithm) have been applied to develop the matching algorithm. The process has been divided into three steps: node labeling, local consistency, and global consistency verification. Hand drawing produces distorted line drawings with several accuracy errors, which must be taken into account; we have identified these errors, and the AC4 algorithm has been adapted to handle them.
Place of Publication: Cordoba
Notes: DAG; IAM; Approved: no
Call Number: IAM @ iam @ LIM1995; Serial: 1565
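The interpretation module above applies discrete relaxation (AC4) for subgraph-isomorphism-based matching of building elements. The sketch below is a much-simplified, AC-3-style local-consistency filter over candidate pattern labels, intended only to illustrate the node-labeling and local-consistency steps; it is not an AC4 implementation.

```python
def discrete_relaxation(doc_nodes, doc_edges, pattern_edges, candidates):
    """Prune candidate pattern labels until every surviving (node, label) pair is locally consistent.

    candidates: dict mapping each document node to the set of pattern nodes whose attributes
    are compatible with it (the initial node-labeling step). A label p for document node a
    survives only if, for every document neighbour b of a, some surviving label q of b is
    adjacent to p in the pattern graph.
    """
    neighbours = {a: {y for (x, y) in doc_edges if x == a} | {x for (x, y) in doc_edges if y == a}
                  for a in doc_nodes}
    pat_adj = set(pattern_edges) | {(q, p) for (p, q) in pattern_edges}
    changed = True
    while changed:
        changed = False
        for a in doc_nodes:
            for p in list(candidates[a]):
                if not all(any((p, q) in pat_adj for q in candidates[b]) for b in neighbours[a]):
                    candidates[a].discard(p)
                    changed = True
    return candidates

# Toy example: a 3-node document path must map onto a 3-node pattern path p1-p2-p3.
# Node labeling has already fixed 'b' to p2; propagation then removes p2 from 'a' and 'c'.
doc_nodes, doc_edges = ["a", "b", "c"], [("a", "b"), ("b", "c")]
pattern_edges = [("p1", "p2"), ("p2", "p3")]
cand = {"a": {"p1", "p2", "p3"}, "b": {"p2"}, "c": {"p1", "p2", "p3"}}
print(discrete_relaxation(doc_nodes, doc_edges, pattern_edges, cand))
```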