Author Jose Manuel Alvarez; Antonio Lopez
  Title Novel Index for Objective Evaluation of Road Detection Algorithms Type Conference Article
  Year 2008 Publication 11th International IEEE Conference on Intelligent Transportation Systems Abbreviated Journal  
  Volume Issue Pages 815–820  
  Keywords road detection  
  Abstract  
  Address Beijing (China)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ITSC  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ AlL2008 Serial 1074  
 

 
Author Jose Carlos Rubio; Joan Serrat; Antonio Lopez; Daniel Ponsa
  Title Multiple-target tracking for the intelligent headlights control Type Conference Article
  Year 2010 Publication 13th Annual International Conference on Intelligent Transportation Systems Abbreviated Journal  
  Volume Issue Pages 903–910  
  Keywords Intelligent Headlights  
  Abstract TA7.4
Intelligent vehicle lighting systems aim at automatically regulating the headlights' beam to illuminate as much of the road ahead as possible while avoiding dazzling other drivers. A key component of such a system is computer vision software able to distinguish blobs due to vehicles' headlights and rear lights from those due to road lamps and reflective elements such as poles and traffic signs. In previous work, we devised a set of specialized supervised classifiers to make such decisions based on blob features related to their intensity and shape. Despite the overall good performance, challenging cases remain to be solved: notably, faint and tiny blobs corresponding to quite distant vehicles. In fact, for such distant blobs, classification decisions can only be taken reliably after observing them over a few frames. Hence, incorporating tracking could improve the overall lighting system performance by enforcing the temporal consistency of the classifier decision. Accordingly, this paper focuses on the problem of constructing blob tracks, which is actually one of multiple-target tracking (MTT), but under two special conditions: we have to deal with frequent occlusions, as well as blob splits and merges. We approach it in a novel way by formulating the problem as maximum a posteriori inference on a Markov random field. The qualitative (in video form) and quantitative evaluation of our new MTT method shows good tracking results. In addition, we also show that the classification performance on the problematic blobs improves thanks to the proposed MTT algorithm.
 
  Address Madeira Island (Portugal)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ITSC  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ RSL2010 Serial 1422  
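
The tracking formulation in the abstract above enforces temporal consistency of the per-blob classifier decisions through MAP inference on a Markov random field. The paper's graph also models occlusions, splits and merges; the following is only a minimal sketch of the underlying idea for a single blob track, assuming a chain-structured MRF solved with the Viterbi recursion. The scores, the penalty value and names such as unary_cost are illustrative, not taken from the paper.

    import numpy as np

    # Hypothetical per-frame classifier scores for one blob track:
    # probability that the blob is a vehicle light at each frame.
    scores = np.array([0.55, 0.48, 0.62, 0.58, 0.30, 0.65])

    PAIRWISE_PENALTY = 0.8   # cost of switching label between consecutive frames
    labels = (0, 1)          # 0 = non-vehicle blob, 1 = vehicle light

    def unary_cost(p, label):
        # Negative log-likelihood of each label at one frame.
        p = np.clip(p, 1e-6, 1 - 1e-6)
        return -np.log(p if label == 1 else 1.0 - p)

    # Max-product (Viterbi) MAP inference on the chain MRF.
    n = len(scores)
    cost = np.zeros((n, 2))
    back = np.zeros((n, 2), dtype=int)
    for lab in labels:
        cost[0, lab] = unary_cost(scores[0], lab)
    for t in range(1, n):
        for lab in labels:
            prev = [cost[t - 1, pl] + (PAIRWISE_PENALTY if pl != lab else 0.0) for pl in labels]
            back[t, lab] = int(np.argmin(prev))
            cost[t, lab] = unary_cost(scores[t], lab) + min(prev)

    # Backtrack the MAP labelling: a temporally smoothed decision per frame.
    map_labels = [int(np.argmin(cost[-1]))]
    for t in range(n - 1, 0, -1):
        map_labels.append(back[t, map_labels[-1]])
    map_labels.reverse()
    print(map_labels)

In this toy example the frames whose individual score falls below 0.5 are overruled by their neighbours, which is exactly the temporal-consistency effect the paper seeks; the actual MTT graph is far richer than this chain.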
 

 
Author Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
  Title Vehicle geolocalization based on video synchronization Type Conference Article
  Year 2010 Publication 13th Annual International Conference on Intelligent Transportation Systems Abbreviated Journal  
  Volume Issue Pages 1511–1516  
  Keywords video alignment  
  Abstract TC8.6
This paper proposes a novel method for estimating the geospatial localization of a vehicle. It uses as input a georeferenced video sequence recorded by a forward-facing camera attached to the windscreen. The core of the proposed method is an on-line video synchronization which finds, for each frame recorded by the camera during a second drive along the same track, the corresponding frame in the georeferenced video sequence. Once the corresponding frame is found, its geospatial information is transferred to the current frame. The key advantages of this method are: 1) an increase of the update rate and the geospatial accuracy with respect to a standard low-cost GPS, and 2) the ability to localize the vehicle even when a GPS is not available or not reliable enough, as in certain urban areas. Experimental results in an urban environment are presented, showing an average relative accuracy of 1.5 meters.
 
  Address Madeira Island (Portugal)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2153-0009 ISBN 978-1-4244-7657-2 Medium  
  Area Expedition Conference ITSC  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ DPS2010 Serial 1423  
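
The core step described in the abstract above is to find, for the frame just acquired, the most similar frame in the georeferenced reference video and to copy its geotag. Below is a minimal sketch of that nearest-frame search, assuming reference frames are reduced to small normalized grayscale thumbnails and the search is limited to a window around the previously matched index; the paper's on-line video synchronization is considerably more elaborate, and names such as thumbnail_descriptor and search_radius are illustrative.

    import numpy as np

    def thumbnail_descriptor(frame, size=(16, 12)):
        # Very coarse appearance descriptor: a downsampled, normalized grayscale patch.
        # 'frame' is assumed to be a 2-D grayscale numpy array.
        h, w = frame.shape
        ys = np.linspace(0, h - 1, size[1]).astype(int)
        xs = np.linspace(0, w - 1, size[0]).astype(int)
        d = frame[np.ix_(ys, xs)].astype(np.float32).ravel()
        return (d - d.mean()) / (d.std() + 1e-6)

    def localize(current_frame, ref_descriptors, ref_geotags, prev_idx, search_radius=30):
        # Return the geotag of the reference frame most similar to the current frame,
        # searching only near the previously matched reference index.
        d = thumbnail_descriptor(current_frame)
        lo = max(0, prev_idx - search_radius)
        hi = min(len(ref_descriptors), prev_idx + search_radius + 1)
        dists = [np.linalg.norm(d - ref_descriptors[i]) for i in range(lo, hi)]
        best = lo + int(np.argmin(dists))
        return best, ref_geotags[best]   # geotag transferred to the current frame

Restricting the search to a causal window around the previous match is what keeps such a scheme cheap enough to run on-line at camera frame rate.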
 

 
Author Ferran Diego; Jose Manuel Alvarez; Joan Serrat; Antonio Lopez
  Title Vision-based road detection via on-line video registration Type Conference Article
  Year 2010 Publication 13th Annual International Conference on Intelligent Transportation Systems Abbreviated Journal  
  Volume Issue Pages 1135–1140  
  Keywords video alignment; road detection  
  Abstract TB6.2
Road segmentation is an essential functionality for supporting advanced driver assistance systems (ADAS) such as road following and vehicle and pedestrian detection. Significant efforts have been made to solve this task using vision-based techniques. The major challenge is to deal with lighting variations and the presence of objects on the road surface. In this paper, we propose a new road detection method to infer the areas of the image depicting road surfaces without performing any image segmentation. The idea is to first segment, manually or semi-automatically, the road region in a traffic-free reference video recorded on a first drive, and then to transfer these regions, in an on-line manner, to the frames of a second video sequence acquired later during a second drive along the same road. This is possible because we are able to automatically align the two videos in time and space, that is, to synchronize them and warp each frame of the first video to its corresponding frame in the second one. The geometric transform can thus transfer the road region to the present frame on-line. In order to cope with the different lighting conditions present in outdoor scenarios, our approach represents each image in a shadowless, illuminant-invariant feature space. Furthermore, we propose a dynamic background subtraction algorithm which removes, from the transferred road region, the areas of the observed frames that contain vehicles.
 
  Address Madeira Island (Portugal)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2153-0009 ISBN 978-1-4244-7657-2 Medium  
  Area Expedition Conference ITSC  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ DAS2010 Serial 1424  
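
The shadowless feature space mentioned in the abstract above represents each image in an illuminant-invariant way so that cast shadows disturb the transferred road region as little as possible. A minimal sketch of one common way to build such a representation (projecting log-chromaticities onto a camera-dependent invariant direction) follows; the invariant angle theta_deg is an assumed calibration value, and the paper's exact formulation may differ.

    import numpy as np

    def illuminant_invariant(image_rgb, theta_deg=45.0):
        # 'image_rgb' is assumed to be an HxWx3 array in R, G, B channel order.
        # theta_deg must in practice be calibrated for the specific camera.
        img = image_rgb.astype(np.float64) + 1.0          # avoid log(0)
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        log_rg = np.log(r / g)
        log_bg = np.log(b / g)
        theta = np.deg2rad(theta_deg)
        # Project the 2-D log-chromaticity onto the invariant direction.
        inv = log_rg * np.cos(theta) + log_bg * np.sin(theta)
        # Normalize to [0, 1] for further processing.
        inv -= inv.min()
        return inv / (inv.max() + 1e-12)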
 

 
Author Diego Alejandro Cheda; Daniel Ponsa; Antonio Lopez
  Title Camera Egomotion Estimation in the ADAS Context Type Conference Article
  Year 2010 Publication 13th International IEEE Annual Conference on Intelligent Transportation Systems Abbreviated Journal  
  Volume Issue Pages 1415–1420  
  Keywords  
  Abstract Camera-based Advanced Driver Assistance Systems (ADAS) have concentrated many research efforts in the last decades. Proposals based on monocular cameras require knowledge of the camera pose with respect to the environment in order to reach an efficient and robust performance. A common assumption in such systems is to consider the road as planar and the camera pose with respect to it as approximately known. However, in real situations, the camera pose varies over time due to the vehicle movement, the road slope, and irregularities on the road surface. Thus, the changes in camera position and orientation (i.e., the egomotion) are critical information that must be estimated at every frame to avoid poor performance. This work focuses on egomotion estimation from a monocular camera in the ADAS context. We review and compare egomotion methods on simulated and real ADAS-like sequences. Based on the results of our experiments, we show which of the considered nonlinear and linear algorithms have the best performance in this domain.  
  Address Madeira Island (Portugal)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2153-0009 ISBN 978-1-4244-7657-2 Medium  
  Area Expedition Conference ITSC  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ CPL2010 Serial 1425  
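
For readers who want to experiment with the problem studied above, the following is a generic monocular egomotion sketch using the standard feature-tracking plus essential-matrix pipeline available in OpenCV; it is not one of the specific nonlinear or linear algorithms compared in the paper, and the parameter values are illustrative.

    import cv2

    def estimate_egomotion(prev_gray, curr_gray, K):
        # Rotation and translation direction of the camera between two consecutive
        # grayscale frames; K is the 3x3 intrinsic matrix.
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                           qualityLevel=0.01, minDistance=7)
        pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
        ok = status.ravel() == 1
        p1, p2 = pts_prev[ok], pts_curr[ok]
        E, _ = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC,
                                    prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K)
        # With a monocular camera, t is recovered only up to scale.
        return R, t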
 

 
Author Jose Manuel Alvarez; Felipe Lumbreras; Theo Gevers; Antonio Lopez
  Title Geographic Information for vision-based Road Detection Type Conference Article
  Year 2010 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal  
  Volume Issue Pages 621–626  
  Keywords road detection  
  Abstract Road detection is a vital task for the development of autonomous vehicles. The knowledge of the free road surface ahead of the target vehicle can be used for autonomous driving and road departure warning, as well as to support advanced driver assistance systems like vehicle or pedestrian detection. Using vision to detect the road has several advantages over other sensors: richness of features, easy integration, low cost, and low power consumption. Common vision-based road detection approaches use low-level features (such as color or texture) as visual cues to group pixels exhibiting similar properties. However, it is difficult to devise a perfect clustering algorithm, since roads lie in outdoor scenarios and are imaged from a mobile platform. In this paper, we propose a novel high-level approach to vision-based road detection based on geographical information. The key idea of the algorithm is to exploit geographical information to provide a rough detection of the road. Then, this segmentation is refined at low level using color information to provide the final result. The results presented show the validity of our approach.  
  Address San Diego; CA; USA  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IV  
  Notes ADAS;ISE Approved no  
  Call Number ADAS @ adas @ ALG2010 Serial 1428  
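
The rough-to-fine idea in the abstract above (a geography-based prior refined with low-level color) can be illustrated with the sketch below, which fits a single Gaussian color model to the pixels under the prior mask and keeps the pixels close to it in the Mahalanobis sense. The paper's refinement stage is more sophisticated; the threshold value and function name are assumptions.

    import numpy as np

    def refine_road_prior(image_rgb, prior_mask, mahalanobis_thresh=3.0):
        # 'prior_mask' is the rough road mask obtained from geographic information.
        pixels = image_rgb.reshape(-1, 3).astype(np.float64)
        road = pixels[prior_mask.ravel() > 0]
        mean = road.mean(axis=0)
        cov = np.cov(road.T) + 1e-6 * np.eye(3)
        cov_inv = np.linalg.inv(cov)
        diff = pixels - mean
        # Squared Mahalanobis distance of every pixel to the road color model.
        d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
        return (d2 < mahalanobis_thresh ** 2).reshape(prior_mask.shape)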
 

 
Author Diego Cheda; Daniel Ponsa; Antonio Lopez
  Title Pedestrian Candidates Generation using Monocular Cues Type Conference Article
  Year 2012 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal  
  Volume Issue Pages 7-12  
  Keywords pedestrian detection  
  Abstract Common techniques for pedestrian candidate generation (e.g., sliding window approaches) are based on an exhaustive search over the image. This implies that the number of windows produced is huge, which translates into significant time consumption in the classification stage. In this paper, we propose a method that significantly reduces the number of windows to be considered by a classifier. Our method is a monocular one that exploits the geometric and depth information available in single images. Both representations of the world are fused to generate pedestrian candidates based on an underlying model which focuses only on objects standing vertically on the ground plane and having a certain height, according to their depth in the scene. We evaluate our algorithm on a challenging dataset and demonstrate its application to pedestrian detection, where a considerable reduction in the number of candidate windows is achieved.  
  Address  
  Corporate Author Thesis  
  Publisher IEEE Xplore Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1931-0587 ISBN 978-1-4673-2119-8 Medium  
  Area Expedition Conference IV  
  Notes ADAS Approved no  
  Call Number Admin @ si @ CPL2012c; ADAS @ adas @ cpl2012d Serial 2013  
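
The candidate-generation model described above only considers objects standing vertically on the ground plane with a plausible pedestrian height. A minimal sketch of how such geometric reasoning restricts the windows to score follows, using plain pinhole-camera relations; the camera parameters, pedestrian height and aspect ratio are assumed values, and the paper additionally fuses monocular depth cues.

    def pedestrian_candidates(focal_px, cam_height_m, horizon_row,
                              ped_height_m=1.7, aspect=0.41,
                              depths_m=range(5, 51, 5)):
        # One candidate box per hypothesized depth on a flat ground plane.
        boxes = []
        for z in depths_m:
            box_h = focal_px * ped_height_m / z              # projected height (pixels)
            foot_row = horizon_row + focal_px * cam_height_m / z
            top_row = foot_row - box_h
            box_w = aspect * box_h
            boxes.append((top_row, foot_row, box_w))         # slide horizontally along this row
        return boxes

    # Example: 1000 px focal length, camera 1.2 m above the road, horizon at row 240.
    for top, bottom, width in pedestrian_candidates(1000.0, 1.2, 240.0):
        print(f"rows {top:.0f}-{bottom:.0f}, width {width:.0f} px")

Only windows whose foot row and size are geometrically consistent are passed to the classifier, which is how the search space shrinks with respect to exhaustive sliding windows.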
 

 
Author Naveen Onkarappa; Angel Sappa
  Title An Empirical Study on Optical Flow Accuracy Depending on Vehicle Speed Type Conference Article
  Year 2012 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal  
  Volume Issue Pages 1138-1143  
  Keywords  
  Abstract Driver assistance and safety systems, aimed at automatic navigation and safety, are receiving increasing attention nowadays. Optical flow, as a motion estimation technique, plays a major role in making these systems a reality. To this end, the current paper demonstrates the suitability of a polar representation for optical flow estimation in such systems. Furthermore, the influence of individual regularization terms on the accuracy of optical flow is empirically evaluated on image sequences of different speeds. In addition, a new synthetic dataset of image sequences at different speeds is generated, together with its ground-truth optical flow.  
  Address Alcalá de Henares  
  Corporate Author Thesis  
  Publisher IEEE Xplore Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1931-0587 ISBN 978-1-4673-2119-8 Medium  
  Area Expedition Conference IV  
  Notes ADAS Approved no  
  Call Number Admin @ si @ NaS2012 Serial 2020  
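
The polar representation studied above simply re-expresses each flow vector by its magnitude and orientation, which matches the predominantly radial expansion of the flow under forward vehicle motion. A trivial sketch of the conversion, assuming dense u and v flow components as numpy arrays:

    import numpy as np

    def flow_to_polar(flow_u, flow_v):
        # Magnitude and orientation (radians, in (-pi, pi]) of a dense flow field.
        magnitude = np.hypot(flow_u, flow_v)
        orientation = np.arctan2(flow_v, flow_u)
        return magnitude, orientation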
 

 
Author Miguel Oliveira; Angel Sappa; V. Santos
  Title Color Correction for Onboard Multi-camera Systems using 3D Gaussian Mixture Models Type Conference Article
  Year 2012 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal  
  Volume Issue Pages 299-303  
  Keywords  
  Abstract The current paper proposes a novel color correction approach for onboard multi-camera systems. It works by segmenting the given images into several regions. A probabilistic segmentation framework, using 3D Gaussian Mixture Models, is proposed. The regions are used to compute local color correction functions, which are then combined to obtain the final corrected image. An image dataset of road scenarios is used to establish a performance comparison of the proposed method with seven other well-known color correction algorithms. Results show that the proposed approach is the highest-scoring color correction method. Also, the proposed single-step probabilistic segmentation in 3D color space reduces processing time compared with similar approaches.  
  Address Alcalá de Henares  
  Corporate Author Thesis  
  Publisher IEEE Xplore Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1931-0587 ISBN 978-1-4673-2119-8 Medium  
  Area Expedition Conference IV  
  Notes ADAS Approved no  
  Call Number Admin @ si @ OSS2012b Serial 2021  
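
The segmentation stage described above models image colors with a 3D Gaussian mixture and then computes a local correction per region. The sketch below is a simplified stand-in, not the paper's method: it fits a GMM with scikit-learn and applies a per-component mean/standard-deviation transfer from the target image; the component count and transfer rule are assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def color_correct(source_rgb, target_rgb, n_components=5, seed=0):
        # Correct 'source_rgb' towards the color statistics of 'target_rgb'.
        src = source_rgb.reshape(-1, 3).astype(np.float64)
        tgt = target_rgb.reshape(-1, 3).astype(np.float64)
        # Probabilistic segmentation of the source in 3-D RGB space.
        gmm = GaussianMixture(n_components=n_components, covariance_type='full',
                              random_state=seed).fit(src)
        src_lab = gmm.predict(src)
        tgt_lab = gmm.predict(tgt)
        out = src.copy()
        for k in range(n_components):
            s, t = src[src_lab == k], tgt[tgt_lab == k]
            if len(s) == 0 or len(t) == 0:
                continue
            # Per-region mean/std color transfer.
            out[src_lab == k] = (s - s.mean(0)) / (s.std(0) + 1e-6) * t.std(0) + t.mean(0)
        return np.clip(out, 0, 255).reshape(source_rgb.shape).astype(np.uint8)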
 

 
Author Jiaolong Xu; David Vazquez; Antonio Lopez; Javier Marin; Daniel Ponsa
  Title Learning a Multiview Part-based Model in Virtual World for Pedestrian Detection Type Conference Article
  Year 2013 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal  
  Volume Issue Pages 467 - 472  
  Keywords Pedestrian Detection; Virtual World; Part based  
  Abstract State-of-the-art deformable part-based models based on latent SVM have shown excellent results on human detection. In this paper, we propose to train a multiview deformable part-based model with automatically generated part examples from virtual-world data. The method is efficient because: (i) the part detectors are trained with precisely extracted virtual examples, so no latent learning is needed; (ii) the multiview pedestrian detector enhances the performance of the pedestrian root model; and (iii) a top-down approach is used for part detection, which reduces the search space. We evaluate our model on the Daimler and Karlsruhe Pedestrian Benchmarks with the publicly available Caltech pedestrian detection evaluation framework, and the result outperforms the state-of-the-art latent SVM V4.0 on both average miss rate and speed (our detector is ten times faster).  
  Address Gold Coast; Australia; June 2013  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1931-0587 ISBN 978-1-4673-2754-1 Medium  
  Area Expedition Conference IV  
  Notes ADAS; 600.054; 600.057 Approved no  
  Call Number XVL2013; ADAS @ adas @ xvl2013a Serial 2214  
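
Point (iii) of the abstract above, the top-down approach to part detection, means that once a root (full-body) window is found, each part filter is evaluated only in a small neighbourhood of its expected position inside that window rather than over the whole image. A minimal sketch of that scoring step follows; the data layout (response maps, anchors, box format) is illustrative and not the paper's implementation.

    import numpy as np

    def score_parts(score_maps, root_box, part_anchors, search=8):
        # 'score_maps': part name -> dense 2-D filter response over the image.
        # 'root_box': (top, left, height, width) of the detected root window.
        # 'part_anchors': part name -> (dy, dx) expected offset from the root top-left.
        top, left = root_box[:2]
        total = 0.0
        for name, (dy, dx) in part_anchors.items():
            resp = score_maps[name]
            cy, cx = int(top + dy), int(left + dx)
            y0, y1 = max(0, cy - search), min(resp.shape[0], cy + search + 1)
            x0, x1 = max(0, cx - search), min(resp.shape[1], cx + search + 1)
            total += resp[y0:y1, x0:x1].max()    # best part placement near its anchor
        return total

    # Hypothetical usage with random response maps for two parts.
    maps = {"head": np.random.rand(480, 640), "legs": np.random.rand(480, 640)}
    anchors = {"head": (10, 30), "legs": (90, 30)}
    print(score_parts(maps, (100, 200, 160, 64), anchors))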