Author Joan Serrat; Ferran Diego; Jose Manuel Alvarez; Felipe Lumbreras
  Title Alignment of Videos Recorded from Moving Vehicles Type Conference Article
  Year 2007 Publication 14th International Conference on Image Analysis and Processing Abbreviated Journal  
  Volume Issue Pages 512–517  
  Keywords  
  Abstract  
  Address Modena (Italy)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ SDA2007 Serial 879  
 

 
Author Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
  Title Learning Photometric Invariance from Diversified Color Model Ensembles Type Conference Article
  Year 2009 Publication 22nd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 565–572  
  Keywords road detection  
  Abstract Color is a powerful visual cue for many computer vision applications such as image segmentation and object recognition. However, most of the existing color models depend on the imaging conditions, which negatively affects the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, those reflection models might be too restricted to model real-world scenes, in which different reflectance mechanisms may hold simultaneously. Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant color model set, composed of both color variants and invariants, is taken as input. Then, the proposed method combines and weights these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, the fusion method uses a multi-view approach to minimize the estimation error. In this way, the method is robust to data uncertainty and produces properly diversified color invariant ensembles. Experiments are conducted on three different image datasets to validate the method. From the theoretical and experimental results, it is concluded that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning. Further, the method outperforms state-of-the-art detection techniques in the field of object, skin and road recognition.  
  Address Miami (USA)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1063-6919 ISBN 978-1-4244-3992-8 Medium  
  Area Expedition Conference CVPR  
  Notes ADAS;ISE Approved no  
  Call Number ADAS @ adas @ AGL2009 Serial 1169  
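As an illustrative aside on the fusion idea in this record's abstract: below is a minimal NumPy sketch of learning a diversified color ensemble. It is not the authors' implementation; the small color model set and the eigenvector-based criterion (a least-squares proxy for the paper's multi-view error minimization) are simplified assumptions.

```python
import numpy as np

def color_models(rgb):
    """Stack candidate color models: variants (R, G, B) plus simple
    photometric invariants (normalized rg, opponent channels)."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = R + G + B + 1e-6
    return np.stack([R, G, B,                        # variants (discriminative)
                     R / s, G / s,                   # normalized rg (intensity-invariant)
                     (R - G) / np.sqrt(2.0),         # opponent o1
                     (R + G - 2.0 * B) / np.sqrt(6.0)], axis=-1)  # opponent o2

def fit_ensemble_weights(view_a, view_b):
    """Unit-norm channel weighting that changes least between two
    differently illuminated views of the same scene: the smallest
    eigenvector of D^T D, where D is the per-pixel model difference."""
    Ma = color_models(view_a).reshape(-1, 7)
    Mb = color_models(view_b).reshape(-1, 7)
    D = Ma - Mb
    _, vecs = np.linalg.eigh(D.T @ D)   # eigenvalues in ascending order
    return vecs[:, 0]

# Toy usage: a scene and a brighter copy of it (illumination change).
scene = np.random.default_rng(0).random((64, 64, 3))
w = fit_ensemble_weights(scene, np.clip(1.5 * scene, 0, 1))
print("ensemble weights:", np.round(w, 3))
```

Under a pure brightness change, the weights should concentrate on the invariant channels; keeping the variant channels in the candidate set is what lets the ensemble trade invariance against discriminative power.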
 

 
Author Youssef El Rhabi; Simon Loic; Brun Luc; Josep Llados; Felipe Lumbreras
  Title Information Theoretic Rotationwise Robust Binary Descriptor Learning Type Conference Article
  Year 2016 Publication Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR) Abbreviated Journal  
  Volume Issue Pages 368-378  
  Keywords  
  Abstract In this paper, we propose a new data-driven approach for binary descriptor selection. To enable a clear analysis of common designs, we present a general information-theoretic selection paradigm. It encompasses several standard binary descriptor construction schemes, including a recent state-of-the-art one named BOLD. We pursue the same goal of increasing the stability of the produced descriptors with respect to rotations. To achieve this, we have designed a novel offline selection criterion which is better adapted to the online matching procedure. The effectiveness of our approach is demonstrated on two standard datasets, where our descriptor is compared to BOLD and to several classical descriptors. In particular, it emerges that our approach can match, if not exceed, the performance of BOLD while relying on descriptors half as long. Such an improvement can be influential for real-time applications.  
  Address Mérida; Mexico; November 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference S+SSPR  
  Notes DAG; ADAS; 600.097; 600.086 Approved no  
  Call Number Admin @ si @ RLL2016 Serial 2871  
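An illustrative aside on the selection paradigm described above: a generic greedy scheme keeps binary tests that are high-entropy (bit mean near 0.5) and weakly correlated with the tests already kept. This sketch is a simplified stand-in, not the paper's rotation-robust criterion; the random patch data and the 0.2 correlation threshold are assumptions.

```python
import numpy as np

def binary_tests(patches, pairs):
    """One bit per (patch, test): compare the intensities of a pixel
    pair inside each flattened patch."""
    return (patches[:, pairs[:, 0]] < patches[:, pairs[:, 1]]).astype(float)

def greedy_select(bits, n_select, max_corr=0.2):
    """Greedy information-style selection: visit tests in order of
    entropy (mean closest to 0.5) and keep a test only if it is weakly
    correlated with every test already kept."""
    order = np.argsort(np.abs(bits.mean(axis=0) - 0.5))
    chosen = [int(order[0])]
    for idx in order[1:]:
        if len(chosen) == n_select:
            break
        corr = max(abs(np.corrcoef(bits[:, idx], bits[:, c])[0, 1]) for c in chosen)
        if corr < max_corr:
            chosen.append(int(idx))
    return np.array(chosen)

# Toy usage: 500 random 8x8 patches, 256 candidate pixel-pair tests.
rng = np.random.default_rng(1)
patches = rng.random((500, 64))
pairs = rng.integers(0, 64, size=(256, 2))
descriptor_tests = greedy_select(binary_tests(patches, pairs), n_select=32)
print("kept tests:", descriptor_tests[:8], "...")
```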
 

 
Author Gemma Rotger; Francesc Moreno-Noguer; Felipe Lumbreras; Antonio Agudo
  Title Single view facial hair 3D reconstruction Type Conference Article
  Year 2019 Publication 9th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal  
  Volume 11867 Issue Pages 423-436  
  Keywords 3D Vision; Shape Reconstruction; Facial Hair Modeling  
  Abstract In this work, we introduce a novel energy-based framework that addresses the challenging problem of 3D reconstruction of facial hair from a single RGB image. To this end, we identify hair pixels over the image via texture analysis and then determine individual hair fibers that are modeled by means of a parametric hair model based on 3D helixes. We propose to minimize an energy composed of several terms in order to adapt the hair parameters that best fit the image detections. The final hairs correspond to the resulting fibers after a post-processing step where we encourage further realism. The resulting approach generates realistic facial hair fibers from solely an RGB image, without requiring any training data or user interaction. We provide an experimental evaluation on real-world pictures where several facial hair styles and image conditions are observed, showing consistent results and establishing a comparison with respect to competing approaches.  
  Address Madrid; July 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IbPRIA  
  Notes ADAS; 600.086; 600.130; 600.122 Approved no  
  Call Number Admin @ si @ Serial 3707  
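A flavor of the energy-based fiber fitting can be given in a few lines: parameterize one 3D helix, project it, and penalize the distance between projected samples and detected hair pixels. This is an illustrative single-fiber sketch under an orthographic camera; the parameterization, the prior weight, and the optimizer choice are assumptions, not the paper's exact energy.

```python
import numpy as np
from scipy.optimize import minimize

def helix_xy(params, ts):
    """Orthographic projection of a 3D helix whose axis lies in the
    image plane: the projection is a sinusoid along the axis."""
    cx, cy, a, w, p = params
    return np.stack([cx + ts, cy + a * np.cos(w * ts + p)], axis=1)

def energy(params, hair_px, ts):
    """Data term: every projected helix sample should lie near some
    detected hair pixel. Prior term: keep the radius plausible."""
    d = np.linalg.norm(helix_xy(params, ts)[:, None, :]
                       - hair_px[None, :, :], axis=2)
    return d.min(axis=1).mean() + 0.01 * (params[2] - 1.0) ** 2

# Toy usage: noisy detections sampled from a ground-truth helix.
rng = np.random.default_rng(2)
ts = np.linspace(0.0, 6.0, 80)
truth = np.array([5.0, 3.0, 1.2, 2.0, 0.3])
hair_px = helix_xy(truth, ts) + 0.02 * rng.standard_normal((80, 2))
fit = minimize(energy, x0=[4.5, 3.5, 1.0, 2.1, 0.0],
               args=(hair_px, ts), method="Nelder-Mead")
print("recovered helix parameters:", np.round(fit.x, 2))
```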
 

 
Author Joan Serrat; J. Argemi; Juan J. Villanueva
  Title Automatization of TW2 method using a knowledge-based image analysis system. Type Conference Article
  Year 1991 Publication VIth International Congress of Auxology. Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Madrid  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ SAV1991 Serial 259  
 

 
Author Jose Carlos Rubio; Joan Serrat; Antonio Lopez; Daniel Ponsa
  Title Multiple-target tracking for the intelligent headlights control Type Conference Article
  Year 2010 Publication 13th Annual International Conference on Intelligent Transportation Systems Abbreviated Journal  
  Volume Issue Pages 903–910  
  Keywords Intelligent Headlights  
  Abstract Intelligent vehicle lighting systems aim at automatically regulating the headlights' beam to illuminate as much of the road ahead as possible while avoiding dazzling other drivers. A key component of such a system is computer vision software that is able to distinguish blobs due to vehicles' headlights and rear lights from those due to road lamps and reflective elements such as poles and traffic signs. In a previous work, we devised a set of specialized supervised classifiers to make such decisions based on blob features related to intensity and shape. Despite the overall good performance, there remain challenging cases that have yet to be solved: notably, faint and tiny blobs corresponding to quite distant vehicles. In fact, for such distant blobs, classification decisions can only be taken after observing them for a few frames. Hence, incorporating tracking could improve the overall lighting system performance by enforcing the temporal consistency of the classifier decision. Accordingly, this paper focuses on the problem of constructing blob tracks, which is actually one of multiple-target tracking (MTT), but under two special conditions: we have to deal with frequent occlusions, as well as blob splits and merges. We approach it in a novel way by formulating the problem as maximum a posteriori inference on a Markov random field. The qualitative (in video form) and quantitative evaluation of our new MTT method shows good tracking results. In addition, the classification performance on the problematic blobs improves thanks to the proposed MTT algorithm.
 
  Address Madeira Island (Portugal)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ITSC  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ RSL2010 Serial 1422  
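The MAP formulation above lends itself to a tiny worked example. The sketch below does exhaustive MAP data association for one frame: each detection takes a label (an existing track or "new"), unary costs measure distance to a track's last position, and a pairwise exclusivity term forbids two detections sharing a track. It is a drastically simplified stand-in for the paper's Markov random field (no occlusion, split, or merge handling), and all costs are made-up.

```python
import itertools
import numpy as np

NEW = -1  # label for spawning a new track

def map_associate(tracks, detections, gate=30.0, new_cost=20.0):
    """Brute-force MAP over joint label assignments; feasible because a
    frame contains only a handful of headlight blobs."""
    ids = list(tracks)
    best, best_cost = None, np.inf
    for labels in itertools.product([NEW] + ids, repeat=len(detections)):
        used = [l for l in labels if l != NEW]
        if len(used) != len(set(used)):          # pairwise term: exclusivity
            continue
        cost = 0.0
        for det, l in zip(detections, labels):   # unary terms
            if l == NEW:
                cost += new_cost
            else:
                d = float(np.linalg.norm(det - tracks[l]))
                cost += d if d < gate else np.inf  # gating on distance
        if cost < best_cost:
            best, best_cost = labels, cost
    return best

# Toy usage: two live tracks, three detections (one is a new blob).
tracks = {0: np.array([10.0, 10.0]), 1: np.array([40.0, 12.0])}
dets = np.array([[11.0, 9.0], [42.0, 13.0], [90.0, 50.0]])
print("labels:", map_associate(tracks, dets))    # expect (0, 1, -1)
```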
 

 
Author Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
  Title Vehicle geolocalization based on video synchronization Type Conference Article
  Year 2010 Publication 13th Annual International Conference on Intelligent Transportation Systems Abbreviated Journal  
  Volume Issue Pages 1511–1516  
  Keywords video alignment  
  Abstract This paper proposes a novel method for estimating the geospatial localization of a vehicle. It uses as input a georeferenced video sequence recorded by a forward-facing camera attached to the windscreen. The core of the proposed method is an on-line video synchronization which finds, for each frame recorded by the camera on a second drive through the same track, the corresponding frame in the georeferenced video sequence. Once the corresponding frame is found, its geospatial information is transferred to the current frame. The key advantages of this method are: 1) an increase of the update rate and the geospatial accuracy with regard to a standard low-cost GPS, and 2) the ability to localize a vehicle even when a GPS is not available or not reliable enough, as in certain urban areas. Experimental results for an urban environment are presented, showing an average relative accuracy of 1.5 meters.
 
  Address Madeira Island (Portugal)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2153-0009 ISBN 978-1-4244-7657-2 Medium  
  Area Expedition Conference ITSC  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ DPS2010 Serial 1423  
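The core synchronization step can be sketched as a causal nearest-frame search: describe each frame cheaply, match the live frame against a window of reference frames at or after the previous match, and transfer the GPS tag of the winner. The thumbnail descriptor, window size, and SSD cost below are assumptions; the paper's alignment model is more elaborate.

```python
import numpy as np

def frame_descriptor(gray, size=16):
    """Cheap whole-frame descriptor: a coarse grayscale thumbnail."""
    h, w = gray.shape
    ys = np.linspace(0, h - 1, size).astype(int)
    xs = np.linspace(0, w - 1, size).astype(int)
    return gray[np.ix_(ys, xs)].astype(np.float32).ravel()

def geolocalize(live_desc, ref_descs, ref_gps, prev_idx, window=15):
    """Causal search: only reference frames at or after the previous
    match are considered; the best match's GPS fix is transferred."""
    hi = min(prev_idx + window, len(ref_descs))
    costs = [np.sum((live_desc - ref_descs[i]) ** 2)
             for i in range(prev_idx, hi)]
    idx = prev_idx + int(np.argmin(costs))
    return idx, ref_gps[idx]

# Toy usage: the live frame is a noisy copy of reference frame 7.
rng = np.random.default_rng(3)
ref_frames = rng.random((30, 120, 160))
ref_descs = [frame_descriptor(f) for f in ref_frames]
ref_gps = [(41.5 + 1e-4 * i, 2.1) for i in range(30)]   # synthetic fixes
live = frame_descriptor(ref_frames[7] + 0.01 * rng.standard_normal((120, 160)))
print(geolocalize(live, ref_descs, ref_gps, prev_idx=4))  # expect index 7
```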
 

 
Author Ferran Diego; Jose Manuel Alvarez; Joan Serrat; Antonio Lopez
  Title Vision-based road detection via on-line video registration Type Conference Article
  Year 2010 Publication 13th Annual International Conference on Intelligent Transportation Systems Abbreviated Journal  
  Volume Issue Pages 1135–1140  
  Keywords video alignment; road detection  
  Abstract Road segmentation is an essential functionality for supporting advanced driver assistance systems (ADAS) such as road following and vehicle and pedestrian detection. Significant efforts have been made to solve this task using vision-based techniques. The major challenge is to deal with lighting variations and the presence of objects on the road surface. In this paper, we propose a new road detection method to infer the areas of the image depicting road surfaces without performing any image segmentation. The idea is to first segment, manually or semi-automatically, the road region in a traffic-free reference video recorded during a first drive, and then to transfer these regions, in an on-line manner, to the frames of a second video sequence acquired later during a second drive through the same road. This is possible because we are able to automatically align the two videos in time and space, that is, to synchronize them and warp each frame of the first video to its corresponding frame in the second one. The geometric transform can thus transfer the road region to the present frame on-line. To cope with the varying lighting conditions of outdoor scenarios, our approach represents each image in a shadow-free, illuminant-invariant feature space. Furthermore, we propose a dynamic background subtraction algorithm which removes the regions containing vehicles in the observed frames that fall within the transferred road region.
 
  Address Madeira Island (Portugal)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2153-0009 ISBN 978-1-4244-7657-2 Medium  
  Area Expedition Conference ITSC  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ DAS2010 Serial 1424  
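In this line of work, the shadow-free representation is typically a Finlayson-style log-chromaticity projection: per-pixel log band ratios projected onto the direction orthogonal to the illuminant variation. A minimal sketch, assuming the camera-dependent calibration angle theta is already known:

```python
import numpy as np

def illuminant_invariant(rgb, theta):
    """Grayscale illuminant-invariant image: project (log R/G, log B/G)
    chromaticities onto the direction given by the calibration angle."""
    eps = 1e-6
    lr = np.log(rgb[..., 0] + eps) - np.log(rgb[..., 1] + eps)
    lb = np.log(rgb[..., 2] + eps) - np.log(rgb[..., 1] + eps)
    return np.cos(theta) * lr + np.sin(theta) * lb

# Toy usage: a surface and a dimmed (shadowed) copy map to similar values,
# because a uniform intensity change cancels in the band ratios.
rng = np.random.default_rng(4)
patch = rng.random((8, 8, 3)) * 0.8 + 0.1
inv_lit = illuminant_invariant(patch, theta=0.6)
inv_shadow = illuminant_invariant(0.4 * patch, theta=0.6)
print("max difference under dimming:", float(np.abs(inv_lit - inv_shadow).max()))
```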
 

 
Author Diego Alejandro Cheda; Daniel Ponsa; Antonio Lopez
  Title Camera Egomotion Estimation in the ADAS Context Type Conference Article
  Year 2010 Publication 13th International IEEE Annual Conference on Intelligent Transportation Systems Abbreviated Journal  
  Volume Issue Pages 1415–1420  
  Keywords  
  Abstract Camera-based Advanced Driver Assistance Systems (ADAS) have attracted many research efforts in the last decades. Proposals based on monocular cameras require knowledge of the camera pose with respect to the environment in order to reach efficient and robust performance. A common assumption in such systems is to consider the road as planar and the camera pose with respect to it as approximately known. However, in real situations, the camera pose varies over time due to the vehicle movement, the road slope, and irregularities on the road surface. Thus, the changes in camera position and orientation (i.e., the egomotion) are critical information that must be estimated at every frame to avoid poor performance. This work focuses on egomotion estimation from a monocular camera in the ADAS context. We review and compare egomotion methods on simulated and real ADAS-like sequences. Based on the results of our experiments, we show which of the considered nonlinear and linear algorithms have the best performance in this domain.  
  Address Madeira Island (Portugal)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2153-0009 ISBN 978-1-4244-7657-2 Medium  
  Area Expedition Conference ITSC  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ CPL2010 Serial 1425  
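Among the linear algorithms such a comparison typically includes is the normalized 8-point estimate of the essential matrix. Below is a compact sketch of that classic baseline (not the paper's evaluation code); it assumes point correspondences are already expressed in normalized camera coordinates.

```python
import numpy as np

def essential_eight_point(x1, x2):
    """Linear essential-matrix estimate from >= 8 correspondences in
    normalized coordinates: solve x2^T E x1 = 0 by SVD, then enforce
    the (s, s, 0) singular-value structure of a true essential matrix."""
    ones = np.ones(len(x1))
    A = np.column_stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
                         x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
                         x1[:, 0], x1[:, 1], ones])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt

# Toy usage: pure forward motion t = (0, 0, 1), no rotation. For pure
# translation E = [t]_x, so the estimate should be proportional to
# [[0, -1, 0], [1, 0, 0], [0, 0, 0]].
rng = np.random.default_rng(5)
P = np.column_stack([rng.uniform(-1, 1, 20), rng.uniform(-1, 1, 20),
                     rng.uniform(4, 10, 20)])       # 3D points ahead of the car
x1 = P[:, :2] / P[:, 2:]                            # first camera projection
x2 = P[:, :2] / (P[:, 2:] - 1.0)                    # after moving 1 unit forward
E = essential_eight_point(x1, x2)
print("E (up to scale):\n", np.round(E / np.abs(E).max(), 3))
```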
 

 
Author Daniel Hernandez; Lukas Schneider; Antonio Espinosa; David Vazquez; Antonio Lopez; Uwe Franke; Marc Pollefeys; Juan C. Moure
  Title Slanted Stixels: Representing San Francisco's Steepest Streets Type Conference Article
  Year 2017 Publication 28th British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract In this work we present a novel compact scene representation based on Stixels that infers geometric and semantic information. Our approach overcomes the previous rather restrictive geometric assumptions for Stixels by introducing a novel depth model to account for non-flat roads and slanted objects. Both semantic and depth cues are used jointly to infer the scene representation in a sound global energy minimization formulation. Furthermore, a novel approximation scheme is introduced that uses an extremely efficient over-segmentation. In doing so, the computational complexity of the Stixel inference algorithm is reduced significantly, achieving real-time computation capabilities with only a slight drop in accuracy. We evaluate the proposed approach in terms of semantic and geometric accuracy as well as run-time on four publicly available benchmark datasets. Our approach maintains accuracy on flat road scene datasets while improving substantially on a novel non-flat road dataset.  
  Address London; UK; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes ADAS; 600.118 Approved no  
  Call Number ADAS @ adas @ HSE2017a Serial 2945  
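The key departure from classic Stixels, slanted segments inferred by energy minimization, can be illustrated on a single image column. The dynamic program below segments a 1D disparity profile into piecewise-linear (slanted) pieces, trading line-fit error against a per-segment penalty. It is an illustrative reduction: the paper's model is jointly semantic and geometric and uses an efficient over-segmentation, and the penalty value here is arbitrary.

```python
import numpy as np

def fit_cost(d, lo, hi):
    """SSE of a least-squares line disparity(row) = a*row + b over rows
    lo..hi-1: the cost of explaining those rows with one slanted segment."""
    rows = np.arange(lo, hi, dtype=float)
    A = np.column_stack([rows, np.ones(hi - lo)])
    coef, *_ = np.linalg.lstsq(A, d[lo:hi], rcond=None)
    return float(np.sum((d[lo:hi] - A @ coef) ** 2))

def slanted_stixels_1d(d, seg_penalty=2.0, min_len=2):
    """Optimal piecewise-linear segmentation of one disparity column by
    dynamic programming over all cut positions."""
    n = len(d)
    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for j in range(min_len, n + 1):
        for i in range(0, j - min_len + 1):
            if np.isfinite(best[i]):
                c = best[i] + fit_cost(d, i, j) + seg_penalty
                if c < best[j]:
                    best[j], back[j] = c, i
    cuts, j = [], n
    while j > 0:                 # backtrack the optimal cut positions
        cuts.append((back[j], j))
        j = back[j]
    return cuts[::-1]

# Toy usage: sloped road rows, then a fronto-parallel obstacle, then sky.
d = np.concatenate([np.linspace(30, 12, 25),        # slanted road surface
                    np.full(15, 18.0),              # upright object
                    np.full(10, 0.5)])              # distant background
print("segments (start, end):", slanted_stixels_1d(d))
```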