Author: Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title: 3D Scene Priors for Road Detection
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 57–64
Keywords: road detection
Abstract: Vision-based road detection is important in different areas of computer vision, such as autonomous driving, car collision warning and pedestrian crossing detection. However, current vision-based road detection methods are usually based on low-level features and assume structured roads, road homogeneity and uniform lighting conditions. In this paper, contextual 3D information is therefore used in addition to low-level cues. Low-level photometric invariant cues are derived from the appearance of roads. Contextual cues include horizon lines, vanishing points, 3D scene layout and 3D road stages. Moreover, temporal road cues are included. All these cues are sensitive to different imaging conditions and are hence considered weak cues. They are therefore combined in a Bayesian framework to classify road sequences, improving the overall performance of the algorithm. Large-scale experiments on road sequences show that the road detection method is robust to varying imaging conditions, road types and scenarios (tunnels, urban and highway). Further, the combined cues outperform all individual cues, and the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods.
Address: San Francisco, CA, USA; June 2010
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS; ISE
Approved: no
Call Number: ADAS @ adas @ AGL2010a
Serial: 1302
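
A minimal sketch of the kind of Bayesian cue combination the abstract describes, fusing per-pixel likelihoods of several weak cues under a naive conditional-independence assumption; the cue values, their number and the uniform prior are illustrative, not the authors' actual model:

import numpy as np

def combine_cues(cue_likelihoods, prior_road=0.5):
    """Fuse per-pixel likelihoods P(cue | road) and P(cue | background).

    cue_likelihoods: list of (p_given_road, p_given_bg) arrays, one per cue.
    Returns the per-pixel posterior P(road | all cues), assuming the cues
    are conditionally independent (naive Bayes).
    """
    log_road = np.log(prior_road)
    log_bg = np.log(1.0 - prior_road)
    for p_road, p_bg in cue_likelihoods:
        log_road = log_road + np.log(np.clip(p_road, 1e-6, 1.0))
        log_bg = log_bg + np.log(np.clip(p_bg, 1e-6, 1.0))
    return 1.0 / (1.0 + np.exp(log_bg - log_road))

# Toy usage: three weak cues (say, photometric invariance, horizon prior,
# temporal consistency) on a 4x4 image patch.
rng = np.random.default_rng(0)
cues = [(rng.uniform(0.4, 0.9, (4, 4)), rng.uniform(0.1, 0.6, (4, 4)))
        for _ in range(3)]
print(combine_cues(cues).round(2))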
 

 
Author: Mohammad Rouhani; Angel Sappa
Title: Relaxing the 3L Algorithm for an Accurate Implicit Polynomial Fitting
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 3066–3072
Abstract: This paper presents a novel method to increase the accuracy of linear fitting of implicit polynomials. The proposed method is based on the philosophy of the 3L algorithm. The novelty lies in relaxing the additional constraints imposed by the 3L algorithm: the accuracy of the final solution increases thanks to the proper adjustment of the expected values in those additional constraints. Although iterative, the proposed approach solves the fitting problem within a linear framework that is independent of threshold tuning. Experimental results, both in 2D and 3D, show improvements in the accuracy of the fitting. Comparisons are provided with both state-of-the-art algorithms and a geometry-based (non-linear) fitting method, which is used as ground truth.
Address: San Francisco, CA, USA; June 2010
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ RoS2010a
Serial: 1303
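
For reference, the 3L construction that this paper relaxes can be written as a single linear least-squares problem: the data points are tied to the zero level set of the polynomial, and two point sets offset along the normals are tied to +/-eps. A minimal 2D sketch under assumed degree, offset and eps values (the paper's relaxation of the +/-eps targets is not reproduced here):

import numpy as np

def monomials(pts, degree):
    """Monomial basis [x^i * y^j for i+j <= degree] evaluated at pts (N,2)."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([x**i * y**j
                            for i in range(degree + 1)
                            for j in range(degree + 1 - i)])

def fit_3l(pts, normals, degree=4, offset=0.05, eps=1.0):
    """Classical 3L fit: zero set on the data, +/-eps on the offset sets."""
    inner, outer = pts - offset * normals, pts + offset * normals
    A = np.vstack([monomials(pts, degree),
                   monomials(outer, degree),
                   monomials(inner, degree)])
    b = np.concatenate([np.zeros(len(pts)),
                        eps * np.ones(len(pts)),
                        -eps * np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

# Toy usage: points on a unit circle; the outward normals equal the points.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)])
coeffs = fit_3l(pts, pts)
print(np.abs(monomials(pts, 4) @ coeffs).max())  # ~0 on the curve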
 

 
Author: Javier Marin; David Vazquez; David Geronimo; Antonio Lopez
Title: Learning Appearance in Virtual Scenarios for Pedestrian Detection
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 137–144
Keywords: Pedestrian Detection; Domain Adaptation
Abstract: Detecting pedestrians in images is a key functionality to avoid vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then learn appearance-based pedestrian classifiers using HOG and a linear SVM. We test these classifiers on a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking, which contains real-world images acquired from a moving car. The result is compared with that of a classifier learnt from samples of real images. The comparison reveals that, although the virtual samples were not specially selected, virtual- and real-based training give rise to classifiers of similar performance.
Address: San Francisco, CA, USA; June 2010
Language: English; Summary Language: English; Original Title: Learning Appearance in Virtual Scenarios for Pedestrian Detection
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ MVG2010
Serial: 1304
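
The training pipeline the abstract outlines (HOG features plus a linear SVM) can be sketched with standard libraries; random arrays stand in below for the virtual pedestrian and background crops, and the parameters are common HOG defaults rather than the paper's exact settings:

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
crops = rng.random((40, 128, 64))           # placeholder 128x64 grayscale crops
labels = np.repeat([1, 0], 20)              # 1 = pedestrian, 0 = background

# HOG descriptor per crop, then a linear SVM, as in the abstract.
feats = np.array([hog(c, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for c in crops])
clf = LinearSVC(C=0.01).fit(feats, labels)
print("train accuracy:", clf.score(feats, labels))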
 

 
Author: David Aldavert; Arnau Ramisa; Ramon Lopez de Mantaras; Ricardo Toledo
Title: Fast and Robust Object Segmentation with the Integral Linear Classifier
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 1046–1053
Abstract: We propose an efficient method, built on the popular Bag of Features approach, that obtains robust multiclass pixel-level object segmentation of an image in less than 500 ms, with results comparable to or better than most state-of-the-art methods. We introduce the Integral Linear Classifier (ILC), which can readily obtain the classification score for any image sub-window with only 6 additions and 1 product, by fusing the accumulation and classification steps in a single operation. In order to design a method as efficient as possible, our building blocks are carefully selected from the quickest in the state of the art. More precisely, we evaluate the performance of three popular local descriptors, which can be computed very efficiently using integral images, and two fast quantization methods: Hierarchical K-Means and Extremely Randomized Forests. Finally, we explore the utility of adding spatial bins to the Bag of Features histograms and of cascade classifiers to improve the obtained segmentation. Our method is compared to the state of the art on the difficult Graz-02 and PASCAL 2007 Segmentation Challenge datasets.
Address: San Francisco, CA, USA; June 2010
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS
Approved: no
Call Number: Admin @ si @ ARL2010a
Serial: 1311
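
The core trick behind the ILC can be sketched briefly: since a linear classifier on a bag-of-features histogram is a sum of per-pixel word weights, one integral image of those weights yields any sub-window's score from four corner lookups. The vocabulary size and weights below are illustrative:

import numpy as np

rng = np.random.default_rng(0)
H, W, K = 60, 80, 32
words = rng.integers(0, K, (H, W))      # per-pixel visual-word assignment
weights = rng.normal(size=K)            # linear classifier weights
bias = -0.1

# Integral image of per-pixel classifier contributions, padded with a zero
# row/column so window sums need no boundary checks.
ii = np.zeros((H + 1, W + 1))
ii[1:, 1:] = np.cumsum(np.cumsum(weights[words], axis=0), axis=1)

def window_score(top, left, h, w):
    """Classifier score of any sub-window from four integral-image lookups."""
    s = (ii[top + h, left + w] - ii[top, left + w]
         - ii[top + h, left] + ii[top, left])
    return s + bias

print(window_score(10, 20, 32, 32))
# Cross-check against the naive per-window sum:
print(weights[words[10:42, 20:52]].sum() + bias)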
 

 
Author: German Ros; J. Guerrero; Angel Sappa; Daniel Ponsa; Antonio Lopez
Title: Fast and Robust l1-averaging-based Pose Estimation for Driving Scenarios
Type: Conference Article
Year: 2013
Publication: 24th British Machine Vision Conference
Keywords: SLAM
Abstract: Robust visual pose estimation is at the core of many computer vision applications, being fundamental for Visual SLAM and Visual Odometry problems. During the last decades, many approaches have been proposed to solve these problems, RANSAC being one of the most accepted and used. However, with the arrival of new challenges, such as large driving scenarios for autonomous vehicles, along with improvements in data-gathering frameworks, new issues must be considered. One of these issues is the capability of a technique to deal with very large amounts of data while meeting the real-time constraint. With this purpose in mind, we present a novel technique for robust camera-pose estimation that is better suited to dealing with large amounts of data and that additionally helps to improve the results. The method is based on the combination of a very fast coarse-evaluation function and a robust ℓ1-averaging procedure. Such a scheme leads to high-quality results while taking considerably less time than RANSAC. Experimental results on the challenging KITTI Vision Benchmark Suite are provided, showing the validity of the proposed approach.
Address: Bristol, UK; September 2013
Conference: BMVC
Notes: ADAS
Approved: no
Call Number: Admin @ si @ RGS2013b; ADAS @ adas @
Serial: 2274
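
The ℓ1-averaging ingredient can be illustrated with the Weiszfeld algorithm for the geometric median, here applied to toy translation vectors only; the paper's actual pipeline (coarse evaluation plus averaging of relative motions) is not reproduced:

import numpy as np

def weiszfeld(points, iters=50, tol=1e-9):
    """Geometric median (l1 average) of points (N,D) by Weiszfeld iteration."""
    x = points.mean(axis=0)                      # initialize at the l2 mean
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - x, axis=1), tol)
        w = 1.0 / d                              # inverse-distance weights
        x_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy usage: 20 noisy estimates of a camera translation plus 5 gross outliers;
# the l1 average stays near the true value while the mean is dragged away.
rng = np.random.default_rng(0)
inliers = np.array([1.0, 0.0, 2.0]) + 0.01 * rng.normal(size=(20, 3))
outliers = rng.uniform(-10, 10, (5, 3))
samples = np.vstack([inliers, outliers])
print("l2 mean:   ", samples.mean(axis=0).round(2))
print("l1 average:", weiszfeld(samples).round(2))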
 

 
Author: Victor Vaquero; German Ros; Francesc Moreno-Noguer; Antonio Lopez; Alberto Sanfeliu
Title: Joint coarse-and-fine reasoning for deep optical flow
Type: Conference Article
Year: 2017
Publication: 24th International Conference on Image Processing
Pages: 2558–2562
Abstract: We propose a novel representation for dense pixel-wise estimation tasks using CNNs that boosts accuracy and reduces training time, by explicitly exploiting joint coarse-and-fine reasoning. The coarse reasoning is performed over a discrete classification space to obtain a general rough solution, while the fine details of the solution are obtained over a continuous regression space. In our approach both components are jointly estimated, which proved to be beneficial for improving estimation accuracy. Additionally, we propose a new network architecture, which combines coarse and fine components by treating the fine estimation as a refinement built on top of the coarse solution, and therefore adding details to the general prediction. We apply our approach to the challenging problem of optical flow estimation and empirically validate it against state-of-the-art CNN-based solutions trained from scratch and tested on large optical flow datasets.
Address: Beijing, China; September 2017
Conference: ICIP
Notes: ADAS; 600.118
Approved: no
Call Number: Admin @ si @ VRM2017
Serial: 2898
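
A minimal sketch of the joint coarse-and-fine idea for a single flow component: a classification branch scores discrete flow bins and a regression branch adds a continuous refinement on top of the (soft) coarse pick. Layer sizes, the bin layout and the per-pixel formulation are assumptions, not the paper's architecture:

import torch
import torch.nn as nn

class CoarseFineFlowHead(nn.Module):
    def __init__(self, feat_dim=64, bins=torch.linspace(-8, 8, 9)):
        super().__init__()
        self.register_buffer("bins", bins)            # discrete flow values
        self.coarse = nn.Linear(feat_dim, len(bins))  # classification logits
        self.fine = nn.Linear(feat_dim, 1)            # continuous residual

    def forward(self, feats):
        logits = self.coarse(feats)
        # Soft-argmax over bins keeps the coarse pick differentiable.
        coarse_flow = (logits.softmax(-1) * self.bins).sum(-1, keepdim=True)
        return coarse_flow + self.fine(feats)         # refinement on top

head = CoarseFineFlowHead()
feats = torch.randn(4, 64)                            # per-pixel features
print(head(feats).shape)                              # torch.Size([4, 1])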
 

 
Author: Xialei Liu; Marc Masana; Luis Herranz; Joost Van de Weijer; Antonio Lopez; Andrew Bagdanov
Title: Rotate your Networks: Better Weight Consolidation and Less Catastrophic Forgetting
Type: Conference Article
Year: 2018
Publication: 24th International Conference on Pattern Recognition
Pages: 2262–2268
Abstract: In this paper we propose an approach to avoiding catastrophic forgetting in sequential task learning scenarios. Our technique is based on a network reparameterization that approximately diagonalizes the Fisher Information Matrix of the network parameters. This reparameterization takes the form of a factorized rotation of parameter space which, when used in conjunction with Elastic Weight Consolidation (which assumes a diagonal Fisher Information Matrix), leads to significantly better performance on lifelong learning of sequential tasks. Experimental results on the MNIST, CIFAR-100, CUB-200 and Stanford-40 datasets demonstrate that we significantly improve the results of standard Elastic Weight Consolidation, and that we obtain competitive results when compared to the state of the art in lifelong learning without forgetting.
Conference: ICPR
Notes: LAMP; ADAS; 601.305; 601.109; 600.124; 600.106; 602.200; 600.120; 600.118
Approved: no
Call Number: Admin @ si @ LMH2018
Serial: 3160
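
The rotation idea can be checked numerically: the empirical Fisher of per-sample gradients is generally non-diagonal, but in its eigenvector basis it is exactly diagonal, so a diagonal-Fisher penalty such as EWC loses nothing there. The random "gradients" below stand in for a real network's, and this eigendecomposition is a simplification of the paper's factorized rotation:

import numpy as np

rng = np.random.default_rng(0)
grads = rng.normal(size=(500, 6)) @ rng.normal(size=(6, 6))  # correlated grads
fisher = grads.T @ grads / len(grads)                        # empirical Fisher

eigvals, U = np.linalg.eigh(fisher)        # rotation that diagonalizes it
rot_fisher = U.T @ fisher @ U
off_diag = lambda M: np.abs(M - np.diag(np.diag(M))).sum()
print("off-diagonal mass before:", off_diag(fisher).round(3))
print("off-diagonal mass after: ", off_diag(rot_fisher).round(6))

def ewc_penalty(theta, theta_old, lam=1.0):
    """Diagonal EWC penalty evaluated in the rotated parameter space."""
    d = U.T @ (theta - theta_old)          # reparameterize, then penalize
    return 0.5 * lam * np.sum(eigvals * d**2)

print(ewc_penalty(rng.normal(size=6), np.zeros(6)).round(3))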
 

 
Author: Gema Rotger; Felipe Lumbreras; Francesc Moreno-Noguer; Antonio Agudo
Title: 2D-to-3D Facial Expression Transfer
Type: Conference Article
Year: 2018
Publication: 24th International Conference on Pattern Recognition
Pages: 2008–2013
Abstract: Automatically changing the expression and physical features of a face from an input image is a topic that has traditionally been tackled in a 2D domain. In this paper, we bring this problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, initially computes the subject's 3D shape and then performs a transfer to a new and potentially non-observed expression. For this purpose, we parameterize the rest shape, obtained from standard factorization approaches over the input video, using a triangular mesh which is further clustered into larger macro-segments. The expression transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between 3D models by requiring points in specific regions to map onto semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that differ largely from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops.
Conference: ICPR
Notes: ADAS; 600.086; 600.130; 600.118
Approved: no
Call Number: Admin @ si @ RLM2018
Serial: 3232
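
A much-simplified sketch of expression transfer between meshes: given a vertex correspondence, each source vertex's displacement from its neutral pose is applied to the target rest shape. The paper's clustered macro-segments and geometric-consistency constraints are not reproduced here:

import numpy as np

def transfer_expression(target_rest, source_rest, source_expr, corr):
    """Apply source expression displacements to the target rest shape.

    target_rest: (N,3) target neutral vertices
    source_rest, source_expr: (M,3) source neutral / expression vertices
    corr: (N,) index of the corresponding source vertex per target vertex
    """
    displacement = source_expr[corr] - source_rest[corr]
    return target_rest + displacement

# Toy usage on random "meshes" with an identity correspondence.
rng = np.random.default_rng(0)
src_rest = rng.random((100, 3))
src_expr = src_rest + np.array([0.0, 0.05, 0.0])   # a uniform "smile" shift
tgt_rest = rng.random((100, 3))
tgt_expr = transfer_expression(tgt_rest, src_rest, src_expr, np.arange(100))
print(np.allclose(tgt_expr - tgt_rest, [0.0, 0.05, 0.0]))  # True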
 

 
Author: Eugenio Alcala; Laura Sellart; Vicenc Puig; Joseba Quevedo; Jordi Saludes; David Vazquez; Antonio Lopez
Title: Comparison of two non-linear model-based control strategies for autonomous vehicles
Type: Conference Article
Year: 2016
Publication: 24th Mediterranean Conference on Control and Automation
Pages: 846-851
Keywords: Autonomous Driving; Control
Abstract: This paper presents a comparison of two non-linear model-based control strategies for autonomous cars. A control-oriented vehicle model based on a bicycle model is used. Both control strategies follow a model-reference approach, from which the error-dynamics model is developed. Both controllers receive the longitudinal, lateral and orientation errors as inputs and generate the steering angle and the velocity of the vehicle as control outputs. The first approach is based on a non-linear control law designed by means of the direct Lyapunov approach. The second is based on sliding-mode control, which defines a set of sliding surfaces over which the error trajectories converge. The main advantage of the sliding-mode technique is its robustness against non-linearities and parametric uncertainties in the model; however, the main drawback of first-order sliding mode is chattering, so a high-order sliding-mode control has been implemented. To test and compare the proposed control strategies, different path-following scenarios are used in simulation.
Address: Athens, Greece; June 2016
Conference: MED
Notes: ADAS; 600.085; 600.082; 600.076
Approved: no
Call Number: ADAS @ adas @ ASP2016
Serial: 2750
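
The first control strategy can be illustrated with the classic Lyapunov-based (Kanayama) tracking law, used here on a unicycle error model as a stand-in for the paper's bicycle-model design: longitudinal, lateral and orientation errors map to velocity and turn-rate commands. The gains and reference are illustrative:

import numpy as np

def lyapunov_control(e_x, e_y, e_th, v_ref, w_ref, kx=1.0, ky=4.0, kth=2.0):
    """Velocity / turn-rate commands from frame-local tracking errors."""
    v = v_ref * np.cos(e_th) + kx * e_x
    w = w_ref + v_ref * (ky * e_y + kth * np.sin(e_th))
    return v, w

# Toy closed-loop check: drive the tracking error of a unicycle toward zero.
dt, v_ref, w_ref = 0.01, 1.0, 0.0
e = np.array([0.5, -0.3, 0.2])                       # [e_x, e_y, e_th]
for _ in range(2000):
    v, w = lyapunov_control(*e, v_ref, w_ref)
    ex, ey, eth = e
    # Standard error dynamics of the reference-tracking unicycle model.
    e = e + dt * np.array([v_ref * np.cos(eth) - v + ey * w,
                           v_ref * np.sin(eth) - ex * w,
                           w_ref - w])
print(np.abs(e).round(4))                            # errors near zero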
 

 
Author: Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez
Title: Incremental Domain Adaptation of Deformable Part-based Models
Type: Conference Article
Year: 2014
Publication: 25th British Machine Vision Conference
Keywords: Pedestrian Detection; Part-based models; Domain Adaptation
Abstract: Nowadays, classifiers play a core role in many computer vision tasks. The underlying assumption when learning classifiers is that the training set and the deployment environment (testing) follow the same probability distribution with regard to the features used by the classifiers. In practice, however, different factors can break this constancy assumption. Accordingly, reusing existing classifiers by adapting them from the previous training environment (source domain) to the new testing one (target domain) is an approach with increasing acceptance in the computer vision community. In this paper we focus on the domain adaptation of deformable part-based models (DPMs) for object detection. In particular, we focus on a relatively unexplored scenario: incremental domain adaptation for object detection assuming weak labeling. Our algorithm is therefore ready to improve existing source-oriented DPM-based detectors as soon as a small amount of labeled target-domain training data is available, and keeps improving as more such data arrives in a continuous fashion. To achieve this, we follow a multiple instance learning (MIL) paradigm that operates incrementally on a per-image basis. As proof of concept, we address the challenging scenario of adapting a DPM-based pedestrian detector trained with synthetic pedestrians to operate in real-world scenarios. The obtained results show that our incremental adaptive models obtain accuracy as good as that of the batch-learned models, while being more flexible for handling continuously arriving target-domain data.
Address: Nottingham, UK; September 2014
Publisher: BMVA Press
Editors: Michel Valstar; Andrew French; Tony Pridmore
Conference: BMVC
Notes: ADAS; 600.057; 600.054; 600.076
Approved: no
Call Number: XRV2014c; ADAS @ adas @ xrv2014c
Serial: 2455
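
The incremental, per-image MIL flavor of the adaptation can be sketched with a plain linear scorer standing in for the DPM: for each weakly labeled target image, the highest-scoring candidate window is taken as the instance and the model receives one hinge-loss update. The features, learning rate and the linear detector itself are illustrative simplifications, not the paper's DPM machinery:

import numpy as np

rng = np.random.default_rng(0)
dim = 16
w = rng.normal(scale=0.1, size=dim)         # "source-trained" detector weights

def update(w, windows, label, lr=0.1, reg=1e-3):
    """One per-image update: pick the best window, apply a hinge-loss step."""
    scores = windows @ w
    x = windows[np.argmax(scores)]          # MIL: best instance in the bag
    if label * (x @ w) < 1.0:               # hinge margin violated
        w = w + lr * (label * x - reg * w)
    return w

# Toy stream of target-domain images, each a bag of candidate-window features
# with only an image-level (weak) label.
for _ in range(200):
    label = rng.choice([-1, 1])
    bag = rng.normal(size=(10, dim)) + (label * 0.5)
    w = update(w, bag, label)
print("adapted weight norm:", np.linalg.norm(w).round(3))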