Author: Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez
Title: Hierarchical Adaptive Structural SVM for Domain Adaptation   Type: Journal Article
Year: 2016   Publication: International Journal of Computer Vision   Abbreviated Journal: IJCV
Volume: 119   Issue: 2   Pages: 159–178
Keywords: Domain Adaptation; Pedestrian Detection
Abstract: A key topic in classification is the accuracy loss produced when the data distribution in the training (source) domain differs from that in the testing (target) domain. This is recognized as a very relevant problem for many computer vision tasks such as image classification, object detection, and object category recognition. In this paper, we present a novel domain adaptation method that leverages multiple target domains (or sub-domains) in a hierarchical adaptation tree. The core idea is to exploit the commonalities and differences of the jointly considered target domains. Given the relevance of structural SVM (SSVM) classifiers, we apply our idea to the adaptive SSVM (A-SSVM), which only requires the target-domain samples together with the existing source-domain classifier to perform the desired adaptation. Altogether, we term our proposal hierarchical A-SSVM (HA-SSVM). As a proof of concept we use HA-SSVM for pedestrian detection, object category recognition and face recognition. In the first case we apply HA-SSVM to the deformable part-based model (DPM), while in the rest HA-SSVM is applied to multi-category classifiers. We show how HA-SSVM is effective in increasing the detection/recognition accuracy with respect to adaptation strategies that ignore the structure of the target data. Since the sub-domains of the target data are not always known a priori, we also show how HA-SSVM can incorporate sub-domain discovery for object category recognition.
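As a rough illustration of the adaptation step, the sketch below implements the A-SSVM idea for a plain linear binary classifier: the target weights are regularized toward the pre-trained source weights instead of toward zero. This is a simplified stand-in (binary hinge loss rather than a full structural SVM), and all names and sizes are illustrative, not the authors' implementation.

```python
# Simplified stand-in for A-SSVM adaptation: a linear classifier whose
# weights are pulled toward the pre-trained source weights w_src instead
# of toward zero, using a plain hinge loss.
import numpy as np

def a_svm(X, y, w_src, lam=1.0, lr=0.01, epochs=100):
    """Adapt w_src to target samples X, y (labels in {-1, +1})."""
    w = w_src.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            grad = lam * (w - w_src)      # regularize toward the source classifier
            if yi * (w @ xi) < 1:         # hinge-loss subgradient
                grad -= yi * xi
            w -= lr * grad
    return w

# Hypothetical usage: adapt a source detector to one target sub-domain.
# In HA-SSVM the same update would run at every node of the adaptation
# tree, regularizing toward the parent node's weights.
rng = np.random.default_rng(0)
w_src = rng.normal(size=10)
X_tgt = rng.normal(size=(50, 10))
y_tgt = np.where(X_tgt @ w_src + rng.normal(scale=2.0, size=50) > 0, 1.0, -1.0)
w_tgt = a_svm(X_tgt, y_tgt, w_src)
```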
 
Publisher: Springer US
ISSN: 0920-5691
Notes: ADAS; 600.085; 600.082; 600.076
Call Number: Admin @ si @ XRV2016   Serial: 2669
 
Author: Adrien Gaidon; Antonio Lopez; Florent Perronnin
Title: The Reasonable Effectiveness of Synthetic Visual Data   Type: Journal Article
Year: 2018   Publication: International Journal of Computer Vision   Abbreviated Journal: IJCV
Volume: 126   Issue: 9   Pages: 899–901
Notes: ADAS; 600.118
Call Number: Admin @ si @ GLP2018   Serial: 3180
 
Author: Cesar de Souza; Adrien Gaidon; Yohann Cabon; Naila Murray; Antonio Lopez
Title: Generating Human Action Videos by Coupling 3D Game Engines and Probabilistic Graphical Models   Type: Journal Article
Year: 2020   Publication: International Journal of Computer Vision   Abbreviated Journal: IJCV
Volume: 128   Pages: 1505–1536
Keywords: Procedural generation; Human action recognition; Synthetic data; Physics
Abstract: Deep video action recognition models have been highly successful in recent years but require large quantities of manually annotated data, which are expensive and laborious to obtain. In this work, we investigate the generation of synthetic training data for video action recognition, as synthetic data have been successfully used to supervise models for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation, physics models and other components of modern game engines. With this model we generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for "Procedural Human Action Videos". PHAV contains a total of 39,982 videos, with more than 1000 examples for each of 35 action categories. Our video generation approach is not limited to existing motion capture sequences: 14 of these 35 categories are procedurally defined synthetic actions. In addition, each video is represented with 6 different data modalities, including RGB, optical flow and pixel-level semantic labels. These modalities are generated almost simultaneously using the Multiple Render Targets feature of modern GPUs. In order to leverage PHAV, we introduce a deep multi-task representation learning architecture (i.e. one that considers action classes from multiple datasets) that is able to simultaneously learn from synthetic and real video datasets, even when their action categories differ. Our experiments on the UCF-101 and HMDB-51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance. Our approach also significantly outperforms video representations produced by fine-tuning state-of-the-art unsupervised generative models of videos.
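A sketch of the multi-task idea described above (PyTorch assumed; layer sizes are illustrative, not the paper's architecture): a shared trunk feeds one classification head per dataset, so PHAV's 35 synthetic action classes and a real label space such as UCF-101's can be trained jointly.

```python
import torch
import torch.nn as nn

class MultiTaskActionNet(nn.Module):
    def __init__(self, in_dim=2048, feat_dim=512, classes_per_dataset=(35, 101)):
        super().__init__()
        # Shared trunk standing in for a video feature extractor.
        self.trunk = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # One head per dataset, each with its own label space.
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n) for n in classes_per_dataset])

    def forward(self, x, dataset_idx):
        return self.heads[dataset_idx](self.trunk(x))

# Hypothetical usage: each mini-batch comes from one dataset and is routed
# to that dataset's head; the shared trunk gets gradients from both.
net = MultiTaskActionNet()
clip_feats = torch.randn(8, 2048)           # stand-in for clip features
logits_synth = net(clip_feats, dataset_idx=0)
logits_real = net(clip_feats, dataset_idx=1)
```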
Notes: ADAS; 600.124; 600.118
Call Number: Admin @ si @ SGC2019   Serial: 3303
 
Author: Daniel Hernandez; Lukas Schneider; P. Cebrian; A. Espinosa; David Vazquez; Antonio Lopez; Uwe Franke; Marc Pollefeys; Juan Carlos Moure
Title: Slanted Stixels: A way to represent steep streets   Type: Journal Article
Year: 2019   Publication: International Journal of Computer Vision   Abbreviated Journal: IJCV
Volume: 127   Pages: 1643–1658
Abstract: This work presents and evaluates a novel compact scene representation based on Stixels that infers geometric and semantic information. Our approach overcomes the rather restrictive geometric assumptions of previous Stixel models by introducing a novel depth model to account for non-flat roads and slanted objects. Both semantic and depth cues are used jointly to infer the scene representation in a sound global energy minimization formulation. Furthermore, a novel approximation scheme is introduced to significantly reduce the computational complexity of the Stixel algorithm, thereby achieving real-time computation. The idea is to first perform an over-segmentation of the image, discarding the unlikely Stixel cuts, and to apply the algorithm only to the remaining cuts. This work presents a novel over-segmentation strategy based on a fully convolutional network, which outperforms an approach based on using local extrema of the disparity map. We evaluate the proposed methods in terms of semantic and geometric accuracy as well as run-time on four publicly available benchmark datasets. Our approach maintains accuracy on flat-road scene datasets while improving substantially on a novel non-flat-road dataset.
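The complexity-reduction idea can be illustrated with a toy sketch (NumPy; the cut scores below are random stand-ins for the fully convolutional network's per-row output): only rows scoring above a threshold are kept as candidate Stixel boundaries, and the dynamic programming runs over that reduced set.

```python
import numpy as np

def candidate_cuts(cut_scores, threshold=0.5):
    """Keep only rows likely to be Stixel boundaries."""
    return np.flatnonzero(cut_scores >= threshold)

rng = np.random.default_rng(1)
h = 480                          # image height in rows
scores = rng.random(h)           # placeholder FCN cut scores for one column
cuts = candidate_cuts(scores)
# The segmentation DP now scales with len(cuts)**2 per column instead of
# h**2, which is the source of the speed-up.
print(f"kept {len(cuts)} of {h} candidate cut rows")
```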
Notes: ADAS; 600.118; 600.124
Call Number: Admin @ si @ HSC2019   Serial: 3304
 
Author: Antonio Lopez; Joan Serrat; Cristina Cañero; Felipe Lumbreras; T. Graf
Title: Robust lane markings detection and road geometry computation   Type: Journal Article
Year: 2010   Publication: International Journal of Automotive Technology   Abbreviated Journal: IJAT
Volume: 11   Issue: 3   Pages: 395–407
Keywords: lane markings
Abstract: Detection of lane markings based on a camera sensor can be a low-cost solution to lane departure and curve-over-speed warnings. A number of methods and implementations have been reported in the literature. However, reliable detection is still an issue because of, for example, cast shadows, worn and occluded markings, and variable ambient lighting conditions. We focus on increasing detection reliability in two ways. First, we employed an image feature other than the commonly used edges: ridges, which we claim address this problem better. Second, we adapted RANSAC, a generic robust estimation method, to fit a parametric model of a pair of lane lines to the image features, based on both ridgeness and ridge orientation. In addition, the model was fitted for the left and right lane lines simultaneously to enforce a consistent result. Four measures of interest for driver assistance applications were computed directly from the fitted parametric model at each frame: lane width, lane curvature, and vehicle yaw angle and lateral offset with regard to the lane medial axis. We qualitatively assessed our method on video sequences captured on several road types and under very different lighting conditions. We also quantitatively assessed it on synthetic but realistic video sequences for which the road geometry and vehicle trajectory ground truth are known.
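A hedged sketch of the RANSAC fitting loop (NumPy). For brevity a single lane line is modeled here as an image-space parabola x = a*y^2 + b*y + c; this model and the tolerances are assumptions of the sketch, not the paper's pair-of-lines model over ridgeness and ridge orientation.

```python
import numpy as np

def ransac_parabola(points, n_iters=200, inlier_tol=2.0, seed=0):
    """points: (N, 2) array of (x, y) ridge pixels. Returns (a, b, c)."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iters):
        s = points[rng.choice(len(points), 3, replace=False)]
        # Solve for the parabola passing exactly through the 3 samples.
        A = np.stack([s[:, 1] ** 2, s[:, 1], np.ones(3)], axis=1)
        try:
            model = np.linalg.solve(A, s[:, 0])
        except np.linalg.LinAlgError:
            continue                      # degenerate sample, try again
        pred_x = np.polyval(model, points[:, 1])
        inliers = np.count_nonzero(np.abs(pred_x - points[:, 0]) < inlier_tol)
        if inliers > best_inliers:
            best_model, best_inliers = model, inliers
    return best_model
```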
Publisher: The Korean Society of Automotive Engineers
ISSN: 1229-9138
Notes: ADAS
Call Number: ADAS @ adas @ LSC2010   Serial: 1300
 
Author: Enrique Cabello; Cristina Conde; Angel Serrano; Licesio Rodriguez; David Vazquez
Title: Empleo de sistemas biométricos para el reconocimiento de personas en aeropuertos [Use of biometric systems for person recognition at airports]   Type: Journal Article
Year: 2006   Publication: Instituto Universitario de Investigación sobre Seguridad Interior (IUSI 2006)
Keywords: Surveillance; Face detection; Face recognition
Abstract: This project was carried out over the course of 2005, testing a prototype facial verification system on images taken from the video surveillance cameras of Barajas airport. Several experiments were designed, grouped into two classes. In the first, the system is trained with images obtained under laboratory conditions and then tested with images taken from the Barajas video surveillance cameras. In the second, both the training and the test images are taken from Barajas. A complete system was developed, covering image acquisition and digitization, locating and cropping faces in the scene, subject verification, and reporting of results. The results show that, in general, an image-based facial verification system can assist an operator who must monitor large areas.
Notes: invisible; ADAS
Call Number: ADAS @ adas @ CCS2006a   Serial: 1672
 
Author: Miguel Oliveira; Victor Santos; Angel Sappa
Title: Multimodal Inverse Perspective Mapping   Type: Journal Article
Year: 2015   Publication: Information Fusion   Abbreviated Journal: IF
Volume: 24   Pages: 108–121
Keywords: Inverse perspective mapping; Multimodal sensor fusion; Intelligent vehicles
Abstract: Over the past years, inverse perspective mapping has been successfully applied to several problems in the field of Intelligent Transportation Systems. In brief, the method consists of mapping images to a new coordinate system where perspective effects are removed. The removal of perspective associated effects facilitates road and obstacle detection and also assists in free space estimation. There is, however, a significant limitation in the inverse perspective mapping: the presence of obstacles on the road disrupts the effectiveness of the mapping. The current paper proposes a robust solution based on the use of multimodal sensor fusion. Data from a laser range finder is fused with images from the cameras, so that the mapping is not computed in the regions where obstacles are present. As shown in the results, this considerably improves the effectiveness of the algorithm and reduces computation time when compared with the classical inverse perspective mapping. Furthermore, the proposed approach is also able to cope with several cameras with different lenses or image resolutions, as well as dynamic viewpoints.
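A minimal sketch of the fusion idea (OpenCV assumed; the homography, image, and obstacle mask below are placeholders): pixels flagged as obstacles by the range finder are blanked before the perspective warp, so they are not smeared across the bird's-eye view.

```python
import cv2
import numpy as np

def masked_ipm(image, obstacle_mask, H, out_size=(400, 600)):
    """Warp to a bird's-eye view, blanking obstacle pixels beforehand so
    they are not spread across the ground plane."""
    free_space = image.copy()
    free_space[obstacle_mask > 0] = 0      # remove obstacle regions
    return cv2.warpPerspective(free_space, H, out_size)

# Hypothetical usage with an identity homography standing in for the
# calibrated road-plane homography.
img = np.zeros((480, 640, 3), np.uint8)
mask = np.zeros((480, 640), np.uint8)
birdseye = masked_ipm(img, mask, np.eye(3))
```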
Notes: ADAS; 600.055; 600.076
Call Number: Admin @ si @ OSS2015c   Serial: 2532
 
Author: Fadi Dornaika; Angel Sappa
Title: A Featureless and Stochastic Approach to On-board Stereo Vision System Pose   Type: Journal Article
Year: 2009   Publication: Image and Vision Computing   Abbreviated Journal: IMAVIS
Volume: 27   Issue: 9   Pages: 1382–1393
Keywords: On-board stereo vision system; Pose estimation; Featureless approach; Particle filtering; Image warping
Abstract: This paper presents a direct and stochastic technique for real-time estimation of an on-board stereo head's position and orientation. Unlike existing works, which rely on feature extraction either in the image domain or in 3D space, our proposed approach directly estimates the unknown parameters from the stream of stereo pairs' brightness. The pose parameters are tracked using the particle filtering framework, which implicitly enforces smoothness constraints on the estimated parameters. The proposed technique can be used in driver assistance applications as well as in augmented reality applications. Extensive experiments on urban environments with different road geometries are presented, together with comparisons against a 3D data-based approach. Moreover, we provide a performance study aimed at evaluating the accuracy of the proposed approach.
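A generic particle-filter step illustrating the tracking framework (NumPy): pose hypotheses are resampled, propagated with a random-walk motion model, and re-weighted by a likelihood. The likelihood here is a placeholder for the paper's brightness-consistency score of warped stereo pairs, not the authors' implementation.

```python
import numpy as np

def warp_likelihood(pose):
    """Placeholder image-based likelihood; higher means the warped
    stereo pair agrees better under this pose hypothesis."""
    return np.exp(-np.sum(pose ** 2))

def particle_filter_step(particles, weights, noise_std=0.01, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    # 1. Resample particles in proportion to their current weights.
    particles = particles[rng.choice(len(particles), len(particles), p=weights)]
    # 2. Propagate with a random-walk motion model (smoothness prior).
    particles = particles + rng.normal(0.0, noise_std, particles.shape)
    # 3. Re-weight by the likelihood and normalize.
    weights = np.array([warp_likelihood(p) for p in particles])
    return particles, weights / weights.sum()

# Hypothetical usage: 100 particles over (pitch, roll, camera height).
rng = np.random.default_rng(2)
P = rng.normal(0.0, 0.05, (100, 3))
w = np.full(100, 1.0 / 100)
P, w = particle_filter_step(P, w, rng=rng)
```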
Notes: ADAS
Call Number: ADAS @ adas @ DoS2009b   Serial: 1152
 
Author: Carme Julia; Angel Sappa; Felipe Lumbreras; Joan Serrat; Antonio Lopez
Title: An Iterative Multiresolution Scheme for SFM with Missing Data: single and multiple object scenes   Type: Journal Article
Year: 2010   Publication: Image and Vision Computing   Abbreviated Journal: IMAVIS
Volume: 28   Issue: 1   Pages: 164–176
Abstract: Most of the techniques proposed for tackling the Structure from Motion (SFM) problem cannot deal with high percentages of missing data in the matrix of trajectories. Furthermore, an additional problem must be faced when working with multiple-object scenes: the rank of the matrix of trajectories has to be estimated. This paper presents an iterative multiresolution scheme for SFM with missing data that can be used in both the single- and multiple-object cases. The proposed scheme aims at recovering missing entries in the original input matrix. The objective is to improve the results by applying a factorization technique to the partially or totally filled-in matrix instead of the original input one. Experimental results obtained with synthetic and real data sequences, containing single and multiple objects, are presented to show the viability of the proposed approach.
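The underlying factorization-with-missing-data problem can be sketched with a generic rank-r alternating-least-squares completion (NumPy). This is an illustration of the general idea, not the authors' iterative multiresolution algorithm.

```python
import numpy as np

def als_complete(W, mask, rank=4, n_iters=50, seed=0):
    """W: 2F x P matrix of trajectories; mask: 1 where observed, 0 missing.
    Returns a rank-`rank` completion A @ B.T of W."""
    rng = np.random.default_rng(seed)
    m, n = W.shape
    A = rng.normal(size=(m, rank))
    B = rng.normal(size=(n, rank))
    for _ in range(n_iters):
        for i in range(m):                         # refit each row of A
            obs = mask[i] > 0
            if obs.any():
                A[i] = np.linalg.lstsq(B[obs], W[i, obs], rcond=None)[0]
        for j in range(n):                         # refit each row of B
            obs = mask[:, j] > 0
            if obs.any():
                B[j] = np.linalg.lstsq(A[obs], W[obs, j], rcond=None)[0]
    return A @ B.T

# Hypothetical usage: recover a rank-4 matrix with ~40% of entries missing.
rng = np.random.default_rng(3)
W_true = rng.normal(size=(20, 4)) @ rng.normal(size=(4, 30))
mask = (rng.random((20, 30)) < 0.6).astype(float)
W_hat = als_complete(W_true * mask, mask)
```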
ISSN: 0262-8856
Notes: ADAS
Call Number: ADAS @ adas @ JSL2010   Serial: 1278
 
Author: Antonio Lopez; Gabriel Villalonga; Laura Sellart; German Ros; David Vazquez; Jiaolong Xu; Javier Marin; Azadeh S. Mozafari
Title: Training my car to see using virtual worlds   Type: Journal Article
Year: 2017   Publication: Image and Vision Computing   Abbreviated Journal: IMAVIS
Volume: 38   Pages: 102–118
Abstract: Computer vision technologies are at the core of different advanced driver assistance systems (ADAS) and will also play a key role in oncoming autonomous vehicles. One of the main challenges for such technologies is to perceive the driving environment, i.e. to detect and track relevant driving information in a reliable manner (e.g. pedestrians in the vehicle route, free space to drive through). Nowadays it is clear that machine learning techniques are essential for developing such visual perception for driving. In particular, the standard working pipeline consists of collecting data (i.e. on-board images), manually annotating the data (e.g. drawing bounding boxes around pedestrians), learning a discriminative data representation taking advantage of such annotations (e.g. a deformable part-based model, a deep convolutional neural network), and then assessing the reliability of such a representation with the acquired data. In the last two decades most of the research efforts focused on representation learning (first designing descriptors and learning classifiers, later doing it end-to-end). Hence, collecting data and, especially, annotating it is essential for learning good representations. While this has been the case from the very beginning, only after the disruptive appearance of deep convolutional neural networks did it become a serious issue, due to their data-hungry nature. In this context, the problem is that manual data annotation is tiresome work prone to errors. Accordingly, in the late 00's we initiated a research line consisting of training visual models using photo-realistic computer graphics, especially focusing on assisted and autonomous driving. In this paper, we summarize this work and show how it has become a trend with increasing acceptance.
Notes: ADAS; 600.118
Call Number: Admin @ si @ LVS2017   Serial: 2985