Author Victor Campmany; Sergio Silva; Antonio Espinosa; Juan Carlos Moure; David Vazquez; Antonio Lopez
  Title GPU-based pedestrian detection for autonomous driving Type Conference Article
  Year 2016 Publication 16th International Conference on Computational Science Abbreviated Journal  
  Volume 80 Issue Pages 2377-2381  
  Keywords Pedestrian detection; Autonomous Driving; CUDA  
  Abstract We propose a real-time pedestrian detection system for the embedded Nvidia Tegra X1 GPU-CPU hybrid platform. The pipeline is composed of the following state-of-the-art algorithms: Histograms of Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG) features extracted from the input image; the Pyramidal Sliding Window technique for foreground segmentation; and a Support Vector Machine (SVM) for classification. Results show an 8x speedup on the target Tegra X1 platform and a better performance/watt ratio than the desktop CUDA platforms under study.  
  Address San Diego; CA; USA; June 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCS  
  Notes ADAS; 600.085; 600.082; 600.076 Approved no  
  Call Number ADAS @ adas @ CSE2016 Serial 2741  
Permanent link to this record
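The record above describes a three-stage pipeline (HOG/LBP features, a pyramidal sliding window, SVM classification). As a rough CPU-side reference of how those stages fit together, here is a minimal sketch using OpenCV's default 64x128 HOGDescriptor; the weight vector svm_w, bias svm_b, stride, pyramid step and threshold are illustrative assumptions, not the paper's actual parameters, and the paper's CUDA port of this loop is not reproduced here.

```python
import cv2
import numpy as np

WIN_W, WIN_H = 64, 128        # canonical pedestrian window (assumed)
STRIDE, SCALE_STEP = 8, 1.2   # sliding-window stride and pyramid step (assumed)

hog = cv2.HOGDescriptor()     # default 64x128 HOG layout

def detect(image, svm_w, svm_b, thr=0.5):
    """Score every window of an image pyramid with a linear SVM."""
    detections, scale, img = [], 1.0, image
    while img.shape[0] >= WIN_H and img.shape[1] >= WIN_W:
        for y in range(0, img.shape[0] - WIN_H + 1, STRIDE):
            for x in range(0, img.shape[1] - WIN_W + 1, STRIDE):
                feat = hog.compute(img[y:y + WIN_H, x:x + WIN_W]).ravel()
                score = float(np.dot(svm_w, feat) + svm_b)   # linear SVM score
                if score > thr:  # map back to original image coordinates
                    detections.append((x * scale, y * scale,
                                       WIN_W * scale, WIN_H * scale, score))
        scale *= SCALE_STEP      # next pyramid level: shrink the image
        img = cv2.resize(image, (int(image.shape[1] / scale),
                                 int(image.shape[0] / scale)))
    return detections
```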
 

 
Author Muhammad Anwer Rao; David Vazquez; Antonio Lopez
  Title Color Contribution to Part-Based Person Detection in Different Types of Scenarios Type Conference Article
  Year 2011 Publication 14th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal  
  Volume 6855 Issue II Pages 463-470  
  Keywords Pedestrian Detection; Color  
  Abstract Camera-based person detection is of paramount interest due to its potential applications. The task is difficult because of the great variety of backgrounds (scenarios, illumination) in which persons are present, as well as their intra-class variability (pose, clothing, occlusion). In fact, the person class is one of those included in the popular PASCAL visual object classes (VOC) challenge. A breakthrough for this challenge, regarding person detection, is due to Felzenszwalb et al. These authors proposed a part-based detector that relies on histograms of oriented gradients (HOG) and latent support vector machines (LatSVM) to learn a model of the whole human body and its constitutive parts, as well as their relative position. Since the approach of Felzenszwalb et al. appeared, new variants have been proposed, usually giving rise to more complex models. In this paper, we focus on an issue that has not attracted sufficient interest up to now. In particular, we refer to the fact that HOG is usually computed from the RGB color space, but other possibilities exist and deserve the corresponding investigation. In this paper we challenge RGB space with the opponent color space (OPP), which is inspired by the human vision system. We compute the HOG on top of OPP, then train and test the part-based human classifier by Felzenszwalb et al. using the PASCAL VOC challenge protocols and person database. Our experiments demonstrate that OPP outperforms RGB. We also investigate possible differences among types of scenarios: indoor, urban and countryside. Interestingly, our experiments suggest that the benefits of OPP with respect to RGB mainly come from indoor and countryside scenarios, those in which the human visual system was designed by evolution.  
  Address Seville, Spain  
  Corporate Author Thesis  
  Publisher Springer Place of Publication Berlin Heidelberg Editor P. Real, D. Diaz, H. Molina, A. Berciano, W. Kropatsch  
  Language English Summary Language English Original Title Color Contribution to Part-Based Person Detection in Different Types of Scenarios  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-23677-8 Medium  
  Area Expedition Conference CAIP  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ RVL2011b Serial 1665  
Permanent link to this record
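For readers unfamiliar with the opponent color space (OPP) that this record evaluates, a common formulation derives two chromatic opponent channels plus an intensity channel from RGB, and HOG is then computed on those channels instead of on RGB. The sketch below uses the standard normalized definition; the paper's exact variant may differ.

```python
import numpy as np

def rgb_to_opponent(rgb):
    """Map an HxWx3 float RGB image to opponent channels O1, O2, O3.

    O1 and O2 carry the red-green and yellow-blue opponents; O3 is
    intensity. This is the standard normalized definition (the paper's
    exact transform may differ).
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2.0)
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)
    o3 = (r + g + b) / np.sqrt(3.0)
    return np.stack([o1, o2, o3], axis=-1)
```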
 

 
Author Muhammad Anwer Rao; David Vazquez; Antonio Lopez
  Title Opponent Colors for Human Detection Type Conference Article
  Year 2011 Publication 5th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal  
  Volume 6669 Issue Pages 363-370  
  Keywords Pedestrian Detection; Color; Part Based Models  
  Abstract Human detection is a key component in fields such as advanced driving assistance and video surveillance. However, even detecting non-occluded standing humans remains a challenge of intensive research. Finding good features to build human models for further detection is probably one of the most important issues to face. Currently, shape, texture and motion features have received extensive attention in the literature. However, color-based features, which are important in other domains (e.g., image categorization), have received much less attention. In fact, the use of the RGB color space has become the default choice. The focus has been put on developing first and second order features on top of RGB space (e.g., HOG and co-occurrence matrices, respectively). In this paper we evaluate the opponent colors (OPP) space as a biologically inspired alternative for human detection. In particular, by feeding the OPP space into the baseline framework of Dalal et al. for human detection (based on RGB, HOG and linear SVM), we obtain better detection performance than by using RGB space. This is a relevant result since, to the best of our knowledge, OPP space has not been previously used for human detection. This suggests that in the future it could be worthwhile to compute co-occurrence matrices, self-similarity features, etc., also on top of OPP space, i.e., as we have done with HOG in this paper.  
  Address Las Palmas de Gran Canaria, Spain  
  Corporate Author Thesis  
  Publisher Springer Place of Publication Berlin Heidelberg Editor J. Vitria; J.M. Sanches; M. Hernandez  
  Language English Summary Language English Original Title Opponent Colors for Human Detection  
  Series Editor Series Title Lecture Notes in Computer Science Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-21256-7 Medium  
  Area Expedition Conference IbPRIA  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ RVL2011a Serial 1666  
Permanent link to this record
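A common way to compute HOG "on top of" a multi-channel space such as OPP (and the way Dalal and Triggs handle RGB) is to take, at every pixel, the gradient from the channel with the largest magnitude before orientation binning. Whether this record's method uses exactly that rule is not stated here; the sketch below just illustrates the idea with NumPy.

```python
import numpy as np

def dominant_gradient(channels):
    """Per-pixel gradient taken from the channel with largest magnitude.

    `channels` is HxWxC (e.g., the three opponent channels); returns the
    magnitude and orientation maps that HOG binning would consume.
    """
    gy, gx = np.gradient(channels, axis=(0, 1))
    mag = np.hypot(gx, gy)                 # HxWxC gradient magnitudes
    best = mag.argmax(axis=-1)             # winning channel per pixel
    rows, cols = np.indices(best.shape)
    gx_d = gx[rows, cols, best]
    gy_d = gy[rows, cols, best]
    return np.hypot(gx_d, gy_d), np.arctan2(gy_d, gx_d)
```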
 

 
Author Javier Marin; David Vazquez; David Geronimo; Antonio Lopez
  Title Learning Appearance in Virtual Scenarios for Pedestrian Detection Type Conference Article
  Year 2010 Publication 23rd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 137–144  
  Keywords Pedestrian Detection; Domain Adaptation  
  Abstract Detecting pedestrians in images is a key functionality to avoid vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? (Fig. 1). Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then learn appearance-based pedestrian classifiers using HOG and linear SVM. We test such classifiers on a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking. This dataset contains real-world images acquired from a moving car. The obtained result is compared with the one given by a classifier learnt using samples coming from real images. The comparison reveals that, although the virtual samples were not specially selected, both virtual- and real-based training give rise to classifiers of similar performance.  
  Address San Francisco; CA; USA; June 2010  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title Learning Appearance in Virtual Scenarios for Pedestrian Detection  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1063-6919 ISBN 978-1-4244-6984-0 Medium  
  Area Expedition Conference CVPR  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ MVG2010 Serial 1304  
Permanent link to this record
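The training protocol this record describes (HOG features plus a linear SVM, trained on virtual-world crops and evaluated on real ones) reduces to a few lines with OpenCV and scikit-learn. In this sketch the random arrays are mere placeholders for the virtual training crops and the real test crops (e.g., from the Daimler benchmark); the C value is also an assumption.

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC

hog = cv2.HOGDescriptor()  # default 64x128 pedestrian window

def hog_features(crops):
    """Stack HOG descriptors for an array of 128x64 uint8 crops."""
    return np.array([hog.compute(c).ravel() for c in crops])

# Random placeholders stand in for virtual-world (train) and
# real-world (test) pedestrian/background windows.
rng = np.random.default_rng(0)
virtual = rng.integers(0, 256, (40, 128, 64), dtype=np.uint8)
real = rng.integers(0, 256, (10, 128, 64), dtype=np.uint8)
y_virtual, y_real = rng.integers(0, 2, 40), rng.integers(0, 2, 10)

clf = LinearSVC(C=0.01).fit(hog_features(virtual), y_virtual)
print("real-world accuracy:", clf.score(hog_features(real), y_real))
```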
 

 
Author David Vazquez; Jiaolong Xu; Sebastian Ramos; Antonio Lopez; Daniel Ponsa
  Title Weakly Supervised Automatic Annotation of Pedestrian Bounding Boxes Type Conference Article
  Year 2013 Publication CVPR Workshop on Ground Truth – What is a good dataset? Abbreviated Journal  
  Volume Issue Pages 706 - 711  
  Keywords Pedestrian Detection; Domain Adaptation  
  Abstract Among the components of a pedestrian detector, its trained pedestrian classifier is crucial for achieving the desired performance. The initial task of the training process consists in collecting samples of pedestrians and background, which involves tiresome manual annotation of pedestrian bounding boxes (BBs). Thus, recent works have assessed the use of automatically collected samples from photo-realistic virtual worlds. However, learning from virtual-world samples and testing on real-world images may suffer from the dataset shift problem. Accordingly, in this paper we assess a strategy to collect samples from the real world and retrain with them, thus avoiding the dataset shift, but in such a way that no BBs of real-world pedestrians have to be provided. In particular, we train a pedestrian classifier based on virtual-world samples (no human annotation required). Then, using such a classifier, we collect pedestrian samples from real-world images by detection. Afterwards, a human oracle rejects the false detections efficiently (weak annotation). Finally, a new classifier is trained with the accepted detections. We show that this classifier is competitive with respect to the counterpart trained with samples collected by manually annotating hundreds of pedestrian BBs.  
  Address Portland; Oregon; June 2013  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes ADAS; 600.054; 600.057; 601.217 Approved no  
  Call Number ADAS @ adas @ VXR2013a Serial 2219  
Permanent link to this record
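The weak-annotation loop this abstract describes (detect with a virtual-world classifier, let a human accept or reject each detection, retrain on the accepted crops) can be summarized as a short driver. All names here are hypothetical stand-ins, not the paper's API.

```python
def weakly_supervised_retrain(detector, images, oracle_accepts, train):
    """Self-training with weak human annotation, as sketched in the abstract.

    detector(image)         -> list of candidate bounding boxes
    oracle_accepts(img, bb) -> True if the detection really is a pedestrian
    train(samples)          -> new classifier trained on the accepted crops
    """
    accepted = []
    for img in images:
        for bb in detector(img):           # collect samples *by detection*
            if oracle_accepts(img, bb):    # the human only accepts/rejects;
                accepted.append((img, bb)) # no boxes are drawn by hand
    return train(accepted)
```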
 

 
Author Jiaolong Xu; David Vazquez; Sebastian Ramos; Antonio Lopez; Daniel Ponsa
  Title Adapting a Pedestrian Detector by Boosting LDA Exemplar Classifiers Type Conference Article
  Year 2013 Publication CVPR Workshop on Ground Truth – What is a good dataset? Abbreviated Journal  
  Volume Issue Pages 688 - 693  
  Keywords Pedestrian Detection; Domain Adaptation  
  Abstract Training vision-based pedestrian detectors using synthetic datasets (virtual world) is a useful technique to automatically collect the training examples with their pixel-wise ground truth. However, as is often the case, these detectors must operate in real-world images, experiencing a significant drop in their performance. In fact, this effect also occurs among different real-world datasets, i.e. detectors' accuracy drops when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, in order to avoid this problem, it is required to adapt the detector trained with synthetic data to operate in the real-world scenario. In this paper, we propose a domain adaptation approach based on boosting LDA exemplar classifiers from both virtual and real worlds. We evaluate our proposal on multiple real-world pedestrian detection datasets. The results show that our method can efficiently adapt the exemplar classifiers from virtual to real world, avoiding drops in average precision of over 15%.  
  Address Portland; Oregon; June 2013  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes ADAS; 600.054; 600.057; 601.217 Approved yes  
  Call Number XVR2013; ADAS @ adas @ xvr2013a Serial 2220  
Permanent link to this record
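An LDA exemplar classifier, the building block boosted in this record, has a closed form: with a background feature mean mu0 and a covariance matrix shared across exemplars, each positive exemplar x yields the weight vector w = cov^{-1}(x - mu0). A minimal sketch of that step follows (the boosting stage and the actual DPM features are omitted; the toy data is random):

```python
import numpy as np

def lda_exemplar_weights(exemplars, mu0, cov):
    """One linear classifier per positive exemplar: w = cov^{-1} (x - mu0).

    `mu0` and `cov` are the mean and covariance of background features,
    estimated once and shared by every exemplar, which is what makes
    per-exemplar training cheap.
    """
    # Solve cov @ W.T = (X - mu0).T instead of inverting cov explicitly.
    return np.linalg.solve(cov, (exemplars - mu0).T).T

# Toy usage with random stand-in features.
rng = np.random.default_rng(0)
bg = rng.normal(size=(500, 16))                 # background feature pool
W = lda_exemplar_weights(rng.normal(size=(5, 16)),
                         bg.mean(axis=0), np.cov(bg, rowvar=False))
```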
 

 
Author David Vazquez; Antonio Lopez; Daniel Ponsa
  Title Unsupervised Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection Type Conference Article
  Year 2012 Publication 21st International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 3492 - 3495  
  Keywords Pedestrian Detection; Domain Adaptation; Virtual worlds  
  Abstract Vision-based object detectors are crucial for different applications. They rely on learnt object models. Ideally, we would like to deploy our vision system in the scenario where it must operate, and lead it to self-learn how to distinguish the objects of interest, i.e., without human intervention. However, the learning of each object model requires labelled samples collected through a tiresome manual process. For instance, we are interested in exploring the self-training of a pedestrian detector for driver assistance systems. Our first approach to avoid manual labelling consisted in using samples coming from realistic computer graphics, so that their labels are automatically available [12]. This would make possible the desired self-training of our pedestrian detector. However, as we showed in [14], there may be a dataset shift between virtual and real worlds. In order to overcome it, we propose the use of unsupervised domain adaptation techniques that avoid human intervention during the adaptation process. In particular, this paper explores the use of the transductive SVM (T-SVM) learning algorithm in order to adapt virtual and real worlds for pedestrian detection (Fig. 1).  
  Address Tsukuba Science City, Japan  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Tsukuba Science City, Japan Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1051-4651 ISBN 978-1-4673-2216-4 Medium  
  Area Expedition Conference ICPR  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ VLP2012 Serial 1981  
Permanent link to this record
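The T-SVM idea used above: train on labelled source (virtual-world) data, then let the unlabeled target (real-world) data pull the decision boundary toward the target distribution. scikit-learn has no transductive SVM, so this sketch approximates it with a classic self-labelling loop around LinearSVC; the constants and names are illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

def self_labelling_tsvm(X_src, y_src, X_tgt, rounds=5, conf=1.0):
    """Crude self-labelling approximation of a transductive SVM."""
    clf = LinearSVC(C=0.01).fit(X_src, y_src)
    for _ in range(rounds):
        scores = clf.decision_function(X_tgt)
        keep = np.abs(scores) > conf              # confident target samples
        if not keep.any():
            break
        X = np.vstack([X_src, X_tgt[keep]])
        y = np.concatenate([y_src, (scores[keep] > 0).astype(int)])
        clf = LinearSVC(C=0.01).fit(X, y)         # retrain with pseudo-labels
    return clf
```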
 

 
Author Victor Campmany; Sergio Silva; Juan Carlos Moure; Toni Espinosa; David Vazquez; Antonio Lopez
  Title GPU-based pedestrian detection for autonomous driving Type Conference Article
  Year 2016 Publication GPU Technology Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords Pedestrian Detection; GPU  
  Abstract Pedestrian detection for autonomous driving is one of the hardest tasks within computer vision, and involves huge computational costs. Obtaining acceptable real-time performance, measured in frames per second (fps), for the most advanced algorithms is nowadays a hard challenge. Taking the work in [1] as our baseline, we propose a CUDA implementation of a pedestrian detection system that includes LBP and HOG as feature descriptors and SVM and Random Forest as classifiers. We introduce significant algorithmic adjustments and optimizations to adapt the problem to the NVIDIA GPU architecture. The aim is to deploy a real-time system providing reliable results.  
  Address Silicon Valley; San Francisco; USA; April 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference GTC  
  Notes ADAS; 600.085; 600.082; 600.076 Approved no  
  Call Number ADAS @ adas @ CSM2016 Serial 2737  
Permanent link to this record
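Of the two descriptors this record mentions, LBP is the easier one to see as a GPU-friendly kernel: every pixel is encoded independently by comparing its eight neighbours against it and packing the results into a byte. The NumPy reference below illustrates the computation; a CUDA port would assign one thread per pixel to the same comparisons. This is a generic LBP sketch, not the paper's implementation.

```python
import numpy as np

def lbp8(gray):
    """Basic 8-neighbour LBP codes for a 2-D uint8 image (borders cropped).

    Each interior pixel becomes a byte whose bits record which of its
    eight neighbours are >= the centre value.
    """
    h, w = gray.shape
    centre = gray[1:-1, 1:-1].astype(np.int16)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(centre.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(np.int16)
        code |= (nb >= centre).astype(np.uint8) << bit
    return code
```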
 

 
Author David Vazquez; Antonio Lopez; Daniel Ponsa; Javier Marin
  Title Virtual Worlds and Active Learning for Human Detection Type Conference Article
  Year 2011 Publication 13th International Conference on Multimodal Interaction Abbreviated Journal  
  Volume Issue Pages 393-400  
  Keywords Pedestrian Detection; Human detection; Virtual; Domain Adaptation; Active Learning  
  Abstract Image-based human detection is of paramount interest due to its potential applications in fields such as advanced driving assistance, surveillance and media analysis. However, even detecting non-occluded standing humans remains a challenge of intensive research. The most promising human detectors rely on classifiers developed in the discriminative paradigm, i.e., trained with labelled samples. However, labelling is a manually intensive step, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome such a problem, some authors have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of rendered images, i.e., using realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, they are not always as good as the classical approach of training and testing with data coming from the same camera, or similar ones. Accordingly, in this paper we address the challenge of using a virtual world for gathering (while playing a videogame) a large amount of automatically labelled samples (virtual humans and background) and then training a classifier that performs as well, in real-world images, as one obtained by equally training from manually labelled real-world samples. To do so, we cast the problem as one of domain adaptation, assuming that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we propose a non-standard active learning technique. Therefore, ultimately our human model is learnt by the combination of virtual- and real-world labelled samples (Fig. 1), which has not been done before. We present quantitative results showing that this approach is valid.  
  Address Alicante, Spain  
  Corporate Author Thesis  
  Publisher ACM DL Place of Publication New York, NY, USA Editor  
  Language English Summary Language English Original Title Virtual Worlds and Active Learning for Human Detection  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4503-0641-6 Medium  
  Area Expedition Conference ICMI  
  Notes ADAS Approved yes  
  Call Number ADAS @ adas @ VLP2011a Serial 1683  
Permanent link to this record
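The active-learning step above selects which real-world samples a human should label; the paper calls its selection rule non-standard, and it is not reproduced here. For orientation, the usual baseline it departs from is uncertainty sampling: query the pool samples closest to the current decision boundary, then retrain on virtual plus newly labelled real samples. A generic sketch of that baseline, with all names illustrative:

```python
import numpy as np
from sklearn.svm import LinearSVC

def active_learning_round(X_virtual, y_virtual, X_pool, ask_label, budget=50):
    """One round of uncertainty sampling on top of a virtual-world model.

    ask_label(i) stands in for the human annotator labelling pool sample i.
    """
    clf = LinearSVC(C=0.01).fit(X_virtual, y_virtual)
    margin = np.abs(clf.decision_function(X_pool))
    query = np.argsort(margin)[:budget]            # least confident samples
    y_new = np.array([ask_label(i) for i in query])
    X = np.vstack([X_virtual, X_pool[query]])      # virtual + real mix
    y = np.concatenate([y_virtual, y_new])
    return LinearSVC(C=0.01).fit(X, y)
```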
 

 
Author Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez
  Title Incremental Domain Adaptation of Deformable Part-based Models Type Conference Article
  Year 2014 Publication 25th British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords Pedestrian Detection; Part-based models; Domain Adaptation  
  Abstract Nowadays, classifiers play a core role in many computer vision tasks. The underlying assumption for learning classifiers is that the training set and the deployment environment (testing) follow the same probability distribution regarding the features used by the classifiers. However, in practice, there are different reasons that can break this constancy assumption. Accordingly, reusing existing classifiers by adapting them from the previous training environment (source domain) to the new testing one (target domain) is an approach with increasing acceptance in the computer vision community. In this paper we focus on the domain adaptation of deformable part-based models (DPMs) for object detection. In particular, we focus on a relatively unexplored scenario, i.e. incremental domain adaptation for object detection assuming weak labeling. Therefore, our algorithm is ready to improve existing source-oriented DPM-based detectors as soon as a small amount of labeled target-domain training data is available, and keeps improving as more of such data arrives in a continuous fashion. For achieving this, we follow a multiple instance learning (MIL) paradigm that operates on an incremental per-image basis. As a proof of concept, we address the challenging scenario of adapting a DPM-based pedestrian detector trained with synthetic pedestrians to operate in real-world scenarios. The obtained results show that our incremental adaptive models obtain accuracy on par with the batch-learned models, while being more flexible for handling continuously arriving target-domain data.
 
  Address Nottingham; UK; September 2014  
  Corporate Author Thesis  
  Publisher BMVA Press Place of Publication Editor Michel Valstar; Andrew French; Tony Pridmore  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes ADAS; 600.057; 600.054; 600.076 Approved no  
  Call Number XRV2014c; ADAS @ adas @ xrv2014c Serial 2455  
Permanent link to this record
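The incremental, per-image regime this record describes (the detector improves as each weakly labeled target image arrives, with no batch retraining) maps naturally onto an online learner. The sketch below uses scikit-learn's SGDClassifier with partial_fit purely as a stand-in for the paper's MIL-based DPM update; features_and_labels is a hypothetical helper for the MIL step.

```python
from sklearn.linear_model import SGDClassifier

def incremental_adapt(clf, target_images, features_and_labels):
    """Update the detector once per arriving target-domain image.

    features_and_labels(img) stands in for the MIL step that turns a
    weakly labeled image into (features, labels) training pairs.
    """
    for img in target_images:
        X, y = features_and_labels(img)
        clf.partial_fit(X, y)      # online update, no batch retraining
    return clf

# The source-domain detector would be pre-trained first, e.g.:
# clf = SGDClassifier(loss="hinge").fit(X_virtual, y_virtual)
```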