Author Victor Campmany; Sergio Silva; Juan Carlos Moure; Toni Espinosa; David Vazquez; Antonio Lopez
  Title GPU-based pedestrian detection for autonomous driving Type Conference Article
  Year 2016 Publication GPU Technology Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords Pedestrian Detection; GPU  
  Abstract Pedestrian detection for autonomous driving is one of the hardest tasks within computer vision and involves huge computational costs. Obtaining acceptable real-time performance, measured in frames per second (fps), for the most advanced algorithms is nowadays a hard challenge. Taking the work in [1] as our baseline, we propose a CUDA implementation of a pedestrian detection system that includes LBP and HOG as feature descriptors and SVM and Random Forest as classifiers. We introduce significant algorithmic adjustments and optimizations to adapt the problem to the NVIDIA GPU architecture. The aim is to deploy a real-time system providing reliable results. (See the illustrative sketch after this record.)
  Address Silicon Valley; San Francisco; USA; April 2016
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference GTC  
  Notes ADAS; 600.085; 600.082; 600.076 Approved no  
  Call Number ADAS @ adas @ CSM2016 Serial 2737  
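As a rough illustration of the baseline detection step the abstract describes (HOG features scored by a linear SVM over sliding windows), here is a minimal Python sketch. It is not the authors' CUDA implementation; the window size, stride, C value, and toy training data are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' CUDA code) of HOG + linear SVM
# window classification. All hyper-parameters and data are toy assumptions.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def descriptor(window):
    """HOG descriptor of a 128x64 grayscale detection window."""
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Toy training set: random "background" (0) vs. "pedestrian" (1) windows.
X = np.stack([descriptor(rng.random((128, 64))) for _ in range(40)])
y = np.array([0, 1] * 20)
svm = LinearSVC(C=0.01).fit(X, y)

# Sliding-window scoring of a small frame (single scale, 16 px stride).
frame = rng.random((160, 96))
for ty in range(0, frame.shape[0] - 128 + 1, 16):
    for tx in range(0, frame.shape[1] - 64 + 1, 16):
        score = svm.decision_function(
            descriptor(frame[ty:ty + 128, tx:tx + 64])[None])[0]
        # a positive margin marks a pedestrian candidate at (tx, ty)
```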
 

 
Author Daniel Hernandez; Juan Carlos Moure; Toni Espinosa; Alejandro Chacon; David Vazquez; Antonio Lopez
  Title Real-time 3D Reconstruction for Autonomous Driving via Semi-Global Matching Type Conference Article
  Year 2016 Publication GPU Technology Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords Stereo; Autonomous Driving; GPU; 3d reconstruction  
  Abstract Robust and dense computation of depth information from stereo-camera systems is a computationally demanding requirement for real-time autonomous driving. Semi-Global Matching (SGM) [1] approximates the results of computationally heavy global algorithms at a lower computational complexity, which makes it a good candidate for a real-time implementation. SGM minimizes energy along several 1D paths across the image. The aim of this work is to provide a real-time system producing reliable results on energy-efficient hardware. Our design runs on an NVIDIA Titan X GPU at 104.62 FPS and on an NVIDIA Drive PX at 6.7 FPS, which is promising for real-time platforms. (See the illustrative sketch after this record.)
  Address Silicon Valley; San Francisco; USA; April 2016
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference GTC  
  Notes ADAS; 600.085; 600.082; 600.076 Approved no  
  Call Number ADAS @ adas @ HME2016 Serial 2738  
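The SGM recurrence mentioned in the abstract aggregates matching costs along 1D paths. Below is a minimal NumPy sketch of a single left-to-right path, assuming standard P1/P2 smoothness penalties and a random cost volume; the paper's multi-path CUDA implementation is not reproduced here.

```python
# Minimal one-path (left-to-right) SGM cost aggregation in NumPy.
# P1/P2 and the random cost volume are assumptions for the sketch.
import numpy as np

def aggregate_left_to_right(cost, P1=10.0, P2=120.0):
    """cost: (H, W, D) matching-cost volume -> path-aggregated costs."""
    H, W, D = cost.shape
    L = np.empty_like(cost)
    L[:, 0] = cost[:, 0]
    for x in range(1, W):
        prev = L[:, x - 1]                                   # (H, D)
        best = prev.min(axis=1, keepdims=True)               # min over disparities
        dm1 = np.pad(prev, ((0, 0), (1, 0)), constant_values=np.inf)[:, :D]
        dp1 = np.pad(prev, ((0, 0), (0, 1)), constant_values=np.inf)[:, 1:]
        smooth = np.minimum(np.minimum(prev, dm1 + P1),
                            np.minimum(dp1 + P1, best + P2))
        L[:, x] = cost[:, x] + smooth - best                 # keeps costs bounded
    return L

cost = np.random.default_rng(0).random((48, 64, 32)).astype(np.float32)
disparity = aggregate_left_to_right(cost).argmin(axis=2)     # winner-takes-all
```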
 

 
Author Muhammad Anwer Rao; David Vazquez; Antonio Lopez
  Title Color Contribution to Part-Based Person Detection in Different Types of Scenarios Type Conference Article
  Year 2011 Publication 14th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal  
  Volume 6855 Issue II Pages 463-470  
  Keywords Pedestrian Detection; Color  
  Abstract Camera-based person detection is of paramount interest due to its potential applications. The task is difficult because of the great variety of backgrounds (scenarios, illumination) in which persons appear, as well as their intra-class variability (pose, clothing, occlusion). In fact, the person class is one of those included in the popular PASCAL visual object classes (VOC) challenge. A breakthrough for this challenge, regarding person detection, is due to Felzenszwalb et al., who proposed a part-based detector that relies on histograms of oriented gradients (HOG) and latent support vector machines (LatSVM) to learn a model of the whole human body and its constituent parts, as well as their relative positions. Since the approach of Felzenszwalb et al. appeared, new variants have been proposed, usually giving rise to more complex models. In this paper we focus on an issue that has not attracted sufficient interest so far: HOG is usually computed from the RGB color space, but other possibilities exist and deserve investigation. We challenge RGB with the opponent color space (OPP), which is inspired by the human visual system. We compute HOG on top of OPP, then train and test the part-based human classifier of Felzenszwalb et al. using the PASCAL VOC challenge protocols and person database. Our experiments demonstrate that OPP outperforms RGB. We also investigate possible differences among types of scenarios: indoor, urban and countryside. Interestingly, our experiments suggest that the benefits of OPP over RGB mainly arise in indoor and countryside scenarios, those for which the human visual system was shaped by evolution. (See the illustrative sketch after this record.)
  Address Seville, Spain
  Corporate Author Thesis  
  Publisher Springer Place of Publication Berlin Heidelberg Editor P. Real, D. Diaz, H. Molina, A. Berciano, W. Kropatsch  
  Language English Summary Language English Original Title Color Contribution to Part-Based Person Detection in Different Types of Scenarios
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-23677-8 Medium  
  Area Expedition Conference CAIP  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ RVL2011b Serial 1665  
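The opponent color space (OPP) the paper computes HOG on is a fixed linear transform of RGB. A compact sketch of that conversion follows, using the common OPP normalization constants (an assumption, since the abstract does not spell them out); HOG would then be computed on each resulting channel.

```python
# Sketch of the standard RGB -> opponent (OPP) color conversion.
# Constants follow the common OPP definition, not necessarily the paper's.
import numpy as np

def rgb_to_opponent(img):
    """img: float RGB array (H, W, 3) in [0, 1] -> opponent channels."""
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    O1 = (R - G) / np.sqrt(2)               # red-green opponency
    O2 = (R + G - 2 * B) / np.sqrt(6)       # yellow-blue opponency
    O3 = (R + G + B) / np.sqrt(3)           # intensity
    return np.stack([O1, O2, O3], axis=-1)

img = np.random.default_rng(0).random((128, 64, 3))
opp = rgb_to_opponent(img)   # HOG would then be computed per channel
```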
 

 
Author Naveen Onkarappa; Angel Sappa
  Title Space Variant Representations for Mobile Platform Vision Applications Type Conference Article
  Year 2011 Publication 14th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal  
  Volume 6855 Issue II Pages 146-154  
  Keywords  
  Abstract The log-polar space-variant representation, motivated by biological vision, has been widely studied in the literature. Its data-reduction and invariance properties have made it useful in many vision applications. However, due to its nature, it fails to preserve features in the periphery. In the current work, as an attempt to overcome this problem, we propose a novel space-variant representation. It is evaluated and shown to be better than the log-polar representation at preserving peripheral information, which is crucial for on-board mobile vision applications. The evaluation is performed by comparing log-polar and the proposed representation when both are used for estimating dense optical flow. (See the illustrative sketch after this record.)
  Address Seville, Spain
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor P. Real, D. Diaz, H. Molina, A. Berciano, W. Kropatsch  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-23677-8 Medium  
  Area Expedition Conference CAIP  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ NaS2011 Serial 1686
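For context, the standard log-polar mapping that the paper improves upon can be resampled as below. The abstract does not specify the proposed representation itself, so only the classical log-polar resampling is sketched; the output resolution, center, and nearest-neighbour sampling are arbitrary choices.

```python
# Minimal log-polar resampling sketch (nearest neighbour), illustrating the
# classical space-variant representation the paper compares against.
import numpy as np

def log_polar(img, n_rho=64, n_theta=128):
    """Resample a grayscale image onto a (log-radius, angle) grid."""
    H, W = img.shape
    cy, cx = H / 2.0, W / 2.0
    r_max = np.hypot(cy, cx)
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))      # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    ys = np.clip((cy + rho[:, None] * np.sin(theta)).astype(int), 0, H - 1)
    xs = np.clip((cx + rho[:, None] * np.cos(theta)).astype(int), 0, W - 1)
    return img[ys, xs]                                        # (n_rho, n_theta)

img = np.random.default_rng(0).random((240, 320))
lp = log_polar(img)   # periphery is sampled ever more coarsely
```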
 

 
Author Aura Hernandez-Sabate; Debora Gil; David Roche; Monica M. S. Matsumoto; Sergio S. Furuie
  Title Inferring the Performance of Medical Imaging Algorithms Type Conference Article
  Year 2011 Publication 14th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal  
  Volume 6854 Issue Pages 520-528  
  Keywords Validation; Statistical Inference; Medical Imaging Algorithms
  Abstract Evaluation of the performance and limitations of medical imaging algorithms is essential to estimate their impact in social, economic or clinical aspects. However, validation of medical imaging techniques is a challenging task due to the variety of imaging and clinical problems involved, as well as the difficulty of systematically extracting a reliable ground truth. Although specific validation protocols are reported in every medical imaging paper, two major concerns remain: the definition of standardized methodologies transversal to all problems, and the generalization of conclusions to the whole clinical data set.
We claim that both issues would be fully solved if we had a statistical model relating the ground truth and the output of computational imaging techniques. Such a statistical model could conclude to what extent the algorithm behaves like the ground truth from the analysis of a sampling of the validation data set. We present a statistical inference framework reporting the agreement and describing the relationship between two quantities. We show its transversality by applying it to the validation of two different tasks: contour segmentation and landmark correspondence. (See the illustrative sketch after this record.)
  Address Seville, Spain
  Corporate Author Thesis  
  Publisher Springer-Verlag Berlin Heidelberg Place of Publication Berlin Editor Pedro Real; Daniel Diaz-Pernil; Helena Molina-Abril; Ainhoa Berciano; Walter Kropatsch  
  Language Summary Language Original Title  
  Series Editor Series Title Lecture Notes in Computer Science Abbreviated Series Title LNCS
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CAIP  
  Notes IAM; ADAS Approved no  
  Call Number IAM @ iam @ HGR2011 Serial 1676  
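As a toy illustration of relating an algorithm's output to the ground truth from a sample, the sketch below fits a regression and computes 95% limits of agreement. This generic analysis only conveys the flavor of such a framework; it is not the paper's statistical model, and the simulated data are assumptions.

```python
# Toy agreement analysis between ground truth and an algorithm's output.
# Generic regression + limits-of-agreement stand-in, not the paper's model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
truth = rng.normal(10.0, 2.0, 100)              # sampled ground-truth values
output = truth + rng.normal(0.3, 0.5, 100)      # algorithm output: bias + noise

fit = stats.linregress(truth, output)           # how far from output == truth?
diff = output - truth
loa = (diff.mean() - 1.96 * diff.std(ddof=1),   # 95% limits of agreement
       diff.mean() + 1.96 * diff.std(ddof=1))
print(f"slope={fit.slope:.2f} intercept={fit.intercept:.2f} "
      f"LoA=({loa[0]:.2f}, {loa[1]:.2f})")
```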
 

 
Author Hamed H. Aghdam; Abel Gonzalez-Garcia; Joost Van de Weijer; Antonio Lopez
  Title Active Learning for Deep Detection Neural Networks Type Conference Article
  Year 2019 Publication 17th IEEE International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 3672-3680  
  Keywords  
  Abstract The cost of drawing object bounding boxes (i.e., labeling) for millions of images is prohibitively high. For instance, labeling pedestrians in a regular urban image can take 35 seconds on average. Active learning aims to reduce the cost of labeling by selecting only those images that are informative for improving the detection network's accuracy. In this paper, we propose a method to perform active learning of object detectors based on convolutional neural networks. We propose a new image-level scoring process to rank unlabeled images for their automatic selection, which clearly outperforms classical scores. The proposed method can be applied to videos and to sets of still images; in the former case, temporal selection rules can complement our scoring process. As a relevant use case, we extensively study the performance of our method on the task of pedestrian detection. Overall, the experiments show that the proposed method performs better than random selection. (See the illustrative sketch after this record.)
  Address Seoul; Korea; October 2019
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCV  
  Notes ADAS; LAMP; 600.124; 600.109; 600.141; 600.120; 600.118 Approved no  
  Call Number Admin @ si @ AGW2019 Serial 3321  
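The selection loop behind active learning for detection can be sketched as follows: score each unlabeled image from its detector outputs and send the top-ranked ones to annotation. The entropy-based image score below is a generic stand-in, not the paper's image-level scoring process; the simulated detections are assumptions.

```python
# Generic active-learning selection loop for a detector.
# The entropy-based score is a stand-in for the paper's scoring process.
import numpy as np

def image_score(det_probs):
    """Aggregate per-detection confidences (values in (0,1)) into one score."""
    p = np.clip(det_probs, 1e-6, 1 - 1e-6)
    ent = -(p * np.log(p) + (1 - p) * np.log(1 - p))   # binary entropy
    return ent.max() if len(p) else 0.0                # most uncertain detection

rng = np.random.default_rng(0)
# Toy pool: image id -> confidences of its (0 to 7) detections.
unlabeled = {i: rng.random(rng.integers(0, 8)) for i in range(100)}
ranked = sorted(unlabeled, key=lambda i: image_score(unlabeled[i]), reverse=True)
to_label = ranked[:10]   # these images go to the human annotator
```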
 

 
Author Felipe Codevilla; Eder Santana; Antonio Lopez; Adrien Gaidon
  Title Exploring the Limitations of Behavior Cloning for Autonomous Driving Type Conference Article
  Year 2019 Publication 17th IEEE International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 9328-9337  
  Keywords  
  Abstract Driving requires reacting to a wide variety of complex environment conditions and agent behaviors. Explicitly modeling each possible scenario is unrealistic. In contrast, imitation learning can, in theory, leverage data from large fleets of human-driven cars. Behavior cloning in particular has been successfully used to learn simple visuomotor policies end-to-end, but scaling to the full spectrum of driving behaviors remains an unsolved problem. In this paper, we propose a new benchmark to experimentally investigate the scalability and limitations of behavior cloning. We show that behavior cloning leads to state-of-the-art results, executing complex lateral and longitudinal maneuvers, even in unseen environments, without being explicitly programmed to do so. However, we confirm some limitations of the behavior cloning approach: some well-known ones (e.g., dataset bias and overfitting), new generalization issues (e.g., dynamic objects and the lack of causal modeling), and training instabilities, all requiring further research before behavior cloning can graduate to real-world driving. The code, dataset, benchmark, and agent studied in this paper can be found on GitHub. (See the illustrative sketch after this record.)
  Address Seoul; Korea; October 2019
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCV  
  Notes ADAS; 600.124; 600.118 Approved no  
  Call Number Admin @ si @ CSL2019 Serial 3322  
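A behavior-cloning step, at its simplest, is supervised regression from an observation to the demonstrated control. The PyTorch sketch below shows that single step with a toy network and random data; the paper's CARLA-based benchmark, architecture, and conditional-imitation details are not reproduced.

```python
# Minimal behavior-cloning step: regress the expert's control command from
# an observation. Network size and data are toy assumptions.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64),
                       nn.ReLU(), nn.Linear(64, 2))   # -> (steer, throttle)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

obs = torch.rand(16, 3, 32, 32)        # batch of camera frames (toy)
expert = torch.rand(16, 2)             # demonstrated controls (toy)

loss = nn.functional.mse_loss(policy(obs), expert)   # clone the expert
opt.zero_grad()
loss.backward()
opt.step()
```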
 

 
Author Javad Zolfaghari Bengar; Abel Gonzalez-Garcia; Gabriel Villalonga; Bogdan Raducanu; Hamed H. Aghdam; Mikhail Mozerov; Antonio Lopez; Joost Van de Weijer
  Title Temporal Coherence for Active Learning in Videos Type Conference Article
  Year 2019 Publication IEEE International Conference on Computer Vision Workshops Abbreviated Journal  
  Volume Issue Pages 914-923  
  Keywords  
  Abstract Autonomous driving systems require huge amounts of data to train. Manual annotation of this data is time-consuming and prohibitively expensive since it involves human resources. Therefore, active learning has emerged as an alternative to ease this effort and to make data annotation more manageable. In this paper, we introduce a novel active learning approach for object detection in videos by exploiting temporal coherence. Our active learning criterion is based on the estimated number of errors in terms of false positives and false negatives. The detections obtained by the object detector are used to define the nodes of a graph and are tracked forward and backward to temporally link the nodes. Minimizing an energy function defined on this graphical model provides estimates of both false positives and false negatives. Additionally, we introduce a synthetic video dataset, called SYNTHIA-AL, specially designed to evaluate active learning for video object detection in road scenes. Finally, we show that our approach outperforms active learning baselines tested on two datasets. (See the illustrative sketch after this record.)
  Address Seoul; Korea; October 2019
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCVW  
  Notes LAMP; ADAS; 600.124; 602.200; 600.118; 600.120; 600.141 Approved no  
  Call Number Admin @ si @ ZGV2019 Serial 3294  
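The temporal-coherence intuition (detections that cannot be linked to neighbouring frames are suspicious) can be illustrated with a simple overlap-based matching heuristic, sketched below. The paper instead minimizes an energy on a graphical model built from forward/backward tracking; the IoU threshold and toy boxes here are assumptions.

```python
# Toy temporal-coherence check: flag detections in frame t that have no
# overlapping detection in frames t-1 or t+1 as likely false positives.
# A heuristic stand-in for the paper's graphical-model energy minimization.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def isolated(frames, t, thr=0.3):
    """Detections in frame t with no temporal link to adjacent frames."""
    out = []
    for box in frames[t]:
        linked = any(iou(box, other) >= thr
                     for u in (t - 1, t + 1) if 0 <= u < len(frames)
                     for other in frames[u])
        if not linked:
            out.append(box)   # candidate false positive
    return out

frames = [[(10, 10, 50, 90)],
          [(12, 11, 52, 92), (200, 5, 230, 60)],
          [(14, 12, 54, 94)]]
print(isolated(frames, 1))    # -> [(200, 5, 230, 60)]
```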
 

 
Author Alejandro Gonzalez Alzate; Gabriel Villalonga; Jiaolong Xu; David Vazquez; Jaume Amores; Antonio Lopez
  Title Multiview Random Forest of Local Experts Combining RGB and LIDAR data for Pedestrian Detection Type Conference Article
  Year 2015 Publication IEEE Intelligent Vehicles Symposium IV2015 Abbreviated Journal  
  Volume Issue Pages 356-361  
  Keywords Pedestrian Detection  
  Abstract Despite recent significant advances, pedestrian detection continues to be an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities, and a strong multi-view classifier that accounts for different pedestrian views and poses. In this paper we provide an extensive evaluation that gives insight into how each of these aspects (multi-cue, multimodality and strong multi-view classifier) affects performance both individually and when integrated together. In the multimodality component we explore the fusion of RGB and depth maps obtained by high-definition LIDAR, a type of modality that is only recently starting to receive attention. As our analysis reveals, although all the aforementioned aspects significantly help in improving the performance, the fusion of visible spectrum and depth information boosts the accuracy by a much larger margin. The resulting detector not only ranks among the top performers in the challenging KITTI benchmark, but is built upon very simple blocks that are easy to implement and computationally efficient. These simple blocks can be easily replaced with more sophisticated ones recently proposed, such as convolutional neural networks for feature representation, to further improve the accuracy. (See the illustrative sketch after this record.)
  Address Seoul; Korea; June 2015
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area ACDC Expedition Conference IV  
  Notes ADAS; 600.076; 600.057; 600.054 Approved no  
  Call Number ADAS @ adas @ GVX2015 Serial 2625  
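The multimodal fusion step (appearance plus LIDAR-derived depth) can be approximated by concatenating per-modality features and training a random forest, as in the sketch below. Plain HOG on toy data stands in for the paper's features, and the multiview / local-expert structure is omitted.

```python
# Sketch of RGB + depth feature fusion with a random forest.
# Features, data, and forest size are assumptions for the example.
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def window_features(rgb, depth):
    """HOG on the grayscale image concatenated with HOG on the depth map."""
    gray = rgb.mean(axis=2)
    f = lambda ch: hog(ch, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([f(gray), f(depth)])

# Toy windows: random 128x64 RGB crops paired with random depth maps.
X = np.stack([window_features(rng.random((128, 64, 3)), rng.random((128, 64)))
              for _ in range(40)])
y = np.array([0, 1] * 20)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```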
 

 
Author Alejandro Gonzalez Alzate; Sebastian Ramos; David Vazquez; Antonio Lopez; Jaume Amores
  Title Spatiotemporal Stacked Sequential Learning for Pedestrian Detection Type Conference Article
  Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal  
  Volume Issue Pages 3-12  
  Keywords SSL; Pedestrian Detection  
  Abstract Pedestrian classifiers decide which image windows contain a pedestrian. In practice, such classifiers provide a relatively high response at neighboring windows overlapping a pedestrian, while the responses around potential false positives are expected to be lower. Analogous reasoning applies to image sequences: if a pedestrian is located within a frame, the same pedestrian is expected to appear close to the same location in neighboring frames. Therefore, such a location has a chance of receiving high classification scores during several frames, while false positives are expected to be more spurious. In this paper we propose to exploit such correlations to improve the accuracy of base pedestrian classifiers. In particular, we propose two-stage classifiers which rely not only on the image descriptors required by the base classifiers but also on the response of such base classifiers in a given spatiotemporal neighborhood. More specifically, we train pedestrian classifiers using a stacked sequential learning (SSL) paradigm. We use a new pedestrian dataset we have acquired from a car to evaluate our proposal at different frame rates. We also test on a well-known dataset, Caltech. The obtained results show that our SSL proposal boosts detection accuracy significantly with a minimal impact on the computational cost. Interestingly, SSL improves accuracy most in the most dangerous situations, i.e., when a pedestrian is close to the camera. (See the illustrative sketch after this record.)
  Address Santiago de Compostela; Spain; June 2015
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area ACDC Expedition Conference IbPRIA  
  Notes ADAS; 600.057; 600.054; 600.076 Approved no  
  Call Number ADAS @ adas @ GRV2015 Serial 2454
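The SSL idea is a two-stage classifier whose second stage sees, besides the window descriptor, the base classifier's responses in a spatiotemporal neighborhood. The sketch below stacks base scores from frames t-2..t+2 onto each descriptor; the data, neighborhood size, and logistic-regression base classifier are assumptions for the example.

```python
# Sketch of stacked sequential learning (SSL): augment each descriptor with
# the base classifier's scores from neighbouring frames, then retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
T, d = 200, 16
desc = rng.random((T, d))                      # per-frame window descriptors (toy)
labels = (rng.random(T) > 0.5).astype(int)     # toy pedestrian / background labels

base = LogisticRegression().fit(desc, labels)  # stage-one classifier
scores = base.decision_function(desc)

# Stack scores from frames t-2..t+2 onto each descriptor (edges padded).
pad = np.pad(scores, 2, mode="edge")
neigh = np.stack([pad[i:i + T] for i in range(5)], axis=1)   # (T, 5)
stacked = np.hstack([desc, neigh])

ssl = LogisticRegression().fit(stacked, labels)  # stage-two (SSL) classifier
```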