Records | |||||
---|---|---|---|---|---|
Author | Gabriel Villalonga; Sebastian Ramos; German Ros; David Vazquez; Antonio Lopez | ||||
Title | 3D Pedestrian Detection via Random Forest | Type | Miscellaneous | ||
Year | 2014 | Publication | European Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 231-238 | ||
Keywords | Pedestrian Detection | ||||
Abstract | Our demo focuses on showing the extraordinary performance of our novel 3D pedestrian detector along with its simplicity and real-time capabilities. This detector has been designed for autonomous driving applications, but it can also be applied in other scenarios that cover both outdoor and indoor applications.
Our pedestrian detector is based on the combination of a random forest classifier with HOG-LBP features and the inclusion of a preprocessing stage based on 3D scene information in order to precisely determine the image regions where the detector should search for pedestrians. This approach results in a highly accurate system that runs in real time, as required by many computer vision and robotics applications. |
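The classification stage described above combines HOG-LBP features with a random forest. As an illustration of the ensemble-voting idea only (the stub trees and names below are hypothetical, not the authors' implementation), a forest's score is simply the fraction of trees voting "pedestrian":

```python
# Toy ensemble vote: each "tree" maps a feature vector to 0 or 1; the forest
# score is the fraction of trees voting "pedestrian". The stub trees here
# just threshold a single feature dimension for illustration.
def forest_predict(trees, feature_vector):
    votes = [tree(feature_vector) for tree in trees]
    return sum(votes) / len(votes)

trees = [
    lambda x: int(x[0] > 0.5),
    lambda x: int(x[1] > 0.5),
    lambda x: int(x[2] > 0.5),
]
score = forest_predict(trees, [0.9, 0.7, 0.1])  # 2 of 3 trees vote pedestrian
```

A real forest would use learned split trees over the full HOG-LBP descriptor, but the final decision is this same averaged vote.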
Address | Zurich; Switzerland; September 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCV-Demo | ||
Notes | ADAS; 600.076 | Approved | no | ||
Call Number | Admin @ si @ VRR2014 | Serial | 2570 | ||
Author | Alejandro Gonzalez Alzate; Gabriel Villalonga; German Ros; David Vazquez; Antonio Lopez | ||||
Title | 3D-Guided Multiscale Sliding Window for Pedestrian Detection | Type | Conference Article | ||
Year | 2015 | Publication | Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 | Abbreviated Journal |
Volume | 9117 | Issue | Pages | 560-568 | |
Keywords | Pedestrian Detection | ||||
Abstract | The most relevant modules of a pedestrian detector are the candidate generation and the candidate classification. The former aims at presenting image windows to the latter so that they can be classified as containing a pedestrian or not. Much attention has been paid to the classification module, while candidate generation has mainly relied on the (multiscale) sliding-window pyramid. However, candidate generation is critical for achieving real-time performance. In this paper we assume a context of autonomous driving based on stereo vision. Accordingly, we evaluate the effect of taking into account the 3D information (derived from the stereo) in order to prune the hundreds of thousands of windows per image generated by the classical pyramidal sliding window. For our study we use a multimodal (RGB, disparity) and multi-descriptor (HOG, LBP, HOG+LBP) holistic ensemble based on linear SVM. Evaluation on data from the challenging KITTI benchmark suite shows the effectiveness of using 3D information to dramatically reduce the number of candidate windows, even improving the overall pedestrian detection accuracy. | ||||
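The 3D pruning described above can be caricatured with the pinhole camera model: a candidate window at stereo depth Z is kept only if its pixel height is consistent with a plausible real-world pedestrian height. This is a sketch under assumed focal length and height range, not the paper's exact criterion:

```python
# Keep a candidate window only if its pixel height h_px matches a pedestrian
# of 1.5-2.0 m standing at the window's stereo depth, using the pinhole model
# h_px = focal_px * height_m / depth_m. All parameter values are assumptions.
def plausible_window(h_px, depth_m, focal_px=800.0, h_min_m=1.5, h_max_m=2.0):
    if depth_m <= 0:
        return False
    return focal_px * h_min_m / depth_m <= h_px <= focal_px * h_max_m / depth_m

candidates = [(120, 10.0), (40, 10.0), (120, 5.0)]  # (pixel height, depth in m)
kept = [c for c in candidates if plausible_window(*c)]  # only (120, 10.0) survives
```

A filter like this discards the vast majority of pyramid windows before any descriptor is ever computed, which is where the real-time gain comes from.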
Address | Santiago de Compostela; Spain; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | ACDC | Expedition | Conference | IbPRIA | |
Notes | ADAS; 600.076; 600.057; 600.054 | Approved | no | ||
Call Number | ADAS @ adas @ GVR2015 | Serial | 2585 | ||
Author | David Geronimo; Angel Sappa; Daniel Ponsa; Antonio Lopez | ||||
Title | 2D-3D based on-board pedestrian detection system | Type | Journal Article | ||
Year | 2010 | Publication | Computer Vision and Image Understanding | Abbreviated Journal | CVIU |
Volume | 114 | Issue | 5 | Pages | 583–595 |
Keywords | Pedestrian detection; Advanced Driver Assistance Systems; Horizon line; Haar wavelets; Edge orientation histograms | ||||
Abstract | During the next decade, on-board pedestrian detection systems will play a key role in the challenge of increasing traffic safety. The main target of these systems, to detect pedestrians in urban scenarios, implies overcoming difficulties like processing outdoor scenes from a mobile platform and searching for aspect-changing objects in cluttered environments. This forces such systems to combine state-of-the-art Computer Vision techniques. In this paper we present a three-module system based on both 2D and 3D cues. The first module uses 3D information to estimate the road plane parameters and thus select a coherent set of regions of interest (ROIs) to be further analyzed. The second module uses Real AdaBoost and a combined set of Haar wavelets and edge orientation histograms to classify the incoming ROIs as pedestrian or non-pedestrian. The final module loops again with the 3D cue in order to verify the classified ROIs and with the 2D cue in order to refine the final results. According to the results, the integration of the proposed techniques gives rise to a promising system. | ||||
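The first module's idea, estimating road plane parameters from 3D points, can be sketched with a plain least-squares fit (a simplified stand-in; the paper's estimator may be robust, e.g. RANSAC-style, rather than plain least squares):

```python
import numpy as np

# Fit the road plane z = a*x + b*y + c from 3D road points by least squares.
# The recovered parameters then constrain where pedestrian-sized ROIs can lie.
def fit_road_plane(points):
    pts = np.asarray(points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # (a, b, c)

pts = [[0, 0, 1.0], [1, 0, 1.1], [0, 1, 0.9], [1, 1, 1.0]]
a, b, c = fit_road_plane(pts)  # exact fit here: a=0.1, b=-0.1, c=1.0
```

In practice the 3D points come from stereo reconstruction and contain outliers (vehicles, curbs), which is why a robust variant of this fit is usually preferred.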
Address | Computer Vision and Image Understanding (Special Issue on Intelligent Vision Systems), Vol. 114(5):583-595 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1077-3142 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ GSP2010 | Serial | 1341 | ||
Author | Victor Campmany; Sergio Silva; Antonio Espinosa; Juan Carlos Moure; David Vazquez; Antonio Lopez | ||||
Title | GPU-based pedestrian detection for autonomous driving | Type | Conference Article | ||
Year | 2016 | Publication | 16th International Conference on Computational Science | Abbreviated Journal | |
Volume | 80 | Issue | Pages | 2377-2381 | |
Keywords | Pedestrian detection; Autonomous Driving; CUDA | ||||
Abstract | We propose a real-time pedestrian detection system for the embedded Nvidia Tegra X1 GPU-CPU hybrid platform. The pipeline is composed of the following state-of-the-art algorithms: Histogram of Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG) features extracted from the input image; Pyramidal Sliding Window technique for foreground segmentation; and Support Vector Machine (SVM) for classification. Results show an 8x speedup on the target Tegra X1 platform and a better performance-per-watt ratio than the desktop CUDA platforms under study. | ||||
Address | San Diego; CA; USA; June 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCS | ||
Notes | ADAS; 600.085; 600.082; 600.076 | Approved | no | ||
Call Number | ADAS @ adas @ CSE2016 | Serial | 2741 | ||
Author | Muhammad Anwer Rao; David Vazquez; Antonio Lopez | ||||
Title | Color Contribution to Part-Based Person Detection in Different Types of Scenarios | Type | Conference Article | ||
Year | 2011 | Publication | 14th International Conference on Computer Analysis of Images and Patterns | Abbreviated Journal | |
Volume | 6855 | Issue | II | Pages | 463-470 |
Keywords | Pedestrian Detection; Color | ||||
Abstract | Camera-based person detection is of paramount interest due to its potential applications. The task is difficult because of the great variety of backgrounds (scenarios, illumination) in which persons appear, as well as their intra-class variability (pose, clothing, occlusion). In fact, the person class is one of those included in the popular PASCAL visual object classes (VOC) challenge. A breakthrough for this challenge, regarding person detection, is due to Felzenszwalb et al. These authors proposed a part-based detector that relies on histograms of oriented gradients (HOG) and latent support vector machines (LatSVM) to learn a model of the whole human body and its constitutive parts, as well as their relative position. Since the approach of Felzenszwalb et al. appeared, new variants have been proposed, usually giving rise to more complex models. In this paper, we focus on an issue that has not attracted sufficient interest up to now. In particular, we refer to the fact that HOG is usually computed from the RGB color space, but other possibilities exist and deserve the corresponding investigation. In this paper we challenge RGB space with the opponent color space (OPP), which is inspired by the human vision system. We compute the HOG on top of OPP, then we train and test the part-based human classifier by Felzenszwalb et al. using PASCAL VOC challenge protocols and person database. Our experiments demonstrate that OPP outperforms RGB. We also investigate possible differences among types of scenarios: indoor, urban and countryside. Interestingly, our experiments suggest that the benefits of OPP with respect to RGB mainly come from indoor and countryside scenarios, those in which the human visual system was shaped by evolution. | ||||
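The opponent color space mentioned above is, in the form commonly used with color descriptors (the paper's exact normalization is an assumption here), a fixed linear transform of RGB:

```python
import math

# Opponent color transform:
#   O1 = (R - G)/sqrt(2)      red-green opponency
#   O2 = (R + G - 2B)/sqrt(6) yellow-blue opponency
#   O3 = (R + G + B)/sqrt(3)  intensity channel
# HOG would then be computed per O-channel instead of per RGB channel.
def rgb_to_opponent(r, g, b):
    return (
        (r - g) / math.sqrt(2),
        (r + g - 2 * b) / math.sqrt(6),
        (r + g + b) / math.sqrt(3),
    )

o1, o2, o3 = rgb_to_opponent(1.0, 1.0, 1.0)  # achromatic pixel: o1 == o2 == 0
```

For a gray pixel the two chromatic channels vanish, which is the "opponent" property the human-vision analogy rests on.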
Address | Seville, Spain | ||||
Corporate Author | Thesis | ||||
Publisher | Springer | Place of Publication | Berlin Heidelberg | Editor | P. Real, D. Diaz, H. Molina, A. Berciano, W. Kropatsch |
Language | English | Summary Language | english | Original Title | Color Contribution to Part-Based Person Detection in Different Types of Scenarios |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-23677-8 | Medium | |
Area | Expedition | Conference | CAIP | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ RVL2011b | Serial | 1665 | ||
Author | Muhammad Anwer Rao; David Vazquez; Antonio Lopez | ||||
Title | Opponent Colors for Human Detection | Type | Conference Article | ||
Year | 2011 | Publication | 5th Iberian Conference on Pattern Recognition and Image Analysis | Abbreviated Journal | |
Volume | 6669 | Issue | Pages | 363-370 | |
Keywords | Pedestrian Detection; Color; Part Based Models | ||||
Abstract | Human detection is a key component in fields such as advanced driving assistance and video surveillance. However, even detecting non-occluded standing humans remains a challenge of intensive research. Finding good features to build human models for further detection is probably one of the most important issues to face. Currently, shape, texture and motion features have received extensive attention in the literature. However, color-based features, which are important in other domains (e.g., image categorization), have received much less attention. In fact, the use of the RGB color space has become a kind of default choice. The focus has been put on developing first and second order features on top of RGB space (e.g., HOG and co-occurrence matrices, resp.). In this paper we evaluate the opponent colors (OPP) space as a biologically inspired alternative for human detection. In particular, by feeding OPP space into the baseline framework of Dalal et al. for human detection (based on RGB, HOG and linear SVM), we obtain better detection performance than by using RGB space. This is a relevant result since, to the best of our knowledge, OPP space has not been previously used for human detection. This suggests that in the future it could be worthwhile to compute co-occurrence matrices, self-similarity features, etc., also on top of OPP space, i.e., as we have done with HOG in this paper. | ||||
Address | Las Palmas de Gran Canaria. Spain | ||||
Corporate Author | Thesis | ||||
Publisher | Springer | Place of Publication | Berlin Heidelberg | Editor | J. Vitria; J.M. Sanches; M. Hernandez |
Language | English | Summary Language | English | Original Title | Opponent Colors for Human Detection |
Series Editor | Series Title | Lecture Notes on Computer Science | Abbreviated Series Title | LNCS | |
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-21256-7 | Medium | |
Area | Expedition | Conference | IbPRIA | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ RVL2011a | Serial | 1666 | ||
Author | Javier Marin; David Vazquez; David Geronimo; Antonio Lopez | ||||
Title | Learning Appearance in Virtual Scenarios for Pedestrian Detection | Type | Conference Article | ||
Year | 2010 | Publication | 23rd IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 137–144 | ||
Keywords | Pedestrian Detection; Domain Adaptation | ||||
Abstract | Detecting pedestrians in images is a key functionality to avoid vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? (Fig. 1). Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then appearance-based pedestrian classifiers are learnt using HOG and linear SVM. We test such classifiers in a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking. This dataset contains real world images acquired from a moving car. The obtained result is compared with the one given by a classifier learnt using samples coming from real images. The comparison reveals that, although virtual samples were not specially selected, both virtual and real based training give rise to classifiers of similar performance. | ||||
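The learner named above is a linear SVM over HOG descriptors. As a self-contained stand-in for an SVM package, here is a minimal Pegasos-style subgradient trainer on toy 2-D features (real HOG vectors have thousands of dimensions; all names and values are illustrative):

```python
import numpy as np

# Minimal linear SVM via Pegasos-style stochastic subgradient descent:
# minimize (lam/2)*||w||^2 + mean(hinge loss). No bias term for brevity.
def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)  # standard Pegasos step size
            if y[i] * (w @ X[i]) < 1:          # margin violated: move toward sample
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                               # only regularization shrinkage
                w = (1 - eta * lam) * w
    return w

# Toy separable data standing in for virtual-world positives/negatives.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w = train_linear_svm(X, y)
```

The paper's point is orthogonal to the learner itself: the same trainer, fed virtual-world samples, yields a classifier that transfers to real imagery.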
Address | San Francisco; CA; USA; June 2010 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | English | Original Title | Learning Appearance in Virtual Scenarios for Pedestrian Detection |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1063-6919 | ISBN | 978-1-4244-6984-0 | Medium | |
Area | Expedition | Conference | CVPR | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ MVG2010 | Serial | 1304 | ||
Author | David Vazquez | ||||
Title | Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection | Type | Book Whole | ||
Year | 2013 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | 1 | Issue | 1 | Pages | 1-105 |
Keywords | Pedestrian Detection; Domain Adaptation | ||||
Abstract | Pedestrian detection is of paramount interest for many applications, e.g. Advanced Driver Assistance Systems, Intelligent Video Surveillance and Multimedia systems. Most promising pedestrian detectors rely on appearance-based classifiers trained with annotated data. However, the required annotation step represents an intensive and subjective task for humans, which makes it worthwhile to minimize their intervention in this process by using computational tools like realistic virtual worlds. The reason for using this kind of tool lies in the fact that it allows the automatic generation of precise and rich annotations of visual information. Nevertheless, the use of this kind of data comes with the following question: can a pedestrian appearance model learnt with virtual-world data work successfully for pedestrian detection in real-world scenarios? To answer this question, we conduct different experiments that suggest a positive answer. However, pedestrian classifiers trained with virtual-world data can suffer from the so-called dataset shift problem, as real-world based classifiers do. Accordingly, we have designed different domain adaptation techniques to face this problem, all of them integrated in the same framework (V-AYLA). We have explored different methods to train a domain-adapted pedestrian classifier by collecting a few pedestrian samples from the target domain (real world) and combining them with many samples of the source domain (virtual world). The extensive experiments we present show that pedestrian detectors developed within the V-AYLA framework do achieve domain adaptation. Ideally, we would like to adapt our system without any human intervention. Therefore, as a first proof of concept we also propose an unsupervised domain adaptation technique that avoids human intervention during the adaptation process. To the best of our knowledge, this Thesis is the first work demonstrating adaptation of virtual and real worlds for developing an object detector. Last but not least, we also assessed a different strategy to avoid the dataset shift, which consists of collecting real-world samples and retraining with them in such a way that no bounding boxes of real-world pedestrians have to be provided. We show that the generated classifier is competitive with respect to the counterpart trained with samples collected by manually annotating pedestrian bounding boxes. The results presented in this Thesis not only end with a proposal for adapting a virtual-world pedestrian detector to the real world, but also go further by pointing out a new methodology that would allow the system to adapt to different situations, which we hope will provide the foundations for future research in this unexplored area. |
Address | Barcelona | ||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Barcelona | Editor | Antonio Lopez;Daniel Ponsa |
Language | English | Summary Language | Original Title | ||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-940530-1-6 | Medium | ||
Area | Expedition | Conference | |||
Notes | adas | Approved | yes | ||
Call Number | ADAS @ adas @ Vaz2013 | Serial | 2276 | ||
Author | David Vazquez; Jiaolong Xu; Sebastian Ramos; Antonio Lopez; Daniel Ponsa | ||||
Title | Weakly Supervised Automatic Annotation of Pedestrian Bounding Boxes | Type | Conference Article | ||
Year | 2013 | Publication | CVPR Workshop on Ground Truth – What is a good dataset? | Abbreviated Journal | |
Volume | Issue | Pages | 706 - 711 | ||
Keywords | Pedestrian Detection; Domain Adaptation | ||||
Abstract | Among the components of a pedestrian detector, its trained pedestrian classifier is crucial for achieving the desired performance. The initial task of the training process consists in collecting samples of pedestrians and background, which involves tiresome manual annotation of pedestrian bounding boxes (BBs). Thus, recent works have assessed the use of automatically collected samples from photo-realistic virtual worlds. However, learning from virtual-world samples and testing in real-world images may suffer from the dataset shift problem. Accordingly, in this paper we assess a strategy to collect samples from the real world and retrain with them, thus avoiding the dataset shift, but in such a way that no BBs of real-world pedestrians have to be provided. In particular, we train a pedestrian classifier based on virtual-world samples (no human annotation required). Then, using such a classifier we collect pedestrian samples from real-world images by detection. Afterwards, a human oracle efficiently rejects the false detections (weak annotation). Finally, a new classifier is trained with the accepted detections. We show that this classifier is competitive with respect to the counterpart trained with samples collected by manually annotating hundreds of pedestrian BBs. | ||||
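The weakly supervised loop in the abstract can be sketched as follows (stub names throughout; this is a schematic of the procedure, not the authors' pipeline):

```python
# Run the virtual-world-trained detector on real images, let a human oracle
# only accept or reject each detection (no box drawing), and keep the
# accepted detections as the retraining set.
def weakly_annotate(images, detector, oracle_accepts):
    accepted = []
    for image in images:
        for detection in detector(image):
            if oracle_accepts(detection):
                accepted.append(detection)
    return accepted

# Stub detector: one true pedestrian and one false positive per image.
detector = lambda image: [("pedestrian", image), ("false_positive", image)]
oracle = lambda det: det[0] == "pedestrian"
samples = weakly_annotate(["img0", "img1"], detector, oracle)
```

The human effort per sample is a single accept/reject click instead of drawing a bounding box, which is the "weak" in weak annotation.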
Address | Portland; Oregon; June 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | IEEE | Place of Publication | Editor | ||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | ADAS; 600.054; 600.057; 601.217 | Approved | no | ||
Call Number | ADAS @ adas @ VXR2013a | Serial | 2219 | ||
Author | Jiaolong Xu; David Vazquez; Sebastian Ramos; Antonio Lopez; Daniel Ponsa | ||||
Title | Adapting a Pedestrian Detector by Boosting LDA Exemplar Classifiers | Type | Conference Article | ||
Year | 2013 | Publication | CVPR Workshop on Ground Truth – What is a good dataset? | Abbreviated Journal | |
Volume | Issue | Pages | 688 - 693 | ||
Keywords | Pedestrian Detection; Domain Adaptation | ||||
Abstract | Training vision-based pedestrian detectors using synthetic datasets (virtual world) is a useful technique to automatically collect the training examples with their pixel-wise ground truth. However, as is often the case, these detectors must operate in real-world images, experiencing a significant drop in their performance. In fact, this effect also occurs among different real-world datasets, i.e. detectors' accuracy drops when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, in order to avoid this problem, it is required to adapt the detector trained with synthetic data to operate in the real-world scenario. In this paper, we propose a domain adaptation approach based on boosting LDA exemplar classifiers from both virtual and real worlds. We evaluate our proposal on multiple real-world pedestrian detection datasets. The results show that our method can efficiently adapt the exemplar classifiers from virtual to real world, avoiding drops in average precision of over 15%. | ||||
Address | Portland; Oregon; June 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | ADAS; 600.054; 600.057; 601.217 | Approved | yes | ||
Call Number | XVR2013; ADAS @ adas @ xvr2013a | Serial | 2220 | ||
Author | David Vazquez; Antonio Lopez; Daniel Ponsa | ||||
Title | Unsupervised Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection | Type | Conference Article | ||
Year | 2012 | Publication | 21st International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 3492 - 3495 | ||
Keywords | Pedestrian Detection; Domain Adaptation; Virtual worlds | ||||
Abstract | Vision-based object detectors are crucial for different applications. They rely on learnt object models. Ideally, we would like to deploy our vision system in the scenario where it must operate, and lead it to self-learn how to distinguish the objects of interest, i.e., without human intervention. However, the learning of each object model requires labelled samples collected through a tiresome manual process. For instance, we are interested in exploring the self-training of a pedestrian detector for driver assistance systems. Our first approach to avoid manual labelling consisted in the use of samples coming from realistic computer graphics, so that their labels are automatically available [12]. This would make possible the desired self-training of our pedestrian detector. However, as we showed in [14], there may be a dataset shift between virtual and real worlds. In order to overcome it, we propose the use of unsupervised domain adaptation techniques that avoid human intervention during the adaptation process. In particular, this paper explores the use of the transductive SVM (T-SVM) learning algorithm in order to adapt virtual and real worlds for pedestrian detection (Fig. 1). | ||||
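The T-SVM optimizes labeled and unlabeled samples jointly; its transductive intuition can be caricatured with a generic self-training loop (a sketch, not the paper's algorithm; `fit` and `score` are placeholder learners):

```python
# Self-training sketch: each round, pseudo-label the unlabeled target samples
# the current model scores confidently, fold them into the training set, and
# refit. A real T-SVM solves this jointly rather than greedily.
def self_train(labeled, unlabeled, fit, score, thresh=0.8, rounds=3):
    data, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = fit(data)
        confident = [x for x in pool if abs(score(model, x)) >= thresh]
        data += [(x, 1 if score(model, x) > 0 else -1) for x in confident]
        pool = [x for x in pool if abs(score(model, x)) < thresh]
    return fit(data)

# Tiny 1-D learner: decision threshold halfway between the class means.
def fit(data):
    pos = [x for x, y in data if y > 0]
    neg = [x for x, y in data if y < 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

score = lambda midpoint, x: x - midpoint
model = self_train([(2.0, 1), (-2.0, -1)], [3.0, -3.0, 0.1], fit, score)
```

Here the labeled pair stands in for virtual-world samples and the unlabeled pool for real-world detections; no human touches the target data.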
Address | Tsukuba Science City, Japan | ||||
Corporate Author | Thesis | ||||
Publisher | IEEE | Place of Publication | Tsukuba Science City, JAPAN | Editor | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1051-4651 | ISBN | 978-1-4673-2216-4 | Medium | |
Area | Expedition | Conference | ICPR | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ VLP2012 | Serial | 1981 | ||
Author | Alejandro Gonzalez Alzate; Zhijie Fang; Yainuvis Socarras; Joan Serrat; David Vazquez; Jiaolong Xu; Antonio Lopez | ||||
Title | Pedestrian Detection at Day/Night Time with Visible and FIR Cameras: A Comparison | Type | Journal Article | ||
Year | 2016 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 16 | Issue | 6 | Pages | 820 |
Keywords | Pedestrian Detection; FIR | ||||
Abstract | Despite all the significant advances in pedestrian detection brought by computer vision for driving assistance, it is still a challenging problem. One reason is the extremely varying lighting conditions under which such a detector should operate, namely day and night time. Recent research has shown that the combination of visible and non-visible imaging modalities may increase detection accuracy, where the infrared spectrum plays a critical role. The goal of this paper is to assess the accuracy gain of different pedestrian models (holistic, part-based, patch-based) when training with images in the far infrared spectrum. Specifically, we want to compare detection accuracy on test images recorded at day and nighttime if trained (and tested) using (a) plain color images, (b) just infrared images and (c) both of them. In order to obtain results for the last item we propose an early fusion approach to combine features from both modalities. We base the evaluation on a new dataset we have built for this purpose as well as on the publicly available KAIST multispectral dataset. | ||||
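The early fusion proposed above amounts to concatenating the per-modality descriptors into one vector before a single classifier sees them (the feature values below are placeholders; real inputs would be HOG/LBP descriptors computed on the visible and FIR images):

```python
# "Early fusion": join the visible-spectrum and FIR feature vectors into one
# descriptor, so a single classifier is trained on both modalities at once.
def early_fusion(visible_features, fir_features):
    return list(visible_features) + list(fir_features)

fused = early_fusion([0.1, 0.2], [0.7, 0.4])  # one 4-D vector for one SVM
```

The alternative, late fusion, would train one classifier per modality and combine their scores; the paper's comparison concerns the early variant.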
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1424-8220 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.085; 600.076; 600.082; 601.281 | Approved | no | ||
Call Number | ADAS @ adas @ GFS2016 | Serial | 2754 | ||
Author | Victor Campmany; Sergio Silva; Juan Carlos Moure; Toni Espinosa; David Vazquez; Antonio Lopez | ||||
Title | GPU-based pedestrian detection for autonomous driving | Type | Conference Article | ||
Year | 2016 | Publication | GPU Technology Conference | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Pedestrian Detection; GPU | ||||
Abstract | Pedestrian detection for autonomous driving is one of the hardest tasks within computer vision, and involves huge computational costs. Obtaining acceptable real-time performance, measured in frames per second (fps), for the most advanced algorithms is nowadays a hard challenge. Taking the work in [1] as our baseline, we propose a CUDA implementation of a pedestrian detection system that includes LBP and HOG as feature descriptors and SVM and Random forest as classifiers. We introduce significant algorithmic adjustments and optimizations to adapt the problem to the NVIDIA GPU architecture. The aim is to deploy a real-time system providing reliable results. | ||||
Address | Silicon Valley; San Francisco; USA; April 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | GTC | ||
Notes | ADAS; 600.085; 600.082; 600.076 | Approved | no | ||
Call Number | ADAS @ adas @ CSM2016 | Serial | 2737 | ||
Author | David Vazquez; Antonio Lopez; Daniel Ponsa; Javier Marin | ||||
Title | Virtual Worlds and Active Learning for Human Detection | Type | Conference Article | ||
Year | 2011 | Publication | 13th International Conference on Multimodal Interaction | Abbreviated Journal | |
Volume | Issue | Pages | 393-400 | ||
Keywords | Pedestrian Detection; Human detection; Virtual; Domain Adaptation; Active Learning | ||||
Abstract | Image based human detection is of paramount interest due to its potential applications in fields such as advanced driving assistance, surveillance and media analysis. However, even detecting non-occluded standing humans remains a challenge of intensive research. The most promising human detectors rely on classifiers developed in the discriminative paradigm, i.e., trained with labelled samples. However, labeling is a labor-intensive manual step, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, some authors have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of rendered images, i.e., using realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, they are not always as good as the classical approach of training and testing with data coming from the same camera, or similar ones. Accordingly, in this paper we address the challenge of using a virtual world for gathering (while playing a videogame) a large amount of automatically labelled samples (virtual humans and background) and then training a classifier that performs, in real-world images, as well as one obtained by equally training from manually labelled real-world samples. To do that, we cast the problem as one of domain adaptation. In doing so, we assume that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we propose a non-standard active learning technique. Therefore, ultimately our human model is learnt by a combination of virtual and real world labelled samples (Fig. 1), which has not been done before. We present quantitative results showing that this approach is valid. | ||||
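The active learning step can be sketched as uncertainty sampling (an assumed scheme for illustration; the paper's "non-standard" technique differs in its details): ask the human to label the real-world samples the current classifier is least sure about.

```python
# Pick the `budget` samples whose classifier scores lie closest to the
# decision boundary (score 0); these are the most informative to label.
def select_for_labeling(samples, score, budget):
    return sorted(samples, key=lambda s: abs(score(s)))[:budget]

# Identity score stands in for a real classifier margin on 1-D "samples".
picked = select_for_labeling([3.0, -0.1, 0.4, -2.0], lambda s: s, budget=2)
```

Labeling effort is then spent only where the virtual-world model is uncertain, instead of annotating the target dataset exhaustively.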
Address | Alicante, Spain | ||||
Corporate Author | Thesis | ||||
Publisher | ACM DL | Place of Publication | New York, NY, USA, USA | Editor | |
Language | English | Summary Language | English | Original Title | Virtual Worlds and Active Learning for Human Detection |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4503-0641-6 | Medium | ||
Area | Expedition | Conference | ICMI | ||
Notes | ADAS | Approved | yes | ||
Call Number | ADAS @ adas @ VLP2011a | Serial | 1683 | ||
Author | Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez | ||||
Title | Incremental Domain Adaptation of Deformable Part-based Models | Type | Conference Article | ||
Year | 2014 | Publication | 25th British Machine Vision Conference | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Pedestrian Detection; Part-based models; Domain Adaptation | ||||
Abstract | Nowadays, classifiers play a core role in many computer vision tasks. The underlying assumption for learning classifiers is that the training set and the deployment environment (testing) follow the same probability distribution regarding the features used by the classifiers. However, in practice, there are different reasons that can break this constancy assumption. Accordingly, reusing existing classifiers by adapting them from the previous training environment (source domain) to the new testing one (target domain) is an approach with increasing acceptance in the computer vision community. In this paper we focus on the domain adaptation of deformable part-based models (DPMs) for object detection. In particular, we focus on a relatively unexplored scenario, i.e. incremental domain adaptation for object detection assuming weak labeling. Therefore, our algorithm is ready to improve existing source-oriented DPM-based detectors as soon as a small amount of labeled target-domain training data is available, and keeps improving as more such data arrives in a continuous fashion. For achieving this, we follow a multiple instance learning (MIL) paradigm that operates on an incremental per-image basis. As proof of concept, we address the challenging scenario of adapting a DPM-based pedestrian detector trained with synthetic pedestrians to operate in real-world scenarios. The obtained results show that our incremental adaptive models obtain equally good accuracy results as the batch-learned models, while being more flexible for handling continuously arriving target-domain data. |
Address | Nottingham; uk; September 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | BMVA Press | Place of Publication | Editor | Valstar, Michel and French, Andrew and Pridmore, Tony | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | BMVC | ||
Notes | ADAS; 600.057; 600.054; 600.076 | Approved | no | ||
Call Number | XRV2014c; ADAS @ adas @ xrv2014c | Serial | 2455 | ||