Author Cesar de Souza; Adrien Gaidon; Yohann Cabon; Antonio Lopez
  Title Procedural Generation of Videos to Train Deep Action Recognition Networks Type Conference Article
  Year 2017 Publication 30th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 2594-2604  
  Keywords  
  Abstract Deep learning for human action recognition in videos is making significant progress, but is slowed down by its dependency on expensive manual labeling of large video collections. In this work, we investigate the generation of synthetic training data for action recognition, as it has recently shown promising results for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation and other computer graphics techniques of modern game engines. We generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for "Procedural Human Action Videos". It contains a total of 39,982 videos, with more than 1,000 examples for each action of 35 categories. Our approach is not limited to existing motion capture sequences, and we procedurally define 14 synthetic actions. We introduce a deep multi-task representation learning architecture to mix synthetic and real videos, even if the action categories differ. Our experiments on the UCF101 and HMDB51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance, significantly outperforming fine-tuning state-of-the-art unsupervised generative models of videos.
 
  Address Honolulu; Hawaii; July 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPR  
  Notes ADAS; 600.076; 600.085; 600.118 Approved no  
  Call Number Admin @ si @ SGC2017 Serial 3051  
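As a rough illustration of the multi-task idea described in the abstract (a shared representation with separate classification heads for the real and synthetic action label spaces), the following Python/PyTorch sketch runs one toy training step per domain. It assumes pre-extracted clip features; all module names, dimensions, and hyperparameters are hypothetical and not taken from the paper.

```python
# Minimal sketch of a multi-task classifier sharing a representation between
# real (e.g. UCF101-sized) and synthetic (PHAV-style) action label spaces.
# All module and variable names are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskActionNet(nn.Module):
    def __init__(self, feat_dim, n_real_classes, n_synth_classes, hidden=512):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.head_real = nn.Linear(hidden, n_real_classes)    # real-video labels
        self.head_synth = nn.Linear(hidden, n_synth_classes)  # synthetic labels

    def forward(self, x, domain):
        h = self.shared(x)
        return self.head_real(h) if domain == "real" else self.head_synth(h)

model = MultiTaskActionNet(feat_dim=2048, n_real_classes=101, n_synth_classes=35)
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One toy step per domain: mixing batches lets gradients from both label
# spaces update the shared layers.
for domain, n_cls in [("real", 101), ("synth", 35)]:
    feats = torch.randn(8, 2048)                 # stand-in clip features
    labels = torch.randint(0, n_cls, (8,))
    loss = F.cross_entropy(model(feats, domain), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```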
 

 
Author Cesar de Souza; Adrien Gaidon; Eleonora Vig; Antonio Lopez
  Title Sympathy for the Details: Dense Trajectories and Hybrid Classification Architectures for Action Recognition Type Conference Article
  Year 2016 Publication 14th European Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 697-716  
  Keywords  
  Abstract Action recognition in videos is a challenging task due to the complexity of the spatio-temporal patterns to model and the difficulty of acquiring and learning from large quantities of video data. Deep learning, although a breakthrough for image classification and showing promise for videos, has still not clearly superseded action recognition methods using hand-crafted features, even when training on massive datasets. In this paper, we introduce hybrid video classification architectures based on carefully designed unsupervised representations of hand-crafted spatio-temporal features classified by supervised deep networks. As we show in our experiments on five popular benchmarks for action recognition, our hybrid model combines the best of both worlds: it is data efficient (trained on 150 to 10000 short clips) and yet improves significantly on the state of the art, including recent deep models trained on millions of manually labelled images and videos.  
  Address Amsterdam; The Netherlands; October 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ECCV  
  Notes ADAS; 600.076; 600.085 Approved no  
  Call Number Admin @ si @ SGV2016 Serial 2824  
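The hybrid pipeline described above (an unsupervised encoding of hand-crafted spatio-temporal features classified by a supervised network) can be caricatured as follows. This sketch substitutes a k-means bag of words for the Fisher-vector encoding of dense trajectories used in the paper and runs on random stand-in descriptors; it only illustrates the two-stage structure, not the actual architecture.

```python
# Toy stand-in for the hybrid pipeline: an unsupervised encoding of local
# hand-crafted descriptors (here a k-means bag of words, not the paper's
# Fisher vectors) classified by a supervised neural network.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def encode(video_descriptors, codebook):
    """Histogram of nearest codewords, L1-normalised (bag of words)."""
    words = codebook.predict(video_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Fake "dense trajectory" descriptors: one (n_i, 96) array per video.
videos = [rng.normal(size=(200, 96)) + label for label in (0, 1) for _ in range(20)]
labels = [label for label in (0, 1) for _ in range(20)]

codebook = KMeans(n_clusters=64, n_init=10, random_state=0)
codebook.fit(np.vstack(videos))                    # unsupervised stage

X = np.array([encode(v, codebook) for v in videos])
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
clf.fit(X, labels)                                 # supervised stage
print("train accuracy:", clf.score(X, labels))
```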
 

 
Author Alejandro Gonzalez Alzate; Gabriel Villalonga; Jiaolong Xu; David Vazquez; Jaume Amores; Antonio Lopez
  Title Multiview Random Forest of Local Experts Combining RGB and LIDAR data for Pedestrian Detection Type Conference Article
  Year 2015 Publication IEEE Intelligent Vehicles Symposium IV2015 Abbreviated Journal  
  Volume Issue Pages 356-361  
  Keywords Pedestrian Detection  
  Abstract Despite recent significant advances, pedestrian detection continues to be an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities and a strong multi-view classifier that accounts for different pedestrian views and poses. In this paper we provide an extensive evaluation that gives insight into how each of these aspects (multi-cue, multimodality and a strong multi-view classifier) affects performance, both individually and when integrated together. In the multimodality component we explore the fusion of RGB and depth maps obtained by high-definition LIDAR, a type of modality that has only recently started to receive attention. As our analysis reveals, although all the aforementioned aspects significantly help in improving the performance, the fusion of visible spectrum and depth information boosts the accuracy by a much larger margin. The resulting detector not only ranks among the top performers on the challenging KITTI benchmark, but it is built upon very simple blocks that are easy to implement and computationally efficient. These simple blocks can easily be replaced with more sophisticated ones recently proposed, such as the use of convolutional neural networks for feature representation, to further improve the accuracy.  
  Address Seoul; Korea; June 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area ACDC Expedition Conference IV  
  Notes ADAS; 600.076; 600.057; 600.054 Approved no  
  Call Number ADAS @ adas @ GVX2015 Serial 2625  
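A minimal sketch of the feature-level fusion of RGB and depth (LIDAR) cues discussed in the abstract, using an off-the-shelf random forest on synthetic features. The paper's multiview random forest of local experts is considerably richer; the feature dimensions and labels below are invented for illustration only.

```python
# Simplified illustration of feature-level fusion of RGB and depth (LIDAR)
# cues with a random forest. The paper's detector is a multiview random
# forest of local experts; this sketch only shows the fusion idea.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400

rgb_feats = rng.normal(size=(n, 64))      # e.g. HOG/LBP from the RGB crop
depth_feats = rng.normal(size=(n, 32))    # e.g. HOG on the depth-map crop
y = rng.integers(0, 2, size=n)            # pedestrian / background (toy labels)
# Make the toy problem learnable: shift positives in both modalities.
rgb_feats[y == 1] += 0.5
depth_feats[y == 1] += 0.5

X = np.hstack([rgb_feats, depth_feats])   # early (feature-level) fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```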
 

 
Author Alejandro Gonzalez Alzate; Gabriel Villalonga; German Ros; David Vazquez; Antonio Lopez
  Title 3D-Guided Multiscale Sliding Window for Pedestrian Detection Type Conference Article
  Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal  
  Volume 9117 Issue Pages 560-568  
  Keywords Pedestrian Detection  
  Abstract The most relevant modules of a pedestrian detector are the candidate generation and the candidate classification. The former aims at presenting image windows to the latter so that they are classified as containing a pedestrian or not. Much attention has been paid to the classification module, while candidate generation has mainly relied on a (multiscale) sliding-window pyramid. However, candidate generation is critical for achieving real-time performance. In this paper we assume a context of autonomous driving based on stereo vision. Accordingly, we evaluate the effect of taking into account the 3D information (derived from the stereo) in order to prune the hundreds of thousands of windows per image generated by the classical pyramidal sliding window. For our study we use a multimodal (RGB, disparity) and multi-descriptor (HOG, LBP, HOG+LBP) holistic ensemble based on linear SVM. Evaluation on data from the challenging KITTI benchmark suite shows the effectiveness of using the 3D information to dramatically reduce the number of candidate windows, even improving the overall pedestrian detection accuracy.  
  Address Santiago de Compostela; Spain; June 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area ACDC Expedition Conference IbPRIA  
  Notes ADAS; 600.076; 600.057; 600.054 Approved no  
  Call Number ADAS @ adas @ GVR2015 Serial 2585  
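The 3D-guided pruning idea can be illustrated with a simple depth-consistency test: a candidate window is kept only if its pixel height implies a plausible pedestrian height at the window's depth. The focal length, height range, and helper function below are assumptions for illustration and are not the paper's actual pruning criteria.

```python
# Sketch of depth-consistent window pruning: keep a candidate window only if
# its pixel height matches a plausible pedestrian height at the window's
# median depth. Constants and names are illustrative, not from the paper.
import numpy as np

FOCAL_PX = 721.5          # focal length in pixels (KITTI-like, assumed)
H_MIN, H_MAX = 1.2, 2.2   # plausible pedestrian heights in metres (assumed)

def keep_window(depth_map, x, y, w, h):
    """Return True if the window (x, y, w, h) is depth-consistent."""
    patch = depth_map[y:y + h, x:x + w]
    z = np.median(patch[patch > 0])           # metres; ignore invalid pixels
    if not np.isfinite(z):
        return False
    implied_height_m = h * z / FOCAL_PX       # pinhole model: H = h * Z / f
    return H_MIN <= implied_height_m <= H_MAX

# Toy usage: a flat scene 10 m away; a 130 px tall window implies ~1.8 m,
# so it survives the pruning step.
depth = np.full((375, 1242), 10.0)
print(keep_window(depth, 100, 100, 60, 130))
```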
 

 
Author German Ros; Sebastian Ramos; Manuel Granados; Amir Bakhtiary; David Vazquez; Antonio Lopez
  Title Vision-based Offline-Online Perception Paradigm for Autonomous Driving Type Conference Article
  Year 2015 Publication IEEE Winter Conference on Applications of Computer Vision Abbreviated Journal  
  Volume Issue Pages 231 - 238  
  Keywords Autonomous Driving; Scene Understanding; SLAM; Semantic Segmentation  
  Abstract Autonomous driving is a key factor for future mobility. Properly perceiving the environment of the vehicles is essential for safe driving, which requires computing accurate geometric and semantic information in real time. In this paper, we challenge state-of-the-art computer vision algorithms for building a perception system for autonomous driving. An inherent drawback in the computation of visual semantics is the trade-off between accuracy and computational cost. We propose to circumvent this problem by following an offline-online strategy. During the offline stage dense 3D semantic maps are created. In the online stage the current driving area is recognized in the maps via a re-localization process, which allows the pre-computed accurate semantics and 3D geometry to be retrieved in real time. Then, by detecting the dynamic obstacles, we obtain a rich understanding of the current scene. We quantitatively evaluate our proposal on the KITTI dataset and discuss the related open challenges for the computer vision community.  
  Address Hawaii; January 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area ACDC Expedition Conference WACV  
  Notes ADAS; 600.076 Approved no  
  Call Number ADAS @ adas @ RRG2015 Serial 2499  
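A toy sketch of the offline-online split described in the abstract: semantics are precomputed for mapped poses offline, and the online stage re-localizes to the nearest mapped pose in order to reuse them. The poses, labels, and distance metric below are invented placeholders, not data or components from the paper.

```python
# Minimal sketch of the offline-online idea: semantics are computed offline
# for mapped poses; online, the vehicle re-localises to the nearest mapped
# pose and reuses its precomputed labels. Data and names are illustrative.
import numpy as np

# Offline stage: poses (x, y, heading) with precomputed per-pose semantics.
map_poses = np.array([[0.0, 0.0, 0.00],
                      [5.0, 0.1, 0.02],
                      [10.0, 0.3, 0.05]])
map_semantics = ["road|sidewalk|building",
                 "road|sidewalk|vegetation",
                 "road|crosswalk|building"]

def relocalise(query_pose, poses):
    """Index of the closest mapped pose (position only, toy metric)."""
    d = np.linalg.norm(poses[:, :2] - np.asarray(query_pose)[:2], axis=1)
    return int(np.argmin(d))

# Online stage: a rough pose estimate retrieves the offline semantics cheaply;
# only dynamic obstacles would still need online detection.
idx = relocalise([4.6, -0.2, 0.0], map_poses)
print("reused semantics:", map_semantics[idx])
```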
 

 
Author M. Cruz; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Ricardo Toledo; Angel Sappa
  Title Cross-spectral image registration and fusion: an evaluation study Type Conference Article
  Year 2015 Publication 2nd International Conference on Machine Vision and Machine Learning Abbreviated Journal  
  Volume Issue Pages  
  Keywords multispectral imaging; image registration; data fusion; infrared and visible spectra  
  Abstract This paper presents a preliminary study on the registration and fusion of cross-spectral imaging. The objective is to evaluate the validity of widely used computer vision approaches when they are applied to different spectral bands. In particular, we are interested in merging images from the infrared (both long wave infrared: LWIR and near infrared: NIR) and visible spectrum (VS). Experimental results with different data sets are presented.
  Address Barcelona; July 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference MVML  
  Notes ADAS; 600.076 Approved no  
  Call Number Admin @ si @ CAV2015 Serial 2629  
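For context, a standard feature-based registration and blending pipeline of the kind such an evaluation would exercise on cross-spectral pairs might look like the OpenCV sketch below. The file paths are placeholders, and ORB matching may degrade across spectra, which is precisely the sort of behaviour the study examines.

```python
# Sketch of a widely used registration + fusion pipeline applied to a
# cross-spectral pair (e.g. NIR vs. visible). File names are placeholders.
import cv2
import numpy as np

vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE)       # placeholder path

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(nir, None)
kp2, des2 = orb.detectAndCompute(vis, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp NIR into the visible frame and blend the two as a simple fusion.
warped = cv2.warpPerspective(nir, H, (vis.shape[1], vis.shape[0]))
fused = cv2.addWeighted(vis, 0.5, warped, 0.5, 0)
cv2.imwrite("fused.png", fused)
```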
 

 
Author Cristhian A. Aguilera-Carrasco; Angel Sappa; Ricardo Toledo
  Title LGHD: a Feature Descriptor for Matching Across Non-Linear Intensity Variations Type Conference Article
  Year 2015 Publication 22nd IEEE International Conference on Image Processing Abbreviated Journal  
  Volume Issue Pages 178 - 181  
  Keywords  
  Abstract  
  Address Quebec; Canada; September 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICIP  
  Notes ADAS; 600.076 Approved no  
  Call Number Admin @ si @ AST2015 Serial 2630  
 

 
Author Dennis G. Romero; Anselmo Frizera; Angel Sappa; Boris X. Vintimilla; Teodiano F. Bastos
  Title A predictive model for human activity recognition by observing actions and context Type Conference Article
  Year 2015 Publication Advanced Concepts for Intelligent Vision Systems, Proceedings of the 16th International Conference, ACIVS 2015 Abbreviated Journal  
  Volume 9386 Issue Pages 323-333  
  Keywords  
  Abstract This paper presents a novel model to estimate human activities, where a human activity is defined by a set of human actions. The proposed approach is based on the usage of Recurrent Neural Networks (RNN) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work human activities are inferred considering not only visual analysis but also additional resources: external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can be later associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions, plus contextual information or other kinds of information that could be relevant to describe the activity. Experimental results with real data are provided, showing the validity of the proposed approach.  
  Address Catania; Italy; October 2015  
  Corporate Author Thesis  
  Publisher Springer International Publishing Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-319-25902-4 Medium  
  Area Expedition Conference ACIVS  
  Notes ADAS; 600.076 Approved no  
  Call Number Admin @ si @ RFS2015 Serial 2661  
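The Bayesian-inference side of the approach can be sketched as a sequential posterior update over activities, given observed actions and context cues. The activities, observations, and probabilities below are invented for illustration only, and the RNN component of the paper is not shown.

```python
# Toy sketch of the Bayesian part of the idea: maintain a posterior over
# activities and update it as actions and context cues are observed.
# The priors/likelihoods are made up for illustration; the paper also uses
# an RNN over the action stream, which is not reproduced here.
import numpy as np

activities = ["cooking", "cleaning"]
prior = np.array([0.5, 0.5])

# P(observation | activity) for a few symbolic observations (made-up values).
likelihood = {
    "grab_pan":     np.array([0.7, 0.1]),
    "wipe_surface": np.array([0.1, 0.6]),
    "ctx_kitchen":  np.array([0.8, 0.4]),   # context cue, not an action
}

def update(posterior, obs):
    """One Bayesian step: posterior is proportional to likelihood * prior."""
    p = posterior * likelihood[obs]
    return p / p.sum()

posterior = prior
for obs in ["ctx_kitchen", "grab_pan", "grab_pan"]:
    posterior = update(posterior, obs)
    print(obs, dict(zip(activities, posterior.round(3))))
```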
 

 
Author Miguel Oliveira; L. Seabra Lopes; G. Hyun Lim; S. Hamidreza Kasaei; Angel Sappa; A. Tom
  Title Concurrent Learning of Visual Codebooks and Object Categories in Open-ended Domains Type Conference Article
  Year 2015 Publication International Conference on Intelligent Robots and Systems Abbreviated Journal  
  Volume Issue Pages 2488 - 2495  
  Keywords Visual Learning; Computer Vision; Autonomous Agents  
  Abstract In open-ended domains, robots must continuously learn new object categories. When the training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will find when operating online. In the Bag of Words model, visual codebooks are constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns, in an incremental and online fashion, both the visual object category representations and the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models which are updated using new object views. The approach shares similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over large periods of time. Results show that the proposed system with concurrent learning of object categories and codebooks is capable of learning more categories, requiring fewer examples, and with similar accuracies, when compared to the classical Bag of Words approach using offline constructed codebooks.  
  Address Hamburg; Germany; October 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IROS  
  Notes ADAS; 600.076 Approved no  
  Call Number Admin @ si @ OSL2015 Serial 2664  
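A highly simplified sketch of an online codebook update in the spirit of the abstract: codewords are Gaussians whose means are nudged by responsibility-weighted running averages as descriptors from new object views arrive. The paper's incremental GMM updates (mixture weights, covariances, component creation) are not reproduced here, and all values are illustrative.

```python
# Highly simplified online codebook: each codeword is a Gaussian mean with a
# shared isotropic variance; new descriptors update the means through
# responsibility-weighted running averages.
import numpy as np

class OnlineCodebook:
    def __init__(self, means, var=1.0):
        self.means = np.asarray(means, dtype=float)   # (K, D)
        self.counts = np.ones(len(self.means))
        self.var = var

    def responsibilities(self, x):
        d2 = ((self.means - x) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / self.var)
        return w / w.sum()

    def update(self, x):
        """One online step: move each mean toward x by its responsibility."""
        r = self.responsibilities(x)
        self.counts += r
        self.means += (r / self.counts)[:, None] * (x - self.means)

    def encode(self, x):
        """Soft-assignment code used to describe an object view."""
        return self.responsibilities(x)

rng = np.random.default_rng(2)
cb = OnlineCodebook(means=rng.normal(size=(8, 16)))
for _ in range(100):                  # descriptors from new object views
    cb.update(rng.normal(size=16))
print("code for a new descriptor:", cb.encode(rng.normal(size=16)).round(2))
```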
 

 
Author Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez
  Title Cost-sensitive Structured SVM for Multi-category Domain Adaptation Type Conference Article
  Year 2014 Publication 22nd International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 3886 - 3891  
  Keywords Domain Adaptation; Pedestrian Detection  
  Abstract Domain adaptation addresses the problem of accuracy drop that a classifier may suffer when the training data (source domain) and the testing data (target domain) are drawn from different distributions. In this work, we focus on domain adaptation for the structured SVM (SSVM). We propose a cost-sensitive domain adaptation method for SSVM, namely COSS-SSVM. In particular, during the re-training of an adapted classifier based on target and source data, the idea that we explore consists of introducing a non-zero cost even for correctly classified source domain samples. Ultimately, we aim to learn a more target-oriented classifier by not rewarding (zero loss) properly classified source-domain training samples. We assess the effectiveness of COSS-SSVM on multi-category object recognition.  
  Address Stockholm; Sweden; August 2014  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1051-4651 ISBN Medium  
  Area Expedition Conference ICPR  
  Notes ADAS; 600.057; 600.054; 601.217; 600.076 Approved no  
  Call Number ADAS @ adas @ XRV2014a Serial 2434  
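The cost-sensitive idea (a non-zero cost even for correctly classified source-domain samples) can be illustrated in a plain multi-class hinge loss rather than the structured SVM of the paper: during loss-augmented inference, the margin cost of the correct label is set to a positive value for source samples. The data, cost values, and training loop below are invented and do not reproduce the COSS-SSVM formulation.

```python
# Illustrative (non-structured) version of the cost-sensitive idea: source
# samples incur a non-zero margin cost even when correctly classified, so
# properly classified source samples are not automatically rewarded with
# zero loss. Everything here is a toy stand-in, not the paper's method.
import numpy as np

rng = np.random.default_rng(3)
n_cls, dim = 3, 20
X = rng.normal(size=(300, dim))
y = rng.integers(0, n_cls, size=300)
X[np.arange(300), y] += 2.0                    # make classes separable
is_source = rng.random(300) < 0.7              # 70% source, 30% target samples

W = np.zeros((n_cls, dim))
lr, reg, src_cost = 0.01, 1e-3, 0.3            # src_cost: cost of the correct label (source only)

for epoch in range(20):
    for xi, yi, src in zip(X, y, is_source):
        scores = W @ xi
        margins = scores - scores[yi] + 1.0    # standard margin cost of 1
        margins[yi] = src_cost if src else 0.0 # non-zero cost if correct & source
        j = int(np.argmax(margins))            # loss-augmented "inference"
        if margins[j] > 0 and j != yi:
            W[j] -= lr * xi                    # push down the violating class
            W[yi] += lr * xi                   # push up the correct class
        W *= (1.0 - lr * reg)                  # L2 shrinkage

pred = np.argmax(X @ W.T, axis=1)
print("training accuracy:", (pred == y).mean())
```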