Records
Author Sergio Vera; Debora Gil; Agnes Borras; Marius George Linguraru; Miguel Angel Gonzalez Ballester
Title Geometric Steerable Medial Maps Type Journal Article
Year 2013 Publication Machine Vision and Applications Abbreviated Journal MVA
Volume 24 Issue 6 Pages 1255-1266
Keywords Medial Representations; Medial Manifolds Comparison; Surface; Reconstruction
Abstract In order to provide more intuitive and easily interpretable representations of complex shapes/organs, medial manifolds should reach a compromise between simplicity in geometry and capability for restoring the anatomy/shape of the organ/volume. Existing morphological methods show excellent results when applied to 2D objects, but their quality drops across dimensions.
This paper contributes to the computation of medial manifolds in two aspects. First, we provide a standard scheme for the computation of medial manifolds that avoids degenerated medial axis segments. Second, we introduce a continuous operator for accurate and efficient computation of medial structures of arbitrary dimension. We quantitatively evaluate the performance of our method with respect to existing approaches by applying them to synthetic shapes of known medial geometry. We also show its higher performance for medical imaging applications in terms of simplicity of medial structures and capability for reconstructing the anatomical volume.
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor Mubarak Shah
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0932-8092 ISBN Medium
Area Expedition Conference
Notes IAM; 605.203; 600.060; 600.044 Approved no
Call Number IAM @ iam @ VGB2013 Serial 2192
Permanent link to this record
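The paper's steerable operator is not reproduced here, but the kind of distance-transform baseline it improves upon can be sketched: the medial map of a binary shape is approximated by the ridges of its interior distance transform. The ridge test (strict local maxima along rows or columns) and the toy bar shape below are illustrative choices, not the paper's method.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def medial_map(mask):
    """Baseline medial map of a binary 2D shape: ridges (strict local
    maxima along rows or columns) of the interior distance transform.
    A standard baseline, not the steerable operator from the paper."""
    dist = distance_transform_edt(mask)
    medial = np.zeros_like(mask, dtype=bool)
    interior = dist[1:-1, 1:-1]
    # Strict maximum against left/right neighbors (horizontal ridge)...
    horiz = (interior > dist[1:-1, :-2]) & (interior > dist[1:-1, 2:])
    # ...or against up/down neighbors (vertical ridge).
    vert = (interior > dist[:-2, 1:-1]) & (interior > dist[2:, 1:-1])
    medial[1:-1, 1:-1] = mask[1:-1, 1:-1] & (horiz | vert)
    return dist, medial

# Toy shape: a 5-pixel-tall horizontal bar; the recovered medial axis
# should run along its centre row.
shape = np.zeros((9, 15), dtype=bool)
shape[2:7, 2:13] = True
dist, medial = medial_map(shape)
```

For the bar, the ridge pixels lie on the centre row where the vertical distance profile peaks; degenerated segments near the corners are exactly what the paper's first contribution aims to suppress.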
 

 
Author Francisco Javier Orozco; Ognjen Rudovic; Jordi Gonzalez; Maja Pantic
Title Hierarchical On-line Appearance-Based Tracking for 3D Head Pose, Eyebrows, Lips, Eyelids and Irises Type Journal Article
Year 2013 Publication Image and Vision Computing Abbreviated Journal IMAVIS
Volume 31 Issue 4 Pages 322-340
Keywords On-line appearance models; Levenberg–Marquardt algorithm; Line-search optimization; 3D face tracking; Facial action tracking; Eyelid tracking; Iris tracking
Abstract In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed tracking approaches, which deal with face and gaze tracking separately, our OABT can also be used for eyelid and iris tracking, as well as 3D head pose, lips and eyebrows facial actions tracking. Furthermore, our approach applies an on-line learning of changes in the appearance of the tracked target. Hence, the prior training of appearance models, which usually requires a large amount of labeled facial images, is avoided. Moreover, the proposed method is built upon a hierarchical combination of three OABTs, which are optimized using a Levenberg–Marquardt Algorithm (LMA) enhanced with line-search procedures. This, in turn, makes the proposed method robust to changes in lighting conditions, occlusions and translucent textures, as evidenced by our experiments. Finally, the proposed method achieves head and facial actions tracking in real-time.
Address
Corporate Author Thesis
Publisher Elsevier Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; 605.203; 302.012; 302.018; 600.049 Approved no
Call Number ORG2013 Serial 2221
Permanent link to this record
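The optimizer behind the hierarchical OABT, Levenberg–Marquardt enhanced with a line-search procedure, can be illustrated on a toy least-squares fit. The exponential residual model below is a stand-in for the paper's appearance-model parameters; only the LM-plus-line-search loop itself reflects the abstract.

```python
import numpy as np

def levenberg_marquardt(residual, jac, p0, n_iter=50, lam=1e-3):
    """Minimize ||residual(p)||^2 with Levenberg-Marquardt steps,
    each followed by a backtracking line search along the step."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(p), jac(p)
        # LM normal equations: (J^T J + lam * I) dp = -J^T r
        A = J.T @ J + lam * np.eye(len(p))
        dp = np.linalg.solve(A, -J.T @ r)
        cost, step = r @ r, 1.0
        # Backtracking line search: halve the step until cost drops.
        while step > 1e-6:
            r_new = residual(p + step * dp)
            if r_new @ r_new < cost:
                break
            step *= 0.5
        if step <= 1e-6:
            lam *= 10.0                    # no improvement: more damping
        else:
            p = p + step * dp
            lam = max(lam * 0.5, 1e-12)    # improvement: less damping
    return p

# Toy problem: recover (a, b) from noiseless samples of y = a*exp(b*x).
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)
res = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.stack([np.exp(p[1] * x),
                          p[0] * x * np.exp(p[1] * x)], axis=1)
p_hat = levenberg_marquardt(res, jac, p0=[1.0, 0.0])
```

The line search is what makes the damped Gauss–Newton step robust when the local quadratic model is poor, which is the property the paper relies on under lighting changes and occlusions.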
 

 
Author Daniel Sanchez; J.C.Ortega; Miguel Angel Bautista
Title Human Body Segmentation with Multi-limb Error-Correcting Output Codes Detection and Graph Cuts Optimization Type Conference Article
Year 2013 Publication 6th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal
Volume 7887 Issue Pages 50-58
Keywords Human Body Segmentation; Error-Correcting Output Codes; Cascade of Classifiers; Graph Cuts
Abstract Human body segmentation is a hard task because of the high variability in appearance produced by changes in the point of view, lighting conditions, and number of articulations of the human body. In this paper, we propose a two-stage approach for the segmentation of the human body. In a first step, a set of human limbs are described, normalized to be rotation invariant, and trained using cascade of classifiers to be split in a tree structure way. Once the tree structure is trained, it is included in a ternary Error-Correcting Output Codes (ECOC) framework. This first classification step is applied in a windowing way on a new test image, defining a body-like probability map, which is used as an initialization of a GMM color modelling and binary Graph Cuts optimization procedure. The proposed methodology is tested in a novel limb-labelled data set. Results show performance improvements of the novel approach in comparison to classical cascade of classifiers and human detector-based Graph Cuts segmentation approaches.
Address Madeira; Portugal; June 2013
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-38627-5 Medium
Area Expedition Conference IbPRIA
Notes HUPBA Approved no
Call Number SOB2013 Serial 2250
Permanent link to this record
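The ternary ECOC decoding step described in the abstract, where each class has a codeword with +1/−1/0 entries and zeros mark binary problems in which the class does not participate, can be sketched as follows. The coding matrix and classifier outputs are made-up examples, not the trained tree structure from the paper.

```python
import numpy as np

def ecoc_decode(predictions, coding_matrix):
    """Decode binary classifier outputs against a ternary ECOC coding
    matrix (+1 / -1 / 0); zero entries are ignored in the distance.
    Returns the index of the closest class codeword."""
    distances = []
    for codeword in coding_matrix:
        mask = codeword != 0                       # skip "not used" slots
        d = np.sum(predictions[mask] != codeword[mask])
        distances.append(d)
    return int(np.argmin(distances))

# Toy ternary coding matrix: 4 classes x 3 binary problems.
M = np.array([[+1, +1,  0],
              [+1, -1,  0],
              [-1,  0, +1],
              [-1,  0, -1]])

# Suppose the three cascade classifiers output (+1, -1, +1):
pred = np.array([+1, -1, +1])
winner = ecoc_decode(pred, M)   # class 1 matches both non-zero entries
```

In the paper this per-window decoding produces the body-like probability map that seeds the GMM colour model and the Graph Cuts optimization.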
 

 
Author H. Emrah Tasli; Jan van Gemert; Theo Gevers
Title Spot the differences: from a photograph burst to the single best picture Type Conference Article
Year 2013 Publication 21st ACM International Conference on Multimedia Abbreviated Journal
Volume Issue Pages 729-732
Keywords
Abstract With the rise of the digital camera, people nowadays typically take several near-identical photos of the same scene to maximize the chances of a good shot. This paper proposes a user-friendly tool for exploring a personal photo gallery and selecting, or even creating, the best shot of a scene from its multiple alternatives. This functionality is realized through a graphical user interface where the best viewpoint can be selected from a generated panorama of the scene. Once the viewpoint is selected, the user can explore possible alternatives coming from the other images. Using this tool, one can browse a photo gallery efficiently. Moreover, additional compositions from other images are also possible; with such compositions, one can go from a burst of photographs to the single best one. Even playful compositions, such as duplicating a person within the same image, are possible with our proposed tool.
Address Barcelona
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACM-MM
Notes ALTRES;ISE Approved no
Call Number TGG2013 Serial 2368
Permanent link to this record
 

 
Author David Vazquez; Antonio Lopez; Daniel Ponsa; David Geronimo
Title Interactive Training of Human Detectors Type Book Chapter
Year 2013 Publication Multimodal Interaction in Image and Video Applications Abbreviated Journal
Volume 48 Issue Pages 169-182
Keywords Pedestrian Detection; Virtual World; AdaBoost; Domain Adaptation
Abstract Image-based human detection remains a challenging problem. The most promising detectors rely on classifiers trained with labelled samples. However, labelling is a labor-intensive manual step. To overcome this problem we propose to collect images of pedestrians from a virtual city, i.e., with automatic labels, and train a pedestrian detector with them, which works well when the virtual-world data are similar to the testing data, i.e., real-world pedestrians in urban areas. When the testing data are acquired in different conditions than the training data, e.g., human detection in personal photo albums, dataset shift appears. In previous work, we cast this problem as one of domain adaptation and solved it with an active learning procedure. In this work, we focus on the same problem but evaluate a different set of faster-to-compute features, i.e., Haar, EOH and their combination. In particular, we train a classifier with virtual-world data, using such features and Real AdaBoost as the learning machine. This classifier is applied to real-world training images. Then, a human oracle interactively corrects the wrong detections, i.e., a few missed detections are manually annotated and some false ones are pointed out too. A low amount of manual annotation is fixed as a restriction. Real- and virtual-world difficult samples are combined within what we call the cool world, and we retrain the classifier with these data. Our experiments show that this adapted classifier is equivalent to one trained with only real-world data but requires 90% fewer manual annotations.
Address Springer Heidelberg New York Dordrecht London
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language English Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1868-4394 ISBN 978-3-642-35931-6 Medium
Area Expedition Conference
Notes ADAS; 600.057; 600.054; 605.203 Approved no
Call Number VLP2013; ADAS @ adas @ vlp2013 Serial 2193
Permanent link to this record
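The Real AdaBoost learning machine mentioned in the abstract can be illustrated with a minimal discrete-AdaBoost variant over threshold stumps. This is a simplified stand-in: the chapter boosts over Haar and EOH features on image windows, while the sketch below keeps only the boosting loop on toy 2D features.

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=10):
    """Minimal discrete AdaBoost over single-feature threshold stumps
    (a simplified stand-in for Real AdaBoost over Haar/EOH features)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                 # sample weights
    ensemble = []                           # (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        # Exhaustively pick the stump with lowest weighted error.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (+1, -1):
                    pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)      # upweight the mistakes
        w /= w.sum()
        ensemble.append((j, thr, pol, alpha))
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for j, thr, pol, alpha in ensemble:
        score += alpha * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return np.sign(score)

# Toy separable data: positive when feature 0 is large.
X = np.array([[0.1, 0.9], [0.2, 0.1], [0.8, 0.3], [0.9, 0.7]])
y = np.array([-1, -1, 1, 1])
model = adaboost_stumps(X, y, n_rounds=5)
```

In the interactive setting of the chapter, the oracle-corrected real-world samples would be appended to the virtual-world training set ("cool world") before such a classifier is retrained.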
 

 
Author Jiaolong Xu; David Vazquez; Antonio Lopez; Javier Marin; Daniel Ponsa
Title Learning a Multiview Part-based Model in Virtual World for Pedestrian Detection Type Conference Article
Year 2013 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal
Volume Issue Pages 467 - 472
Keywords Pedestrian Detection; Virtual World; Part based
Abstract State-of-the-art deformable part-based models based on latent SVM have shown excellent results on human detection. In this paper, we propose to train a multiview deformable part-based model with automatically generated part examples from virtual-world data. The method is efficient as: (i) the part detectors are trained with precisely extracted virtual examples, thus no latent learning is needed, (ii) the multiview pedestrian detector enhances the performance of the pedestrian root model, (iii) a top-down approach is used for part detection, which reduces the search space. We evaluate our model on the Daimler and Karlsruhe Pedestrian Benchmarks with the publicly available Caltech pedestrian detection evaluation framework, and the results outperform the state-of-the-art latent SVM V4.0 in both average miss rate and speed (our detector is ten times faster).
Address Gold Coast; Australia; June 2013
Corporate Author Thesis
Publisher IEEE Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1931-0587 ISBN 978-1-4673-2754-1 Medium
Area Expedition Conference IV
Notes ADAS; 600.054; 600.057 Approved no
Call Number XVL2013; ADAS @ adas @ xvl2013a Serial 2214
Permanent link to this record
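The top-down part detection that reduces the search space can be sketched as: given a root detection, score part locations only inside a small window around the part's expected anchor relative to the root, instead of scanning the whole image. The dense score map and all names below are illustrative; the paper's detector scores learned part filters.

```python
import numpy as np

def top_down_part_search(score_map, root_box, anchor, search_radius):
    """Locate a part given a detected root: restrict the argmax over
    part scores to a (2r+1)x(2r+1) window around the part's anchor
    position relative to the root's top-left corner."""
    rx = root_box[0] + anchor[0]            # expected part x
    ry = root_box[1] + anchor[1]            # expected part y
    h, w = score_map.shape
    x0, x1 = max(rx - search_radius, 0), min(rx + search_radius + 1, w)
    y0, y1 = max(ry - search_radius, 0), min(ry + search_radius + 1, h)
    window = score_map[y0:y1, x0:x1]
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return (x0 + dx, y0 + dy), float(window.max())

# Toy part-score map with a single peak at (x=12, y=8).
scores = np.zeros((20, 20))
scores[8, 12] = 5.0
loc, s = top_down_part_search(scores, root_box=(10, 5), anchor=(1, 2),
                              search_radius=3)
```

Restricting the part argmax to this window is what makes the approach faster than latent-SVM-style dense part placement, at the cost of assuming the root detection is roughly correct.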
 

 
Author Jiaolong Xu; David Vazquez; Sebastian Ramos; Antonio Lopez; Daniel Ponsa
Title Adapting a Pedestrian Detector by Boosting LDA Exemplar Classifiers Type Conference Article
Year 2013 Publication CVPR Workshop on Ground Truth – What is a good dataset? Abbreviated Journal
Volume Issue Pages 688 - 693
Keywords Pedestrian Detection; Domain Adaptation
Abstract Training vision-based pedestrian detectors using synthetic datasets (virtual world) is a useful technique to collect the training examples automatically, together with their pixel-wise ground truth. However, as is often the case, these detectors must operate in real-world images, where they suffer a significant drop in performance. In fact, this effect also occurs among different real-world datasets, i.e., detectors' accuracy drops when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, in order to avoid this problem, it is necessary to adapt the detector trained with synthetic data to operate in the real-world scenario. In this paper, we propose a domain adaptation approach based on boosting LDA exemplar classifiers from both virtual and real worlds. We evaluate our proposal on multiple real-world pedestrian detection datasets. The results show that our method can efficiently adapt the exemplar classifiers from virtual to real world, avoiding drops in average precision of over 15%.
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes ADAS; 600.054; 600.057; 601.217 Approved yes
Call Number XVR2013; ADAS @ adas @ xvr2013a Serial 2220
Permanent link to this record
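The LDA exemplar classifiers being boosted can be built in closed form from a single positive example plus background statistics: w = Σ⁻¹(x − μ), where μ and Σ are the mean and covariance of a large shared negative set. The sketch below shows only this construction; the features used in the paper and the subsequent boosting stage are omitted, and the Gaussian toy data is illustrative.

```python
import numpy as np

def lda_exemplar_classifier(x_pos, mu_neg, cov_neg, reg=1e-3):
    """LDA exemplar classifier from one positive sample:
    w = Sigma^-1 (x - mu), with mu/Sigma estimated once from a large
    shared background set, and a midpoint bias term."""
    d = len(x_pos)
    sigma = cov_neg + reg * np.eye(d)        # regularize for invertibility
    w = np.linalg.solve(sigma, x_pos - mu_neg)
    b = -0.5 * w @ (x_pos + mu_neg)          # zero score at the midpoint
    return w, b

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, size=(500, 4))    # shared background features
mu, cov = neg.mean(axis=0), np.cov(neg, rowvar=False)

exemplar = np.array([3.0, 3.0, 0.0, 0.0])    # one positive sample
w, b = lda_exemplar_classifier(exemplar, mu, cov)

score_pos = w @ exemplar + b                 # positive side of the boundary
score_neg = w @ mu + b                       # background side
```

Because μ and Σ are shared, training one such classifier per exemplar is cheap, which is what makes boosting over many virtual- and real-world exemplars practical.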