Author Javier Marin; David Vazquez; Antonio Lopez; Jaume Amores; Ludmila I. Kuncheva
Title Occlusion handling via random subspace classifiers for human detection Type Journal Article
Year 2014 Publication IEEE Transactions on Systems, Man, and Cybernetics (Part B) Abbreviated Journal TSMCB
Volume 44 Issue 3 Pages 342-354
Keywords Pedestrian detection; occlusion handling
Abstract This paper describes a general method to address partial occlusions for human detection in still images. The Random Subspace Method (RSM) is chosen for building a classifier ensemble robust against partial occlusions. The component classifiers are chosen on the basis of their individual and combined performance. The main contribution of this work lies in our approach’s capability to improve the detection rate when partial occlusions are present without compromising the detection performance on non-occluded data. In contrast to many recent approaches, we propose a method which does not require manual labelling of body parts, defining any semantic spatial components, or using additional data coming from motion or stereo. Moreover, the method can be easily extended to other object classes. The experiments are performed on three large datasets: the INRIA person dataset, the Daimler Multicue dataset, and a new challenging dataset, called PobleSec, in which a considerable number of targets are partially occluded. The different approaches are evaluated at the classification and detection levels for both partially occluded and non-occluded data. The experimental results show that our detector outperforms state-of-the-art approaches in the presence of partial occlusions, while offering performance and reliability similar to those of the holistic approach on non-occluded data. The datasets used in our experiments have been made publicly available for benchmarking purposes.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2168-2267 ISBN Medium
Area Expedition Conference
Notes ADAS; 605.203; 600.057; 600.054; 601.042; 601.187; 600.076 Approved no
Call Number ADAS @ adas @ MVL2014 Serial 2213
Permanent link to this record
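The following is a minimal, illustrative sketch (not the authors' pipeline) of the Random Subspace Method idea summarized in the record above: each component classifier is trained on a random subset of the feature dimensions, so members whose subspaces avoid occluded regions can still respond. It assumes scikit-learn and synthetic data; the classifier choice and parameters are placeholders.

    # Random Subspace ensemble: each member sees only a random half of the
    # feature dimensions (no sample bootstrapping), which is the mechanism
    # the paper exploits for robustness to partial occlusions.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=500, n_features=200, random_state=0)

    rsm = BaggingClassifier(
        LinearSVC(dual=False),     # stand-in component classifier
        n_estimators=20,           # number of random-subspace members
        max_features=0.5,          # each member trains on 50% of the features
        bootstrap=False,           # no sample resampling: pure random subspace
        bootstrap_features=False,  # feature indices drawn without replacement
        random_state=0,
    ).fit(X, y)

    print(rsm.score(X, y))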
 

 
Author Oscar Lopes; Miguel Reyes; Sergio Escalera; Jordi Gonzalez
Title Spherical Blurred Shape Model for 3-D Object and Pose Recognition: Quantitative Analysis and HCI Applications in Smart Environments Type Journal Article
Year 2014 Publication IEEE Transactions on Systems, Man and Cybernetics (Part B) Abbreviated Journal TSMCB
Volume 44 Issue 12 Pages 2379-2390
Keywords
Abstract The use of depth maps is of increasing interest after the advent of cheap multisensor devices based on structured light, such as Kinect. In this context, there is a strong need for powerful 3-D shape descriptors able to generate rich object representations. Although several 3-D descriptors have already been proposed in the literature, the search for discriminative and computationally efficient descriptors is still an open issue. In this paper, we propose a novel point cloud descriptor called spherical blurred shape model (SBSM) that successfully encodes the structure density and local variabilities of an object based on shape voxel distances and a neighborhood propagation strategy. The proposed SBSM is proven to be rotation and scale invariant, robust to noise and occlusions, highly discriminative for multiple categories of complex objects like the human hand, and computationally efficient since the SBSM complexity is linear in the number of object voxels. Experimental evaluation on public depth multiclass object data, 3-D facial expression data, and a novel hand pose dataset shows significant performance improvements in relation to state-of-the-art approaches. Moreover, the effectiveness of the proposal is also proved for object spotting in 3-D scenes and for real-time automatic hand pose recognition in human-computer interaction scenarios.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2168-2267 ISBN Medium
Area Expedition Conference
Notes HuPBA; ISE; 600.078;MILAB Approved no
Call Number Admin @ si @ LRE2014 Serial 2442
Permanent link to this record
 

 
Author Alejandro Gonzalez Alzate; David Vazquez; Antonio Lopez; Jaume Amores
Title On-Board Object Detection: Multicue, Multimodal, and Multiview Random Forest of Local Experts Type Journal Article
Year 2017 Publication IEEE Transactions on Cybernetics Abbreviated Journal Cyber
Volume 47 Issue 11 Pages 3980 - 3990
Keywords Multicue; multimodal; multiview; object detection
Abstract Despite recent significant advances, object detection continues to be an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities, and a strong multiview (MV) classifier that accounts for different object views and poses. In this paper, we provide an extensive evaluation that gives insight into how each of these aspects (multicue, multimodality, and strong MV classifier) affects accuracy both individually and when integrated together. In the multimodality component, we explore the fusion of RGB and depth maps obtained by high-definition light detection and ranging, a type of modality that is starting to receive increasing attention. As our analysis reveals, although all the aforementioned aspects significantly help in improving the accuracy, the fusion of visible spectrum and depth information boosts the accuracy by a much larger margin. The resulting detector not only ranks among the top performers in the challenging KITTI benchmark, but it is built upon very simple blocks that are easy to implement and computationally efficient.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2168-2267 ISBN Medium
Area Expedition Conference
Notes ADAS; 600.085; 600.082; 600.076; 600.118 Approved no
Call Number Admin @ si @ Serial 2810
Permanent link to this record
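As an illustration of the multimodal component described in the record above (and only that component; the paper's multiview random forest of local experts is not reproduced), the sketch below concatenates a descriptor computed on the RGB image with one computed on the depth map and feeds the result to a random forest. Plain HOG is used as a stand-in feature, and the training windows and labels are assumed to come from a detection dataset such as KITTI.

    # Hedged sketch of RGB + depth fusion: HOG on each modality, concatenated,
    # then classified with a random forest. Assumes OpenCV and scikit-learn.
    import cv2
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    hog = cv2.HOGDescriptor()  # default 64x128 person window

    def window_features(rgb_win, depth_win):
        """Concatenate HOG of the RGB window and HOG of the depth window."""
        gray = cv2.cvtColor(rgb_win, cv2.COLOR_BGR2GRAY)
        depth8 = cv2.normalize(depth_win, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        f_rgb = hog.compute(cv2.resize(gray, (64, 128))).ravel()
        f_depth = hog.compute(cv2.resize(depth8, (64, 128))).ravel()
        return np.concatenate([f_rgb, f_depth])

    # X: one fused descriptor per candidate window, y: object/background labels.
    # clf = RandomForestClassifier(n_estimators=200).fit(X, y)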
 

 
Author C. Butakoff; Simone Balocco; F.M. Sukno; C. Hoogendoorn; C. Tobon-Gomez; G. Avegliano; A.F. Frangi
Title Left-ventricular Epi- and Endocardium Extraction from 3D Ultrasound Images Using an Automatically Constructed 3D ASM Type Journal Article
Year 2016 Publication Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization Abbreviated Journal CMBBE
Volume 4 Issue 5 Pages 265-280
Keywords ASM; cardiac segmentation; statistical model; shape model; 3D ultrasound
Abstract In this paper, we propose an automatic method for constructing an active shape model (ASM) to segment the complete cardiac left ventricle in 3D ultrasound (3DUS) images, which avoids costly manual landmarking. The automatic construction of the ASM has already been addressed in the literature; however, the direct application of these methods to 3DUS is hampered by a high level of noise and artefacts. Therefore, we propose to construct the ASM by fusing multidetector computed tomography data, to learn the shape, with artificially generated 3DUS images, in order to learn the neighbourhood of the boundaries. Our artificial images were generated by two approaches: a faster one that does not take into account the geometry of the transducer, and a more comprehensive one, implemented in the Field II toolbox. The segmentation accuracy of our ASM was evaluated on 20 patients with left-ventricular asynchrony, demonstrating the plausibility of the approach.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2168-1163 ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ BBS2016 Serial 2449
Permanent link to this record
 

 
Author David Fernandez; Pau Riba; Alicia Fornes; Josep Llados
Title On the Influence of Key Point Encoding for Handwritten Word Spotting Type Conference Article
Year 2014 Publication 14th International Conference on Frontiers in Handwriting Recognition Abbreviated Journal
Volume Issue Pages 476 - 481
Keywords Local descriptors; Interest points; Handwritten documents; Word spotting; Historical document analysis
Abstract In this paper we evaluate the influence of the selection of key points and the associated features on the performance of word spotting processes. In general, features can be extracted from a number of characteristic points such as corners, contours, skeletons, maxima, minima, crossings, etc. A number of descriptors using different interest point detectors exist in the literature. However, the intrinsic variability of handwriting strongly affects performance if the interest points are not stable enough. In this paper, we analyze the performance of different descriptors for local interest points. As a benchmarking dataset we have used the Barcelona Marriage Database, which contains handwritten records of marriages over five centuries.
Address Crete; Greece; September 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2167-6445 ISBN 978-1-4799-4335-7 Medium
Area Expedition Conference ICFHR
Notes DAG; 600.056; 600.061; 602.006; 600.077 Approved no
Call Number Admin @ si @ FRF2014 Serial 2460
Permanent link to this record
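To make the kind of configuration evaluated above concrete, here is a small sketch, assuming OpenCV (version 4.4 or later, where SIFT is in the main package): detect key points on two word images, describe them with SIFT, and match with a ratio test. The image paths are hypothetical, and the paper compares several detector/descriptor combinations rather than this single one.

    # SIFT key points + descriptors with Lowe's ratio test, as one possible
    # local-descriptor configuration for word spotting.
    import cv2

    query = cv2.imread("query_word.png", cv2.IMREAD_GRAYSCALE)          # hypothetical paths
    candidate = cv2.imread("candidate_word.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_q, des_q = sift.detectAndCompute(query, None)
    kp_c, des_c = sift.detectAndCompute(candidate, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_q, des_c, k=2)
            if m.distance < 0.75 * n.distance]

    # A crude spotting score: more surviving matches -> more likely the same word.
    print(len(good))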
 

 
Author Pau Riba; Jon Almazan; Alicia Fornes; David Fernandez; Ernest Valveny; Josep Llados
Title e-Crowds: a mobile platform for browsing and searching in historical demography-related manuscripts Type Conference Article
Year 2014 Publication 14th International Conference on Frontiers in Handwriting Recognition Abbreviated Journal
Volume Issue Pages 228 - 233
Keywords
Abstract This paper presents a prototype system running on portable devices for browsing and word searching through historical handwritten document collections. The platform adapts the paradigm of eBook reading, where the narrative is not necessarily sequential but centered on the user's actions. The novelty is to replace digitally born books with digitized historical manuscripts of marriage licenses, so document analysis tasks are required in the browser. With an active reading paradigm, the user can cast queries on people's names and thus implicitly follow genealogical links. In addition, the system allows combined searches: the user can refine a search by adding more words to it. As a second contribution, the retrieval functionality relies on a word spotting module with a unified approach as its core technology, which allows combined query searches and two input modalities: query-by-example and query-by-string.
Address Crete; Greece; September 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2167-6445 ISBN 978-1-4799-4335-7 Medium
Area Expedition Conference ICFHR
Notes DAG; 600.056; 600.045; 600.061; 602.006; 600.077 Approved no
Call Number Admin @ si @ RAF2014 Serial 2463
Permanent link to this record
 

 
Author Arnau Baro; Pau Riba; Alicia Fornes
Title Towards the recognition of compound music notes in handwritten music scores Type Conference Article
Year 2016 Publication 15th International Conference on Frontiers in Handwriting Recognition Abbreviated Journal
Volume Issue Pages
Keywords
Abstract The recognition of handwritten music scores still remains an open problem. The existing approaches can only deal with very simple handwritten scores mainly because of the variability in the handwriting style and the variability in the composition of groups of music notes (i.e. compound music notes). In this work we focus on this second problem and propose a method based on perceptual grouping for the recognition of compound music notes. Our method has been tested using several handwritten music scores of the CVC-MUSCIMA database and compared with a commercial Optical Music Recognition (OMR) software. Given that our method is learning-free, the obtained results are promising.
Address Shenzhen; China; October 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2167-6445 ISBN Medium
Area Expedition Conference ICFHR
Notes DAG; 600.097 Approved no
Call Number Admin @ si @ BRF2016 Serial 2903
Permanent link to this record
 

 
Author Katerine Diaz; Francesc J. Ferri; W. Diaz
Title Incremental Generalized Discriminative Common Vectors for Image Classification Type Journal Article
Year 2015 Publication IEEE Transactions on Neural Networks and Learning Systems Abbreviated Journal TNNLS
Volume 26 Issue 8 Pages 1761 - 1775
Keywords
Abstract Subspace-based methods have become popular due to their ability to appropriately represent complex data in such a way that both dimensionality is reduced and discriminativeness is enhanced. Several recent works have concentrated on the discriminative common vector (DCV) method and other closely related algorithms also based on the concept of null space. In this paper, we present a generalized incremental formulation of the DCV methods, which allows the update of a given model by considering the addition of new examples even from unseen classes. Having efficient incremental formulations of well-behaved batch algorithms allows us to conveniently adapt previously trained classifiers without the need of recomputing them from scratch. The proposed generalized incremental method has been empirically validated in different case studies from different application domains (faces, objects, and handwritten digits) considering several different scenarios in which new data are continuously added at different rates starting from an initial model.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2162-237X ISBN Medium
Area Expedition Conference
Notes ADAS; 600.076 Approved no
Call Number Admin @ si @ DFD2015 Serial 2547
Permanent link to this record
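For orientation, the sketch below implements the batch DCV idea that the record above generalizes (the paper's actual contribution, the incremental update, is not shown): samples are projected onto the null space of the within-class scatter, where every sample of a class collapses to a single common vector used for nearest-neighbour classification. This is a NumPy sketch under the usual small-sample assumption (feature dimension much larger than the number of training samples).

    # Batch Discriminative Common Vectors via the null space of the
    # within-class scatter. Illustrative only; not the incremental method.
    import numpy as np

    def dcv_train(X, y):
        """X: (n_samples, dim) with dim >> n_samples; y: class labels."""
        classes = np.unique(y)
        centered = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in classes])
        U, s, _ = np.linalg.svd(centered.T, full_matrices=False)
        R = U[:, s > 1e-10]                       # basis of range(S_w)
        to_null = lambda v: v - R @ (R.T @ v)     # projector onto null(S_w)
        common = {c: to_null(X[y == c][0]) for c in classes}  # one common vector per class
        return common, to_null

    def dcv_classify(x, common, to_null):
        p = to_null(x)
        return min(common, key=lambda c: np.linalg.norm(p - common[c]))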
 

 
Author Sergio Escalera; Eloi Puertas; Petia Radeva; Oriol Pujol
Title Multimodal laughter recognition in video conversations Type Conference Article
Year 2009 Publication 2nd IEEE Workshop on CVPR for Human Communicative Behavior Analysis Abbreviated Journal
Volume Issue Pages 110–115
Keywords
Abstract Laughter detection is an important area of interest in the Affective Computing and Human-Computer Interaction fields. In this paper, we propose a multi-modal methodology based on the fusion of audio and visual cues to deal with the laughter recognition problem in face-to-face conversations. The audio features are extracted from the spectrogram, and the video features are obtained by estimating the degree of mouth movement and using a smile and laughter classifier. Finally, the multi-modal cues are included in a sequential classifier. Results over videos from the public discussion blog of the New York Times show that both types of features perform better when considered together by the classifier. Moreover, the sequential methodology significantly outperforms an AdaBoost classifier.
Address Miami (USA)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2160-7508 ISBN 978-1-4244-3994-2 Medium
Area Expedition Conference CVPR
Notes MILAB;HuPBA Approved no
Call Number BCNPCL @ bcnpcl @ EPR2009c Serial 1188
Permanent link to this record
 

 
Author Sergio Escalera; R. M. Martinez; Jordi Vitria; Petia Radeva; Maria Teresa Anguera
Title Dominance Detection in Face-to-face Conversations Type Conference Article
Year 2009 Publication 2nd IEEE Workshop on CVPR for Human Communicative Behavior Analysis Abbreviated Journal
Volume Issue Pages 97–102
Keywords
Abstract Dominance refers to the level of influence a person has in a conversation. Dominance is an important research area in social psychology, but the problem of its automatic estimation is a very recent topic in the contexts of social and wearable computing. In this paper, we focus on dominance detection from visual cues. We estimate the correlation among observers by categorizing the dominant people in a set of face-to-face conversations. Different dominance indicators from gestural communication are defined, manually annotated, and compared to the observers' opinions. Moreover, the considered indicators are automatically extracted from video sequences and learnt using binary classifiers. Results from the three analyses show a high correlation and allow the categorization of dominant people in public discussion video sequences.
Address Miami, USA
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2160-7508 ISBN 978-1-4244-3994-2 Medium
Area Expedition Conference CVPR
Notes HuPBA; OR; MILAB;MV Approved no
Call Number BCNPCL @ bcnpcl @ EMV2009 Serial 1227
Permanent link to this record
 

 
Author Fadi Dornaika; Bogdan Raducanu
Title Single Snapshot 3D Head Pose Initialization for Tracking in Human Robot Interaction Scenario Type Conference Article
Year 2010 Publication 1st International Workshop on Computer Vision for Human-Robot Interaction Abbreviated Journal
Volume Issue Pages 32–39
Keywords 1st International Workshop on Computer Vision for Human-Robot Interaction, in conjunction with IEEE CVPR 2010
Abstract This paper presents an automatic 3D head pose initialization scheme for a real-time face tracker with application to human-robot interaction. It has two main contributions. First, we propose an automatic 3D head pose and person-specific face shape estimation, based on a 3D deformable model. The proposed approach serves to initialize our real-time 3D face tracker. What makes this contribution very attractive is that the initialization step can cope with faces under arbitrary pose, so it is not limited only to near-frontal views. Second, the previous framework is used to develop an application in which the orientation of an AIBO’s camera can be controlled through the imitation of the user’s head pose. In our scenario, this application is used to build panoramic images from overlapping snapshots. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
Address San Francisco; CA; USA; June 2010
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2160-7508 ISBN 978-1-4244-7029-7 Medium
Area Expedition Conference CVPRW
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ DoR2010a Serial 1309
Permanent link to this record
 

 
Author Michal Drozdzal; Laura Igual; Petia Radeva; Jordi Vitria; Carolina Malagelada; Fernando Azpiroz
Title Aligning Endoluminal Scene Sequences in Wireless Capsule Endoscopy Type Conference Article
Year 2010 Publication IEEE Computer Society Workshop on Mathematical Methods in Biomedical Image Analysis Abbreviated Journal
Volume Issue Pages 117–124
Keywords
Abstract Intestinal motility analysis is an important examination in the detection of various intestinal malfunctions. One of the big challenges of automatic motility analysis is how to compare sequences of images and extract dynamic patterns while taking into account the high deformability of the intestine wall as well as the capsule motion. From a clinical point of view, the ability to align endoluminal scene sequences will help to find regions of similar intestinal activity and in this way will provide valuable information on intestinal motility problems. This work, for the first time, addresses the problem of aligning endoluminal sequences taking into account the motion and structure of the intestine. To describe motility in the sequence, we propose different descriptors based on the Sift Flow algorithm, namely: (1) Histograms of Sift Flow Directions to describe the flow course, (2) Sift Descriptors to represent the intestine structure in the image and (3) Sift Flow Magnitude to quantify intestine deformation. We show that merging all three descriptors provides robust information for sequence description in terms of motility. Moreover, we develop a novel methodology to rank the intestinal sequences based on expert feedback about the relevance of the results. The experimental results show that the selected descriptors are useful for alignment and similarity description, and that the proposed method allows the analysis of WCE data.
Address San Francisco; CA; USA; June 2010
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2160-7508 ISBN 978-1-4244-7029-7 Medium
Area Expedition Conference MMBIA
Notes OR;MILAB;MV Approved no
Call Number BCNPCL @ bcnpcl @ DIR2010 Serial 1316
Permanent link to this record
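The first descriptor named in the abstract above, a histogram of flow directions, can be sketched as follows, with OpenCV's Farneback optical flow standing in for the SIFT Flow algorithm the paper actually builds on; the two inputs are assumed to be consecutive grayscale WCE frames.

    # Histogram of optical-flow directions, weighted by magnitude, between two
    # frames. Farneback flow is a stand-in for SIFT Flow here.
    import cv2
    import numpy as np

    def flow_direction_histogram(prev_gray, next_gray, bins=8):
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
        return hist / (hist.sum() + 1e-9)   # normalized direction profile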
 

 
Author Antonio Hernandez; Miguel Reyes; Sergio Escalera; Petia Radeva
Title Spatio-Temporal GrabCut human segmentation for face and pose recovery Type Conference Article
Year 2010 Publication IEEE International Workshop on Analysis and Modeling of Faces and Gestures Abbreviated Journal
Volume Issue Pages 33–40
Keywords
Abstract In this paper, we present a fully automatic Spatio-Temporal GrabCut human segmentation methodology. GrabCut is initialized by HOG-based subject detection, face detection, and a skin color model for seed definition. Spatial information is included by means of Mean Shift clustering, whereas temporal coherence is enforced through the history of Gaussian Mixture Models. Moreover, human segmentation is combined with Shape and Active Appearance Models to perform full face and pose recovery. Results over public data sets as well as our own human action data set show robust segmentation and recovery of both face and pose using the presented methodology.
Address San Francisco; CA; USA; June 2010
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2160-7508 ISBN 978-1-4244-7029-7 Medium
Area Expedition Conference AMFG
Notes MILAB;HUPBA Approved no
Call Number BCNPCL @ bcnpcl @ HRE2010 Serial 1362
Permanent link to this record
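A reduced sketch of the seeding step described in the record above: a HOG person detection supplies the rectangle that initializes GrabCut. The paper additionally uses face and skin-colour seeds, Mean Shift spatial clustering and temporal GMM coherence, none of which are reproduced here; the input file name is hypothetical. Assumes OpenCV.

    # GrabCut seeded by a HOG-based person detection.
    import cv2
    import numpy as np

    img = cv2.imread("frame.png")                  # hypothetical input frame
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _ = hog.detectMultiScale(img, winStride=(8, 8))

    if len(rects):
        x, y, w, h = rects[0]
        mask = np.zeros(img.shape[:2], np.uint8)
        bgd = np.zeros((1, 65), np.float64)
        fgd = np.zeros((1, 65), np.float64)
        cv2.grabCut(img, mask, (x, y, w, h), bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
        person = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
        cv2.imwrite("segmentation.png", person.astype(np.uint8))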
 

 
Author Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
Title Vehicle geolocalization based on video synchronization Type Conference Article
Year 2010 Publication 13th Annual International Conference on Intelligent Transportation Systems Abbreviated Journal
Volume Issue Pages 1511–1516
Keywords video alignment
Abstract This paper proposes a novel method for estimating the geospatial localization of a vehicle. It uses as input a georeferenced video sequence recorded by a forward-facing camera attached to the windscreen. The core of the proposed method is an on-line video synchronization which finds, for each frame recorded by the camera on a second drive through the same track, the corresponding frame in the georeferenced video sequence. Once the corresponding frame is found, its geospatial information is transferred to the current frame. The key advantages of this method are: 1) the increase of the update rate and the geospatial accuracy with regard to a standard low-cost GPS, and 2) the ability to localize a vehicle even when a GPS is not available or is not reliable enough, as in certain urban areas. Experimental results for an urban environment are presented, showing an average relative accuracy of 1.5 meters.
Address Madeira Island (Portugal)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2153-0009 ISBN 978-1-4244-7657-2 Medium
Area Expedition Conference ITSC
Notes ADAS Approved no
Call Number ADAS @ adas @ DPS2010 Serial 1423
Permanent link to this record
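Reduced to its simplest form, the idea in the record above can be sketched as follows: each live frame is matched to the most similar frame of the georeferenced reference video (here with a tiny grayscale thumbnail as a global descriptor) and inherits its GPS tag. The paper's on-line synchronization is considerably more elaborate, and all inputs below (ref_frames, ref_gps, live_frame) are hypothetical.

    # Toy geolocalization by frame matching against a georeferenced video.
    import cv2
    import numpy as np

    def descriptor(frame, size=(32, 24)):
        """Tiny grayscale thumbnail as a global frame descriptor."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.resize(gray, size).astype(np.float32).ravel()

    def geolocalize(live_frame, ref_frames, ref_gps):
        """ref_frames: reference frames; ref_gps: (lat, lon) tag per frame."""
        ref_desc = np.stack([descriptor(f) for f in ref_frames])
        d = descriptor(live_frame)
        idx = int(np.argmin(np.linalg.norm(ref_desc - d, axis=1)))
        return ref_gps[idx]      # transferred geospatial tag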
 

 
Author Ferran Diego; Jose Manuel Alvarez; Joan Serrat; Antonio Lopez
Title Vision-based road detection via on-line video registration Type Conference Article
Year 2010 Publication 13th Annual International Conference on Intelligent Transportation Systems Abbreviated Journal
Volume Issue Pages 1135–1140
Keywords video alignment; road detection
Abstract Road segmentation is an essential functionality for supporting advanced driver assistance systems (ADAS) such as road following and vehicle and pedestrian detection. Significant efforts have been made to solve this task using vision-based techniques. The major challenge is to deal with lighting variations and the presence of objects on the road surface. In this paper, we propose a new road detection method to infer the areas of the image depicting road surfaces without performing any image segmentation. The idea is to previously segment, manually or semi-automatically, the road region in a traffic-free reference video recorded on a first drive, and then to transfer these regions, in an on-line manner, to the frames of a second video sequence acquired later during a second drive through the same road. This is possible because we are able to automatically align the two videos in time and space, that is, to synchronize them and warp each frame of the first video to its corresponding frame in the second one. The geometric transform can thus transfer the road region to the present frame on-line. In order to cope with the different lighting conditions present in outdoor scenarios, our approach incorporates a shadowless, illuminant-invariant feature space to represent the images. Furthermore, we propose a dynamic background subtraction algorithm which removes the regions containing vehicles within the transferred road region of the observed frames.
Address Madeira Island (Portugal)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2153-0009 ISBN 978-1-4244-7657-2 Medium
Area Expedition Conference ITSC
Notes ADAS Approved no
Call Number ADAS @ adas @ DAS2010 Serial 1424
Permanent link to this record