Author G. Lisanti; I. Masi; Andrew Bagdanov; Alberto del Bimbo
  Title Person Re-identification by Iterative Re-weighted Sparse Ranking Type Journal Article
  Year 2015 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
  Volume 37 Issue 8 Pages 1629 - 1642  
  Keywords  
  Abstract In this paper we introduce a method for person re-identification based on discriminative, sparse basis expansions of targets in terms of a labeled gallery of known individuals. We propose an iterative extension to sparse discriminative classifiers capable of ranking many candidate targets. The approach uses soft and hard re-weighting to redistribute energy among the most relevant contributing elements and to ensure that the best candidates are ranked at each iteration. Our approach also leverages a novel visual descriptor which we show to be discriminative while remaining robust to pose and illumination variations. An extensive comparative evaluation demonstrates that our approach achieves state-of-the-art performance in single- and multi-shot person re-identification scenarios on the VIPeR, i-LIDS, ETHZ, and CAVIAR4REID datasets. The combination of our descriptor and iterative sparse basis expansion improves state-of-the-art rank-1 performance by six percentage points on VIPeR and by 20 on CAVIAR4REID compared to other methods with a single gallery image per person. With multiple gallery and probe images per person, our approach improves the state of the art at rank-1 by 17 percentage points on i-LIDS and by 72 on CAVIAR4REID. The approach is also quite efficient, capable of single-shot person re-identification over galleries containing hundreds of individuals at about 30 re-identifications per second.
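As a rough, hypothetical illustration of the iterative ranking idea (not the authors' exact soft/hard re-weighting scheme), a sparse coding problem can be re-solved after removing the top-ranked gallery identity at each iteration; the sketch below assumes scikit-learn's Lasso as the sparse coder:

```python
# Minimal sketch of iterative sparse ranking for re-identification (simplified,
# hard re-weighting only); gallery columns are per-identity descriptors.
import numpy as np
from sklearn.linear_model import Lasso

def iterative_sparse_ranking(gallery, probe, n_iterations=5, alpha=0.01):
    """gallery: (feat_dim, n_identities) matrix, one column per identity;
    probe: (feat_dim,) descriptor. Returns identity indices in ranked order."""
    gallery = gallery / (np.linalg.norm(gallery, axis=0, keepdims=True) + 1e-12)
    active = list(range(gallery.shape[1]))
    ranking = []
    for _ in range(min(n_iterations, gallery.shape[1])):
        coder = Lasso(alpha=alpha, positive=True, max_iter=5000)
        coder.fit(gallery[:, active], probe)
        best = int(np.argmax(coder.coef_))   # identity contributing most energy
        ranking.append(active.pop(best))     # "hard re-weighting": remove it
    return ranking + active                  # remaining identities follow
```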
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium  
  Area Expedition Conference  
  Notes LAMP; 601.240; 600.079 Approved no  
  Call Number Admin @ si @ LMB2015 Serial 2557  
 

 
Author Debora Gil; David Roche; Agnes Borras; Jesus Giraldo
  Title Terminating Evolutionary Algorithms at their Steady State Type Journal Article
  Year 2015 Publication Computational Optimization and Applications Abbreviated Journal COA
  Volume 61 Issue 2 Pages 489-515  
  Keywords Evolutionary algorithms; Termination condition; Steady state; Differential evolution  
  Abstract Assessing the reliability of termination conditions for evolutionary algorithms (EAs) is of prime importance. An erroneous or weak stop criterion can negatively affect both the computational effort and the final result. We introduce a statistical framework for assessing whether a termination condition is able to stop an EA at its steady state, so that its results cannot be improved any further. We use a regression model to determine the requirements ensuring that a measure derived from the EA's evolving population is related to the distance to the optimum in decision-variable space. Our framework is analyzed across 24 benchmark test functions and two standard termination criteria, based on the fitness value in objective function space and on the population distribution in decision-variable space, for the differential evolution (DE) paradigm. The results validate our framework as a powerful tool for determining whether a measure is capable of terminating an EA, and identify the decision-variable-space distribution as the best suited for accurately terminating DE in real-world applications.
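A minimal sketch of one of the criteria discussed above, terminating differential evolution when the population distribution in decision-variable space has collapsed; this is a simplified heuristic, not the paper's statistical regression framework, and `de_step` is an assumed, user-supplied function:

```python
# Minimal sketch (not the paper's statistical framework): terminate differential
# evolution once the population spread in decision-variable space has collapsed.
import numpy as np

def population_spread(population):
    """Mean per-dimension standard deviation of the population (decision space)."""
    return float(np.mean(np.std(population, axis=0)))

def should_terminate(population, tol=1e-6):
    """Steady-state heuristic: the population occupies a vanishingly small region."""
    return population_spread(population) < tol

# Hypothetical usage inside a DE loop (de_step is assumed, not shown):
# while gen < max_gen and not should_terminate(population):
#     population = de_step(population)
```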
  Address  
  Corporate Author Thesis  
  Publisher Springer US Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0926-6003 ISBN Medium  
  Area Expedition Conference  
  Notes IAM; 600.044; 605.203; 600.060; 600.075 Approved no  
  Call Number Admin @ si @ GRB2015 Serial 2560  
 

 
Author Alejandro Gonzalez Alzate; Gabriel Villalonga; Jiaolong Xu; David Vazquez; Jaume Amores; Antonio Lopez
  Title Multiview Random Forest of Local Experts Combining RGB and LIDAR data for Pedestrian Detection Type Conference Article
  Year 2015 Publication IEEE Intelligent Vehicles Symposium IV2015 Abbreviated Journal
  Volume Issue Pages 356-361  
  Keywords Pedestrian Detection  
  Abstract Despite recent significant advances, pedestrian detection remains an extremely challenging problem in real scenarios. To develop a detector that operates successfully under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities and a strong multi-view classifier that accounts for different pedestrian views and poses. In this paper we provide an extensive evaluation that gives insight into how each of these aspects (multi-cue, multimodality and strong multi-view classifier) affects performance both individually and when integrated together. In the multimodality component we explore the fusion of RGB and depth maps obtained by high-definition LIDAR, a modality that has only recently started to receive attention. As our analysis reveals, although all the aforementioned aspects significantly help to improve performance, the fusion of visible spectrum and depth information boosts accuracy by a much larger margin. The resulting detector not only ranks among the best performers on the challenging KITTI benchmark, but is built upon very simple blocks that are easy to implement and computationally efficient. These simple blocks can easily be replaced with more sophisticated ones recently proposed, such as convolutional neural networks for feature representation, to further improve accuracy.
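For illustration only, a very simple multimodal baseline along these lines fuses per-window RGB and depth descriptors and trains a single classifier; the descriptor extraction is assumed to happen elsewhere, and the paper's multiview random forest of local experts is considerably more elaborate:

```python
# Hedged sketch of feature-level RGB + depth fusion with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_rgb_depth_detector(rgb_feats, depth_feats, labels, n_trees=100):
    """rgb_feats, depth_feats: (n_windows, d) descriptors per candidate window;
    labels: 1 for pedestrian windows, 0 for background."""
    fused = np.concatenate([rgb_feats, depth_feats], axis=1)  # multimodal fusion
    return RandomForestClassifier(n_estimators=n_trees).fit(fused, labels)
```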
  Address Seoul; Korea; June 2015
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area ACDC Expedition Conference IV  
  Notes ADAS; 600.076; 600.057; 600.054 Approved no  
  Call Number ADAS @ adas @ GVX2015 Serial 2625  
 

 
Author Lluis Pere de las Heras; Oriol Ramos Terrades; Sergi Robles; Gemma Sanchez
  Title CVC-FP and SGT: a new database for structural floor plan analysis and its groundtruthing tool Type Journal Article
  Year 2015 Publication International Journal on Document Analysis and Recognition Abbreviated Journal IJDAR
  Volume 18 Issue 1 Pages 15-30  
  Keywords  
  Abstract Recent results on structured learning methods have shown the impact of structural information in a wide range of pattern recognition tasks. In the field of document image analysis, there is long experience with structural methods for the analysis and information extraction of multiple types of documents. Yet the lack of conveniently annotated, freely accessible databases has slowed progress in some areas, such as technical drawing understanding. In this paper, we present a floor plan database, named CVC-FP, that is annotated with the architectural objects and their structural relations. To construct this database, we have implemented a groundtruthing tool, the SGT tool, that allows this sort of information to be specified in a natural manner. The tool has been designed for general-purpose groundtruthing: it allows users to define their own object classes and properties, supports multiple labeling options, enables cooperative work, and provides user and version control. Finally, we have collected some of the recent work on floor plan interpretation and present a quantitative benchmark for this database. Both the CVC-FP database and the SGT tool are freely released to the research community to ease comparisons between methods and boost reproducible research.
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1433-2833 ISBN Medium  
  Area Expedition Conference  
  Notes DAG; ADAS; 600.061; 600.076; 600.077 Approved no  
  Call Number Admin @ si @ HRR2015 Serial 2567  
 

 
Author Mikhail Mozerov; Joost Van de Weijer
  Title Accurate stereo matching by two step global optimization Type Journal Article
  Year 2015 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
  Volume 24 Issue 3 Pages 1153-1163  
  Keywords  
  Abstract In stereo matching, cost filtering methods and energy minimization algorithms are considered two different techniques. Due to their global extent, energy minimization methods obtain good stereo matching results. However, they tend to fail in occluded regions, where cost filtering approaches obtain better results. In this paper we combine both approaches with the aim of improving overall stereo matching results. We show that a global optimization with a fully connected model can be solved by cost filtering methods. Based on this observation, we propose to perform stereo matching as a two-step energy minimization algorithm. We consider two MRF models: a fully connected model defined on the complete set of pixels in an image, and a conventional locally connected model. We solve the energy minimization problem for the fully connected model, after which the marginal function of the solution is used as the unary potential in the locally connected MRF model. Experiments on the Middlebury stereo datasets show that the proposed method achieves state-of-the-art results.
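A toy sketch of the two-step idea (not the paper's MRF formulation): filter a matching-cost volume, then reuse the filtered costs as unary terms for a second, locally smoothed pass before winner-take-all disparity selection. It uses NumPy/SciPy, and the window sizes are arbitrary:

```python
# Simplified two-step stereo sketch: cost filtering followed by local refinement.
import numpy as np
from scipy.ndimage import uniform_filter

def cost_volume(left, right, max_disp):
    """Absolute-difference matching cost for each candidate disparity."""
    h, w = left.shape
    vol = np.full((max_disp, h, w), 255.0)
    for d in range(max_disp):
        vol[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return vol

def two_step_disparity(left, right, max_disp=32):
    vol = cost_volume(left.astype(float), right.astype(float), max_disp)
    unary = uniform_filter(vol, size=(1, 9, 9))       # step 1: cost filtering
    smoothed = uniform_filter(unary, size=(1, 3, 3))  # step 2: local refinement
    return np.argmin(smoothed, axis=0)                # winner-take-all disparity
```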
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1057-7149 ISBN Medium  
  Area Expedition Conference  
  Notes ISE; LAMP; 600.079; 600.078 Approved no  
  Call Number Admin @ si @ MoW2015a Serial 2568  
 

 
Author G. Thorvaldsen; Joana Maria Pujadas-Mora; T. Andersen; L. Eikvil; Josep Llados; Alicia Fornes; Anna Cabre
  Title A Tale of two Transcriptions Type Journal
  Year 2015 Publication Historical Life Course Studies Abbreviated Journal
  Volume 2 Issue Pages 1-19  
  Keywords Nominative Sources; Census; Vital Records; Computer Vision; Optical Character Recognition; Word Spotting  
  Abstract This article explains how two projects implement semi-automated transcription routines: for census sheets in Norway and marriage protocols from Barcelona. The Spanish system was created to transcribe the marriage license books from 1451 to 1905 for the Barcelona area, one of the world's longest series of preserved vital records. Thus, in the project “Five Centuries of Marriages” (5CofM) at the Autonomous University of Barcelona's Center for Demographic Studies, the Barcelona Historical Marriage Database has been built. More than 600,000 records were transcribed by 150 transcribers working online. The Norwegian material is cross-sectional, as it is the 1891 census, recorded on one sheet per person. This format, and the underlining of keywords for several variables, made it more feasible to semi-automate data entry than when many persons are listed on the same page. While Optical Character Recognition (OCR) for printed text is scientifically mature, computer vision research is now focused on more difficult problems such as handwriting recognition. In the marriage project, document analysis methods have been proposed to automatically recognize the marriage licenses. Fully automatic recognition is still a challenge, but some promising results have been obtained. In Spain, Norway and elsewhere the source material is available as scanned pictures on the Internet, opening up the possibility for further international cooperation on automating the transcription of historic source materials. As in projects to digitize printed materials, the optimal solution is likely to be a combination of manual transcription and machine-assisted recognition, also for handwritten sources.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2352-6343 ISBN Medium  
  Area Expedition Conference  
  Notes DAG; 600.077; 602.006 Approved no  
  Call Number Admin @ si @ TPA2015 Serial 2582  
 

 
Author Antonio Hernandez
  Title From pixels to gestures: learning visual representations for human analysis in color and depth data sequences Type Book Whole
  Year 2015 Publication PhD Thesis, Universitat de Barcelona-CVC Abbreviated Journal
  Volume Issue Pages  
  Keywords  
  Abstract The visual analysis of humans from images is an important topic of interest due to its relevance to many computer vision applications like pedestrian detection, monitoring and surveillance, human-computer interaction, e-health or content-based image retrieval, among others.

In this dissertation we are interested in learning different visual representations of the human body that are helpful for the visual analysis of humans in images and video sequences. To that end, we analyze both RGB and depth image modalities and address the problem from three different research lines, at different levels of abstraction; from pixels to gestures: human segmentation, human pose estimation and gesture recognition.

First, we show how binary segmentation (object vs. background) of the human body in image sequences is helpful to remove all the background clutter present in the scene. The presented method, based on Graph cuts optimization, enforces spatio-temporal consistency of the produced segmentation masks among consecutive frames. Secondly, we present a framework for multi-label segmentation for obtaining much more detailed segmentation masks: instead of just obtaining a binary representation separating the human body from the background, finer segmentation masks can be obtained separating the different body parts.

At a higher level of abstraction, we aim for a simpler yet descriptive representation of the human body. Human pose estimation methods usually rely on skeletal models of the human body, formed by segments (or rectangles) that represent the body limbs, appropriately connected following the kinematic constraints of the human body. In practice, such skeletal models must fulfill some constraints in order to allow for efficient inference, while actually limiting the expressiveness of the model. In order to cope with this, we introduce a top-down approach for predicting the position of the body parts in the model, using a mid-level part representation based on Poselets.

Finally, we propose a framework for gesture recognition based on the bag of visual words framework. We leverage the benefits of RGB and depth image modalities by combining modality-specific visual vocabularies in a late fusion fashion. A new rotation-variant depth descriptor is presented, yielding better results than other state-of-the-art descriptors. Moreover, spatio-temporal pyramids are used to encode rough spatial and temporal structure. In addition, we present a probabilistic reformulation of Dynamic Time Warping for gesture segmentation in video sequences. A Gaussian-based probabilistic model of a gesture is learnt, implicitly encoding possible deformations in both spatial and time domains.
 
  Address January 2015  
  Corporate Author Thesis Ph.D. thesis  
  Publisher Ediciones Graficas Rey Place of Publication Editor Sergio Escalera;Stan Sclaroff  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-84-940902-0-2 Medium  
  Area Expedition Conference  
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ Her2015 Serial 2576  
 

 
Author Hongxing Gao
  Title Focused Structural Document Image Retrieval in Digital Mailroom Applications Type Book Whole
  Year 2015 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
  Volume Issue Pages  
  Keywords  
  Abstract In this work, we develop a generic framework able to handle the document retrieval problem in various scenarios, such as searching for full-page matches or retrieving the counterparts of specific document areas, focusing on their structural similarity or letting their visual resemblance play a dominant role. Based on spatial indexing, we propose to search the collection for matches of local key-region pairs carrying both structural and visual information, and we present a scheme that allows the relative contribution of structural and visual similarity to be adjusted.
Based on the fact that the structure of documents is tightly linked to the distances among their elements, we first introduce an efficient detector named Distance Transform based Maximally Stable Extremal Regions (DTMSER). We illustrate that this detector efficiently extracts the structure of a document image as a dendrogram (hierarchical tree) of multi-scale key-regions that roughly correspond to letters, words and paragraphs. We demonstrate that, without benefiting from structural information, the key-regions extracted by the DTMSER algorithm achieve better results than state-of-the-art methods while far fewer key-regions are employed.
We subsequently propose a pair-wise Bag of Words (BoW) framework to efficiently embed the explicit structure extracted by the DTMSER algorithm. We represent each document as a list of key-region pairs that correspond to the edges in the dendrogram, where the inclusion relationship is encoded. By employing these structural key-region pairs as the pooling elements for generating the histogram of features, the proposed method encodes the explicit inclusion relations into a BoW representation. The experimental results illustrate that the pair-wise BoW, powered by the embedded structural information, achieves remarkable improvements over the conventional BoW and spatial pyramidal BoW methods.
To handle various retrieval scenarios in one framework, we propose to directly query a series of key-region pairs, carrying both structural and visual information, from the collection. We introduce spatial indexing techniques to the document retrieval community to speed up the computation of structural relationships between key-region pairs. We first test the proposed framework in a full-page retrieval scenario where structurally similar matches are expected. In this case, the pair-wise querying method achieves notable improvement over the BoW and spatial pyramidal BoW frameworks. Furthermore, we illustrate that the proposed method can also handle focused retrieval situations, where the queries are defined as specific partial areas of interest of the images. We examine our method on two types of focused queries: structure-focused and exact queries. The experimental results show that the proposed generic framework obtains nearly perfect precision on both types of focused queries, and it is the first framework able to tackle structure-focused queries, setting a new state of the art in the field.
In addition, we introduce a line verification method to check the spatial consistency among the matched key-region pairs, and we propose a computationally efficient version of line verification through a two-step implementation. We first compute tentative localizations of the query and subsequently employ them to divide the matched key-region pairs into several groups; line verification is then performed within each group while more precise bounding boxes are computed. We demonstrate that, compared with the standard approach (based on RANSAC), the proposed line verification generally achieves much higher recall with a slight loss in precision on specific queries.
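An illustrative sketch of the pair-wise BoW pooling described above, under the assumption that the dendrogram is available as a list of parent-child edges and that each region has already been assigned a visual-word id; the interfaces are hypothetical:

```python
# Sketch: pool visual-word pairs over dendrogram edges to build a pair-wise BoW.
import numpy as np

def pairwise_bow(edges, region_words, vocab_size):
    """edges: list of (parent_idx, child_idx) in the region dendrogram;
    region_words: visual-word id per region; returns a normalized pair histogram."""
    hist = np.zeros((vocab_size, vocab_size))
    for parent, child in edges:
        hist[region_words[parent], region_words[child]] += 1.0  # inclusion relation
    return hist.ravel() / max(hist.sum(), 1.0)
```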
 
  Address January 2015  
  Corporate Author Thesis Ph.D. thesis  
  Publisher Ediciones Graficas Rey Place of Publication Editor Josep Llados;Dimosthenis Karatzas;Marçal Rusiñol  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-84-943427-0-7 Medium  
  Area Expedition Conference  
  Notes DAG; 600.077 Approved no  
  Call Number Admin @ si @ Gao2015 Serial 2577  
 

 
Author Alejandro Gonzalez Alzate; Gabriel Villalonga; German Ros; David Vazquez; Antonio Lopez
  Title 3D-Guided Multiscale Sliding Window for Pedestrian Detection Type Conference Article
  Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
  Volume 9117 Issue Pages 560-568  
  Keywords Pedestrian Detection  
  Abstract The most relevant modules of a pedestrian detector are candidate generation and candidate classification. The former aims at presenting image windows to the latter so that they can be classified as containing a pedestrian or not. Much attention has been paid to the classification module, while candidate generation has mainly relied on a (multiscale) sliding window pyramid. However, candidate generation is critical for achieving real-time performance. In this paper we assume a context of autonomous driving based on stereo vision. Accordingly, we evaluate the effect of taking into account the 3D information (derived from the stereo) in order to prune the hundreds of thousands of windows per image generated by the classical pyramidal sliding window. For our study we use a multimodal (RGB, disparity) and multi-descriptor (HOG, LBP, HOG+LBP) holistic ensemble based on a linear SVM. Evaluation on data from the challenging KITTI benchmark suite shows the effectiveness of using 3D information to dramatically reduce the number of candidate windows while even improving the overall pedestrian detection accuracy.
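A hypothetical sketch of the kind of 3D-based pruning described above: using a pinhole back-projection, a candidate window is kept only if its implied real-world height is plausible for a pedestrian. The focal length and per-window depth are assumed inputs:

```python
# Sketch: prune sliding-window candidates using stereo depth (assumed inputs).
def plausible_pedestrian(window_h_px, depth_m, focal_px, h_min=1.5, h_max=2.0):
    """window_h_px: candidate window height in pixels; depth_m: median window depth;
    focal_px: camera focal length in pixels. Keeps windows of human-like height."""
    real_height_m = window_h_px * depth_m / focal_px   # pinhole back-projection
    return h_min <= real_height_m <= h_max

# Hypothetical usage over precomputed candidate windows:
# candidates = [w for w in windows if plausible_pedestrian(w.h, w.depth, fx)]
```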
  Address Santiago de Compostela; Spain; June 2015
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area ACDC Expedition Conference IbPRIA  
  Notes ADAS; 600.076; 600.057; 600.054 Approved no  
  Call Number ADAS @ adas @ GVR2015 Serial 2585  
 

 
Author Joost Van de Weijer; Fahad Shahbaz Khan
  Title An Overview of Color Name Applications in Computer Vision Type Conference Article
  Year 2015 Publication Computational Color Imaging Workshop Abbreviated Journal
  Volume Issue Pages  
  Keywords color features; color names; object recognition  
  Abstract In this article we provide an overview of color name applications in computer vision. Color names are linguistic labels which humans use to communicate color. Computational color naming learns a mapping from pixel values to color names. In recent years color names have been applied to a wide variety of computer vision applications, including image classification, object recognition, texture classification, visual tracking and action recognition. Here we provide an overview of these results, which show that in general color names outperform photometric invariants as a color representation.
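As a toy illustration of computational color naming (hand-picked RGB prototypes standing in for a learned mapping), each pixel can be assigned the nearest of the 11 basic English color names:

```python
# Toy color naming: nearest prototype in RGB space (prototypes are illustrative).
import numpy as np

COLOR_NAMES = {
    "black": (0, 0, 0), "white": (255, 255, 255), "red": (255, 0, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "yellow": (255, 255, 0),
    "orange": (255, 165, 0), "purple": (128, 0, 128), "pink": (255, 192, 203),
    "brown": (139, 69, 19), "grey": (128, 128, 128),
}

def color_name(rgb):
    """Return the color name whose prototype is closest to the given RGB triple."""
    names, centers = zip(*COLOR_NAMES.items())
    dists = np.linalg.norm(np.asarray(centers, float) - np.asarray(rgb, float), axis=1)
    return names[int(np.argmin(dists))]

# color_name((250, 10, 10)) -> "red"
```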
  Address Saint Etienne; France; March 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CCIW  
  Notes LAMP; 600.079; 600.068 Approved no  
  Call Number Admin @ si @ WeK2015 Serial 2586  
 

 
Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Michael Felsberg; J. Laaksonen
  Title Compact color texture description for texture classification Type Journal Article
  Year 2015 Publication Pattern Recognition Letters Abbreviated Journal PRL
  Volume 51 Issue Pages 16-22  
  Keywords  
  Abstract Describing textures is a challenging problem in computer vision and pattern recognition. The classification problem involves assigning a category label to the texture class it belongs to. Several factors, such as variations in scale, illumination and viewpoint, make the problem of texture description extremely challenging. A variety of histogram-based texture representations exist in the literature. However, combining multiple texture descriptors and assessing their complementarity is still an open research problem. In this paper, we first show that combining multiple local texture descriptors significantly improves recognition performance compared to using a single best method alone. This gain in performance is achieved at the cost of a high-dimensional final image representation. To counter this problem, we propose to use an information-theoretic compression technique to obtain a compact texture description without any significant loss in accuracy. In addition, we perform a comprehensive evaluation of pure color descriptors, popular in object recognition, for the problem of texture classification. Experiments are performed on four challenging texture datasets, namely KTH-TIPS-2a, KTH-TIPS-2b, FMD and Texture-10. The experiments clearly demonstrate that our proposed compact multi-texture approach outperforms the single best texture method alone. In all cases, discriminative color names outperform other color features for texture classification. Finally, we show that combining discriminative color names with the compact texture representation outperforms state-of-the-art methods by 7.8%, 4.3% and 5.0% on the KTH-TIPS-2a, KTH-TIPS-2b and Texture-10 datasets respectively.
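A hedged sketch of the overall recipe: concatenate several per-image texture histograms and project them to a compact representation; PCA is used here only as a stand-in for the information-theoretic compression employed in the paper:

```python
# Sketch: combine multiple texture histograms, then compress the representation.
import numpy as np
from sklearn.decomposition import PCA

def compact_multi_texture(histogram_sets, n_components=64):
    """histogram_sets: list of (n_images, d_i) arrays, one per texture descriptor.
    Returns a compact (n_images, k) representation after concatenation + PCA."""
    combined = np.concatenate(histogram_sets, axis=1)
    k = min(n_components, combined.shape[0], combined.shape[1])
    return PCA(n_components=k).fit_transform(combined)
```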
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes LAMP; 600.068; 600.079;ADAS Approved no  
  Call Number Admin @ si @ KRW2015a Serial 2587  
 

 
Author Meysam Madadi; Sergio Escalera; Jordi Gonzalez; Xavier Roca; Felipe Lumbreras
  Title Multi-part body segmentation based on depth maps for soft biometry analysis Type Journal Article
  Year 2015 Publication Pattern Recognition Letters Abbreviated Journal PRL
  Volume 56 Issue Pages 14-21  
  Keywords 3D shape context; 3D point cloud alignment; Depth maps; Human body segmentation; Soft biometry analysis  
  Abstract This paper presents a novel method for extracting biometric measures using depth sensors. Given multi-part labeled training data, a new subject is aligned to the best model of the dataset, and soft biometrics such as the lengths or circumferences of limbs and body are computed. The process is performed by training relevant pose clusters, defining a representative model, and fitting a 3D shape context descriptor within an iterative matching procedure. We obtain robust measures by applying orthogonal plates to the body hull. We test our approach on a novel full-body RGB-Depth dataset, showing accurate estimation of soft biometrics and better segmentation accuracy in comparison with a random forest approach, without requiring large training data.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA; ISE; ADAS; 600.076;600.049; 600.063; 600.054; 302.018;MILAB Approved no  
  Call Number Admin @ si @ MEG2015 Serial 2588  
 

 
Author Ivan Huerta; Marco Pedersoli; Jordi Gonzalez; Alberto Sanfeliu
  Title Combining where and what in change detection for unsupervised foreground learning in surveillance Type Journal Article
  Year 2015 Publication Pattern Recognition Abbreviated Journal PR
  Volume 48 Issue 3 Pages 709-719  
  Keywords Object detection; Unsupervised learning; Motion segmentation; Latent variables; Support vector machine; Multiple appearance models; Video surveillance  
  Abstract Change detection is the most important task for video surveillance analytics such as foreground and anomaly detection. Current foreground detectors learn models from annotated images since the goal is to generate a robust foreground model able to detect changes in all possible scenarios. Unfortunately, manual labelling is very expensive. Most advanced supervised learning techniques based on generic object detection datasets currently exhibit very poor performance when applied to surveillance datasets because of the unconstrained nature of such environments in terms of types and appearances of objects. In this paper, we take advantage of change detection for training multiple foreground detectors in an unsupervised manner. We use statistical learning techniques which exploit the use of latent parameters for selecting the best foreground model parameters for a given scenario. In essence, the main novelty of our proposed approach is to combine the where (motion segmentation) and what (learning procedure) in change detection in an unsupervised way for improving the specificity and generalization power of foreground detectors at the same time. We propose a framework based on latent support vector machines that, given a noisy initialization based on motion cues, learns the correct position, aspect ratio, and appearance of all moving objects in a particular scene. Specificity is achieved by learning the particular change detections of a given scenario, and generalization is guaranteed since our method can be applied to any possible scene and foreground object, as demonstrated in the experimental results outperforming the state-of-the-art.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE; 600.063; 600.078 Approved no  
  Call Number Admin @ si @ HPG2015 Serial 2589  
 

 
Author Wenjuan Gong; Y. Huang; Jordi Gonzalez; Liang Wang
  Title An Effective Solution to Double Counting Problem in Human Pose Estimation Type Miscellaneous
  Year 2015 Publication arXiv Abbreviated Journal
  Volume Issue Pages  
  Keywords Pose estimation; double counting problem; mixture of parts model
  Abstract The mixture of parts model has been successfully applied to the 2D human pose estimation problem, either as an explicitly trained body part model or as latent variables for pedestrian detection. Even in the era of massive application of deep learning techniques, the mixture of parts model is still effective for certain problems, especially when the number of training samples is limited. In this paper, we consider using the mixture of parts model for pose estimation, wherein a tree structure is utilized for representing relations between connected body parts. This strategy facilitates training and inference of the model but suffers from the double counting problem, where one detected body part is counted twice due to the lack of constraints among unconnected body parts. To solve this problem, we propose a generalized solution in which various part attributes are captured by multiple features so as to avoid the double counting problem. Qualitative and quantitative experimental results on a publicly available dataset demonstrate the effectiveness of our proposed method.
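As an illustration of one common remedy for double counting (not the paper's formulation), candidate part detections can be greedily selected while rejecting candidates that overlap an already accepted, unconnected part too much:

```python
# Sketch: greedy overlap-based suppression so one limb is not counted twice.
def iou(a, b):
    """a, b: boxes as (x1, y1, x2, y2); returns intersection-over-union."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter + 1e-9)

def suppress_double_counting(parts, max_overlap=0.6):
    """parts: list of (score, box) candidate part detections."""
    kept = []
    for score, box in sorted(parts, key=lambda p: -p[0]):
        if all(iou(box, kept_box) < max_overlap for _, kept_box in kept):
            kept.append((score, box))
    return kept
```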

 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE; 600.078 Approved no  
  Call Number Admin @ si @ GHG2015 Serial 2590  
 

 
Author Sergio Escalera; Jordi Gonzalez; Xavier Baro; Pablo Pardo; Junior Fabian; Marc Oliu; Hugo Jair Escalante; Ivan Huerta; Isabelle Guyon
  Title ChaLearn Looking at People 2015 new competitions: Age Estimation and Cultural Event Recognition Type Conference Article
  Year 2015 Publication IEEE International Joint Conference on Neural Networks IJCNN2015 Abbreviated Journal
  Volume Issue Pages 1-8  
  Keywords  
  Abstract Following the previous series of Looking at People (LAP) challenges [1], [2], [3], in 2015 ChaLearn runs two new competitions within the field of Looking at People: age estimation and cultural event recognition in still images. We propose the first crowdsourcing application to collect and label data about the apparent age of people instead of their real age. In terms of cultural event recognition, tens of categories have to be recognized; this involves scene understanding and human analysis. This paper summarizes both challenges and their data, providing some initial baselines. The results of the first round of the competition were presented at the ChaLearn LAP 2015 IJCNN special session on computer vision and robotics, http://www.dtic.ua.es/∼jgarcia/IJCNN2015. Details of the ChaLearn LAP competitions can be found at http://gesture.chalearn.org/.
 
  Address Killarney; Ireland; July 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IJCNN  
  Notes HuPBA; ISE; 600.063; 600.078;MV Approved no  
  Call Number Admin @ si @ EGB2015 Serial 2591  