Author Alejandro Gonzalez Alzate; Gabriel Villalonga; German Ros; David Vazquez; Antonio Lopez
Title 3D-Guided Multiscale Sliding Window for Pedestrian Detection Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
Volume 9117 Issue Pages 560-568
Keywords Pedestrian Detection
Abstract The most relevant modules of a pedestrian detector are candidate generation and candidate classification. The former presents image windows to the latter, which classifies them as containing a pedestrian or not. Much attention has been paid to the classification module, while candidate generation has mainly relied on the (multiscale) sliding window pyramid. However, candidate generation is critical for achieving real-time performance. In this paper we assume a context of autonomous driving based on stereo vision. Accordingly, we evaluate the effect of taking into account the 3D information (derived from the stereo) in order to prune the hundreds of thousands of windows per image generated by the classical pyramidal sliding window. For our study we use a multimodal (RGB, disparity) and multi-descriptor (HOG, LBP, HOG+LBP) holistic ensemble based on linear SVM. Evaluation on data from the challenging KITTI benchmark suite shows the effectiveness of using 3D information to dramatically reduce the number of candidate windows, even improving the overall pedestrian detection accuracy.
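The 3D pruning idea described in the abstract can be sketched with the pinhole camera model: a pedestrian of known real-world height projects to a predictable pixel height at a given depth, so candidate windows with implausible sizes can be discarded before classification. A minimal, hypothetical sketch; the focal length, pedestrian height, and tolerance below are assumed values, not the paper's:

```python
import numpy as np

def expected_pixel_height(depth_m, focal_px=700.0, real_height_m=1.7):
    """Pinhole-camera projection of a pedestrian's height at a given depth."""
    return focal_px * real_height_m / depth_m

def prune_windows(windows, depth_map, tolerance=0.3):
    """Keep windows (x, y, w, h) whose height is plausible for a pedestrian
    standing at the depth observed at the window's bottom-center pixel."""
    kept = []
    for (x, y, w, h) in windows:
        foot_y = min(y + h, depth_map.shape[0] - 1)
        cx = min(x + w // 2, depth_map.shape[1] - 1)
        z = depth_map[foot_y, cx]
        if z <= 0:  # invalid depth reading: cannot prune safely, keep window
            kept.append((x, y, w, h))
            continue
        h_exp = expected_pixel_height(z)
        if abs(h - h_exp) <= tolerance * h_exp:
            kept.append((x, y, w, h))
    return kept
```

With a flat depth map at 10 m, a 120-px-tall window survives (close to the ~119 px expected) while a 40-px one is rejected, illustrating how depth cuts the candidate set.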
Address Santiago de Compostela; Spain; June 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area ACDC Expedition Conference IbPRIA
Notes ADAS; 600.076; 600.057; 600.054 Approved no
Call Number ADAS @ adas @ GVR2015 Serial 2585
Permanent link to this record
 

 
Author Joost Van de Weijer; Fahad Shahbaz Khan
Title An Overview of Color Name Applications in Computer Vision Type Conference Article
Year 2015 Publication Computational Color Imaging Workshop Abbreviated Journal
Volume Issue Pages
Keywords color features; color names; object recognition
Abstract In this article we provide an overview of color name applications in computer vision. Color names are linguistic labels which humans use to communicate color. Computational color naming learns a mapping from pixel values to color names. In recent years color names have been applied to a wide variety of computer vision applications, including image classification, object recognition, texture classification, visual tracking and action recognition. Here we provide an overview of these results, which show that in general color names outperform photometric invariants as a color representation.
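As a toy illustration of computational color naming, a pixel can be mapped to the nearest of a handful of color-name prototypes; learned color-name models instead estimate this mapping from data over the 11 basic English color terms. The prototypes below are crude assumptions for illustration only:

```python
# Hypothetical prototype colors; real color-name models are learned from data.
PROTOTYPES = {
    "red":   (255, 0, 0),
    "green": (0, 255, 0),
    "blue":  (0, 0, 255),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def color_name(rgb):
    """Return the nearest-prototype color name for an (r, g, b) pixel."""
    return min(PROTOTYPES,
               key=lambda n: sum((a - b) ** 2 for a, b in zip(PROTOTYPES[n], rgb)))
```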
Address Saint Etienne; France; March 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CCIW
Notes LAMP; 600.079; 600.068 Approved no
Call Number Admin @ si @ WeK2015 Serial 2586
Permanent link to this record
 

 
Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Michael Felsberg; J.Laaksonen
Title Compact color texture description for texture classification Type Journal Article
Year 2015 Publication Pattern Recognition Letters Abbreviated Journal PRL
Volume 51 Issue Pages 16-22
Keywords
Abstract Describing textures is a challenging problem in computer vision and pattern recognition. The classification problem involves assigning a category label to the texture class it belongs to. Several factors, such as variations in scale, illumination and viewpoint, make the problem of texture description extremely challenging. A variety of histogram-based texture representations exist in the literature. However, combining multiple texture descriptors and assessing their complementarity is still an open research problem. In this paper, we first show that combining multiple local texture descriptors significantly improves the recognition performance compared to using a single best method alone. This gain in performance is achieved at the cost of a high-dimensional final image representation. To counter this problem, we propose to use an information-theoretic compression technique to obtain a compact texture description without any significant loss in accuracy. In addition, we perform a comprehensive evaluation of pure color descriptors, popular in object recognition, for the problem of texture classification. Experiments are performed on four challenging texture datasets, namely KTH-TIPS-2a, KTH-TIPS-2b, FMD and Texture-10. The experiments clearly demonstrate that our proposed compact multi-texture approach outperforms the single best texture method alone. In all cases, discriminative color names outperform other color features for texture classification. Finally, we show that combining discriminative color names with compact texture representation outperforms state-of-the-art methods by 7.8%, 4.3% and 5.0% on the KTH-TIPS-2a, KTH-TIPS-2b and Texture-10 datasets, respectively.
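The combine-then-compress pipeline described in this abstract can be sketched as follows. Note that the paper uses an information-theoretic compression technique; the PCA projection below is only a simple stand-in for that step, and the descriptor names and dimensions are illustrative:

```python
import numpy as np

def combine_descriptors(histogram_sets):
    """Concatenate per-image histograms from several descriptors.
    histogram_sets: list of (n_images, dim_i) arrays."""
    return np.concatenate(histogram_sets, axis=1)

def compress(features, n_components):
    """Project the high-dimensional representation onto its top principal
    components (a stand-in for the paper's information-theoretic compression)."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(0)
hog = rng.random((6, 36))   # toy "HOG-like" histograms
lbp = rng.random((6, 59))   # toy "LBP-like" histograms
cn = rng.random((6, 11))    # toy "color name" histograms

combined = combine_descriptors([hog, lbp, cn])  # shape (6, 106)
compact = compress(combined, n_components=5)    # shape (6, 5)
```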
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.068; 600.079;ADAS Approved no
Call Number Admin @ si @ KRW2015a Serial 2587
Permanent link to this record
 

 
Author Meysam Madadi; Sergio Escalera; Jordi Gonzalez; Xavier Roca; Felipe Lumbreras
Title Multi-part body segmentation based on depth maps for soft biometry analysis Type Journal Article
Year 2015 Publication Pattern Recognition Letters Abbreviated Journal PRL
Volume 56 Issue Pages 14-21
Keywords 3D shape context; 3D point cloud alignment; Depth maps; Human body segmentation; Soft biometry analysis
Abstract This paper presents a novel method for extracting biometric measures using depth sensors. Given multi-part labeled training data, a new subject is aligned to the best model of the dataset, and soft biometrics such as lengths or circumference sizes of limbs and body are computed. The process is performed by training relevant pose clusters, defining a representative model, and fitting a 3D shape context descriptor within an iterative matching procedure. We obtain robust measures by applying orthogonal plates to the body hull. We test our approach on a novel full-body RGB-Depth dataset, showing accurate estimation of soft biometrics and better segmentation accuracy in comparison with a random forest approach, without requiring large training data.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA; ISE; ADAS; 600.076;600.049; 600.063; 600.054; 302.018;MILAB Approved no
Call Number Admin @ si @ MEG2015 Serial 2588
Permanent link to this record
 

 
Author Ivan Huerta; Marco Pedersoli; Jordi Gonzalez; Alberto Sanfeliu
Title Combining where and what in change detection for unsupervised foreground learning in surveillance Type Journal Article
Year 2015 Publication Pattern Recognition Abbreviated Journal PR
Volume 48 Issue 3 Pages 709-719
Keywords Object detection; Unsupervised learning; Motion segmentation; Latent variables; Support vector machine; Multiple appearance models; Video surveillance
Abstract Change detection is the most important task for video surveillance analytics such as foreground and anomaly detection. Current foreground detectors learn models from annotated images, since the goal is to generate a robust foreground model able to detect changes in all possible scenarios. Unfortunately, manual labelling is very expensive. Most advanced supervised learning techniques based on generic object detection datasets currently exhibit very poor performance when applied to surveillance datasets because of the unconstrained nature of such environments in terms of types and appearances of objects. In this paper, we take advantage of change detection for training multiple foreground detectors in an unsupervised manner. We use statistical learning techniques which exploit the use of latent parameters for selecting the best foreground model parameters for a given scenario. In essence, the main novelty of our proposed approach is to combine the where (motion segmentation) and what (learning procedure) in change detection in an unsupervised way, improving the specificity and generalization power of foreground detectors at the same time. We propose a framework based on latent support vector machines that, given a noisy initialization based on motion cues, learns the correct position, aspect ratio, and appearance of all moving objects in a particular scene. Specificity is achieved by learning the particular change detections of a given scenario, and generalization is guaranteed since our method can be applied to any possible scene and foreground object, as demonstrated by experimental results that outperform the state of the art.
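The "where" step, obtaining noisy foreground boxes from motion cues, can be illustrated with simple background differencing; the paper's change detector is more sophisticated, and the threshold below is an assumption for illustration:

```python
import numpy as np

def motion_box(frame, background, thresh=25):
    """Return the bounding box (x0, y0, x1, y1) of pixels that changed with
    respect to the background, or None if nothing moved. Such noisy boxes
    could seed an unsupervised (e.g. latent-SVM) learning stage."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
```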
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; 600.063; 600.078 Approved no
Call Number Admin @ si @ HPG2015 Serial 2589
Permanent link to this record
 

 
Author Wenjuan Gong; Y.Huang; Jordi Gonzalez; Liang Wang
Title An Effective Solution to Double Counting Problem in Human Pose Estimation Type Miscellaneous
Year 2015 Publication Arxiv Abbreviated Journal
Volume Issue Pages
Keywords Pose estimation; double counting problem; mixture of parts model
Abstract The mixture of parts model has been successfully applied to solve the 2D human pose estimation problem, either as an explicitly trained body part model or as latent variables for pedestrian detection. Even in the era of massive applications of deep learning techniques, the mixture of parts model is still effective in solving certain problems, especially in cases with limited numbers of training samples. In this paper, we consider using the mixture of parts model for pose estimation, wherein a tree structure is utilized for representing relations between connected body parts. This strategy facilitates training and inference of the model but suffers from the double counting problem, where one detected body part is counted twice due to the lack of constraints among unconnected body parts. To solve this problem, we propose a generalized solution in which various part attributes are captured by multiple features so as to avoid the double counting problem. Qualitative and quantitative experimental results on a publicly available dataset demonstrate the effectiveness of our proposed method.
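The reason a tree structure facilitates training and inference is that exact maximization decomposes into dynamic-programming messages passed from leaves to root, avoiding exhaustive search over joint configurations. A minimal sketch; the part names, scores, and pairwise term below are illustrative, not the paper's model:

```python
def best_pose(unary, edges, pairwise):
    """Max-sum inference on a tree of body parts.
    unary: {part: [score per candidate location]};
    edges: (parent, child) pairs listed top-down, rooted at edges[0][0];
    pairwise(pl, cl): compatibility of parent location pl with child location cl.
    Returns the best total score over all joint part configurations."""
    score = {part: list(s) for part, s in unary.items()}
    # Fold child messages into parents bottom-up (reverse of top-down order).
    for parent, child in reversed(edges):
        for pl in range(len(score[parent])):
            score[parent][pl] += max(
                score[child][cl] + pairwise(pl, cl)
                for cl in range(len(score[child]))
            )
    root = edges[0][0]
    return max(score[root])
```

On a toy two-part model ("torso" with candidate scores [1, 0], "head" with [0, 3], and a +2 bonus when both parts pick the same location index) the best configuration scores 5, found without enumerating all location pairs explicitly.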

Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; 600.078 Approved no
Call Number Admin @ si @ GHG2015 Serial 2590
Permanent link to this record
 

 
Author Sergio Escalera; Jordi Gonzalez; Xavier Baro; Pablo Pardo; Junior Fabian; Marc Oliu; Hugo Jair Escalante; Ivan Huerta; Isabelle Guyon
Title ChaLearn Looking at People 2015 new competitions: Age Estimation and Cultural Event Recognition Type Conference Article
Year 2015 Publication IEEE International Joint Conference on Neural Networks IJCNN2015 Abbreviated Journal
Volume Issue Pages 1-8
Keywords
Abstract Following previous series of Looking at People (LAP) challenges [1], [2], [3], in 2015 ChaLearn runs two new competitions within the field of Looking at People: age and cultural event recognition in still images. We propose the first crowdsourcing application to collect and label data about the apparent age of people instead of the real age. In terms of cultural event recognition, tens of categories have to be recognized. This involves scene understanding and human analysis. This paper summarizes both challenges and data, providing some initial baselines. The results of the first round of the competition were presented at the ChaLearn LAP 2015 IJCNN special session on computer vision and robotics, http://www.dtic.ua.es/~jgarcia/IJCNN2015. Details of the ChaLearn LAP competitions can be found at http://gesture.chalearn.org/.
Address Killarney; Ireland; July 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IJCNN
Notes HuPBA; ISE; 600.063; 600.078;MV Approved no
Call Number Admin @ si @ EGB2015 Serial 2591
Permanent link to this record
 

 
Author Frederic Sampedro; Anna Domenech; Sergio Escalera; Ignasi Carrio
Title Deriving global quantitative tumor response parameters from 18F-FDG PET-CT scans in patients with non-Hodgkin's lymphoma Type Journal Article
Year 2015 Publication Nuclear Medicine Communications Abbreviated Journal NMC
Volume 36 Issue 4 Pages 328-333
Keywords
Abstract OBJECTIVES:
The aim of the study was to address the need for quantifying the global cancer time evolution magnitude from a pair of time-consecutive positron emission tomography-computed tomography (PET-CT) scans. In particular, we focus on the computation of indicators using image-processing techniques that seek to model non-Hodgkin's lymphoma (NHL) progression or response severity.
MATERIALS AND METHODS:
A total of 89 pairs of time-consecutive PET-CT scans from NHL patients were stored in a nuclear medicine station for subsequent analysis. These were classified by a consensus of nuclear medicine physicians into progressions, partial responses, mixed responses, complete responses, and relapses. The cases of each group were ordered by magnitude following visual analysis. Thereafter, a set of quantitative indicators designed to model the cancer evolution magnitude within each group were computed using semiautomatic and automatic image-processing techniques. Performance evaluation of the proposed indicators was measured by a correlation analysis with the expert-based visual analysis.
RESULTS:
The set of proposed indicators achieved Pearson's correlation results in each group with respect to the expert-based visual analysis: 80.2% in progressions, 77.1% in partial response, 68.3% in mixed response, 88.5% in complete response, and 100% in relapse. In the progression and mixed response groups, the proposed indicators outperformed the common indicators used in clinical practice [changes in metabolic tumor volume, mean, maximum, peak standardized uptake value (SUV mean, SUV max, SUV peak), and total lesion glycolysis] by more than 40%.
CONCLUSION:
Computing global indicators of NHL response using PET-CT imaging techniques offers a strong correlation with the associated expert-based visual analysis, motivating the future incorporation of such quantitative and highly observer-independent indicators in oncological decision making or treatment response evaluation scenarios.
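The correlation analysis used for evaluation can be illustrated with a small Pearson's r computation; the expert ordering and indicator values below are made up for illustration, not taken from the study:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

expert_order = [1, 2, 3, 4, 5]              # hypothetical severity ordering
indicator = [0.1, 0.3, 0.35, 0.7, 0.9]      # hypothetical computed indicator
r = pearson_r(expert_order, indicator)      # close to 1: strong agreement
```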
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ SDE2015 Serial 2605
Permanent link to this record
 

 
Author Wenjuan Gong; W.Zhang; Jordi Gonzalez; Y.Ren; Z.Li
Title Enhanced Asymmetric Bilinear Model for Face Recognition Type Journal Article
Year 2015 Publication International Journal of Distributed Sensor Networks Abbreviated Journal IJDSN
Volume Issue Pages Article ID 218514
Keywords
Abstract Bilinear models have been successfully applied to separate two factors, for example, pose variations and different identities in face recognition problems. The asymmetric model is a type of bilinear model which models a system in the most concise way. However, few works have explored the application of the asymmetric bilinear model to face recognition under illumination changes. In this work, we propose an enhanced asymmetric model for illumination-robust face recognition. Instead of initializing the factor probabilities randomly, we initialize them with a nearest neighbor method and optimize them for the test data. Beyond that, we update the factor model to be identified. We validate the proposed method on a designed data sample and the Extended Yale B dataset. The experimental results show that the enhanced asymmetric models give promising results and good recognition accuracies.
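A minimal sketch of the asymmetric bilinear idea: each observation is modeled as a style-specific matrix (e.g. one per illumination) applied to a content (identity) vector, and recognition picks the identity whose content vector best reconstructs the test sample under some known style. The dimensions and data below are toy assumptions, and this omits the paper's nearest-neighbor initialization and model updates:

```python
import numpy as np

rng = np.random.default_rng(1)
styles = [rng.standard_normal((8, 3)) for _ in range(2)]   # two illuminations
contents = [rng.standard_normal(3) for _ in range(3)]      # three identities

def classify(y, styles, contents):
    """Return the index of the identity minimizing reconstruction error
    ||W_s @ c - y|| over all known styles W_s."""
    errs = [min(np.linalg.norm(W @ c - y) for W in styles) for c in contents]
    return int(np.argmin(errs))

# A test sample: identity 2 rendered under illumination style 1.
y = styles[1] @ contents[2]
```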
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; 600.063; 600.078 Approved no
Call Number Admin @ si @ GZG2015 Serial 2592
Permanent link to this record
 

 
Author Adriana Romero; Nicolas Ballas; Samira Ebrahimi Kahou; Antoine Chassang; Carlo Gatta; Yoshua Bengio
Title FitNets: Hints for Thin Deep Nets Type Conference Article
Year 2015 Publication 3rd International Conference on Learning Representations ICLR2015 Abbreviated Journal
Volume Issue Pages
Keywords Computer Science ; Learning; Computer Science ;Neural and Evolutionary Computing
Abstract While depth tends to improve network performance, it also makes gradient-based training more difficult, since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network can imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network.
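The hint mechanism can be sketched as a regressor mapping the thinner student hidden layer into the teacher's hidden space, trained with a squared-error hint loss. The linear regressor and shapes below are simplifying assumptions (the paper also considers convolutional regressors):

```python
import numpy as np

rng = np.random.default_rng(0)
teacher_hidden = rng.standard_normal((4, 64))    # batch of teacher activations
student_hidden = rng.standard_normal((4, 16))    # thinner student layer
regressor = rng.standard_normal((16, 64)) * 0.1  # maps student -> teacher space

def hint_loss(student_h, teacher_h, w):
    """Mean squared error between the regressed student representation
    and the teacher's hint (its intermediate representation)."""
    pred = student_h @ w
    return float(np.mean((pred - teacher_h) ** 2))

loss = hint_loss(student_hidden, teacher_hidden, regressor)
```

Minimizing this loss (jointly with the distillation objective) pulls the student's intermediate representation toward the teacher's, which is what guides the deeper, thinner student during training.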
Address San Diego; CA; May 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICLR
Notes MILAB Approved no
Call Number Admin @ si @ RBK2015 Serial 2593
Permanent link to this record
 

 
Author Adriana Romero; Petia Radeva; Carlo Gatta
Title Meta-parameter free unsupervised sparse feature learning Type Journal Article
Year 2015 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 37 Issue 8 Pages 1716-1722
Keywords
Abstract We propose a meta-parameter free, off-the-shelf, simple and fast unsupervised feature learning algorithm, which exploits a new way of optimizing for sparsity. Experiments on CIFAR-10, STL-10 and UCMerced show that the method achieves state-of-the-art performance, providing discriminative features that generalize well.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; 600.068; 600.079; 601.160 Approved no
Call Number Admin @ si @ RRG2014b Serial 2594
Permanent link to this record
 

 
Author Christophe Rigaud; Clement Guerin; Dimosthenis Karatzas; Jean-Christophe Burie; Jean-Marc Ogier
Title Knowledge-driven understanding of images in comic books Type Journal Article
Year 2015 Publication International Journal on Document Analysis and Recognition Abbreviated Journal IJDAR
Volume 18 Issue 3 Pages 199-221
Keywords Document Understanding; comics analysis; expert system
Abstract Document analysis is an active field of research, which aims to attain a complete understanding of the semantics of a given document. One example of the document understanding process is enabling a computer to identify the key elements of a comic book story and arrange them according to a predefined domain knowledge. In this study, we propose a knowledge-driven system that can interact with bottom-up and top-down information to progressively understand the content of a document. We model the knowledge of the comic book and image processing domains for information consistency analysis. In addition, different image processing methods are improved or developed to extract panels, balloons, tails, texts, comic characters and their semantic relations in an unsupervised way.
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1433-2833 ISBN Medium
Area Expedition Conference
Notes DAG; 600.056; 600.077 Approved no
Call Number RGK2015 Serial 2595
Permanent link to this record
 

 
Author Marc Bolaños; Maite Garolera; Petia Radeva
Title Object Discovery using CNN Features in Egocentric Videos Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
Volume 9117 Issue Pages 67-74
Keywords Object discovery; Egocentric videos; Lifelogging; CNN
Abstract Lifelogging devices based on photo/video are spreading faster every day. This growth can bring great benefits if methods are developed to extract meaningful information about the user wearing the device and his/her environment. In this paper, we propose a semi-supervised strategy for easily discovering objects relevant to the person wearing a first-person camera. Given the egocentric video sequence acquired by the camera, our approach uses both the appearance extracted by means of a deep convolutional neural network and an object refill methodology that allows discovering objects even when they appear in only a small number of images in the collection. We validate our method on a sequence of 1,000 egocentric daily images and obtain results with an F-measure of 0.5, which is 0.17 higher than the state-of-the-art approach.
Address Santiago de Compostela; Spain; June 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-19389-2 Medium
Area Expedition Conference IbPRIA
Notes MILAB Approved no
Call Number Admin @ si @ BGR2015 Serial 2596
Permanent link to this record
 

 
Author Estefania Talavera; Mariella Dimiccoli; Marc Bolaños; Maedeh Aghaei; Petia Radeva
Title R-clustering for egocentric video segmentation Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
Volume 9117 Issue Pages 327-336
Keywords Temporal video segmentation; Egocentric videos; Clustering
Abstract In this paper, we present a new method for egocentric video temporal segmentation based on integrating a statistical mean change detector and agglomerative clustering (AC) within an energy-minimization framework. Given the tendency of most AC methods to over-segment video sequences when clustering their frames, we combine the clustering with a concept drift detection technique (ADWIN) that has rigorous performance guarantees. ADWIN serves as a statistical upper bound for the clustering-based video segmentation. We integrate both techniques in an energy-minimization framework that serves to disambiguate the decisions of both techniques and to complete the segmentation, taking into account the temporal continuity of the video frame descriptors. We present experiments on egocentric sets of more than 13,000 images acquired with different wearable cameras, showing that our method outperforms state-of-the-art clustering methods.
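The statistical mean-change idea that bounds the clustering can be sketched as a split test over a window of (here one-dimensional) frame descriptors: flag the cut where the two sub-windows' means differ most, if the gap is significant. The fixed threshold below is a crude stand-in for ADWIN's rigorous statistical bound:

```python
import numpy as np

def detect_change(values, eps=0.5):
    """Return the cut index where the two sub-window means differ the most,
    if that gap exceeds eps; otherwise None (stream looks stationary)."""
    values = np.asarray(values, dtype=float)
    best_cut, best_gap = None, 0.0
    for cut in range(1, len(values)):
        gap = abs(values[:cut].mean() - values[cut:].mean())
        if gap > best_gap:
            best_cut, best_gap = cut, gap
    return best_cut if best_gap > eps else None
```

On a stream of ten frames with mean descriptor 0 followed by ten with mean 2, the detector places the boundary at index 10; a constant stream yields no boundary, which is how such a bound keeps clustering from over-segmenting.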
Address Santiago de Compostela; Spain; June 2015
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-19389-2 Medium
Area Expedition Conference IbPRIA
Notes MILAB Approved no
Call Number Admin @ si @ TDB2015 Serial 2597
Permanent link to this record
 

 
Author Manuel Graña; Bogdan Raducanu
Title Special Issue on Bioinspired and Knowledge-Based Techniques and Applications Type Journal Article
Year 2015 Publication Neurocomputing Abbreviated Journal NEUCOM
Volume Issue Pages 1-3
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number Admin @ si @ GrR2015 Serial 2598
Permanent link to this record