Author Cristhian A. Aguilera-Carrasco; Angel Sappa; Ricardo Toledo
Title LGHD: a Feature Descriptor for Matching Across Non-Linear Intensity Variations Type Conference Article
Year 2015 Publication 22nd IEEE International Conference on Image Processing
Pages 178-181
Address Quebec; Canada; September 2015
Conference ICIP
Notes ADAS; 600.076 Approved no
Call Number Admin @ si @ AST2015 Serial 2630

Author Eduardo Tusa; Arash Akbarinia; Raquel Gil Rodriguez; Corina Barbalata
Title Real-Time Face Detection and Tracking Utilising OpenMP and ROS Type Conference Article
Year 2015 Publication 3rd Asia-Pacific Conference on Computer Aided System Engineering
Pages 179-184
Keywords RGB-D; Kinect; Human Detection and Tracking; ROS; OpenMP
Abstract The first requisite for a robot to succeed in social interactions is accurate human localisation, i.e. subject detection and tracking. Later, it is estimated whether an interaction partner seeks attention, for example by interpreting the position and orientation of the body. In computer vision, these cues are usually obtained from colour images, whose quality degrades in poorly illuminated social scenes. In these scenarios depth sensors offer a richer representation; therefore, it is important to combine colour and depth information. The second aspect that plays a fundamental role in the acceptance of social robots is their real-time ability. Processing colour and depth images is computationally demanding. To overcome this, we propose a parallelisation strategy for face detection and tracking based on two different architectures: message passing and shared memory. Our results demonstrate high accuracy at low computational cost, processing nine times more frames in the parallel implementation. This enables real-time social robot interaction.
Address Quito; Ecuador; July 2015
Conference APCASE
Notes NEUROBIT Approved no
Call Number Admin @ si @ TAG2015 Serial 2659

Author Joost Van de Weijer; Fahad Shahbaz Khan
Title An Overview of Color Name Applications in Computer Vision Type Conference Article
Year 2015 Publication Computational Color Imaging Workshop
Keywords color features; color names; object recognition
Abstract In this article we provide an overview of color name applications in computer vision. Color names are linguistic labels which humans use to communicate color. Computational color naming learns a mapping from pixel values to color names. In recent years color names have been applied to a wide variety of computer vision applications, including image classification, object recognition, texture classification, visual tracking and action recognition. Here we provide an overview of these results, which show that in general color names outperform photometric invariants as a color representation.
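The computational color naming step mentioned in the abstract can be illustrated with a minimal sketch (not from the paper): each pixel is assigned the nearest of a few hand-picked color-name prototypes in RGB space. The prototype values and the nearest-neighbour rule are assumptions for illustration only; in this line of work the mapping is actually learned from weakly labelled image data.

```python
import numpy as np

# Hypothetical color-name prototypes in RGB space (assumed values,
# not the learned mapping from the literature).
COLOR_NAMES = {
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "red": (255, 0, 0),
    "green": (0, 255, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
}

def name_pixels(pixels):
    """Assign each RGB pixel (N x 3 array) its nearest color name."""
    names = list(COLOR_NAMES)
    protos = np.array([COLOR_NAMES[n] for n in names], dtype=float)  # 6 x 3
    # Pairwise Euclidean distances between pixels and prototypes.
    d = np.linalg.norm(pixels[:, None, :] - protos[None, :, :], axis=2)
    return [names[i] for i in d.argmin(axis=1)]

print(name_pixels(np.array([[250, 10, 10], [10, 10, 240]])))  # → ['red', 'blue']
```

A learned mapping would replace the nearest-prototype rule with per-pixel color-name probabilities, but the interface (pixels in, linguistic labels out) is the same.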
Address Saint Etienne; France; March 2015
Conference CCIW
Notes LAMP; 600.079; 600.068 Approved no
Call Number Admin @ si @ WeK2015 Serial 2586

Author Adriana Romero; Nicolas Ballas; Samira Ebrahimi Kahou; Antoine Chassang; Carlo Gatta; Yoshua Bengio
Title FitNets: Hints for Thin Deep Nets Type Conference Article
Year 2015 Publication 3rd International Conference on Learning Representations (ICLR 2015)
Keywords Computer Science; Learning; Neural and Evolutionary Computing
Abstract While depth tends to improve network performance, it also makes gradient-based training more difficult, since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach aims at obtaining small and fast-to-execute models, and has shown that a student network can imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student's intermediate hidden layer will generally be smaller than the teacher's, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with roughly 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network.
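The "hint" objective described in the abstract can be sketched as follows: a regressor maps the (smaller) student hidden representation into the teacher's hidden space, and the hint loss is the squared distance between the mapped student representation and the teacher's. The shapes and the linear regressor below are illustrative assumptions; the paper trains a convolutional regressor by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed hidden sizes: the student layer is thinner than the teacher's.
d_student, d_teacher = 16, 64
W_r = rng.normal(size=(d_teacher, d_student)) * 0.1  # regressor parameters

def hint_loss(h_student, h_teacher, W):
    """0.5 * ||r(h_s) - h_t||^2 with a linear regressor r(h) = W h."""
    diff = W @ h_student - h_teacher
    return 0.5 * float(diff @ diff)

h_s = rng.normal(size=d_student)   # stand-in student hidden activation
h_t = rng.normal(size=d_teacher)   # stand-in teacher hidden activation
loss = hint_loss(h_s, h_t, W_r)
print(loss >= 0.0)  # a squared distance, hence non-negative
```

During hint training this loss would be minimised jointly over the student's lower layers and the regressor, before the usual distillation stage.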
Address San Diego; CA; May 2015
Conference ICLR
Notes MILAB Approved no
Call Number Admin @ si @ RBK2015 Serial 2593

Author Adria Ruiz; Joost Van de Weijer; Xavier Binefa
Title From emotions to action units with hidden and semi-hidden-task learning Type Conference Article
Year 2015 Publication 15th IEEE International Conference on Computer Vision
Pages 3703-3711
Abstract Limited annotated training data is a challenging problem in Action Unit recognition. In this paper, we investigate how the use of large databases labelled according to the 6 universal facial expressions can increase the generalization ability of Action Unit classifiers. For this purpose, we propose a novel learning framework: Hidden-Task Learning (HTL). HTL aims to learn a set of Hidden-Tasks (Action Units) for which samples are not available but for which training data is easier to obtain from a set of related Visible-Tasks (Facial Expressions). To that end, HTL is able to exploit prior knowledge about the relation between Hidden- and Visible-Tasks. In our case, we base this prior knowledge on empirical psychological studies providing statistical correlations between Action Units and universal facial expressions. Additionally, we extend HTL to Semi-Hidden-Task Learning (SHTL), which assumes that Action Unit training samples are also provided. Performing exhaustive experiments over four different datasets, we show that HTL and SHTL improve the generalization ability of AU classifiers by training them with additional facial expression data. Additionally, we show that SHTL achieves competitive performance compared with state-of-the-art transductive learning approaches, which face the problem of limited training data by using unlabelled test samples during training.
Address Santiago de Chile; Chile; December 2015
Conference ICCV
Notes LAMP; 600.068; 600.079 Approved no
Call Number Admin @ si @ RWB2015 Serial 2671

Author Sergio Escalera; Junior Fabian; Pablo Pardo; Xavier Baro; Jordi Gonzalez; Hugo Jair Escalante; Marc Oliu; Dusan Misevic; Ulrich Steiner; Isabelle Guyon
Title ChaLearn Looking at People 2015: Apparent Age and Cultural Event Recognition Datasets and Results Type Conference Article
Year 2015 Publication 15th IEEE International Conference on Computer Vision Workshops
Pages 243-251
Abstract Following previous series on Looking at People (LAP) competitions [14, 13, 11, 12, 2], in 2015 ChaLearn ran two new competitions within the field of Looking at People: (1) age estimation, and (2) cultural event recognition, both in still images. We developed a crowd-sourcing application to collect and label data about the apparent age of people (as opposed to the real age). In terms of cultural event recognition, one hundred categories had to be recognized. These tasks involved scene understanding and human body analysis. This paper summarizes both challenges and data, as well as the results achieved by the participants of the competition.
Address Santiago de Chile; December 2015
Conference ICCVW
Notes ISE; 600.063; 600.078; MV Approved no
Call Number Admin @ si @ EFP2015 Serial 2704


 
Author Juan Ramon Terven Salinas; Bogdan Raducanu; Maria Elena Meza-de-Luna; Joaquin Salas
Title Evaluating Real-Time Mirroring of Head Gestures using Smart Glasses Type Conference Article
Year 2015 Publication 15th IEEE International Conference on Computer Vision Workshops
Pages 452-460
Abstract Mirroring occurs when one person tends to mimic the non-verbal communication of their counterpart. Even though mirroring is a complex phenomenon, in this study we focus on the detection of head-nodding as a simple non-verbal communication cue, due to its significance as a gesture displayed during social interactions. This paper introduces a computer vision-based method to detect mirroring through the analysis of head gestures using wearable cameras (smart glasses). In addition, we study how such a method can be used to explore perceived competence. The proposed method has been evaluated, and the experiments demonstrate that static and wearable cameras appear to be equally effective at gathering the information required for the analysis.
Address Santiago de Chile; December 2015
Conference ICCVW
Notes LAMP; 600.068; 600.072 Approved no
Call Number Admin @ si @ TRM2015 Serial 2722


 
Author Alejandro Gonzalez Alzate; Sebastian Ramos; David Vazquez; Antonio Lopez; Jaume Amores
Title Spatiotemporal Stacked Sequential Learning for Pedestrian Detection Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015
Pages 3-12
Keywords SSL; Pedestrian Detection
Abstract Pedestrian classifiers decide which image windows contain a pedestrian. In practice, such classifiers provide a relatively high response at neighboring windows overlapping a pedestrian, while the responses around potential false positives are expected to be lower. Analogous reasoning applies to image sequences: if there is a pedestrian located within a frame, the same pedestrian is expected to appear close to the same location in neighboring frames. Therefore, such a location is likely to receive high classification scores over several frames, while false positives are expected to be more spurious. In this paper we propose to exploit such correlations to improve the accuracy of base pedestrian classifiers. In particular, we propose two-stage classifiers which rely not only on the image descriptors required by the base classifiers but also on the responses of those base classifiers in a given spatiotemporal neighborhood. More specifically, we train pedestrian classifiers using a stacked sequential learning (SSL) paradigm. We use a new pedestrian dataset acquired from a car to evaluate our proposal at different frame rates, and we also test on the well-known Caltech dataset. The results show that our SSL proposal boosts detection accuracy significantly with minimal impact on the computational cost. Interestingly, SSL improves accuracy most in the most dangerous situations, i.e. when a pedestrian is close to the camera.
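The two-stage idea in the abstract can be sketched minimally (an illustration, not the paper's implementation): the second-stage classifier sees, for each window, its own descriptor augmented with the first-stage scores of its spatiotemporal neighbours. The 1-D neighbourhood and toy descriptors below are assumptions for the sketch.

```python
import numpy as np

def stack_features(descriptors, base_scores, radius=1):
    """Augment each window's descriptor with neighbour base-classifier scores.

    descriptors: (n, d) array of per-window descriptors.
    base_scores: (n,) array of first-stage classifier responses.
    Out-of-range neighbours are zero-padded.
    """
    n = len(base_scores)
    stacked = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        neigh = np.zeros(2 * radius + 1)
        neigh[lo - (i - radius):hi - (i - radius)] = base_scores[lo:hi]
        stacked.append(np.concatenate([descriptors[i], neigh]))
    return np.array(stacked)

desc = np.ones((4, 3))                    # 4 windows, 3-d toy descriptors
scores = np.array([0.9, 0.8, 0.1, 0.05])  # first-stage responses
X = stack_features(desc, scores)
print(X.shape)  # → (4, 6): descriptor (3) + own score + two neighbour scores
```

The second-stage classifier trained on `X` can then exploit the fact that true detections tend to have high-scoring neighbours while spurious ones do not.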
Address Santiago de Compostela; Spain; June 2015
Area ACDC Conference IbPRIA
Notes ADAS; 600.057; 600.054; 600.076 Approved no
Call Number GRV2015; ADAS @ adas @ GRV2015 Serial 2454


 
Author Alejandro Gonzalez Alzate; Gabriel Villalonga; German Ros; David Vazquez; Antonio Lopez
Title 3D-Guided Multiscale Sliding Window for Pedestrian Detection Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015
Volume 9117 Pages 560-568
Keywords Pedestrian Detection
Abstract The most relevant modules of a pedestrian detector are candidate generation and candidate classification. The former presents image windows to the latter so that they can be classified as containing a pedestrian or not. Much attention has been paid to the classification module, while candidate generation has mainly relied on a (multiscale) sliding-window pyramid. However, candidate generation is critical for achieving real-time performance. In this paper we assume a context of autonomous driving based on stereo vision. Accordingly, we evaluate the effect of taking into account the 3D information (derived from the stereo) in order to prune the hundreds of thousands of windows per image generated by the classical pyramidal sliding window. For our study we use a multimodal (RGB, disparity) and multi-descriptor (HOG, LBP, HOG+LBP) holistic ensemble based on a linear SVM. Evaluation on data from the challenging KITTI benchmark suite shows the effectiveness of using 3D information to dramatically reduce the number of candidate windows, while even improving the overall pedestrian detection accuracy.
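The pruning intuition behind 3D-guided candidate generation can be sketched as follows: under a pinhole camera model, a pedestrian of real height H metres at depth Z metres projects to roughly h = f * H / Z pixels, so windows whose size is inconsistent with the local depth can be discarded before classification. The focal length and height range below are illustrative assumptions, not values from the paper.

```python
# Assumed camera and pedestrian parameters for the sketch.
FOCAL_PX = 1000.0          # focal length in pixels (assumption)
H_MIN, H_MAX = 1.2, 2.1    # plausible pedestrian heights in metres (assumption)

def window_is_plausible(window_h_px, depth_m):
    """Keep a candidate window only if its pixel height is consistent
    with a pedestrian standing at the given stereo-derived depth."""
    h_min_px = FOCAL_PX * H_MIN / depth_m
    h_max_px = FOCAL_PX * H_MAX / depth_m
    return h_min_px <= window_h_px <= h_max_px

# A 170 px window is consistent with a pedestrian 10 m away (120-210 px)...
print(window_is_plausible(170, 10.0))   # → True
# ...but not with one 40 m away (30-52.5 px).
print(window_is_plausible(170, 40.0))   # → False
```

Applying such a test per image location removes most of the pyramid's scale hypotheses, which is the source of the dramatic candidate reduction the abstract reports.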
Address Santiago de Compostela; Spain; June 2015
Area ACDC Conference IbPRIA
Notes ADAS; 600.076; 600.057; 600.054 Approved no
Call Number ADAS @ adas @ GVR2015 Serial 2585


 
Author Marc Bolaños; Maite Garolera; Petia Radeva
Title Object Discovery using CNN Features in Egocentric Videos Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015
Volume 9117 Pages 67-74
Keywords Object discovery; Egocentric videos; Lifelogging; CNN
Abstract Lifelogging devices based on photo/video are spreading faster every day. This growth can bring great benefits for developing methods that extract meaningful information about the user wearing the device and his/her environment. In this paper, we propose a semi-supervised strategy for easily discovering objects relevant to the person wearing a first-person camera. Given the egocentric video sequence acquired by the camera, our method uses both the appearance extracted by means of a deep convolutional neural network and an object-refill methodology that allows discovering objects even when they appear in only a small number of images in the collection. We validate our method on a sequence of 1000 egocentric daily images and obtain an F-measure of 0.5, which is 0.17 better than the state-of-the-art approach.
Address Santiago de Compostela; Spain; June 2015
Abbreviated Series Title LNCS
ISSN 0302-9743 ISBN 978-3-319-19389-2
Conference IbPRIA
Notes MILAB Approved no
Call Number Admin @ si @ BGR2015 Serial 2596


 
Author Estefania Talavera; Mariella Dimiccoli; Marc Bolaños; Maedeh Aghaei; Petia Radeva
Title R-clustering for egocentric video segmentation Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015
Volume 9117 Pages 327-336
Keywords Temporal video segmentation; Egocentric videos; Clustering
Abstract In this paper, we present a new method for egocentric video temporal segmentation based on integrating a statistical mean-change detector and agglomerative clustering (AC) within an energy-minimization framework. Given the tendency of most AC methods to over-segment video sequences when clustering their frames, we combine the clustering with a concept drift detection technique (ADWIN) that has rigorous performance guarantees. ADWIN serves as a statistical upper bound for the clustering-based video segmentation. We integrate both techniques in an energy-minimization framework that disambiguates the decisions of the two techniques and completes the segmentation taking into account the temporal continuity of the video frame descriptors. We present experiments on egocentric sets of more than 13,000 images acquired with different wearable cameras, showing that our method outperforms state-of-the-art clustering methods.
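The statistical mean-change idea underlying ADWIN can be sketched in a toy form (a simplification, not ADWIN itself): split a window of recent frame-descriptor distances at every cut point and flag a change when the two sub-window means differ by more than a Hoeffding-style bound. ADWIN's adaptive window maintenance and exact bound are more involved.

```python
import math

def mean_change(values, delta=0.05):
    """Return True if some split of `values` shows a statistically
    significant difference between the two sub-window means."""
    n = len(values)
    for cut in range(1, n):
        a, b = values[:cut], values[cut:]
        # Harmonic-mean window size used in Hoeffding-style bounds.
        m = 1.0 / (1.0 / len(a) + 1.0 / len(b))
        eps = math.sqrt(math.log(4.0 / delta) / (2.0 * m))
        if abs(sum(a) / len(a) - sum(b) / len(b)) > eps:
            return True
    return False

print(mean_change([0.1] * 20 + [0.9] * 20))  # clear event boundary → True
print(mean_change([0.1] * 40))               # stationary sequence → False
```

In the paper's framework, boundaries confirmed by such a detector act as an upper bound that keeps the agglomerative clustering from over-segmenting.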
Address Santiago de Compostela; Spain; June 2015
Publisher Springer International Publishing
Abbreviated Series Title LNCS
ISSN 0302-9743 ISBN 978-3-319-19389-2
Conference IbPRIA
Notes MILAB Approved no
Call Number Admin @ si @ TDB2015 Serial 2597


 
Author Onur Ferhat; Arcadi Llanza; Fernando Vilariño
Title A Feature-Based Gaze Estimation Algorithm for Natural Light Scenarios Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015
Volume 9117 Pages 569-576
Keywords Eye tracking; Gaze estimation; Natural light; Webcam
Abstract We present an eye tracking system that works with regular webcams. We base our work on the open-source CVC Eye Tracker [7], to which we propose a number of improvements together with a novel gaze estimation method. The new method uses features extracted from iris segmentation and does not fall into the traditional categorization of appearance-based/model-based methods. Our experiments show that our approach reduces gaze estimation errors by 34% in the horizontal direction and by 12% in the vertical direction compared to the baseline system.
Address Santiago de Compostela; June 2015
Publisher Springer International Publishing
Abbreviated Series Title LNCS
ISSN 0302-9743 ISBN 978-3-319-19389-2
Conference IbPRIA
Notes MV; SIAI Approved no
Call Number Admin @ si @ FLV2015a Serial 2646


 
Author Suman Ghosh; Ernest Valveny
Title A Sliding Window Framework for Word Spotting Based on Word Attributes Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015
Volume 9117 Pages 652-661
Keywords Word spotting; Sliding window; Word attributes
Abstract In this paper we propose a segmentation-free approach to word spotting. Word images are first encoded into feature vectors using the Fisher Vector. These feature vectors are then used together with pyramidal histogram of characters (PHOC) labels to learn SVM-based attribute models. Documents are represented by these PHOC-based word attributes. To efficiently compute the word attributes over a sliding window, we propose to use an integral-image representation of the document with a simplified version of the attribute model. Finally, we re-rank the top word candidates using the more discriminative full version of the word attributes. We show state-of-the-art results for segmentation-free query-by-example word spotting on single-writer and multi-writer standard datasets.
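A PHOC label as used above can be sketched with a simplified rule (an illustration, not the paper's exact construction): at each pyramid level the word is split into equal regions, and a binary vector records which characters occur in each region, here assigning each character to the region containing its normalised centre. The standard PHOC uses an overlap-based occupancy rule and more levels.

```python
import string

ALPHABET = string.ascii_lowercase

def phoc(word, levels=(1, 2)):
    """Binary pyramidal histogram of characters with a simplified
    occupancy rule: a character belongs to the region containing its
    normalised centre position in the word."""
    word = word.lower()
    vec = []
    for level in levels:
        regions = [[0] * len(ALPHABET) for _ in range(level)]
        for i, ch in enumerate(word):
            if ch not in ALPHABET:
                continue
            centre = (i + 0.5) / len(word)
            regions[min(int(centre * level), level - 1)][ALPHABET.index(ch)] = 1
        for r in regions:
            vec.extend(r)
    return vec

v = phoc("beyond")
print(len(v))  # → 78: 26 bits at level 1 plus 2 * 26 bits at level 2
```

Because the vector is fixed-length regardless of word length, attribute models trained on these labels can compare query and candidate words directly.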
Address Santiago de Compostela; June 2015
Publisher Springer International Publishing
Abbreviated Series Title LNCS
ISSN 0302-9743 ISBN 978-3-319-19389-2
Conference IbPRIA
Notes DAG; 600.077 Approved no
Call Number Admin @ si @ GhV2015b Serial 2716


 
Author Alejandro Gonzalez Alzate; Gabriel Villalonga; Jiaolong Xu; David Vazquez; Jaume Amores; Antonio Lopez
Title Multiview Random Forest of Local Experts Combining RGB and LIDAR data for Pedestrian Detection Type Conference Article
Year 2015 Publication IEEE Intelligent Vehicles Symposium (IV 2015)
Pages 356-361
Keywords Pedestrian Detection
Abstract Despite recent significant advances, pedestrian detection continues to be an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities and a strong multi-view classifier that accounts for different pedestrian views and poses. In this paper we provide an extensive evaluation that gives insight into how each of these aspects (multi-cue, multimodality and strong multi-view classifier) affects performance, both individually and when integrated together. In the multimodality component we explore the fusion of RGB and depth maps obtained by high-definition LIDAR, a modality that is only recently starting to receive attention. As our analysis reveals, although all the aforementioned aspects significantly help in improving the performance, the fusion of visible-spectrum and depth information boosts the accuracy by a much larger margin. The resulting detector not only ranks among the top performers on the challenging KITTI benchmark, but is also built upon very simple blocks that are easy to implement and computationally efficient. These simple blocks can easily be replaced with more sophisticated ones recently proposed, such as convolutional neural networks for feature representation, to further improve the accuracy.
Address Seoul; Korea; June 2015
Area ACDC Conference IV
Notes ADAS; 600.076; 600.057; 600.054 Approved no
Call Number ADAS @ adas @ GVX2015 Serial 2625


 
Author Fernando Vilariño
Title Computer Vision and Performing Arts Type Conference Article
Year 2015 Publication Korean Scholars of Marketing Science
Address Seoul; Korea; October 2015
Conference KAMS
Notes MV; SIAI Approved no
Call Number Admin @ si @ Vil2015 Serial 2799