Author Adriana Romero
Title Assisting the training of deep neural networks with applications to computer vision Type Book Whole
Year 2015 Publication PhD Thesis, Universitat de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Deep learning has recently been enjoying increasing popularity due to its success in solving challenging tasks. In particular, deep learning has proven to be effective in a large variety of computer vision tasks, such as image classification, object recognition and image parsing. Contrary to previous research, which required engineered feature representations designed by experts in order to succeed, deep learning attempts to learn representation hierarchies automatically from data. More recently, the trend has been to go deeper with these representation hierarchies.
Learning (very) deep representation hierarchies is a challenging task, which involves the optimization of highly non-convex functions. Therefore, the search for algorithms that ease the learning of (very) deep representation hierarchies from data is extensive and ongoing.
In this thesis, we tackle the challenging problem of easing the learning of (very) deep representation hierarchies. We present a hyper-parameter-free, off-the-shelf, simple and fast unsupervised algorithm that discovers hidden structure in the input data by enforcing a very strong form of sparsity. We study the applicability and potential of the algorithm to learn representations of varying depth in a handful of applications and domains, highlighting its ability to provide discriminative feature representations that achieve top performance.
Yet, while unsupervised learning methods are of great value when labeled data is scarce, the recent industrial success of deep learning has revolved around supervised learning. Supervised learning is currently the focus of many recent research advances, which have been shown to excel at many computer vision tasks. Top-performing systems often involve very large and deep models, which are not well suited for applications with time or memory limitations. More in line with current trends, we work on making top-performing models more efficient by designing very deep and thin models. Since training such very deep models remains a challenging task, we introduce a novel algorithm that guides the training of very thin and deep models by providing hints on their intermediate representations.
Very deep and thin models trained with the proposed algorithm end up extracting feature representations that perform comparably to, or even better than, those extracted by large state-of-the-art models, while substantially reducing the time and memory consumption of the model.
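The hint-based guidance of a thin student network by a wider teacher can be illustrated with a minimal sketch, assuming a PyTorch-style setup with hypothetical teacher/student feature maps; this is an illustration of the idea, not the thesis' exact training procedure.
```python
import torch
import torch.nn as nn

# Minimal sketch of hint-based training: a 1x1 convolutional regressor maps the
# student's guided layer onto the teacher's hint layer, and an L2 loss pulls
# them together. Teacher/student modules and layer choices are hypothetical.
class HintRegressor(nn.Module):
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        self.regressor = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_features):
        return self.regressor(student_features)

def hint_loss(student_features, teacher_features, regressor):
    # L2 distance between the regressed student features and the teacher hint.
    return nn.functional.mse_loss(regressor(student_features), teacher_features)

if __name__ == "__main__":
    student_feat = torch.randn(8, 32, 16, 16)   # thin student: 32 channels
    teacher_feat = torch.randn(8, 128, 16, 16)  # wide teacher: 128 channels
    reg = HintRegressor(32, 128)
    loss = hint_loss(student_feat, teacher_feat, reg)
    loss.backward()
    print(float(loss))
```
In practice the hint loss would be used to pre-train the lower part of the student before continuing with the usual supervised objective.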
Address October 2015
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Carlo Gatta;Petia Radeva
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ Rom2015 Serial 2707
Permanent link to this record
 

 
Author Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva
Title Multi-Face Tracking by Extended Bag-of-Tracklets in Egocentric Videos Type Miscellaneous
Year 2015 Publication Arxiv Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Egocentric images offer a hands-free way to record daily experiences and special events, where social interactions are of special interest. A natural question that arises is how to extract and track the appearance of multiple persons in a social event captured by a wearable camera. In this paper, we propose a novel method to find correspondences of multiple faces in low temporal resolution egocentric sequences acquired through a wearable camera. This kind of sequence imposes additional challenges on the multi-tracking problem with respect to conventional videos. Due to the free motion of the camera and to its low temporal resolution (2 fpm), abrupt changes in the field of view, in illumination conditions and in the target location are very frequent. To overcome this difficulty, we propose to generate, for each detected face, a set of correspondences along the whole sequence that we call a tracklet, and to take advantage of their redundancy to deal with both false positive face detections and unreliable tracklets. Similar tracklets are grouped into so-called extended bags-of-tracklets (eBoT), each of which is intended to correspond to a specific person. Finally, a prototype tracklet is extracted for each eBoT. We validated our method on a dataset of 18,000 images from 38 egocentric sequences with 52 trackable persons and compared it to state-of-the-art methods, demonstrating its effectiveness and robustness.
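A toy sketch of the tracklet-grouping idea follows; it is not the authors' eBoT construction, and the similarity measure (fraction of shared frames with overlapping boxes) and thresholds are illustrative assumptions.
```python
# Simplified sketch: group face tracklets into "bags" by mutual overlap.

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter + 1e-9)

def tracklet_similarity(t1, t2, iou_thr=0.5):
    """t1, t2: dicts frame_index -> box. Fraction of shared frames that overlap."""
    shared = set(t1) & set(t2)
    if not shared:
        return 0.0
    hits = sum(iou(t1[f], t2[f]) >= iou_thr for f in shared)
    return hits / float(len(shared))

def group_into_bags(tracklets, sim_thr=0.6):
    """Greedy grouping: each tracklet joins the first bag it is similar to."""
    bags = []
    for t in tracklets:
        for bag in bags:
            if tracklet_similarity(t, bag[0]) >= sim_thr:
                bag.append(t)
                break
        else:
            bags.append([t])
    return bags
```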
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ ADR2015b Serial 2713
Permanent link to this record
 

 
Author Suman Ghosh; Ernest Valveny
Title Query by String word spotting based on character bi-gram indexing Type Conference Article
Year 2015 Publication 13th International Conference on Document Analysis and Recognition ICDAR2015 Abbreviated Journal
Volume Issue Pages 881-885
Keywords
Abstract In this paper we propose a segmentation-free query-by-string word spotting method. Both the documents and the query strings are encoded using a recently proposed word representation that projects images and strings into a common attribute space based on a pyramidal histogram of characters (PHOC). These attribute models are learned using linear SVMs over the Fisher Vector representation of the images along with the PHOC labels of the corresponding strings. In order to search through the whole page, document regions are indexed per character bigram using a similar attribute representation. On top of that, we propose an integral image representation of the document using a simplified version of the attribute model for efficient computation. Finally, we introduce a re-ranking step in order to boost retrieval performance. We show state-of-the-art results for segmentation-free query-by-string word spotting on standard single-writer and multi-writer datasets.
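A simplified PHOC label vector can be sketched as below; the alphabet, pyramid levels and the rule of assigning a character to the region containing its centre are illustrative choices, not necessarily the paper's exact configuration.
```python
import numpy as np
import string

# Simplified pyramidal histogram of characters (PHOC) label vector.
ALPHABET = string.ascii_lowercase + string.digits
LEVELS = (2, 3)

def phoc(word, alphabet=ALPHABET, levels=LEVELS):
    word = word.lower()
    n = len(word)
    vec = []
    for level in levels:
        hist = np.zeros((level, len(alphabet)))
        for i, ch in enumerate(word):
            if ch not in alphabet:
                continue
            centre = (i + 0.5) / n                  # normalised position in the word
            region = min(int(centre * level), level - 1)
            hist[region, alphabet.index(ch)] = 1.0  # binary occurrence per region
        vec.append(hist.ravel())
    return np.concatenate(vec)

# A query string and a word image's predicted attributes can then be compared
# (e.g. with cosine similarity) in this common attribute space.
print(phoc("query").shape)   # (2 + 3) * 36 = 180 dimensions
```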
Address Nancy; France; August 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 600.077 Approved no
Call Number Admin @ si @ GhV2015a Serial 2715
Permanent link to this record
 

 
Author Fadi Dornaika; Bogdan Raducanu; Alireza Bosaghzadeh
Title Facial expression recognition based on multi observations with application to social robotics Type Book Chapter
Year 2015 Publication Emotional and Facial Expressions: Recognition, Developmental Differences and Social Importance Abbreviated Journal
Volume Issue Pages 153-166
Keywords
Abstract Human-robot interaction is a hot topic nowadays in the social robotics community. One crucial aspect is affective communication, which is encoded through facial expressions. In this chapter, we propose a novel approach for facial expression recognition, which exploits an efficient and adaptive graph-based label propagation (semi-supervised mode) in a multi-observation framework. The facial features are extracted using an appearance-based 3D face tracker that is view- and texture-independent. Our method has been extensively tested on the CMU dataset and compared with other methods for graph construction. With the proposed approach, we developed an application for an AIBO robot, in which it mirrors the recognized facial expression.
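The semi-supervised ingredient can be illustrated with a generic graph-based label propagation sketch (normalised diffusion over an RBF graph); the chapter's adaptive graph construction is not reproduced here, and the kernel width and damping factor are illustrative.
```python
import numpy as np

def label_propagation(X, y, n_classes, alpha=0.99, sigma=1.0, n_iter=100):
    """X: (n, d) features; y: (n,) labels with -1 for unlabeled samples."""
    n = X.shape[0]
    # Affinity matrix with an RBF kernel, zero diagonal.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetrically normalised smoother S = D^-1/2 W D^-1/2.
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d) + 1e-12)
    # One-hot matrix for the labeled samples.
    Y = np.zeros((n, n_classes))
    labeled = y >= 0
    Y[np.arange(n)[labeled], y[labeled]] = 1.0
    # Iterate F <- alpha * S @ F + (1 - alpha) * Y until (approximate) convergence.
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(1)

# Example with two Gaussian blobs and one labeled sample per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = -np.ones(40, dtype=int)
y[0], y[20] = 0, 1
print(label_propagation(X, y, n_classes=2))
```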
Address
Corporate Author Thesis
Publisher Nova Science publishers Place of Publication Editor Bruce Flores
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; Approved no
Call Number Admin @ si @ DRB2015 Serial 2720
Permanent link to this record
 

 
Author Juan Ramon Terven Salinas; Bogdan Raducanu; Maria Elena Meza-de-Luna; Joaquin Salas
Title Evaluating Real-Time Mirroring of Head Gestures using Smart Glasses Type Conference Article
Year 2015 Publication 16th IEEE International Conference on Computer Vision Workshops Abbreviated Journal
Volume Issue Pages 452-460
Keywords
Abstract Mirroring occurs when one person tends to mimic the non-verbal communication of their counterparts. Even though mirroring is a complex phenomenon, in this study, we focus on the detection of head-nodding as a simple non-verbal communication cue due to its significance as a gesture displayed during social interactions. This paper introduces a computer vision-based method to detect mirroring through the analysis of head gestures using wearable cameras (smart glasses). In addition, we study how such a method can be used to explore perceived competence. The proposed method has been evaluated and the experiments demonstrate how static and wearable cameras seem to be equally effective to gather the information required for the analysis.
Address Santiago de Chile; December 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCVW
Notes LAMP; 600.068; 600.072; Approved no
Call Number Admin @ si @ TRM2015 Serial 2722
Permanent link to this record
 

 
Author M. Campos-Taberner; Adriana Romero; Carlo Gatta; Gustavo Camps-Valls
Title Shared feature representations of LiDAR and optical images: Trading sparsity for semantic discrimination Type Conference Article
Year 2015 Publication IEEE International Geoscience and Remote Sensing Symposium IGARSS2015 Abbreviated Journal
Volume Issue Pages 4169 - 4172
Keywords
Abstract This paper studies the level of complementary information conveyed by extremely high resolution LiDAR and optical images. We pursue this goal following an indirect approach via unsupervised spatial-spectral feature extraction. We used a recently presented unsupervised convolutional neural network trained to enforce both population and lifetime sparsity in the feature representation. We derived independent and joint feature representations, and analyzed the sparsity scores and the discriminative power. Interestingly, the obtained results revealed that the RGB+LiDAR representation is no longer sparse, and the derived basis functions merge color and elevation yielding a set of more expressive colored edge filters. The joint feature representation is also more discriminative when used for clustering and topological data visualization.
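A quick sketch of the two sparsity scores mentioned above, under a simple "fraction of active units" definition (population sparsity measured per sample across units, lifetime sparsity per unit across samples); the exact scores used in the paper may differ.
```python
import numpy as np

def sparsity_scores(F, eps=1e-8):
    """F: (n_samples, n_units) non-negative feature activations."""
    active = F > eps
    population = 1.0 - active.mean(axis=1)   # per sample: how few units fire
    lifetime = 1.0 - active.mean(axis=0)     # per unit: how rarely it fires
    return population.mean(), lifetime.mean()

rng = np.random.default_rng(0)
F = np.maximum(rng.normal(size=(1000, 64)) - 1.5, 0.0)   # sparse, ReLU-like features
print(sparsity_scores(F))
```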
Address Milan; Italy; July 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IGARSS
Notes LAMP; 600.079;MILAB Approved no
Call Number Admin @ si @ CRG2015 Serial 2724
Permanent link to this record
 

 
Author R. Bertrand; Oriol Ramos Terrades; P. Gomez-Kramer; P. Franco; Jean-Marc Ogier
Title A Conditional Random Field model for font forgery detection Type Conference Article
Year 2015 Publication 13th International Conference on Document Analysis and Recognition ICDAR2015 Abbreviated Journal
Volume Issue Pages 576 - 580
Keywords
Abstract Nowadays, document forgery is becoming a real issue. A large number of documents that contain critical information, such as payment slips, invoices or contracts, are constantly subject to fraudster manipulation because of the lack of security of this kind of document. Previously, a system to detect fraudulent documents based on their intrinsic features has been presented. It was especially designed to retrieve copy-move forgery and imperfections due to fraudster manipulation. However, when a set of characters is not present in the original document, copy-move forgery is not feasible. Hence, the fraudster will use a text toolbox to add or modify information in the document by imitating the font, or he will cut and paste characters from another document where the font properties are similar. This often results in font type errors. Thus, a clue to detecting document forgery consists of finding characters, words or sentences in a document with font properties different from their surroundings. To this end, we present in this paper an automatic forgery detection method based on document font features. Using a Conditional Random Field, the probability that a character belongs to a specific font is estimated by comparing the character's font features to a knowledge database. Then, the character is classified as genuine or fake by comparing its probability of belonging to a certain font type with those of the neighboring characters.
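The final per-character decision can be illustrated with a toy neighbour-consistency check; this is not the paper's CRF model, only the intuition of comparing a character's font probability with those of its neighbours, with the window size and deviation threshold chosen arbitrarily.
```python
import numpy as np

def flag_suspect_characters(font_probs, window=5, drop=0.4):
    """font_probs: 1-D array, per-character probability of the expected font."""
    suspects = []
    n = len(font_probs)
    for i, p in enumerate(font_probs):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neighbours = np.delete(font_probs[lo:hi], i - lo)
        if neighbours.size and p < np.median(neighbours) - drop:
            suspects.append(i)
    return suspects

probs = np.array([0.9, 0.88, 0.91, 0.35, 0.87, 0.9, 0.89])
print(flag_suspect_characters(probs))   # -> [3]: the out-of-font character
```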
Address Nancy; France; August 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 600.077 Approved no
Call Number Admin @ si @ BRG2015 Serial 2725
Permanent link to this record
 

 
Author Lluis Pere de las Heras; Oriol Ramos Terrades; Josep Llados; David Fernandez; Cristina Cañero
Title Use case visual Bag-of-Words techniques for camera based identity document classification Type Conference Article
Year 2015 Publication 13th International Conference on Document Analysis and Recognition ICDAR2015 Abbreviated Journal
Volume Issue Pages 721 - 725
Keywords
Abstract Nowadays, automatic identity document recognition, including passport and driving license recognition, is at the core of many applications within the administrative and service sectors, such as police, hospitality, car renting, etc. In former years, the document information was extracted manually, whereas today this data is recognized automatically from images obtained by flat-bed scanners. Yet, since these scanners tend to be expensive and bulky, companies in the sector have recently turned their attention to cheaper, small and yet computationally powerful scanners: mobile devices. Identity document recognition from mobile images entails several new difficulties with respect to traditional scanned images, such as the loss of a controlled background, perspective, blurring, etc. In this paper we present a real application for identity document classification of images taken with mobile devices. This classification process is of extreme importance since prior knowledge of the document type and origin strongly facilitates the subsequent information extraction. The proposed method is based on a traditional Bag-of-Words approach in which we have taken into consideration several key aspects to enhance the recognition rate. The method's performance has been studied on three datasets containing more than 2000 images from 129 different document classes.
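A generic Bag-of-Visual-Words pipeline (codebook, histogram, linear classifier) can be sketched with scikit-learn as follows; the random descriptor arrays, vocabulary size and classifier are placeholder choices, not the paper's configuration, and local descriptors such as SIFT are assumed to be precomputed.
```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_vocabulary(descriptor_sets, n_words=256, seed=0):
    stacked = np.vstack(descriptor_sets)
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(stacked)

def bow_histogram(descriptors, vocabulary):
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-9)           # L1-normalised histogram

rng = np.random.default_rng(0)
train_desc = [rng.normal(size=(200, 128)) + c for c in (0, 3) for _ in range(10)]
train_labels = [c for c in (0, 1) for _ in range(10)]

vocab = build_vocabulary(train_desc, n_words=32)
X_train = np.array([bow_histogram(d, vocab) for d in train_desc])
clf = LinearSVC().fit(X_train, train_labels)

test_desc = rng.normal(size=(200, 128)) + 3
print(clf.predict([bow_histogram(test_desc, vocab)]))   # -> class 1
```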
Address Nancy; France; August 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 600.077; 600.061; Approved no
Call Number Admin @ si @ HRL2015a Serial 2726
Permanent link to this record
 

 
Author Lluis Pere de las Heras; Oriol Ramos Terrades; Josep Llados
Title Attributed Graph Grammar for floor plan analysis Type Conference Article
Year 2015 Publication 13th International Conference on Document Analysis and Recognition ICDAR2015 Abbreviated Journal
Volume Issue Pages 726 - 730
Keywords
Abstract In this paper, we propose the use of an Attributed Graph Grammar as a unique framework to model and recognize the structure of floor plans. This grammar represents a building as a hierarchical composition of structurally and semantically related elements, where common representations are learned stochastically from annotated data. Given an input image, parsing consists of constructing the graph representation that best agrees with the probabilistic model defined by the grammar. The proposed method provides several advantages with respect to traditional floor plan analysis techniques. It uses an unsupervised statistical approach for detecting walls that adapts to different graphical notations and relaxes strong structural assumptions such as straightness and orthogonality. Moreover, the independence between the knowledge model and the parsing implementation allows the method to learn different building configurations automatically and, thus, to cope with the existing variability. These advantages are clearly demonstrated by comparing it with the most recent floor plan interpretation techniques on 4 datasets of real floor plans with different notations.
Address Nancy; France; August 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 600.077; 600.061 Approved no
Call Number Admin @ si @ HRL2015b Serial 2727
Permanent link to this record
 

 
Author Aura Hernandez-Sabate; Meritxell Joanpere; Nuria Gorgorio; Lluis Albarracin
Title Mathematics learning opportunities when playing a Tower Defense Game Type Journal
Year 2015 Publication International Journal of Serious Games Abbreviated Journal IJSG
Volume 2 Issue 4 Pages 57-71
Keywords Tower Defense game; learning opportunities; mathematics; problem solving; game design
Abstract A qualitative research study is presented herein with the purpose of identifying mathematics learning opportunities in students between 10 and 12 years old while playing a commercial version of a Tower Defense game. These learning opportunities are understood as mathematicisable moments of the game and involve the establishment of relationships between the game and mathematical problem solving. Based on the analysis of these mathematicisable moments, we conclude that the game can promote problem-solving processes and learning opportunities that can be associated with different mathematical contents that appear in mathematics curricula, though it seems that a teacher or new game elements might be needed to facilitate the processes.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.076 Approved no
Call Number Admin @ si @ HJG2015 Serial 2730
Permanent link to this record
 

 
Author Gloria Fernandez Esparrach; Jorge Bernal; Cristina Rodriguez de Miguel; Debora Gil; Fernando Vilariño; Henry Cordova; Cristina Sanchez Montes; I.Araujo ; Maria Lopez Ceron; J.Llach; F. Javier Sanchez
Title Colonic polyps are correctly identified by a computer vision method using wm-dova energy maps Type Conference Article
Year 2015 Publication Proceedings of 23 United European- UEG Week 2015 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference UEG
Notes MV; IAM; 600.075;SIAI Approved no
Call Number Admin @ si @ FBR2015 Serial 2732
Permanent link to this record
 

 
Author Debora Gil; F. Javier Sanchez; Gloria Fernandez Esparrach; Jorge Bernal
Title 3D Stable Spatio-temporal Polyp Localization in Colonoscopy Videos Type Book Chapter
Year 2015 Publication Computer-Assisted and Robotic Endoscopy. Revised selected papers of Second International Workshop, CARE 2015, Held in Conjunction with MICCAI 2015 Abbreviated Journal
Volume 9515 Issue Pages 140-152
Keywords Colonoscopy, Polyp Detection, Polyp Localization, Region Extraction, Watersheds
Abstract Computational intelligent systems could reduce the polyp miss rate in colonoscopy for colon cancer diagnosis and, thus, increase the efficiency of the procedure. One of the main problems of existing polyp localization methods is a lack of spatio-temporal stability in their response. We propose to explore the response of a given polyp localization method across temporal windows in order to select those image regions presenting the most stable spatio-temporal response. Spatio-temporal stability is achieved by extracting 3D watershed regions on the temporal window. Stability in the localization response is statistically determined by analyzing the variance of the output of the localization method inside each 3D region. We have explored the benefits of considering spatio-temporal stability in two different tasks: polyp localization and polyp detection. Experimental results indicate an average improvement of 21.5% in polyp localization and 43.78% in polyp detection.
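A rough sketch of the spatio-temporal stability idea follows: stack the per-frame localization maps of a temporal window into a 3D volume, partition it into 3D watershed regions, and keep the regions where the response varies least. The thresholds, the marker extraction and the scikit-image/SciPy calls are illustrative assumptions, not the chapter's implementation.
```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def stable_regions(energy_volume, marker_thr=0.7, mask_thr=0.5, var_thr=0.01):
    """energy_volume: (T, H, W) stack of per-frame localization maps in [0, 1]."""
    # Markers from strong responses; 3D watershed on the inverted energy.
    markers, _ = ndimage.label(energy_volume > marker_thr)
    regions = watershed(-energy_volume, markers, mask=energy_volume > mask_thr)
    ids = np.unique(regions)
    ids = ids[ids > 0]
    # Keep regions whose localization response has low variance (stable).
    variances = ndimage.variance(energy_volume, labels=regions, index=ids)
    return [int(i) for i, v in zip(ids, variances) if v < var_thr]

rng = np.random.default_rng(0)
volume = rng.uniform(0, 0.3, size=(10, 64, 64))
volume[:, 20:30, 20:30] = 0.9           # a temporally stable, strong response
print(stable_regions(volume))           # one stable region expected
```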
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CARE
Notes IAM; MV; 600.075 Approved no
Call Number Admin @ si @ GSF2015 Serial 2733
Permanent link to this record
 

 
Author David Sanchez-Mendoza; David Masip; Agata Lapedriza
Title Emotion recognition from mid-level features Type Journal Article
Year 2015 Publication Pattern Recognition Letters Abbreviated Journal PRL
Volume 67 Issue Part 1 Pages 66–74
Keywords Facial expression; Emotion recognition; Action units; Computer vision
Abstract In this paper we present a study on the use of Action Units as mid-level features for automatically recognizing basic and subtle emotions. We propose a representation model based on mid-level facial muscular movement features. We encode these movements dynamically using the Facial Action Coding System, and propose to use these intermediate features based on Action Units (AUs) to classify emotions. AU activations are detected by fusing a set of spatiotemporal geometric and appearance features. The algorithm is validated in two applications: (i) the recognition of 7 basic emotions using the publicly available Cohn-Kanade database, and (ii) the inference of subtle emotional cues in the Newscast database. In this second scenario, we consider emotions that are perceived cumulatively over longer periods of time. In particular, we automatically classify whether video shots from public news TV channels refer to Good or Bad news. To deal with the different video lengths we propose a Histogram of Action Units and compute it using a sliding-window strategy on the frame sequences. Our approach achieves accuracies close to human perception.
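The sliding-window Histogram of Action Units can be sketched as below, as a way of summarising variable-length sequences; the window size, stride and the binary AU activation matrix are illustrative, not the paper's settings.
```python
import numpy as np

def histogram_of_aus(au_activations, window=30, stride=15):
    """au_activations: (n_frames, n_aus) binary matrix of detected AUs per frame.
    Returns one L1-normalised AU histogram per sliding window."""
    n_frames, n_aus = au_activations.shape
    histograms = []
    for start in range(0, max(n_frames - window + 1, 1), stride):
        chunk = au_activations[start:start + window]
        hist = chunk.sum(axis=0).astype(float)
        histograms.append(hist / (hist.sum() + 1e-9))
    return np.array(histograms)

rng = np.random.default_rng(0)
aus = (rng.random((200, 17)) > 0.8).astype(int)   # 200 frames, 17 AUs
print(histogram_of_aus(aus).shape)                # (12, 17)
```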
Address
Corporate Author Thesis
Publisher Elsevier B.V. Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0167-8655 ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number Admin @ si @ SML2015 Serial 2746
Permanent link to this record
 

 
Author Joan M. Nuñez; Jorge Bernal; F. Javier Sanchez; Fernando Vilariño
Title Growing Algorithm for Intersection Detection (GRAID) in branching patterns Type Journal Article
Year 2015 Publication Machine Vision and Applications Abbreviated Journal MVAP
Volume 26 Issue 2 Pages 387-400
Keywords Bifurcation ; Crossroad; Intersection ;Retina ; Vessel
Abstract Analysis of branching structures represents a very important task in fields such as medical diagnosis, road detection or biometrics. Detecting intersection landmarks becomes crucial when capturing the structure of a branching pattern. We present a very simple geometrical model to describe intersections in branching structures based on two conditions: the Bounded Tangency (BT) condition and the Shortest Branch (SB) condition. The proposed model precisely sets a geometrical characterization of intersections and allows us to introduce a new unsupervised operator for intersection extraction. We propose an implementation that handles the consequences of operating in the digital domain and that, unlike existing approaches, is not restricted to a particular scale and does not require the computation of the thinned pattern. The new proposal, as well as other existing approaches in the bibliography, is evaluated in a common framework for the first time. The performance analysis is based on two manually segmented image data sets: the DRIVE retinal image database and the COLON-VESSEL data set, a newly created data set of vascular content in colonoscopy frames. We have created an intersection landmark ground truth for each data set, in addition to comparing our method on the only existing ground truth. Quantitative results confirm that we are able to outperform state-of-the-art performance levels, with the advantage that neither training nor parameter tuning is needed.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ;SIAI Approved no
Call Number Admin @ si @MBS2015 Serial 2777
Permanent link to this record
 

 
Author Jordina Torrents-Barrena; Aida Valls; Petia Radeva; Meritxell Arenas; Domenec Puig
Title Automatic Recognition of Molecular Subtypes of Breast Cancer in X-Ray images using Segmentation-based Fractal Texture Analysis Type Book Chapter
Year 2015 Publication Artificial Intelligence Research and Development Abbreviated Journal
Volume 277 Issue Pages 247 - 256
Keywords
Abstract Breast cancer disease has recently been classified into four subtypes according to the molecular properties of the affected tumor region. For each patient, an accurate diagnosis of the specific subtype is vital to decide the most appropriate therapy in order to enhance life prospects. Nowadays, advanced therapeutic diagnosis research is focused on gene selection methods, which are not robust enough. Hence, we hypothesize that computer vision algorithms can offer benefits to address the problem of discriminating among subtypes through X-Ray images. In this paper, we propose a novel approach driven by texture feature descriptors and machine learning techniques. First, we segment the tumor region through an active contour technique and then we perform a complete fractal analysis to collect qualitative information about the region of interest in the feature extraction stage. Finally, several supervised and unsupervised classifiers are used to perform multiclass classification of the aforementioned data. The experimental results presented in this paper support that it is possible to establish a relation between each tumor subtype and the features extracted from the patterns revealed on mammograms.
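One ingredient of segmentation-based fractal texture analysis is a fractal dimension estimate of thresholded (binary) regions; below is a generic box-counting sketch of that estimate, not the exact SFTA implementation used in the chapter, with the box sizes chosen arbitrarily.
```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(1, 2, 4, 8, 16, 32)):
    counts = []
    for s in box_sizes:
        # Count boxes of side s that contain at least one foreground pixel.
        h, w = binary_image.shape
        trimmed = binary_image[:h - h % s, :w - w % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # Slope of log(count) vs log(1/size) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Example: a filled square has dimension close to 2.
img = np.zeros((128, 128), dtype=bool)
img[32:96, 32:96] = True
print(round(box_counting_dimension(img), 2))
```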
Address
Corporate Author Thesis
Publisher IOS Press Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Frontiers in Artificial Intelligence and Applications Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @TVR2015 Serial 2780
Permanent link to this record