Records
Author Thanh Nam Le; Muhammad Muzzamil Luqman; Anjan Dutta; Pierre Heroux; Christophe Rigaud; Clement Guerin; Pasquale Foggia; Jean Christophe Burie; Jean Marc Ogier; Josep Llados; Sebastien Adam
Title Subgraph spotting in graph representations of comic book images Type Journal Article
Year 2018 Publication Pattern Recognition Letters Abbreviated Journal PRL
Volume 112 Issue Pages 118-124
Keywords Attributed graph; Region adjacency graph; Graph matching; Graph isomorphism; Subgraph isomorphism; Subgraph spotting; Graph indexing; Graph retrieval; Query by example; Dataset and comic book images
Abstract Graph-based representations are the most powerful data structures for extracting, representing and preserving the structural information of underlying data. Subgraph spotting is an interesting research problem, especially for studying structure-based content-based image retrieval (CBIR) and query by example (QBE) in image databases. In this paper we address the lack of freely available ground-truthed datasets for subgraph spotting and present a new dataset for subgraph spotting in graph representations of comic book images (SSGCI), together with its ground truth and evaluation protocol. Experimental results of two state-of-the-art subgraph spotting methods are presented on the new SSGCI dataset.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.097; 600.121 Approved no
Call Number Admin @ si @ LLD2018 Serial 3150
Permanent link to this record
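The record above concerns query-by-example subgraph spotting on attributed region adjacency graphs. The following is a minimal sketch of that retrieval setting using exact subgraph isomorphism in networkx; the toy region labels, the helper functions and the use of an exact matcher (rather than the error-tolerant spotting methods evaluated on SSGCI) are illustrative assumptions only.

# Hedged sketch: query-by-example subgraph spotting on attributed region
# adjacency graphs via exact subgraph isomorphism (networkx). Labels and
# helpers below are assumptions, not the SSGCI pipeline.
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher, categorical_node_match

def build_region_adjacency_graph(regions, adjacencies):
    """regions: dict region_id -> label; adjacencies: iterable of (id, id) pairs."""
    g = nx.Graph()
    for rid, label in regions.items():
        g.add_node(rid, label=label)
    g.add_edges_from(adjacencies)
    return g

def spot_subgraph(page_graph, query_graph):
    """Return all occurrences of the query subgraph in the page graph."""
    matcher = GraphMatcher(page_graph, query_graph,
                           node_match=categorical_node_match("label", None))
    # Each mapping sends page-graph nodes onto query-graph nodes.
    return [dict(m) for m in matcher.subgraph_isomorphisms_iter()]

# Toy usage: a 4-region page and a 2-region query.
page = build_region_adjacency_graph(
    {0: "face", 1: "bubble", 2: "text", 3: "frame"},
    [(0, 1), (1, 2), (2, 3)])
query = build_region_adjacency_graph({10: "bubble", 11: "text"}, [(10, 11)])
print(spot_subgraph(page, query))  # e.g. [{1: 10, 2: 11}]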
 

 
Author Sounak Dey; Anjan Dutta; Suman Ghosh; Ernest Valveny; Josep Llados
Title Aligning Salient Objects to Queries: A Multi-modal and Multi-object Image Retrieval Framework Type Conference Article
Year 2018 Publication 14th Asian Conference on Computer Vision Abbreviated Journal
Volume Issue Pages
Keywords
Abstract In this paper we propose an approach for multi-modal image retrieval in multi-labelled images. A multi-modal deep network architecture is formulated to jointly model sketches and text as input query modalities into a common embedding space, which is then further aligned with the image feature space. Our architecture also relies on salient object detection through a supervised LSTM-based visual attention model learned from convolutional features. Both the alignment between the queries and the image and the supervision of the attention on the images are obtained by generalizing the Hungarian Algorithm using different loss functions. This permits encoding the object-based features and their alignment with the query irrespective of whether the different objects co-occur in the training set. We validate the performance of our approach on standard single/multi-object datasets, showing state-of-the-art performance on every dataset.
Address Perth; Australia; December 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACCV
Notes DAG; 600.097; 600.121; 600.129 Approved no
Call Number Admin @ si @ DDG2018a Serial 3151
Permanent link to this record
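The abstract above describes aligning query embeddings with salient-object features by generalizing the Hungarian Algorithm. As a rough illustration of the plain (non-generalized) version of that alignment step, the sketch below matches query embeddings to object features with scipy's linear_sum_assignment; the cosine cost, feature dimensions and function names are assumptions, not the paper's loss functions.

# Hedged sketch: Hungarian-algorithm alignment of multi-modal query embeddings
# to per-object image features. Dimensions and the cosine cost are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_queries_to_objects(query_emb, object_feats):
    """query_emb: (Q, D) sketch/text query embeddings;
    object_feats: (N, D) salient-object features, N >= Q.
    Returns (query_index, object_index) pairs minimising total cost."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    o = object_feats / np.linalg.norm(object_feats, axis=1, keepdims=True)
    cost = 1.0 - q @ o.T              # cosine distance, shape (Q, N)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

rng = np.random.default_rng(0)
queries = rng.normal(size=(2, 128))   # e.g. one sketch + one text query
objects = rng.normal(size=(5, 128))   # candidate salient-object features
print(align_queries_to_objects(queries, objects))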
 

 
Author Sounak Dey; Anjan Dutta; Suman Ghosh; Ernest Valveny; Josep Llados; Umapada Pal
Title Learning Cross-Modal Deep Embeddings for Multi-Object Image Retrieval using Text and Sketch Type Conference Article
Year 2018 Publication 24th International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 916 - 921
Keywords
Abstract In this work we introduce a cross-modal image retrieval system that allows both text and sketch as input modalities for the query. A cross-modal deep network architecture is formulated to jointly model the sketch and text input modalities as well as the image output modality, learning a common embedding between text and images and between sketches and images. In addition, an attention model is used to selectively focus the attention on the different objects of the image, allowing for retrieval with multiple objects in the query. Experiments show that the proposed method performs best in both single- and multiple-object image retrieval on standard datasets.
Address Beijing; China; August 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICPR
Notes DAG; 602.167; 602.168; 600.097; 600.084; 600.121; 600.129 Approved no
Call Number Admin @ si @ DDG2018b Serial 3152
Permanent link to this record
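The system above learns a common embedding between text/sketch queries and images. The sketch below is a minimal, hypothetical joint-embedding model trained with a triplet ranking loss, to give a concrete picture of that idea; the encoder sizes, the GRU text encoder and the loss are assumptions, and the paper's attention mechanism is omitted.

# Hedged sketch: a joint text/sketch-to-image embedding space trained with a
# triplet ranking loss. All sizes and the loss are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalEmbedder(nn.Module):
    def __init__(self, img_dim=2048, sketch_dim=512, vocab=1000, emb=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, emb)        # projects CNN image features
        self.sketch_proj = nn.Linear(sketch_dim, emb)  # projects sketch features
        self.txt_embed = nn.Embedding(vocab, emb)
        self.txt_gru = nn.GRU(emb, emb, batch_first=True)

    def embed_image(self, f):  return nn.functional.normalize(self.img_proj(f), dim=-1)
    def embed_sketch(self, f): return nn.functional.normalize(self.sketch_proj(f), dim=-1)
    def embed_text(self, ids):
        _, h = self.txt_gru(self.txt_embed(ids))
        return nn.functional.normalize(h[-1], dim=-1)

model = CrossModalEmbedder()
loss_fn = nn.TripletMarginLoss(margin=0.2)
img_pos, img_neg = torch.randn(8, 2048), torch.randn(8, 2048)
text_query = torch.randint(0, 1000, (8, 6))
# Pull matching images towards the text query, push non-matching ones away.
loss = loss_fn(model.embed_text(text_query),
               model.embed_image(img_pos),
               model.embed_image(img_neg))
loss.backward()
print(float(loss))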
 

 
Author Fernando Vilariño; Dimosthenis Karatzas; Alberto Valcarce
Title The Library Living Lab Barcelona: A participative approach to technology as an enabling factor for innovation in cultural spaces Type Journal
Year 2018 Publication Technology Innovation Management Review Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; MV; 600.097; 600.121; 600.129;SIAI Approved no
Call Number Admin @ si @ VKV2018a Serial 3153
Permanent link to this record
 

 
Author Fernando Vilariño; Dimosthenis Karatzas; Alberto Valcarce
Title Libraries as New Innovation Hubs: The Library Living Lab Type Conference Article
Year 2018 Publication 30th ISPIM Innovation Conference Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Libraries are in deep transformation both in the EU and around the world, and they are thriving within a great window of opportunity for innovation. In this paper, we show how the Library Living Lab in Barcelona participated in this changing scenario and contributed to creating the Bibliolab program, where more than 200 public libraries give voice to their users in a global user-centric innovation initiative, using technology as an enabling factor. The Library Living Lab is a real 4-helix implementation where Universities, Research Centers, Public Administration, Companies and Neighbors are joined together to explore how technology transforms the cultural experience of people. This case is an example of scalability and provides reference tools for policy making, sustainability, user engagement methodologies and governance. We provide specific examples of new prototypes and services that help to understand how to redefine the role of the Library as a real hub for social innovation.
Address Stockholm; May 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ISPIM
Notes DAG; MV; 600.097; 600.121; 600.129;SIAI Approved no
Call Number Admin @ si @ VKV2018b Serial 3154
Permanent link to this record
 

 
Author Abel Gonzalez-Garcia; Joost Van de Weijer; Yoshua Bengio
Title Image-to-image translation for cross-domain disentanglement Type Conference Article
Year 2018 Publication 32nd Annual Conference on Neural Information Processing Systems Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Montreal; Canada; December 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference NIPS
Notes LAMP; 600.120 Approved no
Call Number Admin @ si @ GWB2018 Serial 3155
Permanent link to this record
 

 
Author Marc Masana; Idoia Ruiz; Joan Serrat; Joost Van de Weijer; Antonio Lopez
Title Metric Learning for Novelty and Anomaly Detection Type Conference Article
Year 2018 Publication 29th British Machine Vision Conference Abbreviated Journal
Volume Issue Pages
Keywords
Abstract When neural networks process images which do not resemble the distribution seen during training, so-called out-of-distribution images, they often make wrong predictions, and do so too confidently. The capability to detect out-of-distribution images is therefore crucial for many real-world applications. We divide out-of-distribution detection into novelty detection (images of classes which are not in the training set but are related to those) and anomaly detection (images of classes which are unrelated to the training set). By related we mean they contain the same type of objects, like digits in MNIST and SVHN. Most existing work has focused on anomaly detection, and has addressed this problem considering networks trained with the cross-entropy loss. Differently from them, we propose to use metric learning, which does not have the drawback of the softmax layer (inherent to cross-entropy methods), which forces the network to divide its prediction power over the learned classes. We perform extensive experiments and evaluate both novelty and anomaly detection, even in a relevant application such as traffic sign recognition, obtaining comparable or better results than previous works.
Address Newcastle; UK; September 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference BMVC
Notes LAMP; ADAS; 601.305; 600.124; 600.106; 602.200; 600.120; 600.118 Approved no
Call Number Admin @ si @ MRS2018 Serial 3156
Permanent link to this record
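The paper above replaces softmax confidence with metric learning for out-of-distribution detection. Below is a hedged sketch of one common way such an embedding can be used at test time, scoring images by their distance to the nearest in-distribution class mean; the stand-in embedding network and the scoring rule are assumptions, not the authors' exact method.

# Hedged sketch: out-of-distribution scoring in a metric-learned embedding
# space, as opposed to softmax confidence. Network and score are assumptions.
import torch
import torch.nn as nn

embedder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128))  # stand-in for a CNN

@torch.no_grad()
def class_means(features, labels, num_classes):
    """Mean embedding per in-distribution class."""
    return torch.stack([features[labels == c].mean(dim=0) for c in range(num_classes)])

@torch.no_grad()
def ood_score(x, means):
    """Higher score = more likely out-of-distribution (far from every class mean)."""
    z = embedder(x)
    dists = torch.cdist(z, means)          # (batch, num_classes)
    return dists.min(dim=1).values

# Toy usage on random data standing in for e.g. traffic-sign images.
train_x = torch.randn(100, 3, 32, 32)
train_y = torch.arange(100) % 10           # ensures every class is present
means = class_means(embedder(train_x), train_y, num_classes=10)
print(ood_score(torch.randn(4, 3, 32, 32), means))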
 

 
Author Marco Buzzelli; Joost Van de Weijer; Raimondo Schettini
Title Learning Illuminant Estimation from Object Recognition Type Conference Article
Year 2018 Publication 25th International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 3234 - 3238
Keywords Illuminant estimation; computational color constancy; semi-supervised learning; deep learning; convolutional neural networks
Abstract In this paper we present a deep learning method to estimate the illuminant of an image. Our model is not trained with illuminant annotations, but with the objective of improving performance on an auxiliary task such as object recognition. To the best of our knowledge, this is the first example of a deep learning architecture for illuminant estimation that is trained without ground truth illuminants. We evaluate our solution on standard datasets for color constancy, and compare it with state-of-the-art methods. Our proposal is shown to outperform most deep learning methods in a cross-dataset evaluation setup, and to present competitive results in a comparison with parametric solutions.
Address Athens; Greece; October 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes LAMP; 600.109; 600.120 Approved no
Call Number Admin @ si @ BWS2018 Serial 3157
Permanent link to this record
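The method above learns illuminant estimation without illuminant ground truth, supervised only through an auxiliary object recognition objective. The sketch below shows that training-signal flow in miniature: an illuminant branch corrects the image, a classifier is trained on the corrected image, and only the classification loss is backpropagated. The tiny networks and the von Kries-style division are assumptions for illustration, not the authors' architecture.

# Hedged sketch: illuminant estimation trained only via an object recognition
# loss. Networks, correction and optimiser settings are assumptions.
import torch
import torch.nn as nn

illum_net = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(3, 3), nn.Softplus())       # predicts RGB illuminant
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                           nn.Linear(3 * 8 * 8, 10))            # auxiliary recogniser

def forward(images):
    illum = illum_net(images) + 1e-6                  # (B, 3), strictly positive
    corrected = images / illum[:, :, None, None]      # von Kries-style correction
    return illum, classifier(corrected)

opt = torch.optim.Adam(list(illum_net.parameters()) + list(classifier.parameters()), lr=1e-3)
images = torch.rand(4, 3, 64, 64)
labels = torch.randint(0, 10, (4,))
illum, logits = forward(images)
loss = nn.functional.cross_entropy(logits, labels)    # no illuminant ground truth used
loss.backward()
opt.step()
print(illum.detach())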
 

 
Author Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Matthieu Molinier; Jorma Laaksonen
Title Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification Type Journal Article
Year 2018 Publication ISPRS Journal of Photogrammetry and Remote Sensing Abbreviated Journal ISPRS J
Volume 138 Issue Pages 74-85
Keywords Remote sensing; Deep learning; Scene classification; Local Binary Patterns; Texture analysis
Abstract Designing discriminative and powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP-based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to a standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.109; 600.106; 600.120 Approved no
Call Number Admin @ si @ RKW2018 Serial 3158
Permanent link to this record
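TEX-Nets, as described above, feed explicitly LBP-coded images into a CNN stream and fuse it with an RGB stream. The sketch below computes a uniform LBP map with scikit-image and late-fuses the scores of two toy streams; the stream architectures, the 21-class output (as in UC-Merced) and the averaging fusion are assumptions, not the paper's networks.

# Hedged sketch: LBP-coded texture stream + RGB stream with late fusion.
# Tiny classifiers and averaging fusion are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import local_binary_pattern

def lbp_map(gray, points=8, radius=1.0):
    """Uniform LBP code map, rescaled to [0, 1] so it can feed a CNN stream."""
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    return codes / codes.max()

rgb_stream = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 21))
tex_stream = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 21))

rgb = np.random.rand(64, 64, 3).astype(np.float32)        # stand-in aerial image
gray = (rgb.mean(axis=2) * 255).astype(np.uint8)
tex = lbp_map(gray)

rgb_t = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)
tex_t = torch.from_numpy(tex.astype(np.float32)).unsqueeze(0).unsqueeze(0)
# Late fusion: average the class scores of the two streams (21 scene classes).
scores = 0.5 * (rgb_stream(rgb_t) + tex_stream(tex_t))
print(scores.shape)  # torch.Size([1, 21])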
 

 
Author Xialei Liu; Joost Van de Weijer; Andrew Bagdanov
Title Leveraging Unlabeled Data for Crowd Counting by Learning to Rank Type Conference Article
Year 2018 Publication 31st IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 7661 - 7669
Keywords Task analysis; Training; Computer vision; Visualization; Estimation; Head; Context modeling
Abstract We propose a novel crowd counting approach that leverages abundantly available unlabeled crowd imagery in a learning-to-rank framework. To induce a ranking of cropped images, we use the observation that any sub-image of a crowded scene image is guaranteed to contain the same number or fewer persons than the super-image. This allows us to address the problem of the limited size of existing datasets for crowd counting. We collect two crowd scene datasets from Google using keyword searches and query-by-example image retrieval, respectively. We demonstrate how to efficiently learn from these unlabeled datasets by incorporating learning-to-rank in a multi-task network which simultaneously ranks images and estimates crowd density maps. Experiments on two of the most challenging crowd counting datasets show that our approach obtains state-of-the-art results.
Address Salt Lake City; USA; June 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPR
Notes LAMP; 600.109; 600.106; 600.120 Approved no
Call Number Admin @ si @ LWB2018 Serial 3159
Permanent link to this record
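The key self-supervised signal in the paper above is that a crop of a crowd image can never contain more people than the image it was cropped from. The sketch below turns that observation into a ranking constraint on the integrals of predicted density maps using a margin ranking loss; the toy density regressor and the single crop are assumptions, not the paper's multi-task network.

# Hedged sketch: ranking constraint count(full image) >= count(crop) applied
# to integrated density maps. The tiny regressor is an assumption.
import torch
import torch.nn as nn

density_net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(8, 1, 1), nn.ReLU())    # predicts a density map

def estimated_count(images):
    return density_net(images).sum(dim=(1, 2, 3))              # integrate the density map

full = torch.rand(4, 3, 96, 96)            # unlabeled crowd images
crop = full[:, :, 24:72, 24:72]            # centred sub-images of each image
count_full, count_crop = estimated_count(full), estimated_count(crop)

# Enforce count(full) >= count(crop): target +1 means "first argument ranks higher".
rank_loss = nn.MarginRankingLoss(margin=0.0)(count_full, count_crop,
                                             torch.ones_like(count_full))
rank_loss.backward()
print(float(rank_loss))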
 

 
Author Xialei Liu; Marc Masana; Luis Herranz; Joost Van de Weijer; Antonio Lopez; Andrew Bagdanov
Title Rotate your Networks: Better Weight Consolidation and Less Catastrophic Forgetting Type Conference Article
Year 2018 Publication 24th International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 2262-2268
Keywords
Abstract In this paper we propose an approach to avoiding catastrophic forgetting in sequential task learning scenarios. Our technique is based on a network reparameterization that approximately diagonalizes the Fisher Information Matrix of the network parameters. This reparameterization takes the form of a factorized rotation of parameter space which, when used in conjunction with Elastic Weight Consolidation (which assumes a diagonal Fisher Information Matrix), leads to significantly better performance on lifelong learning of sequential tasks. Experimental results on the MNIST, CIFAR-100, CUB-200 and Stanford-40 datasets demonstrate that we significantly improve the results of standard elastic weight consolidation, and that we obtain competitive results when compared to the state-of-the-art in lifelong learning without forgetting.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICPR
Notes LAMP; ADAS; 601.305; 601.109; 600.124; 600.106; 602.200; 600.120; 600.118 Approved no
Call Number Admin @ si @ LMH2018 Serial 3160
Permanent link to this record
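The approach above improves Elastic Weight Consolidation (EWC) by rotating the parameter space so that a diagonal Fisher approximation becomes more accurate. The sketch below shows the underlying EWC ingredients (an empirical diagonal Fisher estimate and the quadratic penalty) on a toy model; the rotation itself is only indicated in a comment, and the model, batch and lambda value are assumptions.

# Hedged sketch: diagonal-Fisher EWC penalty on a toy model. The rotation that
# the paper introduces is noted in a comment; everything else is an assumption.
import torch
import torch.nn as nn

model = nn.Linear(20, 5)
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}

def diagonal_fisher(model, x, y):
    """Empirical diagonal Fisher estimate E[(d log p / d theta)^2] from one batch."""
    model.zero_grad()
    nn.functional.cross_entropy(model(x), y).backward()
    return {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}

# In the paper, parameters are first re-expressed in a rotated basis (derived
# from layer activations) so that this diagonal approximation is more accurate.
fisher = diagonal_fisher(model, torch.randn(32, 20), torch.randint(0, 5, (32,)))

def ewc_penalty(model, lam=100.0):
    return lam / 2 * sum((fisher[n] * (p - old_params[n]) ** 2).sum()
                         for n, p in model.named_parameters())

# New-task loss plus the consolidation penalty.
new_x, new_y = torch.randn(32, 20), torch.randint(0, 5, (32,))
loss = nn.functional.cross_entropy(model(new_x), new_y) + ewc_penalty(model)
print(float(loss))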
 

 
Author Vacit Oguz Yazici; Joost Van de Weijer; Arnau Ramisa
Title Color Naming for Multi-Color Fashion Items Type Conference Article
Year 2018 Publication 6th World Conference on Information Systems and Technologies Abbreviated Journal
Volume 747 Issue Pages 64-73
Keywords Deep learning; Color; Multi-label
Abstract There exists a significant amount of research on color naming of single-colored objects. However, in reality many fashion objects consist of multiple colors. Currently, searching fashion datasets for multi-colored objects can be a laborious task. Therefore, in this paper we focus on color naming for images with multi-color fashion items. We collect a dataset of images that may have from one up to four colors. We annotate the images with the 11 basic colors of the English language. We experiment with several designs for deep neural networks with different losses. We show that explicitly estimating the number of colors in the fashion item leads to improved results.
Address Naples; March 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WORLDCIST
Notes LAMP; 600.109; 601.309; 600.120 Approved no
Call Number Admin @ si @ YWR2018 Serial 3161
Permanent link to this record
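The work above names colors of multi-color fashion items with a multi-label model and finds that explicitly estimating the number of colors helps. The sketch below pairs a multi-label head over the 11 basic English color terms with a color-count head; the stand-in backbone, the head sizes and the summed losses are assumptions, not the paper's design.

# Hedged sketch: multi-label colour naming plus an explicit colour-count head.
# Backbone, sizes and losses are illustrative assumptions.
import torch
import torch.nn as nn

BASIC_COLOURS = ["black", "blue", "brown", "grey", "green", "orange",
                 "pink", "purple", "red", "white", "yellow"]

class ColourNamer(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                                      nn.Linear(3 * 4 * 4, feat_dim), nn.ReLU())
        self.colour_head = nn.Linear(feat_dim, len(BASIC_COLOURS))  # multi-label logits
        self.count_head = nn.Linear(feat_dim, 4)                    # 1 to 4 colours

    def forward(self, x):
        f = self.backbone(x)
        return self.colour_head(f), self.count_head(f)

model = ColourNamer()
images = torch.rand(8, 3, 224, 224)
colour_targets = torch.randint(0, 2, (8, len(BASIC_COLOURS))).float()
count_targets = colour_targets.sum(dim=1).clamp(1, 4).long() - 1     # class index 0..3

colour_logits, count_logits = model(images)
loss = (nn.functional.binary_cross_entropy_with_logits(colour_logits, colour_targets)
        + nn.functional.cross_entropy(count_logits, count_targets))
loss.backward()
print(float(loss))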
 

 
Author Felipe Codevilla; Antonio Lopez; Vladlen Koltun; Alexey Dosovitskiy
Title On Offline Evaluation of Vision-based Driving Models Type Conference Article
Year 2018 Publication 15th European Conference on Computer Vision Abbreviated Journal
Volume 11219 Issue Pages 246-262
Keywords Autonomous driving; deep learning
Abstract Autonomous driving models should ideally be evaluated by deploying them on a fleet of physical vehicles in the real world. Unfortunately, this approach is not practical for the vast majority of researchers. An attractive alternative is to evaluate models offline, on a pre-collected validation dataset with ground truth annotation. In this paper, we investigate the relation between various online and offline metrics for evaluation of autonomous driving models. We find that offline prediction error is not necessarily correlated with driving quality, and two models with identical prediction error can differ dramatically in their driving performance. We show that the correlation of offline evaluation with driving quality can be significantly improved by selecting an appropriate validation dataset and suitable offline metrics.
Address Munich; September 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECCV
Notes ADAS; 600.124; 600.118 Approved no
Call Number Admin @ si @ CLK2018 Serial 3162
Permanent link to this record
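The study above asks how well offline metrics predict online driving quality. A minimal version of that kind of analysis is sketched below: given per-model offline prediction errors and online success rates, compute their correlation. The numbers are hypothetical placeholders and the choice of Pearson and Spearman correlation is an assumption, not the paper's protocol.

# Hedged sketch: correlating an offline metric with online driving quality.
# All values below are hypothetical placeholders, not results from the paper.
import numpy as np
from scipy.stats import pearsonr, spearmanr

# One entry per trained driving model (hypothetical values).
offline_mse = np.array([0.12, 0.08, 0.15, 0.09, 0.11])      # steering prediction error
online_success = np.array([0.55, 0.80, 0.60, 0.50, 0.70])   # fraction of completed routes

r, _ = pearsonr(offline_mse, online_success)
rho, _ = spearmanr(offline_mse, online_success)
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
# A weak correlation would illustrate the paper's point that low offline error
# does not guarantee good driving.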
 

 
Author F. Javier Sanchez; Jorge Bernal
Title Use of Software Tools for Real-time Monitoring of Learning Processes: Application to Compilers subject Type Conference Article
Year 2018 Publication 4th International Conference of Higher Education Advances Abbreviated Journal
Volume Issue Pages 1359-1366
Keywords Monitoring; Evaluation tool; Gamification; Student motivation
Abstract The effective implementation of the European Higher Education Area has meant a change in the focus of the learning process, placing the student at its very center. This shift of focus requires strong involvement and fluent communication between teachers and students to succeed. Considering the difficulties associated with motivating students to take a more active role in the learning process, we explore how the use of a software tool can help both actors to improve the learning experience. We present a tool that can help students obtain instantaneous feedback on their progress in the subject, as well as provide teachers with useful information about the evolution of knowledge acquisition in each of the subject areas. We compare the performance achieved by students in two academic years: the results show an improvement in overall performance which, after observing the graphs provided by our tool, can be associated with an increase in students' interest in the subject.
Address Valencia; June 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference HEAD
Notes MV; no proj Approved no
Call Number Admin @ si @ SaB2018 Serial 3165
Permanent link to this record
 

 
Author Lei Kang; Juan Ignacio Toledo; Pau Riba; Mauricio Villegas; Alicia Fornes; Marçal Rusiñol
Title Convolve, Attend and Spell: An Attention-based Sequence-to-Sequence Model for Handwritten Word Recognition Type Conference Article
Year 2018 Publication 40th German Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 459-472
Keywords
Abstract This paper proposes Convolve, Attend and Spell, an attention-based sequence-to-sequence model for handwritten word recognition. The proposed architecture has three main parts: an encoder, consisting of a CNN and a bi-directional GRU; an attention mechanism devoted to focusing on the pertinent features; and a decoder formed by a one-directional GRU, able to spell the corresponding word character by character. Compared with the recent state-of-the-art, our model achieves competitive results on the IAM dataset without needing any pre-processing step, predefined lexicon or language model. Code and additional results are available at https://github.com/omni-us/research-seq2seq-HTR.
Address Stuttgart; Germany; October 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference GCPR
Notes DAG; 600.097; 603.057; 302.065; 601.302; 600.084; 600.121; 600.129 Approved no
Call Number Admin @ si @ KTR2018 Serial 3167
Permanent link to this record
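The model above follows the Convolve (CNN encoder), Attend (attention) and Spell (recurrent character decoder) pattern. The sketch below is a deliberately tiny stand-in for that family of architectures, with a convolutional encoder, dot-product attention and a greedy GRU decoder; layer sizes, the attention form and the decoding loop are assumptions, and the authors' actual code lives at the GitHub link in the record.

# Hedged sketch: a tiny attention-based seq2seq handwriting recogniser in the
# spirit of CNN encoder + attention + recurrent character decoder. All sizes
# and the dot-product attention are assumptions, not the authors' model.
import torch
import torch.nn as nn

class TinySeq2SeqHTR(nn.Module):
    def __init__(self, num_chars=80, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(1, hidden, 3, stride=2, padding=1), nn.ReLU())
        self.enc = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.embed = nn.Embedding(num_chars, hidden)
        self.attn_proj = nn.Linear(hidden, 2 * hidden)          # match decoder state to encoder dim
        self.dec = nn.GRUCell(hidden + 2 * hidden, hidden)
        self.out = nn.Linear(hidden, num_chars)

    def forward(self, images, max_len=10):
        feats = self.cnn(images).mean(dim=2).permute(0, 2, 1)   # collapse height -> (B, W, hidden)
        enc_out, _ = self.enc(feats)                            # (B, W, 2*hidden)
        B = images.size(0)
        h = enc_out.new_zeros(B, self.out.in_features)          # decoder state (B, hidden)
        token = torch.zeros(B, dtype=torch.long)                # assume index 0 is <sos>
        logits = []
        for _ in range(max_len):
            score = torch.bmm(enc_out, self.attn_proj(h).unsqueeze(2))  # (B, W, 1)
            attn = torch.softmax(score, dim=1)
            context = (attn * enc_out).sum(dim=1)                       # (B, 2*hidden)
            h = self.dec(torch.cat([self.embed(token), context], dim=1), h)
            step_logits = self.out(h)
            logits.append(step_logits)
            token = step_logits.argmax(dim=1)                           # greedy decoding
        return torch.stack(logits, dim=1)                               # (B, max_len, num_chars)

model = TinySeq2SeqHTR()
print(model(torch.rand(2, 1, 32, 256)).shape)  # torch.Size([2, 10, 80])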