Author Fernando Vilariño
  Title Bringing and keeping all the stakeholders together: creating a catalog of models of governance for innovation Type Miscellaneous
  Year 2017 Publication Open Living Lab Days Report Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Krakow; August 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MV; no menciona;SIAI Approved no  
  Call Number Admin @ si @ Vil2017b Serial 3033  
 

 
Author Simon Jégou; Michal Drozdzal; David Vazquez; Adriana Romero; Yoshua Bengio
  Title The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation Type Conference Article
  Year 2017 Publication IEEE Conference on Computer Vision and Pattern Recognition Workshops Abbreviated Journal  
  Volume Issue Pages  
  Keywords Semantic Segmentation  
  Abstract State-of-the-art approaches for semantic image segmentation are built on Convolutional Neural Networks (CNNs). The typical segmentation architecture is composed of (a) a downsampling path responsible for extracting coarse semantic features, followed by (b) an upsampling path trained to recover the input image resolution at the output of the model and, optionally, (c) a post-processing module (e.g. Conditional Random Fields) to refine the model predictions.

Recently, a new CNN architecture, Densely Connected Convolutional Networks (DenseNets), has shown excellent results on image classification tasks. The idea of DenseNets is based on the observation that if each layer is directly connected to every other layer in a feed-forward fashion then the network will be more accurate and easier to train.

In this paper, we extend DenseNets to deal with the problem of semantic segmentation. We achieve state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any further post-processing module or pretraining. Moreover, due to the smart construction of the model, our approach has far fewer parameters than the currently published best entries for these datasets.
 
  Address Honolulu; USA; July 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes MILAB; ADAS; 600.076; 600.085; 601.281 Approved no  
  Call Number ADAS @ adas @ JDV2016 Serial 2866  
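
The record above describes the FC-DenseNet ("Tiramisu") idea: dense blocks whose layers are concatenated in a feed-forward fashion and chained along a downsampling and an upsampling path. Below is a minimal PyTorch sketch of a dense block and a transition-down step; the layer counts, growth rate and channel sizes are illustrative assumptions, not the configuration published in the paper.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """BN -> ReLU -> 3x3 convolution producing `growth_rate` new feature maps."""
    def __init__(self, in_ch, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_ch)
        self.conv = nn.Conv2d(in_ch, growth_rate, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(torch.relu(self.bn(x)))

class DenseBlock(nn.Module):
    """Every layer sees the concatenation of the block input and all previous layers."""
    def __init__(self, in_ch, growth_rate, n_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            [DenseLayer(in_ch + i * growth_rate, growth_rate) for i in range(n_layers)])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # Return only the newly produced maps; the caller concatenates them
        # with the block input to form the skip connection.
        return torch.cat(features[1:], dim=1)

class TransitionDown(nn.Module):
    """1x1 convolution followed by 2x2 max pooling (downsampling path)."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, kernel_size=1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(self.conv(x))

if __name__ == "__main__":
    x = torch.randn(1, 48, 64, 64)                 # toy feature maps
    block = DenseBlock(in_ch=48, growth_rate=16, n_layers=4)
    down = TransitionDown(48 + 4 * 16)             # hypothetical channel count
    y = down(torch.cat([x, block(x)], dim=1))
    print(y.shape)                                 # torch.Size([1, 112, 32, 32])
```

The full network stacks several such blocks on both paths and mirrors TransitionDown with transposed-convolution upsampling; this sketch only shows the dense connectivity pattern.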
 

 
Author Patricia Suarez; Angel Sappa; Boris X. Vintimilla
  Title Infrared Image Colorization based on a Triplet DCGAN Architecture Type Conference Article
  Year 2017 Publication IEEE Conference on Computer Vision and Pattern Recognition Workshops Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract This paper proposes a novel approach for colorizing near infrared (NIR) images using Deep Convolutional Generative Adversarial Network (DCGAN) architectures. The proposed approach is based on the usage of a triplet model that learns each color channel independently, in a more homogeneous way. It allows fast convergence during training, obtaining a greater similarity between the given NIR image and the corresponding ground truth. The proposed approach has been evaluated with a large dataset of NIR images and compared with a recent approach, which is also based on a GAN architecture but in which all the color channels are obtained at the same time.
 
  Address Honolulu; Hawaii; USA; July 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes ADAS; 600.086; 600.118 Approved no  
  Call Number Admin @ si @ SSV2017b Serial 2920  
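
The triplet idea above, one generator per color channel all fed with the same NIR image, can be sketched as follows. This is a toy stand-in under assumed layer sizes, not the authors' network, and the adversarial training loop is omitted.

```python
import torch
import torch.nn as nn

class ChannelGenerator(nn.Module):
    """Tiny encoder-decoder mapping a 1-channel NIR patch to one color channel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, nir):
        return self.net(nir)

class TripletColorizer(nn.Module):
    """Three independent generators, one per RGB channel, sharing the NIR input."""
    def __init__(self):
        super().__init__()
        self.red = ChannelGenerator()
        self.green = ChannelGenerator()
        self.blue = ChannelGenerator()

    def forward(self, nir):
        return torch.cat([self.red(nir), self.green(nir), self.blue(nir)], dim=1)

if __name__ == "__main__":
    nir = torch.randn(2, 1, 64, 64)      # batch of fake NIR patches
    rgb = TripletColorizer()(nir)
    print(rgb.shape)                     # torch.Size([2, 3, 64, 64])
```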
 

 
Author Lluis Gomez; Y. Patel; Marçal Rusiñol; C.V. Jawahar; Dimosthenis Karatzas
  Title Self‐supervised learning of visual features through embedding images into text topic spaces Type Conference Article
  Year 2017 Publication 30th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract End-to-end training from scratch of current deep architectures for new computer vision problems would require ImageNet-scale datasets, and this is not always possible. In this paper we present a method that is able to take advantage of freely available multi-modal content to train computer vision algorithms without human supervision. We put forward the idea of performing self-supervised learning of visual features by mining a large-scale corpus of multi-modal (text and image) documents. We show that discriminative visual features can be learnt efficiently by training a CNN to predict the semantic context in which a particular image is more likely to appear as an illustration. For this we leverage the hidden semantic structures discovered in the text corpus with a well-known topic modeling technique. Our experiments demonstrate state-of-the-art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised or naturally supervised approaches.
  Address Honolulu; Hawaii; July 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPR  
  Notes DAG; 600.084; 600.121 Approved no  
  Call Number Admin @ si @ GPR2017 Serial 2889  
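
The self-supervision signal above, predict the topic distribution of the text a given image illustrates, can be sketched with an off-the-shelf LDA model and a soft-target cross-entropy loss. The corpus, the number of topics and the tiny backbone below are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# 1) Discover hidden topics in the text that accompanies each image.
docs = ["mountain snow hiking trail", "stock market shares economy",
        "football match goal league", "mountain climbing rope summit"]
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)
topic_targets = torch.tensor(lda.transform(counts), dtype=torch.float32)  # (N, 3)

# 2) A toy CNN trained to predict the topic distribution of the document
#    each image illustrates (soft targets, no human labels involved).
cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 3))

images = torch.randn(len(docs), 3, 64, 64)       # stand-in for the real images
optimizer = torch.optim.Adam(cnn.parameters(), lr=1e-3)

for _ in range(5):                               # tiny illustrative loop
    logits = cnn(images)
    # Soft-target cross-entropy: -sum_k t_k * log p_k
    loss = -(topic_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(float(loss))
```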
 

 
Author Cesar de Souza; Adrien Gaidon; Yohann Cabon; Antonio Lopez
  Title Procedural Generation of Videos to Train Deep Action Recognition Networks Type Conference Article
  Year 2017 Publication 30th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 2594-2604  
  Keywords  
  Abstract Deep learning for human action recognition in videos is making significant progress, but is slowed down by its dependency on expensive manual labeling of large video collections. In this work, we investigate the generation of synthetic training data for action recognition, as it has recently shown promising results for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation and other computer graphics techniques of modern game engines. We generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for "Procedural Human Action Videos". It contains a total of 39,982 videos, with more than 1,000 examples for each action of 35 categories. Our approach is not limited to existing motion capture sequences, and we procedurally define 14 synthetic actions. We introduce a deep multi-task representation learning architecture to mix synthetic and real videos, even if the action categories differ. Our experiments on the UCF101 and HMDB51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance, significantly outperforming fine-tuning state-of-the-art unsupervised generative models of videos.
 
  Address Honolulu; Hawaii; July 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPR  
  Notes ADAS; 600.076; 600.085; 600.118 Approved no  
  Call Number Admin @ si @ SGC2017 Serial 3051  
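
The deep multi-task representation learning mentioned above, a shared backbone with separate classification heads for the synthetic (PHAV) and real label spaces, can be sketched as below. The 512-dimensional inputs stand in for features from a video CNN, and all sizes and class counts are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class MultiTaskActionNet(nn.Module):
    """Shared clip encoder with one head per label space (synthetic vs. real)."""
    def __init__(self, feat_dim=256, n_synth_classes=35, n_real_classes=101):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(512, feat_dim), nn.ReLU())
        self.synth_head = nn.Linear(feat_dim, n_synth_classes)
        self.real_head = nn.Linear(feat_dim, n_real_classes)

    def forward(self, clip_features, domain):
        shared = self.backbone(clip_features)
        head = self.synth_head if domain == "synthetic" else self.real_head
        return head(shared)

model = MultiTaskActionNet()
criterion = nn.CrossEntropyLoss()

# One mixed training step: a synthetic batch and a real batch share the backbone.
synth_x, synth_y = torch.randn(8, 512), torch.randint(0, 35, (8,))
real_x, real_y = torch.randn(8, 512), torch.randint(0, 101, (8,))
loss = criterion(model(synth_x, "synthetic"), synth_y) + \
       criterion(model(real_x, "real"), real_y)
loss.backward()
print(float(loss))
```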
 

 
Author Lasse Martensson; Anders Hast; Alicia Fornes
  Title Word Spotting as a Tool for Scribal Attribution Type Conference Article
  Year 2017 Publication 2nd Conference of the Association of Digital Humanities in the Nordic Countries Abbreviated Journal
  Volume Issue Pages 87-89  
  Keywords  
  Abstract  
  Address Gothenburg; Sweden; March 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-91-88348-83-8 Medium  
  Area Expedition Conference DHN  
  Notes DAG; 600.097; 600.121 Approved no  
  Call Number Admin @ si @ MHF2017 Serial 2954  
 

 
Author Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen
  Title Tex-Nets: Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition Type Conference Article
  Year 2017 Publication 19th International Conference on Multimodal Interaction Abbreviated Journal  
  Volume Issue Pages  
  Keywords Convolutional Neural Networks; Texture Recognition; Local Binary Patterns
  Abstract Recognizing materials and textures in realistic imaging conditions is a challenging computer vision problem. For many years, orderless representations based on local features were the dominant approach to texture recognition. Recently, deep local features extracted from the intermediate layers of a Convolutional Neural Network (CNN) have been used as filter banks. These dense local descriptors from a deep model, when encoded with Fisher Vectors, have been shown to provide excellent results for texture recognition. The CNN models employed in such approaches take RGB patches as input and train on a large amount of labeled images. We show that CNN models, which we call TEX-Nets, trained using mapped coded images with explicit texture information provide complementary information to the standard deep models trained on RGB patches. We further investigate two deep architectures, namely early and late fusion, to combine the texture and color information. Experiments on benchmark texture datasets clearly demonstrate that TEX-Nets provide complementary information to the standard RGB deep network. Our approach provides large gains of 4.8%, 3.5%, 2.6% and 4.1% in accuracy on the DTD, KTH-TIPS-2a, KTH-TIPS-2b and Texture-10 datasets, respectively, compared to the standard RGB network of the same architecture. Further, our final combination leads to consistent improvements over the state-of-the-art on all four datasets.
  Address Glasgow; Scotland; November 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ACM  
  Notes LAMP; 600.109; 600.068; 600.120 Approved no  
  Call Number Admin @ si @ RKW2017 Serial 3038  
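
TEX-Nets feed the network with mapped, texture-coded images rather than raw RGB. A minimal sketch of producing such a coded image with scikit-image's uniform local binary patterns follows; how the map is chosen and how the CNN consumes the coded channels are simplifications, not the paper's exact pipeline.

```python
import numpy as np
from skimage import data
from skimage.feature import local_binary_pattern

# Grayscale input image (`camera` ships with scikit-image, used here as a stand-in).
gray = data.camera().astype(float)

# Uniform LBP with 8 neighbours at radius 1: each pixel gets a code in [0, 9].
P, R = 8, 1
lbp = local_binary_pattern(gray, P, R, method="uniform")

# A simple "mapped coded image": rescale codes to [0, 1] so they can be stacked
# as extra input channels (or replace RGB) for a texture-encoding CNN.
lbp_image = lbp / lbp.max()
print(lbp_image.shape, lbp_image.min(), lbp_image.max())
```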
 

 
Author Arash Akbarinia; Karl R. Gegenfurtner
  Title Metameric Mismatching in Natural and Artificial Reflectances Type Journal Article
  Year 2017 Publication Journal of Vision Abbreviated Journal JV  
  Volume 17 Issue 10 Pages 390-390  
  Keywords Metamer; colour perception; spectral discrimination; photoreceptors  
  Abstract The human visual system and most digital cameras sample the continuous spectral power distribution through three classes of receptors. This implies that two distinct spectral reflectances can result in identical tristimulus values under one illuminant and differ under another – the problem of metamer mismatching. It is still debated how frequently this issue arises in the real world, using naturally occurring reflectance functions and common illuminants.

We gathered more than ten thousand spectral reflectance samples from various sources, covering a wide range of environments (e.g., flowers, plants, Munsell chips), and evaluated their responses under a number of natural and artificial sources of light. For each pair of reflectance functions, we estimated the perceived difference using the CIE-defined ΔE2000 distance metric in Lab color space.

The degree of metamer mismatching depended on the lower threshold value l below which two samples are considered to produce equal sensor excitations (ΔE < l), and on the higher threshold value h above which they are considered different. For example, for l=h=1, we found that 43,129 comparisons out of a total of 6×10⁷ pairs would be considered metameric (1 in 10⁴). For l=1 and h=5, this number reduced to 705 metameric pairs (2 in 10⁶). Extreme metamers, for instance l=1 and h=10, were rare (22 pairs, or 6 in 10⁸), as were instances where the two members of a metameric pair would be assigned to different color categories. Not unexpectedly, we observed variations among the different reflectance databases and illuminant spectra, with metamers occurring more frequently under artificial illuminants than under natural ones.

Overall, our numbers are not very different from those obtained earlier (Foster et al, JOSA A, 2006). However, our results also show that the degree of metamerism is typically not very strong and that category switches hardly ever occur.
 
  Address Florida, USA; May 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes NEUROBIT; no menciona Approved no  
  Call Number Admin @ si @ AkG2017 Serial 2899  
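
The counting procedure described above, pairs that match under one illuminant (ΔE < l) but differ under another (ΔE > h), can be sketched once Lab coordinates under the two illuminants are available. The random Lab values below only stand in for reflectances actually rendered under two real illuminants; thresholds and sample count are toy values.

```python
import numpy as np
from itertools import combinations
from skimage.color import deltaE_ciede2000

rng = np.random.default_rng(0)
n = 200                                   # number of reflectance samples (toy value)
lab_illum_a = rng.uniform([0, -40, -40], [100, 40, 40], size=(n, 3))
# Pretend the second illuminant shifts each sample slightly and unevenly.
lab_illum_b = lab_illum_a + rng.normal(scale=2.0, size=(n, 3))

l_thresh, h_thresh = 1.0, 5.0
metameric_pairs = 0
for i, j in combinations(range(n), 2):
    de_a = deltaE_ciede2000(lab_illum_a[i], lab_illum_a[j])
    de_b = deltaE_ciede2000(lab_illum_b[i], lab_illum_b[j])
    # Metamer mismatch: indistinguishable under illuminant A, distinct under B.
    if de_a < l_thresh and de_b > h_thresh:
        metameric_pairs += 1
print(metameric_pairs, "mismatching pairs out of", n * (n - 1) // 2)
```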
 

 
Author Mireia Sole; Joan Blanco; Debora Gil; Oliver Valero; G. Fonseka; M. Lawrie; Francesca Vidal; Zaida Sarrate
  Title Chromosome Territories in Mice Spermatogenesis: A new three-dimensional methodology of study Type Conference Article
  Year 2017 Publication 11th European Cytogenetics Conference Abbreviated Journal
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Florence; Italy; July 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ECA  
  Notes IAM; 600.096; 600.145 Approved no  
  Call Number Admin @ si @ SBG2017a Serial 2936  
 

 
Author Veronica Romero; Alicia Fornes; Enrique Vidal; Joan Andreu Sanchez
  Title Information Extraction in Handwritten Marriage Licenses Books Using the MGGI Methodology Type Conference Article
  Year 2017 Publication 8th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal  
  Volume 10255 Issue Pages 287-294  
  Keywords Handwritten Text Recognition; Information extraction; Language modeling; MGGI; Categories-based language model  
  Abstract Historical records of daily activities provide intriguing insights into the life of our ancestors, useful for demographic and genealogical research. For example, marriage license books have been used for centuries by ecclesiastical and secular institutions to register marriages. These books follow a simple textual structure in their records, with an evolving vocabulary mainly composed of proper names that change over time. This distinct vocabulary makes automatic transcription and semantic information extraction difficult tasks. In previous works we studied the use of category-based language models and how a Grammatical Inference technique known as MGGI could improve the accuracy of these tasks. In this work we analyze the main causes of the semantic errors observed in previous results and apply a better implementation of the MGGI technique to solve these problems. Using the resulting language model, transcription and information extraction experiments have been carried out, and the results support our proposed approach.
  Address Faro; Portugal; June 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor L.A. Alexandre; J. Salvador Sanchez; Joao M. F. Rodrigues
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-3-319-58837-7 Medium  
  Area Expedition Conference IbPRIA  
  Notes DAG; 602.006; 600.097; 600.121 Approved no  
  Call Number Admin @ si @ RFV2017 Serial 2952  
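
The abstract above relies on category-based language models, in which proper names are replaced by semantic category tags before the model is estimated. The sketch below builds such a model as a plain add-one-smoothed bigram over tagged tokens; the toy transcripts and tag set are invented, and the MGGI state-relabelling step itself is not reproduced here.

```python
from collections import Counter

# Toy tagged training transcripts from marriage-license records:
# proper names have already been replaced by category tags.
records = [
    "<day> married <groom_name> <groom_surname> farmer to <bride_name> <bride_surname>",
    "<day> married <groom_name> <groom_surname> tailor to <bride_name> <bride_surname>",
]

bigrams = Counter()
unigrams = Counter()
vocab = set()
for rec in records:
    tokens = ["<s>"] + rec.split() + ["</s>"]
    vocab.update(tokens)
    unigrams.update(tokens[:-1])
    bigrams.update(zip(tokens[:-1], tokens[1:]))

def prob(prev, word):
    """Add-one-smoothed bigram probability P(word | prev)."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))

print(prob("married", "<groom_name>"))   # observed transition -> higher probability
print(prob("married", "farmer"))         # unseen transition -> smoothed low value
```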
 

 
Author Marc Bolaños; Alvaro Peris; Francisco Casacuberta; Petia Radeva
  Title VIBIKNet: Visual Bidirectional Kernelized Network for Visual Question Answering Type Conference Article
  Year 2017 Publication 8th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal  
  Volume Issue Pages  
  Keywords Visual Question Answering; Convolutional Neural Networks; Long Short-Term Memory networks
  Abstract In this paper, we address the problem of visual question answering by proposing a novel model, called VIBIKNet. Our model is based on integrating Kernelized Convolutional Neural Networks and Long Short-Term Memory units to generate an answer given a question about an image. We prove that VIBIKNet is an optimal trade-off between accuracy and computational load, in terms of memory and time consumption. We validate our method on the VQA challenge dataset and compare it to the top performing methods in order to illustrate its performance and speed.
  Address Faro; Portugal; June 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IbPRIA  
  Notes MILAB; no proj Approved no  
  Call Number Admin @ si @ BPC2017 Serial 2939  
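
VIBIKNet combines image features with a question encoder. A toy stand-in is sketched below: a bidirectional LSTM over the question, fused with a precomputed image vector by an element-wise product. The fusion choice, the vocabulary size and the answer set are assumptions, and the image vector merely stands in for the Kernelized CNN features used in the paper.

```python
import torch
import torch.nn as nn

class ToyVQA(nn.Module):
    """Question encoder (bidirectional LSTM) fused with a precomputed image vector."""
    def __init__(self, vocab_size=1000, n_answers=100, img_dim=512, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 128)
        self.lstm = nn.LSTM(128, hidden, batch_first=True, bidirectional=True)
        self.img_proj = nn.Linear(img_dim, 2 * hidden)
        self.classifier = nn.Linear(2 * hidden, n_answers)

    def forward(self, question_ids, image_features):
        _, (h, _) = self.lstm(self.embed(question_ids))
        q = torch.cat([h[-2], h[-1]], dim=1)           # final forward/backward states
        fused = q * self.img_proj(image_features)      # element-wise fusion (an assumption)
        return self.classifier(fused)

if __name__ == "__main__":
    questions = torch.randint(0, 1000, (4, 12))        # 4 questions, 12 tokens each
    image_feats = torch.randn(4, 512)                  # stand-in for KCNN image features
    logits = ToyVQA()(questions, image_feats)
    print(logits.shape)                                # torch.Size([4, 100])
```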
 

 
Author Hana Jarraya; Oriol Ramos Terrades; Josep Llados
  Title Graph Embedding through Probabilistic Graphical Model applied to Symbolic Graphs Type Conference Article
  Year 2017 Publication 8th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal  
  Volume Issue Pages  
  Keywords Attributed Graph; Probabilistic Graphical Model; Graph Embedding; Structured Support Vector Machines  
  Abstract We propose a new Graph Embedding (GEM) method that takes advantage of structural pattern representation. It models an Attributed Graph (AG) as a Probabilistic Graphical Model (PGM), and then learns the parameters of this PGM, which are arranged in a vector. This vector is a signature of the AG in a lower-dimensional vectorial space. We apply Structured Support Vector Machines (SSVM) for the classification task. As a first attempt, the results on the GREC dataset are encouraging enough to go further in this direction.
  Address Faro; Portugal; June 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IbPRIA  
  Notes DAG; 600.097; 600.121 Approved no  
  Call Number Admin @ si @ JRL2017a Serial 2953  
 

 
Author Laura Lopez-Fuentes; Joost Van de Weijer; Marc Bolaños; Harald Skinnemoen
  Title Multi-modal Deep Learning Approach for Flood Detection Type Conference Article
  Year 2017 Publication MediaEval Benchmarking Initiative for Multimedia Evaluation Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract In this paper we propose a multi-modal deep learning approach to detect floods in social media posts. Social media posts normally contain some metadata and/or visual information, so we use this information to detect floods. The model is based on a Convolutional Neural Network, which extracts the visual features, and a bidirectional Long Short-Term Memory network, which extracts the semantic features from the textual metadata. We validate the method on images extracted from Flickr which contain both visual information and metadata, and compare the results when using both modalities, visual information only, or metadata only. This work has been done in the context of the MediaEval Multimedia Satellite Task.
 
  Address Dublin; Ireland; September 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference MediaEval  
  Notes LAMP; 600.084; 600.109; 600.120 Approved no  
  Call Number Admin @ si @ LWB2017a Serial 2974  
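
The model above pairs a CNN over the image with a bidirectional LSTM over the textual metadata and fuses the two for a flood / no-flood decision. A minimal sketch of that fusion follows; all layer sizes, the tokenization and the vocabulary are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class FloodNet(nn.Module):
    """Toy CNN over the image plus a bidirectional LSTM over metadata tokens,
    concatenated for a binary flood / no-flood classification."""
    def __init__(self, vocab_size=5000, emb=64, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(32 + 2 * hidden, 2)

    def forward(self, image, metadata_tokens):
        visual = self.cnn(image)
        _, (h, _) = self.lstm(self.embed(metadata_tokens))
        textual = torch.cat([h[-2], h[-1]], dim=1)
        return self.classifier(torch.cat([visual, textual], dim=1))

if __name__ == "__main__":
    imgs = torch.randn(2, 3, 64, 64)
    meta = torch.randint(0, 5000, (2, 20))    # e.g. tokenized tags / description
    print(FloodNet()(imgs, meta).shape)       # torch.Size([2, 2])
```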
 

 
Author Fernando Vilariño
  Title Citizen experience as a powerful communication tool: Open Innovation and the role of Living Labs in EU Type Conference Article
  Year 2017 Publication European Conference of Science Journalists Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract The Open Innovation 2.0 model spearheaded by the European Commission introduces conceptual changes in how innovation processes should be developed. The notion of an innovation ecosystem, and the active participation of citizens (and all the different actors of the quadruple helix) in innovation processes, open up new channels for scientific communication, where citizens (and all actors) can be reached naturally, facilitating the spread of the scientific message in their communities. Unleashing the power of such mechanisms while maintaining control over the scientific communication carried out through such channels presents an opportunity and a challenge at the same time.

This workshop will look into key concepts that the Open Innovation 2.0 EU model introduces, and what new opportunities for communication they bring about. Specifically, we will focus on Living Labs, as a key instrument for implementing this innovation model at the regional level, and their potential in creating scientific dissemination spaces.
 
  Address Copenhagen; June 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ECSJ  
  Notes MV; 600.097;SIAI Approved no  
  Call Number Admin @ si @ Vil2017a Serial 3032  
 

 
Author Patricia Suarez; Angel Sappa; Boris X. Vintimilla
  Title Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture Type Conference Article
  Year 2017 Publication 19th international conference on image analysis and processing Abbreviated Journal  
  Volume Issue Pages  
  Keywords CNN in Multispectral Imaging; Image Colorization  
  Abstract This paper focuses on near infrared (NIR) image colorization by using a Conditional Deep Convolutional Generative Adversarial Network (CDCGAN) architecture model. The proposed architecture is based on the usage of a conditional probabilistic generative model. Firstly, it learns to colorize the given input image by using a triplet model architecture that tackles every channel independently. In the proposed model, the final layer of the red channel considers the infrared image to enhance the details, resulting in a sharp RGB image. Then, in the second stage, a discriminative model is used to estimate the probability that the generated image came from the training dataset, rather than being automatically generated. Experimental results with a large set of real images are provided, showing the validity of the proposed approach. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.
  Address Catania; Italy; September 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICIAP  
  Notes ADAS; MSIAU; 600.086; 600.122; 600.118 Approved no  
  Call Number Admin @ si @ SSV2017c Serial 3016  
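
The second stage described above, a discriminator that estimates whether a colorized image is a real color photograph or a generated one, can be sketched as a conditional discriminator that sees the NIR input concatenated with the RGB candidate. Layer sizes and the exact conditioning are assumptions, not the published architecture; only the discriminator step is shown.

```python
import torch
import torch.nn as nn

class ConditionalDiscriminator(nn.Module):
    """Scores an (NIR, RGB) pair: high for real color images, low for generated ones."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    def forward(self, nir, rgb):
        return self.net(torch.cat([nir, rgb], dim=1))

if __name__ == "__main__":
    d = ConditionalDiscriminator()
    bce = nn.BCEWithLogitsLoss()
    nir = torch.randn(4, 1, 64, 64)
    real_rgb = torch.randn(4, 3, 64, 64)
    fake_rgb = torch.randn(4, 3, 64, 64)      # stand-in for a generator's output
    loss = bce(d(nir, real_rgb), torch.ones(4, 1)) + \
           bce(d(nir, fake_rgb), torch.zeros(4, 1))
    loss.backward()
    print(float(loss))
```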