Author Fernando Vilariño
Title Computer Vision and Performing Arts Type Conference Article
Year 2015 Publication Korean Scholars of Marketing Science Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Seoul; Korea; October 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference KAMS
Notes MV;SIAI Approved no
Call Number Admin @ si @Vil2015 Serial 2799
 

 
Author Fernando Vilariño; Dan Norton; Onur Ferhat
Title Memory Fields: DJs in the Library Type Conference Article
Year 2015 Publication 21st Symposium of Electronic Arts Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Vancouver; Canada; August 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ISEA
Notes ;SIAI Approved no
Call Number Admin @ si @VNF2015 Serial 2800
 

 
Author Fernando Vilariño; Dan Norton; Onur Ferhat
Title The Eye Doesn't Click – Eyetracking and Digital Content Interaction Type Conference Article
Year 2016 Publication 4S/EASST Conference Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Barcelona; Spain; September 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference EASST
Notes MV; 600.097;SIAI Approved no
Call Number Admin @ si @VNF2016 Serial 2801
 

 
Author Fernando Vilariño
Title Giving Value to digital collections in the Public Library Type Conference Article
Year 2016 Publication Librarian 2020 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Brussels; Belgium; October 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference LIB
Notes MV; 600.097;SIAI Approved no
Call Number Admin @ si @Vil2016a Serial 2802
 

 
Author Fernando Vilariño; Dimosthenis Karatzas
Title A Living Lab approach for Citizen Science in Libraries Type Conference Article
Year 2016 Publication 1st International ECSA Conference Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Berlin; Germany; May 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECSA
Notes MV; DAG; 600.084; 600.097;SIAI Approved no
Call Number Admin @ si @ViK2016 Serial 2804
 

 
Author Fernando Vilariño
Title Dissemination, creation and education from archives: Case study of the collection of Digitized Visual Poems from Joan Brossa Foundation Type Conference Article
Year 2016 Publication International Workshop on Poetry: Archives, Poetries and Receptions Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Barcelona; Spain; October 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference POETRY
Notes MV; 600.097;SIAI Approved no
Call Number Admin @ si @Vil2016b Serial 2805
 

 
Author Victor Ponce
Title Evolutionary Bags of Space-Time Features for Human Analysis Type Book Whole
Year 2016 Publication PhD Thesis Universitat de Barcelona, UOC and CVC Abbreviated Journal
Volume Issue Pages
Keywords Computer algorithms; Digital image processing; Digital video; Analysis of variance; Dynamic programming; Evolutionary computation; Gesture
Abstract Representation (or feature) learning has emerged as a key concept in recent years, since it gathers a set of techniques present in virtually any theoretical or practical methodology in artificial intelligence. In computer vision, a very common representation takes the form of the well-known Bag of Visual Words (BoVW). This representation appears implicitly in most approaches where images are described, and is present in a huge number of areas and domains: image content retrieval, pedestrian detection, human-computer interaction, surveillance, e-health, and social computing, amongst others. The early stages of this dissertation provide an approach for learning visual representations inside evolutionary algorithms, which consists of evolving weighting schemes to improve BoVW representations for the task of recognizing categories of videos and images. We thus demonstrate the applicability of the most common weighting schemes, which are often used in text mining but are less frequently found in computer vision tasks. Beyond learning these visual representations, we provide an approach based on fusion strategies for learning spatiotemporal representations from multimodal data obtained by depth sensors. We particularly aim at evolutionary and dynamic modelling, where the temporal factor is present in the nature of the data, such as video sequences of gestures and actions. We also explore the effect of probabilistic modelling on approaches based on dynamic programming, so as to handle the temporal deformation and variance amongst video sequences of different categories. Finally, we integrate dynamic programming and generative models into an evolutionary computation framework, with the aim of learning Bags of SubGestures (BoSG) representations and hence improving the generalization capability of standard gesture recognition approaches. The experimental results demonstrate, first, that evolutionary algorithms are useful for improving the BoVW representation on several datasets for recognizing categories in still images and video sequences. Our experiments also reveal that both the use of dynamic programming and generative models to align video sequences, and the representations obtained from applying fusion strategies to multimodal data, improve performance when recognizing some gesture categories. Furthermore, combining evolutionary algorithms with models based on dynamic programming and generative approaches yields a considerable improvement over standard gesture and action recognition approaches when classifying video categories on large video datasets. Finally, we demonstrate applications of these representations in several domains of human analysis: classification of images where humans may be present, action and gesture recognition for general applications, and in particular conversational settings within the field of restorative justice.
Address June 2016
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Sergio Escalera; Xavier Baro; Hugo Jair Escalante
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA Approved no
Call Number Pon2016 Serial 2814
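The thesis abstract above describes evolving weighting schemes over Bag of Visual Words (BoVW) histograms. As a rough illustration of that idea only, and not the thesis code, the Python sketch below evolves a per-visual-word weight vector with a simple (mu + lambda) evolution strategy, scoring each candidate by the cross-validated accuracy of a nearest-neighbour classifier. The data, the classifier choice, and all parameter values are placeholder assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy stand-in for BoVW histograms: n_samples x n_visual_words counts.
n_samples, n_words = 200, 64
X = rng.poisson(1.0, size=(n_samples, n_words)).astype(float)
y = rng.integers(0, 4, size=n_samples)              # four hypothetical categories

def fitness(weights):
    """Cross-validated accuracy of a simple classifier on re-weighted histograms."""
    Xw = X * weights                                 # per-visual-word weighting scheme
    Xw /= np.linalg.norm(Xw, axis=1, keepdims=True) + 1e-8
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, Xw, y, cv=3).mean()

# (mu + lambda) evolution strategy over the weight vector.
mu, lam, sigma, generations = 8, 32, 0.2, 30
pop = rng.uniform(0.5, 1.5, size=(mu, n_words))
for g in range(generations):
    parents = pop[rng.integers(0, mu, size=lam)]
    offspring = np.clip(parents + sigma * rng.standard_normal(parents.shape), 0.0, None)
    candidates = np.vstack([pop, offspring])
    scores = np.array([fitness(w) for w in candidates])
    pop = candidates[np.argsort(scores)[-mu:]]       # keep the best mu weightings

best = pop[-1]
print("best cross-validated accuracy:", fitness(best))
```

On the random stand-in data this obviously learns nothing meaningful; the point is only the shape of the loop: evolve weights, re-weight the histograms, and keep the weightings that classify best.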
 

 
Author Cristhian A. Aguilera-Carrasco; F. Aguilera; Angel Sappa; C. Aguilera; Ricardo Toledo
Title Learning cross-spectral similarity measures with deep convolutional neural networks Type Conference Article
Year 2016 Publication 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops Abbreviated Journal
Volume Issue Pages
Keywords
Abstract The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene that cannot be obtained with images from a single spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result on two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains.
Address Las Vegas; USA; June 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes ADAS; 600.086; 600.076 Approved no
Call Number Admin @ si @AAS2016 Serial 2809
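The abstract above compares CNN architectures for measuring the similarity of visible/near-infrared patch pairs. As a hedged illustration of one common design family for patch comparison (a 2-channel network), the PyTorch sketch below stacks the two spectra as input channels and regresses a similarity score; the layer sizes, patch size, and training objective are placeholder assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TwoChannelNet(nn.Module):
    """Minimal 2-channel patch-similarity network (illustrative only).

    The visible and near-infrared patches are stacked as two input channels and
    the network regresses a single similarity score for the pair.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, visible, nir):
        pair = torch.cat([visible, nir], dim=1)      # stack the two spectra as channels
        return self.head(self.features(pair)).squeeze(1)

# Toy usage on 64x64 single-channel patches.
net = TwoChannelNet()
vis = torch.randn(8, 1, 64, 64)
nir = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,)).float()           # 1 = same scene point, 0 = different
loss = nn.BCEWithLogitsLoss()(net(vis, nir), labels)
loss.backward()
```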
 

 
Author Daniel Hernandez; Lukas Schneider; Antonio Espinosa; David Vazquez; Antonio Lopez; Uwe Franke; Marc Pollefeys; Juan C. Moure
Title Slanted Stixels: Representing San Francisco's Steepest Streets Type Conference Article
Year 2017 Publication 28th British Machine Vision Conference Abbreviated Journal
Volume Issue Pages
Keywords
Abstract In this work we present a novel compact scene representation based on Stixels that infers geometric and semantic information. Our approach overcomes the previous rather restrictive geometric assumptions for Stixels by introducing a novel depth model to account for non-flat roads and slanted objects. Both semantic and depth cues are used jointly to infer the scene representation in a sound global energy minimization formulation. Furthermore, a novel approximation scheme is introduced that uses an extremely efficient over-segmentation. In doing so, the computational complexity of the Stixel inference algorithm is reduced significantly, achieving real-time computation capabilities with only a slight drop in accuracy. We evaluate the proposed approach in terms of semantic and geometric accuracy as well as run-time on four publicly available benchmark datasets. Our approach maintains accuracy on flat road scene datasets while improving substantially on a novel non-flat road dataset.
Address London; UK; September 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference BMVC
Notes ADAS; 600.118 Approved no
Call Number ADAS @ adas @ HSE2017a Serial 2945
 

 
Author Ozan Caglayan; Walid Aransa; Adrien Bardet; Mercedes Garcia-Martinez; Fethi Bougares; Loic Barrault; Marc Masana; Luis Herranz; Joost Van de Weijer
Title LIUM-CVC Submissions for WMT17 Multimodal Translation Task Type Conference Article
Year 2017 Publication 2nd Conference on Machine Translation Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This paper describes the monomodal and multimodal Neural Machine Translation systems developed by LIUM and CVC for WMT17 Shared Task on Multimodal Translation. We mainly explored two multimodal architectures where either global visual features or convolutional feature maps are integrated in order to benefit from visual context. Our final systems ranked first for both En-De and En-Fr language pairs according to the automatic evaluation metrics METEOR and BLEU.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WMT
Notes LAMP; 600.106; 600.120 Approved no
Call Number Admin @ si @ CAB2017 Serial 3035
 

 
Author Ishaan Gulrajani; Kundan Kumar; Faruk Ahmed; Adrien Ali Taiga; Francesco Visin; David Vazquez; Aaron Courville
Title PixelVAE: A Latent Variable Model for Natural Images Type Conference Article
Year 2017 Publication 5th International Conference on Learning Representations Abbreviated Journal
Volume Issue Pages
Keywords Deep Learning; Unsupervised Learning
Abstract Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and generate samples that preserve global structure but tend to suffer from image blurriness. PixelCNNs model sharp contours and details very well, but lack an explicit latent representation and have difficulty modeling large-scale structure in a computationally efficient way. In this paper, we present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. The resulting architecture achieves state-of-the-art log-likelihood on binarized MNIST. We extend PixelVAE to a hierarchy of multiple latent variables at different scales; this hierarchical model achieves competitive likelihood on 64x64 ImageNet and generates high-quality samples on LSUN bedrooms.
Address Toulon; France; April 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICLR
Notes ADAS; 600.085; 600.076; 601.281; 600.118 Approved no
Call Number ADAS @ adas @ GKA2017 Serial 2815
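The PixelVAE abstract above describes a VAE whose decoder is autoregressive (PixelCNN-based). The sketch below is a deliberately tiny, single-scale toy of that combination for 28x28 binary images, assuming PyTorch; the published model is deeper and hierarchical, so the layer sizes and the conditioning scheme here are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """PixelCNN-style masked convolution: type 'A' hides the current pixel, 'B' keeps it."""
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        kh, kw = self.kernel_size
        mask = torch.ones_like(self.weight)
        mask[:, :, kh // 2, kw // 2 + (mask_type == 'B'):] = 0   # center row: future pixels
        mask[:, :, kh // 2 + 1:, :] = 0                          # rows below the center
        self.register_buffer('mask', mask)

    def forward(self, x):
        self.weight.data *= self.mask                            # keep the weights causal
        return super().forward(x)

class TinyPixelVAE(nn.Module):
    """Toy single-scale VAE with an autoregressive decoder for 28x28 binary images."""
    def __init__(self, zdim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
        self.to_mu, self.to_logvar = nn.Linear(256, zdim), nn.Linear(256, zdim)
        self.z_to_map = nn.Linear(zdim, 28 * 28)     # broadcast z as a conditioning plane
        self.conv_a = MaskedConv2d('A', 1, 64, 7, padding=3)
        self.cond_proj = nn.Conv2d(1, 64, 1)         # inject the z-plane at every location
        self.conv_b = MaskedConv2d('B', 64, 64, 3, padding=1)
        self.out = nn.Conv2d(64, 1, 1)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        cond = self.z_to_map(z).view(-1, 1, 28, 28)
        h1 = F.relu(self.conv_a(x) + self.cond_proj(cond))       # sees only past pixels and z
        logits = self.out(F.relu(self.conv_b(h1)))
        return logits, mu, logvar

model = TinyPixelVAE()
x = torch.bernoulli(torch.rand(16, 1, 28, 28))                   # stand-in for binarized MNIST
logits, mu, logvar = model(x)
recon = F.binary_cross_entropy_with_logits(logits, x, reduction='sum') / x.size(0)
kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
(recon + kl).backward()
```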
 

 
Author Victor Ponce; Baiyu Chen; Marc Oliu; Ciprian Corneanu; Albert Clapes; Isabelle Guyon; Xavier Baro; Hugo Jair Escalante; Sergio Escalera
Title ChaLearn LAP 2016: First Round Challenge on First Impressions – Dataset and Results Type Conference Article
Year 2016 Publication 14th European Conference on Computer Vision Workshops Abbreviated Journal
Volume Issue Pages
Keywords Behavior Analysis; Personality Traits; First Impressions
Abstract This paper summarizes the ChaLearn Looking at People 2016 First Impressions challenge data and results obtained by the teams in the first round of the competition. The goal of the competition was to automatically evaluate five "apparent" personality traits (the so-called "Big Five") from videos of subjects speaking in front of a camera, by using human judgment. In this edition of the ChaLearn challenge, a novel data set consisting of 10,000 short clips from YouTube videos has been made publicly available. The ground truth for personality traits was obtained from workers of Amazon Mechanical Turk (AMT). To alleviate calibration problems between workers, we used pairwise comparisons between videos, and variable levels were reconstructed by fitting a Bradley-Terry-Luce model with maximum likelihood. The CodaLab open source platform was used for submission of predictions and scoring. The competition attracted, over a period of 2 months, 84 participants grouped into several teams. Nine teams entered the final phase. Despite the difficulty of the task, the teams made great advances in this round of the challenge.
Address Amsterdam; The Netherlands; October 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECCVW
Notes HuPBA;MV; 600.063 Approved no
Call Number Admin @ si @ PCP2016 Serial 2828
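The abstract above mentions reconstructing trait levels from pairwise video comparisons by fitting a Bradley-Terry-Luce model with maximum likelihood. The following sketch shows that fitting step on synthetic comparisons, assuming NumPy and SciPy; the data, the logistic parameterization, and the choice of optimizer are illustrative assumptions rather than the challenge organizers' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy pairwise-comparison data: (winner, loser) index pairs over 5 hypothetical videos.
rng = np.random.default_rng(0)
true_scores = np.array([2.0, 1.0, 0.0, -1.0, -2.0])
pairs = []
for _ in range(400):
    i, j = rng.choice(5, size=2, replace=False)
    p_i_wins = 1.0 / (1.0 + np.exp(-(true_scores[i] - true_scores[j])))
    pairs.append((i, j) if rng.random() < p_i_wins else (j, i))
pairs = np.array(pairs)

def neg_log_likelihood(theta):
    """Bradley-Terry-Luce: P(i beats j) = sigmoid(theta_i - theta_j)."""
    diff = theta[pairs[:, 0]] - theta[pairs[:, 1]]
    return np.sum(np.log1p(np.exp(-diff)))

# Fix one score to zero to remove the translation ambiguity, then maximize the likelihood.
res = minimize(lambda t: neg_log_likelihood(np.append(t, 0.0)),
               x0=np.zeros(4), method="L-BFGS-B")
estimated = np.append(res.x, 0.0)
print("estimated scores (up to a shift):", np.round(estimated - estimated.mean(), 2))
```

The recovered scores match the true ones up to an additive constant, which is all a pairwise-comparison model can identify.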
 

 
Author Jose A. Garcia; David Masip; Valerio Sbragaglia; Jacopo Aguzzi
Title Automated Identification and Tracking of Nephrops norvegicus (L.) Using Infrared and Monochromatic Blue Light Type Conference Article
Year 2016 Publication 19th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal
Volume Issue Pages
Keywords computer vision; video analysis; object recognition; tracking; behaviour; social; decapod; Nephrops norvegicus
Abstract Automated video and image analysis can be a very efficient tool for analyzing animal social behavior, especially in environments that are hard for researchers to access. Understanding this social behavior can play a key role in the sustainable design of capture policies for many species. This paper proposes the use of computer vision algorithms to identify and track a specific species, the Norway lobster, Nephrops norvegicus, a burrowing decapod of considerable commercial value that is captured by trawling. These animals can only be captured when they are engaged in seabed excursions, which are strongly related to their social behavior. This emergent behavior is modulated by the day-night cycle, but their social interactions remain unknown to the scientific community. The paper introduces an identification scheme made of four distinguishable black-and-white tags (geometric shapes). The project has recorded 15-day experiments in laboratory pools, under monochromatic blue light (472 nm) and darkness conditions (recorded using infrared light). Using this massive image set, we propose a comparison of state-of-the-art computer vision algorithms to distinguish and track the different animals' movements. We evaluate robustness to the high noise present in the infrared video signals and to free out-of-plane rotations due to animal movement. The experiments show promising accuracies under a cross-validation protocol, and the approach is adaptable to the automation and analysis of large-scale data. In a second contribution, we created an extensive dataset of shapes (46,027 different shapes) from four daily experimental video recordings, which will be made available to the community.
Address Barcelona; Spain; October 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CCIA
Notes OR;MV; Approved no
Call Number Admin @ si @ GMS2016 Serial 2816
 

 
Author Jose A. Garcia; David Masip; Valerio Sbragaglia; Jacopo Aguzzi
Title Using ORB, BoW and SVM to identify and track tagged Norway lobster Nephrops norvegicus (L.) Type Conference Article
Year 2016 Publication 3rd International Conference on Maritime Technology and Engineering Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Sustainable capture policies for many species strongly depend on understanding their social behaviour. Nevertheless, the analysis of emergent behaviour in marine species poses several challenges. Usually animals are captured and observed in tanks, and their behaviour is inferred from their dynamics and interactions. Therefore, researchers must deal with thousands of hours of video data. Without loss of generality, this paper proposes a computer vision approach to identify and track a specific species, the Norway lobster, Nephrops norvegicus. We propose an identification scheme where animals are marked using black and white tags with a geometric shape in the center (holed triangle, filled triangle, holed circle and filled circle). Using a massive labelled dataset, we extract local features based on the ORB descriptor. These features are subsequently clustered, and we construct a Bag of Visual Words feature vector per animal. This representation yields invariance to rotation and translation. An SVM classifier achieves generalization results above 99%. In a second contribution, we will make the code and training data publicly available.
Address Lisboa; Portugal; July 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MARTECH
Notes OR;MV; Approved no
Call Number Admin @ si @ GMS2016b Serial 2817
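The abstract above outlines an ORB + Bag of Visual Words + SVM pipeline for recognizing tagged animals. The sketch below strings those stages together with OpenCV and scikit-learn on placeholder images; the vocabulary size, the SVM kernel, and the use of Euclidean k-means over ORB's binary descriptors are simplifications, not the authors' exact setup.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def orb_descriptors(image):
    """Extract ORB descriptors from a grayscale image (may return None if nothing is found)."""
    orb = cv2.ORB_create(nfeatures=300)
    _, desc = orb.detectAndCompute(image, None)
    return desc

def bow_histogram(desc, kmeans, k):
    """Quantize descriptors against the visual vocabulary and build a normalized histogram."""
    if desc is None:                                  # no keypoints found in this image
        return np.zeros(k)
    words = kmeans.predict(desc.astype(np.float32))
    hist, _ = np.histogram(words, bins=np.arange(k + 1))
    return hist / max(hist.sum(), 1)

# --- toy training flow (replace the random images with real tag crops) ---
k = 50
train_images = [np.random.randint(0, 256, (128, 128), dtype=np.uint8) for _ in range(40)]
train_labels = np.repeat([0, 1, 2, 3], 10)            # four hypothetical tag classes

all_desc = np.vstack([d for img in train_images if (d := orb_descriptors(img)) is not None])
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc.astype(np.float32))

X = np.array([bow_histogram(orb_descriptors(img), kmeans, k) for img in train_images])
clf = SVC(kernel="rbf").fit(X, train_labels)
print("training accuracy:", clf.score(X, train_labels))
```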
 

 
Author Vassileios Balntas; Edgar Riba; Daniel Ponsa; Krystian Mikolajczyk
Title Learning local feature descriptors with triplets and shallow convolutional neural networks Type Conference Article
Year 2016 Publication 27th British Machine Vision Conference Abbreviated Journal
Volume Issue Pages
Keywords
Abstract It has recently been demonstrated that local feature descriptors based on convolutional neural networks (CNN) can significantly improve the matching performance. Previous work on learning such descriptors has focused on exploiting pairs of positive and negative patches to learn discriminative CNN representations. In this work, we propose to utilize triplets of training samples, together with in-triplet mining of hard negatives.
We show that our method achieves state of the art results, without the computational overhead typically associated with mining of negatives and with lower complexity of the network architecture. We compare our approach to recently introduced convolutional local feature descriptors, and demonstrate the advantages of the proposed methods in terms of performance and speed. We also examine different loss functions associated with triplets.
Address York; UK; September 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference BMVC
Notes ADAS; 600.086 Approved no
Call Number Admin @ si @ BRP2016 Serial 2818
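The abstract above proposes learning local descriptors from triplets with in-triplet mining of hard negatives. A minimal PyTorch sketch of that loss follows, where the "anchor swap" takes the harder of the two available negative distances; the shallow network, the squared-distance formulation, and the margin value are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowDescNet(nn.Module):
    """Small patch-descriptor CNN (illustrative; not the exact published architecture)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 7), nn.Tanh(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 6), nn.Tanh(),
            nn.Flatten(), nn.LazyLinear(dim), nn.Tanh(),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)        # L2-normalized descriptors

def triplet_loss_with_swap(anchor, positive, negative, margin=1.0):
    """Triplet margin loss with in-triplet hard-negative mining ('anchor swap'):
    the negative distance is the smaller of d(anchor, negative) and d(positive, negative)."""
    d_pos = (anchor - positive).pow(2).sum(1)
    d_neg = torch.min((anchor - negative).pow(2).sum(1),
                      (positive - negative).pow(2).sum(1))
    return F.relu(margin + d_pos - d_neg).mean()

# Toy usage on 32x32 grayscale patches.
net = ShallowDescNet()
a, p, n = (torch.randn(16, 1, 32, 32) for _ in range(3))
loss = triplet_loss_with_swap(net(a), net(p), net(n))
loss.backward()
```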