Author Sergio Alloza; Flavio Escribano; Sergi Delgado; Ciprian Corneanu; Sergio Escalera
Title (down) XBadges. Identifying and training soft skills with commercial video games. Improving persistence, risk taking & spatial reasoning with commercial video games and facial and emotional recognition system Type Conference Article
Year 2017 Publication 4th Congreso de la Sociedad Española para las Ciencias del Videojuego Abbreviated Journal
Volume 1957 Issue Pages 13-28
Keywords Video Games; Soft Skills; Training; Skill Development; Emotions; Cognitive Abilities; Flappy Bird; Pacman; Tetris
Abstract XBadges is a research project based on the hypothesis that commercial video games (nonserious games) can train soft skills. We measure persistence, spatial reasoning and risk taking before and after subjects participate in controlled game playing sessions.
In addition, we have developed an automatic facial expression recognition system capable of inferring their emotions while playing, allowing us to study the role of emotions in soft skills acquisition. We have used Flappy Bird, Pacman and Tetris for assessing changes in persistence, risk taking and spatial reasoning respectively.
Results show how playing Tetris significantly improves spatial reasoning and how playing Pacman significantly improves prudence in certain areas of behavior. As for emotions, results reveal that concentration helps to improve performance and skills acquisition. Frustration also emerges as a key element. With the results obtained, we can glimpse multiple applications in areas that need soft skills development.
Address Barcelona; June 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference COSECIVI; CEUR-WS
Notes HUPBA; no mention Approved no
Call Number Admin @ si @ AED2017 Serial 3065
Author Andrei Polzounov; Artsiom Ablavatski; Sergio Escalera; Shijian Lu; Jianfei Cai
Title (down) WordFences: Text Localization and Recognition Type Conference Article
Year 2017 Publication 24th International Conference on Image Processing Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Beijing; China; September 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes HUPBA; no mention Approved no
Call Number Admin @ si @ PAE2017 Serial 3007
Author Lasse Martensson; Anders Hast; Alicia Fornes
Title (down) Word Spotting as a Tool for Scribal Attribution Type Conference Article
Year 2017 Publication 2nd Conference of the Association of Digital Humanities in the Nordic Countries Abbreviated Journal
Volume Issue Pages 87-89
Keywords
Abstract
Address Gothenburg; Sweden; March 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-91-88348-83-8 Medium
Area Expedition Conference DHN
Notes DAG; 600.097; 600.121 Approved no
Call Number Admin @ si @ MHF2017 Serial 2954
Author Yagmur Gucluturk; Umut Guclu; Marc Perez; Hugo Jair Escalante; Xavier Baro; Isabelle Guyon; Carlos Andujar; Julio C. S. Jacques Junior; Meysam Madadi; Sergio Escalera
Title (down) Visualizing Apparent Personality Analysis with Deep Residual Networks Type Conference Article
Year 2017 Publication ChaLearn Workshop on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake expressed emotions at ICCV Abbreviated Journal
Volume Issue Pages 3101-3109
Keywords
Abstract Automatic prediction of personality traits is a subjective task that has recently received much attention. Specifically, automatic apparent personality trait prediction from multimodal data has emerged as a hot topic within the field of computer vision and, more particularly, the so called “looking
at people” sub-field. Considering “apparent” personality traits as opposed to real ones considerably reduces the subjectivity of the task. The real world applications are encountered in a wide range of domains, including entertainment, health, human computer interaction, recruitment and security. Predictive models of personality traits are useful for individuals in many scenarios (e.g., preparing for job interviews, preparing for public speaking). However, these predictions in and of themselves might be deemed to be untrustworthy without human understandable supportive evidence. Through a series of experiments on a recently released benchmark dataset for automatic apparent personality trait prediction, this paper characterizes the audio and
visual information that is used by a state-of-the-art model while making its predictions, so as to provide such supportive evidence by explaining predictions made. Additionally, the paper describes a new web application, which gives feedback on apparent personality traits of its users by combining
model predictions with their explanations.
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCVW
Notes HUPBA; 6002.143 Approved no
Call Number Admin @ si @ GGP2017 Serial 3067
Author Suman Ghosh; Ernest Valveny
Title (down) Visual attention models for scene text recognition Type Conference Article
Year 2017 Publication 14th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages
Keywords
Abstract arXiv:1706.01487
In this paper we propose an approach to lexicon-free recognition of text in scene images. Our approach relies on a LSTM-based soft visual attention model learned from convolutional features. A set of feature vectors are derived from an intermediate convolutional layer corresponding to different areas of the image. This permits encoding of spatial information into the image representation. In this way, the framework is able to learn how to selectively focus on different parts of the image. At every time step the recognizer emits one character using a weighted combination of the convolutional feature vectors according to the learned attention model. Training can be done end-to-end using only word level annotations. In addition, we show that modifying the beam search algorithm by integrating an explicit language model leads to significantly better recognition results. We validate the performance of our approach on standard SVT and ICDAR'03 scene text datasets, showing state-of-the-art performance in unconstrained text recognition.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 600.121 Approved no
Call Number Admin @ si @ GhV2017b Serial 3080
Author David Geronimo; David Vazquez; Arturo de la Escalera
Title (down) Vision-Based Advanced Driver Assistance Systems Type Book Chapter
Year 2017 Publication Computer Vision in Vehicle Technology: Land, Sea, and Air Abbreviated Journal
Volume Issue Pages
Keywords ADAS; Autonomous Driving
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.118 Approved no
Call Number ADAS @ adas @ GVE2017 Serial 2881
Author Marc Bolaños; Alvaro Peris; Francisco Casacuberta; Petia Radeva
Title (down) VIBIKNet: Visual Bidirectional Kernelized Network for Visual Question Answering Type Conference Article
Year 2017 Publication 8th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal
Volume Issue Pages
Keywords Visual Question Answering; Convolutional Neural Networks; Long Short-Term Memory networks
Abstract In this paper, we address the problem of visual question answering by proposing a novel model, called VIBIKNet. Our model is based on integrating Kernelized Convolutional Neural Networks and Long Short-Term Memory units to generate an answer given a question about an image. We prove that VIBIKNet is an optimal trade-off between accuracy and computational load, in terms of memory and time consumption. We validate our method on the VQA challenge dataset and compare it to the top performing methods in order to illustrate its performance and speed.
Address Faro; Portugal; June 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IbPRIA
Notes MILAB; no proj Approved no
Call Number Admin @ si @ BPC2017 Serial 2939
Author Fernando Vilariño; Dan Norton
Title (down) Using multimedia tools to spread poetry collections Type Conference Article
Year 2017 Publication Internet librarian International Conference Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address London; UK; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ILI
Notes MV; 600.097;SIAI Approved no
Call Number Admin @ si @ ViN2017 Serial 3031
Author Mireia Sole; Joan Blanco; Debora Gil; G. Fonseka; Richard Frodsham; Oliver Valero; Francesca Vidal; Zaida Sarrate
Title (down) Unraveling the enigmas of chromosome territoriality during spermatogenesis Type Conference Article
Year 2017 Publication IX Jornada del Departament de Biologia Cel·lular, Fisiologia i Immunologia Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address UAB; Barcelona; June 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.145 Approved no
Call Number Admin @ si @ SBG2017b Serial 2959
Author Alicia Fornes; Beata Megyesi; Joan Mas
Title (down) Transcription of Encoded Manuscripts with Image Processing Techniques Type Conference Article
Year 2017 Publication Digital Humanities Conference Abbreviated Journal
Volume Issue Pages 441-443
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DH
Notes DAG; 600.097; 600.121 Approved no
Call Number Admin @ si @ FMM2017 Serial 3061
Author Antonio Lopez; Gabriel Villalonga; Laura Sellart; German Ros; David Vazquez; Jiaolong Xu; Javier Marin; Azadeh S. Mozafari
Title (down) Training my car to see using virtual worlds Type Journal Article
Year 2017 Publication Image and Vision Computing Abbreviated Journal IMAVIS
Volume 38 Issue Pages 102-118
Keywords
Abstract Computer vision technologies are at the core of different advanced driver assistance systems (ADAS) and will play a key role in oncoming autonomous vehicles too. One of the main challenges for such technologies is to perceive the driving environment, i.e. to detect and track relevant driving information in a reliable manner (e.g. pedestrians in the vehicle route, free space to drive through). Nowadays it is clear that machine learning techniques are essential for developing such a visual perception for driving. In particular, the standard working pipeline consists of collecting data (i.e. on-board images), manually annotating the data (e.g. drawing bounding boxes around pedestrians), learning a discriminative data representation taking advantage of such annotations (e.g. a deformable part-based model, a deep convolutional neural network), and then assessing the reliability of such representation with the acquired data. In the last two decades most of the research efforts focused on representation learning (first, designing descriptors and learning classifiers; later doing it end-to-end). Hence, collecting data and, especially, annotating it, is essential for learning good representations. While this has been the case from the very beginning, it was only after the disruptive appearance of deep convolutional neural networks that it became a serious issue, due to their data-hungry nature. In this context, the problem is that manual data annotation is tiresome work prone to errors. Accordingly, in the late 2000s we initiated a research line consisting of training visual models using photo-realistic computer graphics, especially focusing on assisted and autonomous driving. In this paper, we summarize such work and show how it has become a new tendency with increasing acceptance.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.118 Approved no
Call Number Admin @ si @ LVS2017 Serial 2985
Author Pau Riba; Alicia Fornes; Josep Llados
Title (down) Towards the Alignment of Handwritten Music Scores Type Book Chapter
Year 2017 Publication International Workshop on Graphics Recognition. GREC 2015.Graphic Recognition. Current Trends and Challenges Abbreviated Journal
Volume 9657 Issue Pages 103-116
Keywords Optical Music Recognition; Handwritten Music Scores; Dynamic Time Warping alignment
Abstract It is very common to find different versions of the same music work in archives of Opera Theaters. These differences correspond to modifications and annotations from the musicians. From the musicologist point of view, these variations are very interesting and deserve study.
This paper explores the alignment of music scores as a tool for automatically detecting the passages that contain such differences. Given the difficulties in the recognition of handwritten music scores, our goal is to align the music scores and, at the same time, avoid the recognition of music elements as much as possible. After removing the staff lines, braces and ties, the bar lines are detected. Then, the bar units are described as a whole using the Blurred Shape Model. The bar units alignment is performed by using Dynamic Time Warping. The analysis of the alignment path is used to detect the variations in the music scores. The method has been evaluated on a subset of the CVC-MUSCIMA dataset, showing encouraging results.
Address
Corporate Author Thesis
Publisher Place of Publication Editor Bart Lamiroy; R Dueire Lins
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN 978-3-319-52158-9 Medium
Area Expedition Conference
Notes DAG; 600.097; 602.006; 600.121 Approved no
Call Number Admin @ si @ RFL2017 Serial 2955
Author Marc Bolaños; Mariella Dimiccoli; Petia Radeva
Title (down) Towards Storytelling from Visual Lifelogging: An Overview Type Journal Article
Year 2017 Publication IEEE Transactions on Human-Machine Systems Abbreviated Journal THMS
Volume 47 Issue 1 Pages 77 - 90
Keywords
Abstract Visual lifelogging consists of acquiring images that capture the daily experiences of the user by wearing a camera over a long period of time. The pictures taken offer considerable potential for knowledge mining concerning how people live their lives, hence, they open up new opportunities for many potential applications in fields including healthcare, security, leisure and
the quantified self. However, automatically building a story from a huge collection of unstructured egocentric data presents major challenges. This paper provides a thorough review of advances made so far in egocentric data analysis, and in view of the current state of the art, indicates new lines of research to move us towards storytelling from visual lifelogging.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; 601.235 Approved no
Call Number Admin @ si @ BDR2017 Serial 2712
Author Quentin Angermann; Jorge Bernal; Cristina Sanchez Montes; Gloria Fernandez Esparrach; Xavier Gray; Olivier Romain; F. Javier Sanchez; Aymeric Histace
Title (down) Towards Real-Time Polyp Detection in Colonoscopy Videos: Adapting Still Frame-Based Methodologies for Video Sequences Analysis Type Conference Article
Year 2017 Publication 4th International Workshop on Computer Assisted and Robotic Endoscopy Abbreviated Journal
Volume Issue Pages 29-41
Keywords Polyp detection; colonoscopy; real time; spatio temporal coherence
Abstract Colorectal cancer is the second cause of cancer death in the United States: precursor lesion (polyp) detection is key for patient survival. Though colonoscopy is the gold standard screening tool, some polyps are still missed. Several computational systems have been proposed, but none of them are used in the clinical room, mainly due to computational constraints. Besides, most of them are built over still-frame databases, decreasing their performance on video analysis due to the lack of output stability and not coping with the associated variability in image quality and polyp appearance. We propose a strategy to adapt these methods to video analysis by adding a spatio-temporal stability module and studying a combination of features to capture polyp appearance variability. We validate our strategy, incorporated into a real-time detection method, on a public video database. The resulting method detects all polyps under real-time constraints, increasing its performance due to our adaptation strategy.
Address Quebec; Canada; September 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CARE
Notes MV; 600.096; 600.075 Approved no
Call Number Admin @ si @ ABS2017b Serial 2977
Author Carles Sanchez; Antonio Esteban Lansaque; Agnes Borras; Marta Diez-Ferrer; Antoni Rosell; Debora Gil
Title (down) Towards a Videobronchoscopy Localization System from Airway Centre Tracking Type Conference Article
Year 2017 Publication 12th International Conference on Computer Vision Theory and Applications Abbreviated Journal
Volume Issue Pages 352-359
Keywords Video-bronchoscopy; Lung cancer diagnosis; Airway lumen detection; Region tracking; Guided bronchoscopy navigation
Abstract Bronchoscopists use fluoroscopy to guide flexible bronchoscopy to the lesion to be biopsied without any kind of incision. Since fluoroscopy is an imaging technique based on X-rays, the risk of developmental problems and cancer is increased in subjects exposed to its application, so minimizing radiation is crucial. Alternative guiding systems such as electromagnetic navigation require specific equipment, increase the cost of the clinical procedure and still require fluoroscopy. In this paper we propose an image-based guiding system based on the extraction of airway centres from intra-operative videos. Such anatomical landmarks are matched to the airway centreline extracted from a pre-planned CT to indicate the best path to the nodule. We present a feasibility study of our navigation system using simulated bronchoscopic videos and a multi-expert validation of landmark extraction in 3 intra-operative ultrathin explorations.
Address Porto; Portugal; February 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISAPP
Notes IAM; 600.096; 600.075; 600.145 Approved no
Call Number Admin @ si @ SEB2017 Serial 2943