Records
Author: Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera
Title: Action Recognition by Pairwise Proximity Function Support Vector Machines with Dynamic Time Warping Kernels
Type: Conference Article
Year: 2016
Publication: 29th Canadian Conference on Artificial Intelligence
Volume: 9673, Pages: 3-14
Abstract: In the context of human action recognition using skeleton data, the 3D trajectories of joint points may be considered as multi-dimensional time series. The traditional recognition technique in the literature is based on time-series (dis)similarity measures (such as Dynamic Time Warping). For these general (dis)similarity measures, k-nearest neighbor algorithms are a natural choice. However, k-NN classifiers are known to be sensitive to noise and outliers. In this paper, a new class of Support Vector Machine that is applicable to trajectory classification, such as action recognition, is developed by incorporating an efficient time-series distance measure into the kernel function. More specifically, the derivative of the Dynamic Time Warping (DTW) distance measure is employed as the SVM kernel. In addition, the pairwise proximity learning strategy is utilized in order to make use of non-positive semi-definite (PSD) kernels in the SVM formulation. The recognition results of the proposed technique on two action recognition datasets demonstrate that our methodology outperforms state-of-the-art methods. Remarkably, we obtained 89% accuracy on the well-known MSRAction3D dataset using only 3D trajectories of body joints obtained by Kinect.
Address: Victoria, Canada; May 2016
Publisher: Springer International Publishing
Conference: AI
Notes: HuPBA; MILAB. Approved: no
Call Number: Admin @ si @ BGE2016b. Serial: 2770
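The record above describes plugging a DTW-based distance into an SVM kernel. The following is a minimal sketch of that general idea, not the authors' implementation: it uses plain DTW (not the derivative variant), a Gaussian-of-DTW kernel, and scikit-learn's precomputed-kernel interface, and the trajectories are random placeholders rather than real skeleton data.

```python
import numpy as np
from sklearn.svm import SVC

def dtw_distance(a, b):
    """Classic DTW between two multi-dimensional sequences a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def dtw_kernel(X, Y, gamma=0.1):
    """Gaussian kernel over DTW distances; in general this is not PSD, which is
    why the paper resorts to a pairwise-proximity formulation."""
    return np.array([[np.exp(-gamma * dtw_distance(x, y) ** 2) for y in Y] for x in X])

# toy variable-length 3D joint trajectories (placeholders, not real skeleton data)
rng = np.random.default_rng(0)
train = [rng.normal(size=(rng.integers(20, 40), 3)) for _ in range(6)]
labels = [0, 0, 0, 1, 1, 1]
test = [rng.normal(size=(30, 3)) for _ in range(2)]

clf = SVC(kernel="precomputed")
clf.fit(dtw_kernel(train, train), labels)   # Gram matrix between training trajectories
print(clf.predict(dtw_kernel(test, train))) # kernel between test and training trajectories
```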
Author: Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera
Title: Support Vector Machines with Time Series Distance Kernels for Action Classification
Type: Conference Article
Year: 2016
Publication: IEEE Winter Conference on Applications of Computer Vision
Pages: 1-7
Abstract: Despite the strong performance of Support Vector Machines (SVMs) on many practical classification problems, the algorithm is not directly applicable to multi-dimensional trajectories of different lengths. In this paper, a new class of SVM that is applicable to trajectory classification, such as action recognition, is developed by incorporating two efficient time-series distance measures into the kernel function. The Dynamic Time Warping and Longest Common Subsequence distance measures, along with their derivatives, are employed as the SVM kernel. In addition, the pairwise proximity learning strategy is utilized in order to make use of non-positive semi-definite kernels in the SVM formulation. The proposed method is employed for a challenging classification problem, action recognition by depth cameras using only skeleton data, and is evaluated on three benchmark action datasets. Experimental results demonstrate that our methodology outperforms the state of the art on the considered datasets.
Address: Lake Placid, NY (USA); March 2016
Conference: WACV
Notes: HuPBA; MILAB. Approved: no
Call Number: Admin @ si @ BGE2016a. Serial: 2773
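As a companion to the abstract above, here is a small illustrative sketch of the pairwise-proximity strategy it mentions: each trajectory is represented by its vector of distances to the training trajectories, and a standard SVM is trained on that representation, which sidesteps the non-PSD kernel issue. The LCSS distance below is a textbook variant with a fixed matching threshold and the data are random placeholders; this is not the paper's code.

```python
import numpy as np
from sklearn.svm import SVC

def lcss_distance(a, b, eps=0.5):
    """LCSS-based distance for real-valued trajectories a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    L = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if np.linalg.norm(a[i - 1] - b[j - 1]) < eps:   # points match within eps
                L[i, j] = L[i - 1, j - 1] + 1
            else:
                L[i, j] = max(L[i - 1, j], L[i, j - 1])
    return 1.0 - L[n, m] / min(n, m)

def proximity_features(X, prototypes, eps=0.5):
    """Embed each trajectory as its vector of distances to prototype trajectories."""
    return np.array([[lcss_distance(x, p, eps) for p in prototypes] for x in X])

rng = np.random.default_rng(1)
train = [rng.normal(size=(rng.integers(15, 30), 3)) for _ in range(8)]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
test = [rng.normal(size=(20, 3)) for _ in range(2)]

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(proximity_features(train, train), labels)
print(clf.predict(proximity_features(test, train)))
```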
Author: Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen
Title: Combining Holistic and Part-based Deep Representations for Computational Painting Categorization
Type: Conference Article
Year: 2016
Publication: 6th International Conference on Multimedia Retrieval
Abstract: Automatic analysis of visual art, such as paintings, is a challenging inter-disciplinary research problem. Conventional approaches rely only on global scene characteristics by encoding holistic information for computational painting categorization. We argue that such approaches are sub-optimal and that discriminative common visual structures provide complementary information for painting classification. We present an approach that encodes both the global scene layout and discriminative latent common structures for computational painting categorization. The regions of interest are automatically extracted, without any manual part labeling, by training class-specific deformable part-based models. Both the holistic image and the regions of interest are then described using multi-scale dense convolutional features. These features are pooled separately using Fisher vector encoding and concatenated afterwards into a single image representation. Experiments are performed on a challenging dataset with 91 different painters and 13 diverse painting styles. Our approach outperforms the standard method, which only employs the global scene characteristics. Furthermore, our method achieves state-of-the-art results, outperforming a recent multi-scale deep features based approach [11] by 6.4% and 3.8% on artist and style classification, respectively.
Address: New York, USA; June 2016
Conference: ICMR
Notes: LAMP; 600.068; 600.079; ADAS. Approved: no
Call Number: Admin @ si @ RKW2016. Serial: 2763
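The following schematic sketch illustrates only the fusion step described above: holistic and region-of-interest descriptors are normalised and concatenated before training a linear classifier. The random vectors stand in for the multi-scale CNN features with Fisher vector encoding used in the paper, and mean-pooling the ROI descriptors is my own simplification, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

def l2_normalize(v):
    return v / (np.linalg.norm(v) + 1e-12)

def combine(holistic_desc, roi_descs):
    """Concatenate the holistic descriptor with a pooled part-based descriptor."""
    part_desc = np.mean(roi_descs, axis=0) if len(roi_descs) else np.zeros_like(holistic_desc)
    return np.concatenate([l2_normalize(holistic_desc), l2_normalize(part_desc)])

# toy stand-ins for Fisher-vector-encoded CNN features of paintings and their ROIs
rng = np.random.default_rng(2)
X = np.stack([combine(rng.normal(size=128), rng.normal(size=(3, 128))) for _ in range(20)])
y = rng.integers(0, 2, size=20)   # placeholder artist/style labels

clf = LinearSVC().fit(X, y)
print(clf.score(X, y))
```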
Author: Onur Ferhat; Fernando Vilariño
Title: Low Cost Eye Tracking: The Current Panorama
Type: Journal Article
Year: 2016
Publication: Computational Intelligence and Neuroscience (CIN)
Pages: Article ID 8680541
Abstract: Despite the availability of accurate commercial gaze tracker devices working with infrared (IR) technology, visible light gaze tracking constitutes an interesting alternative by allowing scalability and removing hardware requirements. Over the last few years, this field has seen examples of research showing performance comparable to the IR alternatives. In this work, we survey the previous work on remote, visible light gaze trackers and analyze the explored techniques from various perspectives such as calibration strategies, head pose invariance, and gaze estimation techniques. We also provide information on related aspects of research such as public datasets to test against, open source projects to build upon, and gaze tracking services to use directly in applications. With all this information, we aim to provide contemporary and future researchers with a map detailing previously explored ideas and the required tools.
Notes: MV; 605.103; 600.047; 600.097; SIAI. Approved: no
Call Number: Admin @ si @ FeV2016. Serial: 2744
Author: Oriol Vicente; Alicia Fornes; Ramon Valdes
Title: The Digital Humanities Network of the UABCie: a smart structure of research and social transference for the digital humanities
Type: Conference Article
Year: 2016
Publication: Digital Humanities Centres: Experiences and Perspectives
Address: Warsaw, Poland; December 2016
Conference: DHLABS
Notes: DAG; 600.097. Approved: no
Call Number: Admin @ si @ VFV2016. Serial: 2908
Author: Ozan Caglayan; Walid Aransa; Yaxing Wang; Marc Masana; Mercedes García-Martínez; Fethi Bougares; Loic Barrault; Joost Van de Weijer
Title: Does Multimodality Help Human and Machine for Translation and Image Captioning?
Type: Conference Article
Year: 2016
Publication: 1st Conference on Machine Translation
Abstract: This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR.
Address: Berlin, Germany; August 2016
Conference: WMT
Notes: LAMP; 600.106; 600.068. Approved: no
Call Number: Admin @ si @ CAW2016. Serial: 2761
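The abstract above refers to the automatic metrics BLEU and METEOR. As a small, self-contained illustration of how corpus-level BLEU can be computed, the sketch below uses the sacrebleu package with made-up sentences; this is an assumption on my part and not necessarily the toolkit used for the paper's evaluation.

```python
import sacrebleu

# made-up hypotheses and one reference stream (one reference per hypothesis)
hypotheses = ["a man rides a bicycle on the street",
              "two dogs are playing in the snow"]
references = [["a man is riding a bicycle down the street",
               "two dogs play in the snow"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```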
Author: Pedro Herruzo; Marc Bolaños; Petia Radeva
Title: Can a CNN Recognize Catalan Diet?
Type: Book Chapter
Year: 2016
Publication: AIP Conference Proceedings
Volume: 1773
Preprint: CoRR abs/1607.08811
Abstract: Nowadays, we can find several diseases related to the unhealthy diet habits of the population, such as diabetes, obesity, anemia, bulimia and anorexia. In many cases, these diseases are related to the food consumption of people. The Mediterranean diet is scientifically known as a healthy diet that helps to prevent many metabolic diseases. In particular, our work focuses on the recognition of Mediterranean food and dishes. The development of this methodology would allow analysing the daily habits of users with wearable cameras, within the topic of lifelogging. By using automatic mechanisms we could build an objective tool for the analysis of the patient's behavior, allowing specialists to discover unhealthy food patterns and understand the user's lifestyle.
With the aim to automatically recognize a complete diet, we introduce a challenging multi-labeled dataset related to the Mediterranean diet called FoodCAT. The first type of label provided consists of 115 food classes with an average of 400 images per dish, and the second one consists of 12 food categories with an average of 3800 pictures per class. This dataset will serve as a basis for the development of automatic diet recognition. In this context, deep learning and, more specifically, Convolutional Neural Networks (CNNs) are currently the state-of-the-art methods for automatic food recognition. In our work, we compare several architectures for image classification, with the purpose of diet recognition. Applying the best model for recognising food categories, we achieve a top-1 accuracy of 72.29% and a top-5 accuracy of 97.07%. For complete diet recognition over dishes from the Mediterranean diet, enlarged with the Food-101 dataset of international dishes, we achieve a top-1 accuracy of 68.07% and a top-5 accuracy of 89.53%, for a total of 115+101 food classes.
Notes: MILAB. Approved: no
Call Number: Admin @ si @ HBR2016. Serial: 2837
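The top-1 and top-5 accuracies reported above can be computed from the per-class scores of a classifier as in the following minimal sketch. Random scores and labels are used as placeholders; the 115 classes only mirror the number of FoodCAT dish labels.

```python
import numpy as np

def topk_accuracy(scores, labels, k):
    """scores: (n_samples, n_classes) class scores; labels: (n_samples,) ground truth."""
    topk = np.argsort(scores, axis=1)[:, -k:]          # indices of the k highest scores
    hits = np.any(topk == labels[:, None], axis=1)     # is the true class among them?
    return hits.mean()

rng = np.random.default_rng(3)
scores = rng.normal(size=(100, 115))                   # placeholder scores for 115 food classes
labels = rng.integers(0, 115, size=100)                # placeholder ground-truth labels
print(topk_accuracy(scores, labels, 1), topk_accuracy(scores, labels, 5))
```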
Author: Pedro Martins; Paulo Carvalho; Carlo Gatta
Title: On the completeness of feature-driven maximally stable extremal regions
Type: Journal Article
Year: 2016
Publication: Pattern Recognition Letters (PRL)
Volume: 74, Pages: 9-16
Keywords: Local features; Completeness; Maximally Stable Extremal Regions
Abstract: By definition, local image features provide a compact representation of the image in which most of the image information is preserved. This capability offered by local features has been overlooked, despite being relevant in many application scenarios. In this paper, we analyze and discuss the performance of feature-driven Maximally Stable Extremal Regions (MSER) in terms of the coverage of informative image parts (completeness). This type of feature results from an MSER extraction on saliency maps in which features related to object boundaries or even symmetry axes are highlighted. These maps are intended to be suitable domains for MSER detection, allowing this detector to provide a better coverage of informative image parts. Our experimental results, which were based on a large-scale evaluation, show that feature-driven MSERs have relatively high completeness values and provide more complete sets than a traditional MSER detection, even when sets of similar cardinality are considered.
Publisher: Elsevier B.V.
ISSN: 0167-8655
Notes: LAMP; MILAB. Approved: no
Call Number: Admin @ si @ MCG2016. Serial: 2748
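A rough sketch of the feature-driven idea discussed above, assuming OpenCV: the MSER detector is run on a saliency-like map rather than on the raw grey-level image. The gradient-magnitude map used here is only a stand-in for the boundary and symmetry saliency maps of the paper, and "example.jpg" is a hypothetical input file.

```python
import cv2
import numpy as np

img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical input image
grad_x = cv2.Sobel(img, cv2.CV_32F, 1, 0)
grad_y = cv2.Sobel(img, cv2.CV_32F, 0, 1)
saliency = cv2.magnitude(grad_x, grad_y)                      # crude saliency stand-in
saliency = cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(saliency)                # MSER on the saliency map
print(f"{len(regions)} feature-driven MSER regions")
```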
Author: Pejman Rasti; Salma Samiei; Mary Agoyi; Sergio Escalera; Gholamreza Anbarjafari
Title: Robust non-blind color video watermarking using QR decomposition and entropy analysis
Type: Journal Article
Year: 2016
Publication: Journal of Visual Communication and Image Representation (JVCIR)
Volume: 38, Pages: 838-847
Keywords: Video watermarking; QR decomposition; Discrete Wavelet Transformation; Chirp Z-transform; Singular value decomposition; Orthogonal–triangular decomposition
Abstract: Issues such as content identification, document and image security, audience measurement, ownership and copyright, among others, can be settled by the use of digital watermarking. Many recent video watermarking methods show drops in the visual quality of the sequences. The present work addresses this issue by introducing a robust and imperceptible non-blind color video frame watermarking algorithm. The method divides frames into moving and non-moving parts. The non-moving part of each color channel is processed separately using a block-based watermarking scheme. Blocks with an entropy lower than the average entropy of all blocks are subject to a further process for embedding the watermark image. Finally, the watermarked frame is generated by adding the moving parts back. In the experiments, several signal processing attacks are applied to each watermarked frame, and the method is compared with some recent algorithms. Experimental results show that the proposed scheme is imperceptible and robust against common signal processing attacks.
Notes: HuPBA; MILAB. Approved: no
Call Number: Admin @ si @ RSA2016. Serial: 2766
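Two building blocks of the scheme described above, block entropy analysis and QR decomposition, can be sketched as follows. This is a simplified illustration with a random stand-in frame, not the full embedding pipeline (no DWT, Chirp Z-transform, motion segmentation or watermark insertion).

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy of an 8-bit image block."""
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(4)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in for one color channel
blocks = [frame[i:i + 8, j:j + 8] for i in range(0, 64, 8) for j in range(0, 64, 8)]

entropies = np.array([block_entropy(b) for b in blocks])
selected = [b for b, e in zip(blocks, entropies) if e < entropies.mean()]   # low-entropy blocks

Q, R = np.linalg.qr(selected[0].astype(float))                # QR decomposition of one selected block
print(len(selected), "blocks selected; R[0,0] =", R[0, 0])
```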
Author: Pejman Rasti; Tonis Uiboupin; Sergio Escalera; Gholamreza Anbarjafari
Title: Convolutional Neural Network Super Resolution for Face Recognition in Surveillance Monitoring
Type: Conference Article
Year: 2016
Publication: 9th Conference on Articulated Motion and Deformable Objects
Address: Palma de Mallorca, Spain; July 2016
Conference: AMDO
Notes: HuPBA; MILAB. Approved: no
Call Number: Admin @ si @ RUE2016. Serial: 2846
Author: Petia Radeva
Title: Can Deep Learning and Egocentric Vision for Visual Lifelogging Help Us Eat Better?
Type: Conference Article
Year: 2016
Publication: 19th International Conference of the Catalan Association for Artificial Intelligence
Volume: 4
Address: Barcelona; October 2016
Conference: CCIA
Notes: MILAB. Approved: no
Call Number: Admin @ si @ Rad2016. Serial: 2832
Author: Q. Bao; Marçal Rusiñol; M. Coustaty; Muhammad Muzzamil Luqman; C.D. Tran; Jean-Marc Ogier
Title: Delaunay triangulation-based features for camera-based document image retrieval system
Type: Conference Article
Year: 2016
Publication: 12th IAPR Workshop on Document Analysis Systems
Pages: 1-6
Keywords: Camera-based Document Image Retrieval; Delaunay Triangulation; Feature descriptors; Indexing
Abstract: In this paper, we propose a new feature vector, named DElaunay TRIangulation-based Features (DETRIF), for real-time camera-based document image retrieval. DETRIF is computed from the geometrical constraints of each pair of adjacent triangles in the Delaunay triangulation constructed from the centroids of connected components. In addition, we employ a hashing-based indexing system in order to evaluate the performance of DETRIF and to compare it with other systems such as LLAH and SRIF. The experimentation is carried out on two datasets comprising 400 heterogeneous-content complex linguistic map images (very large, 9800 x 11768 pixel resolution) and 700 textual document images.
Address: Santorini, Greece; April 2016
Conference: DAS
Notes: DAG; 600.061; 600.084; 600.077. Approved: no
Call Number: Admin @ si @ BRC2016. Serial: 2757
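An illustrative sketch of the geometric backbone described above, assuming OpenCV and SciPy: the centroids of connected components of a binarised document image are triangulated with a Delaunay triangulation, from which DETRIF-style descriptors would then be derived. "document.png" is a hypothetical input and this is not the authors' code.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

img = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)        # hypothetical input image
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
n, _, _, centroids = cv2.connectedComponentsWithStats(binary)
centroids = centroids[1:]                                      # drop the background component

tri = Delaunay(centroids)
# each row of tri.simplices is a triangle (indices into `centroids`);
# DETRIF-style descriptors would be computed from pairs of adjacent triangles
print(len(tri.simplices), "Delaunay triangles over", len(centroids), "components")
```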
Author: Saad Minhas; Aura Hernandez-Sabate; Shoaib Ehsan; Katerine Diaz; Ales Leonardis; Antonio Lopez; Klaus McDonald Maier
Title: LEE: A photorealistic Virtual Environment for Assessing Driver-Vehicle Interactions in Self-Driving Mode
Type: Conference Article
Year: 2016
Publication: 14th European Conference on Computer Vision Workshops
Volume: 9915, Pages: 894-900
Keywords: Simulation environment; Automated Driving; Driver-Vehicle interaction
Abstract: Photorealistic virtual environments are crucial for developing and testing automated driving systems in a safe way during trials. As commercially available simulators are expensive and bulky, this paper presents a low-cost, extendable, and easy-to-use (LEE) virtual environment with the aim of highlighting its utility for Level 3 driving automation. In particular, an experiment is performed using the presented simulator to explore the influence of different variables regarding control transfer of the car after the system has been driving autonomously in a highway scenario. The results show that the speed of the car at the moment when the system needs to transfer control back to the human driver is critical.
Address: Amsterdam, The Netherlands; October 2016
Series: LNCS
Conference: ECCVW
Notes: ADAS; IAM; 600.085; 600.076. Approved: no
Call Number: MHE2016. Serial: 2865
Author: Santiago Segui; Michal Drozdzal; Guillem Pascual; Petia Radeva; Carolina Malagelada; Fernando Azpiroz; Jordi Vitria
Title: Generic Feature Learning for Wireless Capsule Endoscopy Analysis
Type: Journal Article
Year: 2016
Publication: Computers in Biology and Medicine (CBM)
Volume: 79, Pages: 163-172
Keywords: Wireless capsule endoscopy; Deep learning; Feature learning; Motility analysis
Abstract: The interpretation and analysis of wireless capsule endoscopy (WCE) recordings is a complex task which requires sophisticated computer-aided decision (CAD) systems to help physicians with video screening and, finally, with the diagnosis. Most CAD systems used in capsule endoscopy share a common system design, but use very different image and video representations. As a result, each time a new clinical application of WCE appears, a new CAD system has to be designed from scratch. This makes the design of new CAD systems very time-consuming. Therefore, in this paper we introduce a system for small intestine motility characterization, based on Deep Convolutional Neural Networks, which circumvents the laborious step of designing specific features for individual motility events. Experimental results show the superiority of the learned features over alternative classifiers constructed using state-of-the-art handcrafted features. In particular, it reaches a mean classification accuracy of 96% for six intestinal motility events, outperforming the other classifiers by a large margin (a 14% relative performance increase).
Notes: OR; MILAB; MV. Approved: no
Call Number: Admin @ si @ SDP2016. Serial: 2836
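The paper above trains a deep CNN so that generic learned features replace handcrafted ones. As a loose illustration of the learned-feature idea only (not the paper's network or training procedure), the sketch below uses an off-the-shelf torchvision backbone as a frame feature extractor; the input batch is random data standing in for WCE frames.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(pretrained=True)  # off-the-shelf backbone, not the paper's CNN
backbone.fc = nn.Identity()                  # drop the classification head, keep 512-d features
backbone.eval()

frames = torch.rand(4, 3, 224, 224)          # random stand-ins for WCE frames
with torch.no_grad():
    features = backbone(frames)              # (4, 512) learned feature vectors
print(features.shape)                        # a motility-event classifier would be trained on these
```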
Author: Sergio Escalera; Jordi Gonzalez; Xavier Baro; Fernando Alonso; Martha Mackay
Title: Care Respite: a remote monitoring eHealth system for improving ambient assisted living
Type: Conference Article
Year: 2016
Publication: Human Motion Analysis for Healthcare Applications
Abstract: Advances in technology that capture human motion have been quite remarkable during the last five years. New sensors have been developed, such as the Microsoft Kinect, Asus Xtion Pro Live, PrimeSense Carmine and Leap Motion. Their main advantages are their non-intrusive nature, low cost and widely available support for developers offered by large corporations or open communities. Although they were originally developed for computer games, they have inspired numerous healthcare related ideas and projects in areas such as Medical Disorder Diagnosis, Assisted Living, Rehabilitation and Surgery.

In Assisted Living, human motion analysis allows continuous monitoring of elderly and vulnerable people and their activities to potentially detect life-threatening events such as falls. Human motion analysis in rehabilitation provides the opportunity for motivating patients through gamification, evaluating prescribed programmes of exercises and assessing patients’ progress. In operating theatres, surgeons may use a gesture-based interface to access medical information or control a tele-surgery system. Human motion analysis may also be used to diagnose a range of mental and physical diseases and conditions.

This event will discuss recent advances in human motion sensing and provide an application to healthcare for networking and exploring potential synergies and collaborations.
Address: Savoy Place, London, UK; May 2016
Conference: HMAHA
Notes: HuPBA; ISE. Approved: no
Call Number: Admin @ si @ EGB2016. Serial: 2852