Records
Author Md. Mostafa Kamal Sarker; Hatem A. Rashwan; Estefania Talavera; Syeda Furruka Banu; Petia Radeva; Domenec Puig
Title MACNet: Multi-scale Atrous Convolution Networks for Food Places Classification in Egocentric Photo-streams Type Conference Article
Year 2018 Publication European Conference on Computer Vision workshops Abbreviated Journal
Volume Issue Pages 423-433
Keywords
Abstract A first-person (wearable) camera continually captures unscripted interactions of the camera user with objects, people, and scenes, reflecting his or her personal and relational tendencies. People's interactions with food events are one such preference. Regulating food intake and its duration is of great importance in protecting against disease. Consequently, this work aims to develop a smart model able to determine how often a person visits food places during a day. The model is a deep end-to-end network for automatic food-place recognition in egocentric photo-streams. In this paper, we apply multi-scale atrous convolution networks to extract the key features related to food places from the input images. The proposed model is evaluated on an in-house private dataset called “EgoFoodPlaces”. Experimental results show promising performance on food-place classification in egocentric photo-streams.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECCVW
Notes MILAB; not mentioned Approved no
Call Number Admin @ si @ SRR2018b Serial 3185
Permanent link to this record
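For readers who want a concrete picture of the multi-scale atrous convolutions named in the MACNet record above, here is a minimal PyTorch sketch of an ASPP-style block with parallel dilated convolutions. The dilation rates, channel sizes, and class name are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiScaleAtrousBlock(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, concatenated
    and projected back (ASPP-style multi-scale context aggregation)."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding=r keeps the spatial size constant for a 3x3 kernel
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

feats = torch.randn(2, 256, 28, 28)           # stand-in backbone features
out = MultiScaleAtrousBlock(256, 128)(feats)  # -> (2, 128, 28, 28)
```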
 

 
Author F.Negin; Pau Rodriguez; M.Koperski; A.Kerboua; Jordi Gonzalez; J.Bourgeois; E.Chapoulie; P.Robert; F.Bremond
Title PRAXIS: Towards automatic cognitive assessment using gesture recognition Type Journal Article
Year 2018 Publication Expert Systems with Applications Abbreviated Journal ESWA
Volume 106 Issue Pages 21-35
Keywords
Abstract The Praxis test is a gesture-based diagnostic test that has been accepted as diagnostically indicative of cortical pathologies such as Alzheimer’s disease. Despite being simple, this test is often skipped by clinicians. In this paper, we propose a novel framework to investigate the potential of static and dynamic upper-body gestures, based on the Praxis test, for automating the test procedures in computer-assisted cognitive assessment of older adults.

In order to carry out gesture recognition as well as correctness assessment of the performances, we have collected a novel, challenging RGB-D gesture video dataset recorded with a Kinect v2, which contains 29 specific gestures suggested by clinicians, recorded from both experts and patients performing the gesture set. Moreover, we propose a framework to learn the dynamics of upper-body gestures, treating the videos as sequences of short-term gesture clips. Our approach first uses body part detection to extract image patches surrounding the hands and then, by means of a fine-tuned convolutional neural network (CNN) model, learns deep hand features, which are then fed to a long short-term memory (LSTM) network to capture the temporal dependencies between video frames.
We report the results of four methods using different modalities. The experiments show the effectiveness of our deep-learning-based approach in the gesture recognition and performance assessment tasks. Clinicians' satisfaction with the assessment reports indicates the framework's potential impact on diagnosis.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ NRK2018 Serial 3669
Permanent link to this record
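As a concrete illustration of the CNN-to-LSTM pipeline described in the PRAXIS record above, here is a minimal PyTorch sketch: per-frame hand-patch features from a CNN backbone feed an LSTM whose final hidden state classifies the gesture. The ResNet-18 backbone, clip length, and input size are illustrative assumptions; only the 29-gesture count comes from the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

class GestureCNNLSTM(nn.Module):
    def __init__(self, num_gestures=29, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()               # expose 512-d pooled features
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_gestures)

    def forward(self, clips):                     # clips: (B, T, 3, 224, 224)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))     # per-frame features: (B*T, 512)
        _, (h_n, _) = self.lstm(feats.reshape(b, t, -1))
        return self.head(h_n[-1])                 # gesture logits: (B, 29)

logits = GestureCNNLSTM()(torch.randn(2, 8, 3, 224, 224))  # (2, 29)
```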
 

 
Author Alicia Fornes; Bart Lamiroy
Title Graphics Recognition, Current Trends and Evolutions Type Book Whole
Year 2018 Publication Graphics Recognition, Current Trends and Evolutions Abbreviated Journal
Volume 11009 Issue Pages
Keywords
Abstract This book constitutes the thoroughly refereed post-conference proceedings of the 12th International Workshop on Graphics Recognition, GREC 2017, held in Kyoto, Japan, in November 2017.
The 10 revised full papers presented were carefully reviewed and selected from 14 initial submissions. They cover both classical and emerging topics of graphics recognition, namely analysis and detection of diagrams, search and classification, optical music recognition, and interpretation of engineering drawings and maps.
Address
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN 978-3-030-02283-9 Medium
Area Expedition Conference
Notes DAG; 600.121 Approved no
Call Number Admin @ si @ FoL2018 Serial 3171
Permanent link to this record
 

 
Author Arnau Baro; Pau Riba; Jorge Calvo-Zaragoza; Alicia Fornes
Title Optical Music Recognition by Long Short-Term Memory Networks Type Book Chapter
Year 2018 Publication Graphics Recognition. Current Trends and Evolutions Abbreviated Journal
Volume 11009 Issue Pages 81-95
Keywords Optical Music Recognition; Recurrent Neural Network; Long Short-Term Memory
Abstract Optical Music Recognition refers to the task of transcribing the image of a music score into a machine-readable format. Many music scores are written on a single staff and can therefore be treated as a sequence. This work thus explores the use of Long Short-Term Memory (LSTM) Recurrent Neural Networks to read the music score sequentially, where the LSTM helps keep the context. For training, we have used a synthetic dataset of more than 40000 images, labeled at the primitive level. The experimental results are promising, showing the benefits of our approach.
Address
Corporate Author Thesis
Publisher Springer Place of Publication Editor A. Fornes, B. Lamiroy
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN 978-3-030-02283-9 Medium
Area Expedition Conference GREC
Notes DAG; 600.097; 601.302; 601.330; 600.121 Approved no
Call Number Admin @ si @ BRC2018 Serial 3227
Permanent link to this record
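The record above treats a single-staff score as a sequence; the following sketch shows one common way to realize that idea, with image columns as time steps for a bidirectional LSTM whose per-column outputs can be trained with CTC against primitive-level labels. The image height, alphabet size, and layer widths are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class StaffReader(nn.Module):
    def __init__(self, img_height=64, num_primitives=100):
        super().__init__()
        self.lstm = nn.LSTM(img_height, 128, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(256, num_primitives + 1)  # +1 for the CTC blank

    def forward(self, staff):            # staff: (B, H, W) grayscale image
        cols = staff.transpose(1, 2)     # (B, W, H): one time step per column
        out, _ = self.lstm(cols)
        return self.head(out)            # per-column logits: (B, W, C)

model = StaffReader()
logits = model(torch.rand(2, 64, 400))              # (2, 400, 101)
log_probs = logits.log_softmax(-1).transpose(0, 1)  # (T, B, C), as nn.CTCLoss expects
```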
 

 
Author Esmitt Ramirez; Carles Sanchez; Agnes Borras; Marta Diez-Ferrer; Antoni Rosell; Debora Gil
Title BronchoX: bronchoscopy exploration software for biopsy intervention planning Type Journal
Year 2018 Publication Healthcare Technology Letters Abbreviated Journal HTL
Volume 5 Issue 5 Pages 177–182
Keywords
Abstract Virtual bronchoscopy (VB) is a non-invasive exploration tool for intervention planning and navigation of possible pulmonary lesions (PLs). VB software involves locating a PL and computing a route, starting from the trachea, to reach it. Selecting a VB system can be a complex process, and there is no consensus in the medical software development community on which system or framework is best suited. The authors present Bronchoscopy Exploration (BronchoX), a VB software package for planning biopsy interventions that generates physician-readable instructions to reach the PLs. The solution is open source, multiplatform, and extensible for future functionality, designed by their multidisciplinary research and development group. BronchoX combines different algorithms for segmentation, visualisation, and navigation of the respiratory tract. The reported experiments both test the effectiveness of the proposal as exploration software and measure its accuracy as a guidance system for reaching PLs. To this end, 40 different virtual planning paths were created to guide physicians to distal bronchioles. The results show that BronchoX is functional software and demonstrate that, by following simple instructions, it is possible to reach distal lesions from the trachea.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.096; 600.075; 601.323; 601.337; 600.145 Approved no
Call Number Admin @ si @ RSB2018a Serial 3132
Permanent link to this record
 

 
Author Rain Eric Haamer; Eka Rusadze; Iiris Lusi; Tauseef Ahmed; Sergio Escalera; Gholamreza Anbarjafari
Title Review on Emotion Recognition Databases Type Book Chapter
Year 2018 Publication Human-Robot Interaction: Theory and Application Abbreviated Journal
Volume Issue Pages
Keywords emotion; computer vision; databases
Abstract Over the past few decades, human-computer interaction has become more important in our daily lives, and research has developed in many directions: memory research, depression detection, behavioural deficiency detection, lie detection, (hidden) emotion recognition, etc. Because of that, the number of emotion and face databases, whether generic or tailored to specific needs, has grown immensely. A comprehensive yet compact guide is thus needed to help researchers find the most suitable database and understand what types of databases already exist. In this paper, different elicitation methods are discussed, and the databases are organized primarily into neat and informative tables based on their format.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-78923-316-2 Medium
Area Expedition Conference
Notes HUPBA; 602.133 Approved no
Call Number Admin @ si @ HRL2018 Serial 3212
Permanent link to this record
 

 
Author Gholamreza Anbarjafari; Sergio Escalera
Title Human-Robot Interaction: Theory and Application Type Book Whole
Year 2018 Publication Human-Robot Interaction: Theory and Application Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-78923-316-2 Medium
Area Expedition Conference
Notes HUPBA Approved no
Call Number Admin @ si @ AnE2018 Serial 3216
Permanent link to this record
 

 
Author Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate; Debora Gil
Title Continuous head pose estimation using manifold subspace embedding and multivariate regression Type Journal Article
Year 2018 Publication IEEE Access Abbreviated Journal ACCESS
Volume 6 Issue Pages 18325 - 18334
Keywords Head Pose estimation; HOG features; Generalized Discriminative Common Vectors; B-splines; Multiple linear regression
Abstract In this paper, a continuous head pose estimation system is proposed to estimate yaw and pitch head angles from raw facial images. Our approach is based on manifold-learning methods, due to their promising generalization properties shown for face modelling from images. The method combines histograms of oriented gradients, generalized discriminative common vectors, and continuous local regression to achieve successful performance. Our proposal was tested on multiple standard face datasets, as well as in a realistic scenario. Results show a considerable performance improvement and higher consistency of our model in comparison with other state-of-the-art methods, with angular errors varying between 9 and 17 degrees.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2169-3536 ISBN Medium
Area Expedition Conference
Notes ADAS; 600.118 Approved no
Call Number Admin @ si @ DMH2018b Serial 3091
Permanent link to this record
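A minimal sketch of the pipeline named in the head-pose record above: HOG features from face crops, a linear subspace projection, and multivariate regression of (yaw, pitch). PCA stands in for the generalized discriminative common vectors used by the authors, and the toy data and dimensions are assumptions for illustration.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
faces = rng.random((200, 64, 64))          # stand-in face crops
angles = rng.uniform(-60, 60, (200, 2))    # (yaw, pitch) ground truth

# HOG descriptor per face, then subspace projection + multivariate regression
X = np.array([hog(f, pixels_per_cell=(8, 8)) for f in faces])
model = make_pipeline(PCA(n_components=50), LinearRegression())
model.fit(X, angles)
yaw, pitch = model.predict(X[:1])[0]       # continuous angle estimates
```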
 

 
Author Jianzhu Guo; Zhen Lei; Jun Wan; Egils Avots; Noushin Hajarolasvadi; Boris Knyazev; Artem Kuharenko; Julio C. S. Jacques Junior; Xavier Baro; Hasan Demirel; Sergio Escalera; Juri Allik; Gholamreza Anbarjafari
Title Dominant and Complementary Emotion Recognition from Still Images of Faces Type Journal Article
Year 2018 Publication IEEE Access Abbreviated Journal ACCESS
Volume 6 Issue Pages 26391 - 26403
Keywords
Abstract Emotion recognition has a key role in affective computing. Recently, fine-grained emotion analysis, such as compound facial expressions of emotion, has attracted great interest from researchers working on affective computing. A compound facial emotion includes dominant and complementary emotions (e.g., happily-disgusted and sadly-fearful), which is more detailed than the seven classical facial emotions (e.g., happy, disgust, and so on). Current studies on compound emotions are limited to data sets with a small number of categories and unbalanced data distributions, with labels obtained automatically by machine-learning-based algorithms, which can lead to inaccuracies. To address these problems, we released the iCV-MEFED data set, which includes 50 classes of compound emotions and labels assessed by psychologists. The task is challenging due to the high similarity of compound facial emotions from different categories. In addition, we organized a challenge based on the proposed iCV-MEFED data set, held at the FG 2017 workshop. In this paper, we analyze the top three winning methods and perform further detailed experiments on the proposed data set. Experiments indicate that pairs of compound emotions (e.g., surprisingly-happy vs happily-surprised) are more difficult to recognize than the seven basic emotions. We hope the proposed data set can help pave the way for further research on compound facial emotion recognition.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ GLW2018 Serial 3122
Permanent link to this record
 

 
Author Zhijie Fang; Antonio Lopez
Title Is the Pedestrian going to Cross? Answering by 2D Pose Estimation Type Conference Article
Year 2018 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal
Volume Issue Pages 1271 - 1276
Keywords
Abstract Our recent work suggests that, thanks to nowadays powerful CNNs, image-based 2D pose estimation is a promising cue for determining pedestrian intentions such as crossing the road in the path of the ego-vehicle, stopping before entering the road, and starting to walk or bending towards the road. This statement is based on results obtained on non-naturalistic sequences (Daimler dataset), i.e. sequences choreographed specifically for performing the study. Fortunately, a new publicly available dataset (JAAD) has appeared recently that allows developing methods for detecting pedestrian intentions in naturalistic driving conditions; more specifically, for addressing the relevant question: is the pedestrian going to cross? Accordingly, in this paper we use JAAD to assess the usefulness of 2D pose estimation for answering this question. We combine CNN-based pedestrian detection, tracking, and pose estimation to predict the crossing action from monocular images. Overall, the proposed pipeline provides new state-of-the-art results.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IV
Notes ADAS; 600.124; 600.116; 600.118 Approved no
Call Number Admin @ si @ FaL2018 Serial 3181
Permanent link to this record
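The record above predicts crossing intention from tracked 2D poses; the sketch below shows a plausible final stage, a small recurrent classifier over per-frame joint coordinates. The joint count, window length, and GRU head are assumptions; detection, tracking, and pose estimation are external to the sketch.

```python
import torch
import torch.nn as nn

class CrossingPredictor(nn.Module):
    def __init__(self, num_joints=18, hidden=64):
        super().__init__()
        self.gru = nn.GRU(num_joints * 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, poses):                   # poses: (B, T, num_joints, 2)
        b, t = poses.shape[:2]
        _, h_n = self.gru(poses.view(b, t, -1)) # flatten joints per time step
        return torch.sigmoid(self.head(h_n[-1]))  # P(pedestrian will cross)

p_cross = CrossingPredictor()(torch.randn(4, 14, 18, 2))  # (4, 1)
```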
 

 
Author Akhil Gurram; Onay Urfalioglu; Ibrahim Halfaoui; Fahd Bouzaraa; Antonio Lopez
Title Monocular Depth Estimation by Learning from Heterogeneous Datasets Type Conference Article
Year 2018 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal
Volume Issue Pages 2176 - 2181
Keywords
Abstract Depth estimation provides essential information for autonomous driving and driver assistance. Monocular depth estimation is especially interesting from a practical point of view, since using a single camera is cheaper than many other options and avoids the need for the continuous calibration strategies required by stereo-vision approaches. State-of-the-art methods for monocular depth estimation are based on Convolutional Neural Networks (CNNs). A promising line of work consists of introducing additional semantic information about the traffic scene when training CNNs for depth estimation. In practice, this means that the depth data used for CNN training is complemented with images having pixel-wise semantic labels, which are usually difficult to annotate (e.g., crowded urban images). Moreover, so far it has been common practice to assume that the same raw training data is associated with both types of ground truth, i.e., depth and semantic labels. The main contribution of this paper is to show that this hard constraint can be circumvented, i.e., that we can train CNNs for depth estimation by leveraging depth and semantic information coming from heterogeneous datasets. In order to illustrate the benefits of our approach, we combine the KITTI depth and Cityscapes semantic segmentation datasets, outperforming state-of-the-art results on monocular depth estimation.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IV
Notes ADAS; 600.124; 600.116; 600.118 Approved no
Call Number Admin @ si @ GUH2018 Serial 3183
Permanent link to this record
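To make the heterogeneous-dataset idea from the record above concrete, here is a minimal PyTorch sketch: a shared encoder with separate depth and semantic heads, where each training step applies only the loss whose ground truth the current batch actually carries. The toy encoder, shapes, loss choices, and the helper name step are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
depth_head = nn.Conv2d(32, 1, 1)   # regression head (e.g. KITTI-style depth)
sem_head = nn.Conv2d(32, 19, 1)    # segmentation head (e.g. Cityscapes classes)
params = [*encoder.parameters(), *depth_head.parameters(),
          *sem_head.parameters()]
opt = torch.optim.Adam(params, lr=1e-4)

def step(images, target, task):
    """One update using only the supervision this batch provides."""
    feats = encoder(images)
    if task == "depth":            # batch comes with depth ground truth
        loss = nn.functional.l1_loss(depth_head(feats), target)
    else:                          # batch comes with semantic labels
        loss = nn.functional.cross_entropy(sem_head(feats), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

step(torch.randn(2, 3, 64, 64), torch.rand(2, 1, 64, 64), "depth")
step(torch.randn(2, 3, 64, 64), torch.randint(0, 19, (2, 64, 64)), "semantics")
```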
 

 
Author Felipe Codevilla; Matthias Muller; Antonio Lopez; Vladlen Koltun; Alexey Dosovitskiy
Title End-to-end Driving via Conditional Imitation Learning Type Conference Article
Year 2018 Publication IEEE International Conference on Robotics and Automation Abbreviated Journal
Volume Issue Pages 4693 - 4700
Keywords
Abstract Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands. The supplementary video can be viewed at this https URL
Address Brisbane; Australia; May 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICRA
Notes ADAS; 600.116; 600.124; 600.118 Approved no
Call Number Admin @ si @ CML2018 Serial 3108
Permanent link to this record
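A minimal sketch of the command-conditioned (branched) policy idea from the record above: a shared perception module and one control branch per high-level command, with the command selecting which branch emits the action. Layer sizes, the four-command set, and the two-dimensional action are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConditionalPolicy(nn.Module):
    def __init__(self, num_commands=4, feat_dim=128):
        super().__init__()
        self.perception = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU(),
        )
        # one branch per command, e.g. follow-lane / left / straight / right
        self.branches = nn.ModuleList(
            [nn.Linear(feat_dim, 2) for _ in range(num_commands)]
        )

    def forward(self, image, command):      # command: (B,) integer indices
        feats = self.perception(image)
        out = torch.stack([b(feats) for b in self.branches], dim=1)
        return out[torch.arange(len(command)), command]  # commanded branch only

actions = ConditionalPolicy()(torch.randn(2, 3, 88, 200),
                              torch.tensor([0, 3]))  # (2, 2): steer, throttle
```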
 

 
Author Jelena Gorbova; Egils Avots; Iiris Lusi; Mark Fishel; Sergio Escalera; Gholamreza Anbarjafari
Title Integrating Vision and Language for First Impression Personality Analysis Type Journal Article
Year 2018 Publication IEEE Multimedia Abbreviated Journal MULTIMEDIA
Volume 25 Issue 2 Pages 24 - 33
Keywords
Abstract The authors present a novel methodology for analyzing integrated audiovisual signals and language to assess a person’s personality. An evaluation of their proposed multimodal method using a job-candidate screening system that predicted five personality traits from a short video demonstrates the method’s effectiveness.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA; 602.133 Approved no
Call Number Admin @ si @ GAL2018 Serial 3124
Permanent link to this record
 

 
Author Yagmur Gucluturk; Umut Guclu; Xavier Baro; Hugo Jair Escalante; Isabelle Guyon; Sergio Escalera; Marcel A. J. van Gerven; Rob van Lier
Title Multimodal First Impression Analysis with Deep Residual Networks Type Journal Article
Year 2018 Publication IEEE Transactions on Affective Computing Abbreviated Journal TAC
Volume 8 Issue 3 Pages 316-329
Keywords
Abstract People form first impressions about the personalities of unfamiliar individuals even after very brief interactions with them. In this study we present and evaluate several models that mimic this automatic social behavior. Specifically, we present several models trained on a large dataset of short YouTube video blog posts for predicting apparent Big Five personality traits of people and whether they seem suitable to be recommended to a job interview. Along with presenting our audiovisual approach and the results that won third place in the ChaLearn First Impressions Challenge, we investigate modeling in different modalities, including audio only, visual only, language only, audiovisual, and a combination of audiovisual and language. Our results demonstrate that the best performance is obtained using a fusion of all data modalities. Finally, in order to promote explainability in machine learning and to provide an example for the upcoming ChaLearn challenges, we present a simple approach for explaining the predictions for job interview recommendations.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ GGB2018 Serial 3210
Permanent link to this record
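As a concrete (and deliberately simplified) illustration of the modality-fusion comparison in the record above, the sketch below concatenates precomputed audio, visual, and language feature vectors and regresses the five apparent-personality scores. The feature dimensions and the two-layer head are assumptions; the residual-network feature extractors are external to the sketch.

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, dims=(128, 256, 64)):    # audio, visual, language
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(sum(dims), 128), nn.ReLU(),
            nn.Linear(128, 5), nn.Sigmoid(),    # Big Five scores in [0, 1]
        )

    def forward(self, audio, visual, language):
        # concatenate modality features, then regress the five traits
        return self.head(torch.cat([audio, visual, language], dim=-1))

traits = LateFusion()(torch.randn(2, 128), torch.randn(2, 256),
                      torch.randn(2, 64))       # (2, 5)
```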
 

 
Author Eduardo Aguilar; Beatriz Remeseiro; Marc Bolaños; Petia Radeva
Title Grab, Pay, and Eat: Semantic Food Detection for Smart Restaurants Type Journal Article
Year 2018 Publication IEEE Transactions on Multimedia Abbreviated Journal
Volume 20 Issue 12 Pages 3266 - 3275
Keywords
Abstract The increasing awareness of people towards their nutritional habits has drawn considerable attention to the field of automatic food analysis. Focusing on self-service restaurant environments, automatic food analysis is not only useful for extracting nutritional information from the foods selected by customers; it is also of high interest for speeding up service by solving the bottleneck produced at the cashiers in times of high demand. In this paper, we address the problem of automatic food tray analysis in canteen and restaurant environments, which consists of predicting the multiple foods placed on a tray image. We propose a new approach to food analysis based on convolutional neural networks, which we name Semantic Food Detection, integrating food localization, recognition, and segmentation in the same framework. We demonstrate that our method improves state-of-the-art food detection by a considerable margin on the public UNIMIB2016 dataset, achieving about 90% in terms of F-measure, and thus provides a significant technological advance towards automatic billing in restaurant environments.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ ARB2018 Serial 3236
Permanent link to this record
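The record above integrates localization, recognition, and segmentation; the sketch below illustrates one simplified reading of segmentation-driven detection, where per-pixel food scores are turned into per-item detections via connected components. The toy inputs and the helper name detect_foods are hypothetical; the paper's actual networks are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def detect_foods(mask, class_scores):
    """mask: (H, W) bool food/background; class_scores: (C, H, W) per-pixel."""
    regions, n = ndimage.label(mask)           # connected food regions
    detections = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(regions == i)
        # majority class over the region's pixels, plus its bounding box
        cls = class_scores[:, ys, xs].mean(axis=1).argmax()
        box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
        detections.append((int(cls), box))
    return detections

mask = np.zeros((64, 64), bool); mask[10:20, 10:30] = True
scores = np.random.rand(5, 64, 64)
print(detect_foods(mask, scores))  # [(class_id, (x0, y0, x1, y1))]
```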