Author: Alvaro Peris; Marc Bolaños; Petia Radeva
Title: Video Description Using Bidirectional Recurrent Neural Networks
Type: Conference Article
Year: 2016
Publication: 25th International Conference on Artificial Neural Networks (ICANN)
Volume: 2
Pages: 3-11
Keywords: Video description; Neural Machine Translation; Bidirectional Recurrent Neural Networks; LSTM; Convolutional Neural Networks
Abstract: Although traditionally used in the machine translation field, the encoder-decoder framework has recently been applied to the generation of video and image descriptions. The combination of Convolutional and Recurrent Neural Networks in these models has proven to outperform the previous state of the art, obtaining more accurate video descriptions. In this work we propose to push this model further by introducing two contributions into the encoding stage: first, producing richer image representations by combining object and location information from Convolutional Neural Networks; and second, introducing Bidirectional Recurrent Neural Networks to capture both forward and backward temporal relationships in the input frames.
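As a quick illustration of the bidirectional encoding idea described in this abstract, here is a minimal sketch (not the authors' implementation; PyTorch, with illustrative names and dimensions) of a bidirectional LSTM running over per-frame CNN features:

```python
# Hypothetical sketch: per-frame CNN features pass through a bidirectional
# LSTM so each timestep carries forward and backward temporal context.
import torch
import torch.nn as nn

class BidirectionalVideoEncoder(nn.Module):
    def __init__(self, feature_dim=2048, hidden_dim=512):
        super().__init__()
        # bidirectional=True runs one LSTM forward and one backward in time
        self.blstm = nn.LSTM(feature_dim, hidden_dim,
                             batch_first=True, bidirectional=True)

    def forward(self, frame_features):
        # frame_features: (batch, num_frames, feature_dim), e.g. CNN outputs
        outputs, _ = self.blstm(frame_features)
        # outputs: (batch, num_frames, 2 * hidden_dim) -- forward and
        # backward states concatenated, ready for a decoder to consume
        return outputs

encoder = BidirectionalVideoEncoder()
video = torch.randn(2, 26, 2048)  # 2 videos, 26 frames, 2048-d features
print(encoder(video).shape)       # torch.Size([2, 26, 1024])
```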
Address: Barcelona; September 2016
Conference: ICANN
Notes: MILAB
Call Number: Admin @ si @ PBR2016; Serial 2833
 

 
Author: Marc Bolaños; Petia Radeva
Title: Simultaneous Food Localization and Recognition
Type: Conference Article
Year: 2016
Publication: 23rd International Conference on Pattern Recognition (ICPR)
Preprint: CoRR abs/1604.07953
Abstract: The development of automatic nutrition diaries, which would make it possible to objectively keep track of everything we eat, could enable a whole new world of possibilities for people concerned about their nutrition patterns. With this purpose, in this paper we propose the first method for simultaneous food localization and recognition. Our method is based on two main steps: first, producing a food activation map on the input image (i.e., a heat map of probabilities) to generate bounding-box proposals and, second, recognizing each of the food types or food-related objects present in each bounding box. We demonstrate that our proposal, compared to the most similar problem nowadays, object localization, is able to obtain high precision and reasonable recall levels with only a few bounding boxes. Furthermore, we show that it is applicable to both conventional and egocentric images.
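A minimal sketch of the two-step idea in this abstract, under the assumption that a food-probability heat map is already available (the heat map and the downstream classifier here are placeholders, not the paper's code):

```python
# Threshold a food-probability heat map into candidate boxes (step one);
# each box would then be cropped and classified (step two).
import numpy as np
from scipy import ndimage

def boxes_from_activation_map(heat, thresh=0.5):
    """Return (row0, col0, row1, col1) boxes around high-probability blobs."""
    mask = heat > thresh
    labeled, num = ndimage.label(mask)        # connected components
    boxes = []
    for sl in ndimage.find_objects(labeled):
        boxes.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))
    return boxes

heat = np.zeros((100, 100))
heat[20:40, 30:60] = 0.9                      # fake food region
print(boxes_from_activation_map(heat))        # [(20, 30, 40, 60)]
```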
 
Address: Cancun; Mexico; December 2016
Conference: ICPR
Notes: MILAB; no proj
Call Number: Admin @ si @ BoR2016; Serial 2834
 

 
Author: Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva
Title: With Whom Do I Interact? Detecting Social Interactions in Egocentric Photo-streams
Type: Conference Article
Year: 2016
Publication: 23rd International Conference on Pattern Recognition (ICPR)
Abstract: Given a user wearing a low-frame-rate wearable camera during a day, this work aims to automatically detect the moments when the user becomes engaged in a social interaction, solely by reviewing the photos automatically captured by the worn camera. The proposed method, inspired by the sociological concept of F-formation, exploits the distance and orientation of the individuals appearing in the scene, with respect to the user, from a bird's-eye perspective. As a result, the interaction pattern over the sequence can be understood as a two-dimensional time series that corresponds to the temporal evolution of the distance and orientation features over time. A Long Short-Term Memory-based Recurrent Neural Network is then trained to classify each time series. Experimental evaluation over a dataset of 30,000 images has shown promising results of the proposed method for social interaction detection in egocentric photo-streams.
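The classification step lends itself to a short sketch. Below is an illustrative (hypothetical, not the authors') PyTorch LSTM over the two-dimensional distance/orientation series:

```python
# Classify a 2-D (distance, orientation) time series as interaction or not.
import torch
import torch.nn as nn

class InteractionClassifier(nn.Module):
    def __init__(self, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_dim,
                            batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)   # interaction vs. none

    def forward(self, series):
        # series: (batch, timesteps, 2) -- distance and orientation per frame
        _, (h_n, _) = self.lstm(series)
        return self.head(h_n[-1])              # logits from last hidden state

model = InteractionClassifier()
seq = torch.randn(4, 20, 2)                    # 4 sequences of 20 frames
print(model(seq).shape)                        # torch.Size([4, 2])
```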
Address: Cancun; Mexico; December 2016
Conference: ICPR
Notes: MILAB
Call Number: Admin @ si @ ADR2016d; Serial 2835
 

 
Author: Santiago Segui; Michal Drozdzal; Guillem Pascual; Petia Radeva; Carolina Malagelada; Fernando Azpiroz; Jordi Vitria
Title: Generic Feature Learning for Wireless Capsule Endoscopy Analysis
Type: Journal Article
Year: 2016
Publication: Computers in Biology and Medicine (CBM)
Volume: 79
Pages: 163-172
Keywords: Wireless capsule endoscopy; Deep learning; Feature learning; Motility analysis
Abstract: The interpretation and analysis of wireless capsule endoscopy (WCE) recordings is a complex task which requires sophisticated computer-aided decision (CAD) systems to help physicians with video screening and, ultimately, with diagnosis. Most CAD systems used in capsule endoscopy share a common system design but use very different image and video representations. As a result, each time a new clinical application of WCE appears, a new CAD system has to be designed from scratch, which makes the design of new CAD systems very time-consuming. Therefore, in this paper we introduce a system for small intestine motility characterization, based on Deep Convolutional Neural Networks, which circumvents the laborious step of designing specific features for individual motility events. Experimental results show the superiority of the learned features over alternative classifiers constructed using state-of-the-art handcrafted features. In particular, the system reaches a mean classification accuracy of 96% for six intestinal motility events, outperforming the other classifiers by a large margin (a 14% relative performance increase).
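To make the idea concrete, here is a hedged sketch of a generic CNN classifier over WCE frames; the architecture is purely illustrative and not the network used in the paper:

```python
# A small CNN that learns features from raw frames directly, replacing
# handcrafted per-event features; six outputs for six motility events.
import torch
import torch.nn as nn

motility_cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 6),                  # six intestinal motility event classes
)

frames = torch.randn(8, 3, 128, 128)   # a batch of WCE frames
print(motility_cnn(frames).shape)      # torch.Size([8, 6])
```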
Notes: OR; MILAB; MV
Call Number: Admin @ si @ SDP2016; Serial 2836
 

 
Author: Pedro Herruzo; Marc Bolaños; Petia Radeva
Title: Can a CNN Recognize Catalan Diet?
Type: Book Chapter
Year: 2016
Publication: AIP Conference Proceedings
Volume: 1773
Preprint: CoRR abs/1607.08811
Abstract: Nowadays, we can find several diseases related to the unhealthy diet habits of the population, such as diabetes, obesity, anemia, bulimia and anorexia. In many cases, these diseases are related to people's food consumption. The Mediterranean diet is scientifically known as a healthy diet that helps to prevent many metabolic diseases. In particular, our work focuses on the recognition of Mediterranean food and dishes. This methodology would make it possible to analyse the daily habits of users with wearable cameras, within the topic of lifelogging. By using automatic mechanisms, we could build an objective tool for the analysis of a patient's behavior, allowing specialists to discover unhealthy food patterns and understand the user's lifestyle.
With the aim of automatically recognizing a complete diet, we introduce a challenging multi-labeled dataset related to the Mediterranean diet called FoodCAT. The first type of label provided consists of 115 food classes with an average of 400 images per dish, and the second consists of 12 food categories with an average of 3,800 pictures per class. This dataset will serve as a basis for the development of automatic diet recognition. In this context, deep learning, and more specifically Convolutional Neural Networks (CNNs), are currently the state-of-the-art methods for automatic food recognition. In our work, we compare several architectures for image classification, with the purpose of diet recognition. Applying the best model for recognizing food categories, we achieve a top-1 accuracy of 72.29% and a top-5 accuracy of 97.07%. In a complete diet recognition of dishes from the Mediterranean diet, enlarged with the Food-101 dataset for international dish recognition, we achieve a top-1 accuracy of 68.07% and a top-5 accuracy of 89.53%, over a total of 115+101 food classes.
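For reference, the top-1/top-5 metrics quoted above can be computed as follows; this is a small self-contained example with synthetic logits, where the 115-class setup simply mirrors FoodCAT's dish labels:

```python
# Worked example of top-k accuracy: a prediction counts as correct if the
# true label appears among the k highest-scoring classes.
import torch

def topk_accuracy(logits, labels, k):
    topk = logits.topk(k, dim=1).indices           # (batch, k) predictions
    return (topk == labels.unsqueeze(1)).any(1).float().mean().item()

logits = torch.randn(32, 115)                      # 115 dish classes
labels = torch.randint(0, 115, (32,))
print(topk_accuracy(logits, labels, 1), topk_accuracy(logits, labels, 5))
```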
 
Notes: MILAB
Call Number: Admin @ si @ HBR2016; Serial 2837
 

 
Author: Fatemeh Noroozi; Marina Marjanovic; Angelina Njegus; Sergio Escalera; Gholamreza Anbarjafari
Title: Fusion of Classifier Predictions for Audio-Visual Emotion Recognition
Type: Conference Article
Year: 2016
Publication: 23rd International Conference on Pattern Recognition Workshops (ICPRW)
Abstract: This paper presents a novel multimodal emotion recognition system based on the analysis of audio and visual cues. MFCC-based features are extracted from the audio channel, and facial-landmark geometric relations are computed from the visual data. Both sets of features are learnt separately using state-of-the-art classifiers. In addition, we summarise each emotion video into a reduced set of key-frames, which are learnt in order to visually discriminate emotions by means of a Convolutional Neural Network. Finally, the confidence outputs of all classifiers from all modalities are used to define a new feature space to be learnt for final emotion prediction, in a late fusion/stacking fashion. Experiments conducted on the eNTERFACE'05 database show significant performance improvements of our proposed system in comparison to state-of-the-art approaches.
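A minimal sketch of the late fusion/stacking step described above, with synthetic confidence scores standing in for the three modality classifiers (scikit-learn; the choice of meta-classifier is an assumption):

```python
# Concatenate per-modality classifier confidences into a new feature vector
# and train a meta-classifier on it (stacking / late fusion).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_emotions = 200, 6
audio_conf = rng.random((n, n_emotions))   # e.g. MFCC-based classifier
geom_conf  = rng.random((n, n_emotions))   # facial-landmark classifier
cnn_conf   = rng.random((n, n_emotions))   # key-frame CNN
labels     = rng.integers(0, n_emotions, n)

stacked = np.hstack([audio_conf, geom_conf, cnn_conf])  # new feature space
meta = LogisticRegression(max_iter=1000).fit(stacked, labels)
print(meta.predict(stacked[:3]))           # final emotion predictions
```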
 
Address: Cancun; Mexico; December 2016
Conference: ICPRW
Notes: HuPBA; MILAB
Call Number: Admin @ si @ NMN2016; Serial 2839
 

 
Author: Iiris Lusi; Sergio Escalera; Gholamreza Anbarjafari
Title: SASE: RGB-Depth Database for Human Head Pose Estimation
Type: Conference Article
Year: 2016
Publication: 14th European Conference on Computer Vision Workshops (ECCVW)
Address: Amsterdam; The Netherlands; October 2016
Conference: ECCVW
Notes: HuPBA; MILAB
Call Number: Admin @ si @ LEA2016a; Serial 2840
 

 
Author: Xavier Perez Sala; Fernando De la Torre; Laura Igual; Sergio Escalera; Cecilio Angulo
Title: Subspace Procrustes Analysis
Type: Journal Article
Year: 2017
Publication: International Journal of Computer Vision (IJCV)
Volume: 121
Issue: 3
Pages: 327-343
Abstract: Procrustes Analysis (PA) has been a popular technique to align and build 2-D statistical models of shapes. Given a set of 2-D shapes, PA is applied to remove rigid transformations; a non-rigid 2-D model is then computed by modeling the residual (e.g., with PCA). Although PA has been widely used, it has several limitations for modeling 2-D shapes: occluded landmarks and missing data can result in local-minima solutions, and there is no guarantee that the 2-D shapes provide a uniform sampling of the 3-D space of rotations for the object. To address these issues, this paper proposes Subspace PA (SPA). Given several instances of a 3-D object, SPA computes the mean and a 2-D subspace that can simultaneously model all rigid and non-rigid deformations of the 3-D object. We propose a discrete (DSPA) and a continuous (CSPA) formulation for SPA, assuming that 3-D samples of an object are provided. DSPA extends the traditional PA and produces unbiased 2-D models by uniformly sampling different views of the 3-D object. CSPA provides a continuous approach to uniformly sample the space of 3-D rotations, and is more efficient in space and time. Experiments using SPA to learn 2-D models of bodies from motion-capture data illustrate the benefits of our approach.
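For context, classical pairwise PA, the step that SPA generalizes, can be sketched in a few lines; scipy.spatial.procrustes removes translation, scale, and rotation between two shapes:

```python
# Align a rotated, scaled, translated copy of a 2-D shape back onto the
# original; the residual disparity is what non-rigid models (e.g. PCA) fit.
import numpy as np
from scipy.spatial import procrustes

ref = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])  # reference shape
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
shape = 2.0 * ref @ R.T + np.array([3., -1.])             # rigidly transformed

m1, m2, disparity = procrustes(ref, shape)
print(round(disparity, 10))   # ~0.0: rigid differences fully removed
```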
 
Notes: MILAB; HuPBA; no proj
Call Number: Admin @ si @ PTI2017; Serial 2841
 

 
Author: Frederic Sampedro; Anna Domenech; Sergio Escalera; Ignasi Carrio
Title: Computing quantitative indicators of structural renal damage in pediatric DMSA scans
Type: Journal Article
Year: 2017
Publication: Revista Española de Medicina Nuclear e Imagen Molecular (REMNIM)
Volume: 36
Issue: 2
Pages: 72-77
Abstract:
OBJECTIVES: To propose, implement, and validate a computational framework for the quantification of structural renal damage from 99mTc-dimercaptosuccinic acid (DMSA) scans in an observer-independent manner.
MATERIALS AND METHODS: From a set of 16 pediatric DMSA-positive scans and 16 matched controls, and using both expert-guided and automatic approaches, a set of image-derived quantitative indicators was computed based on the relative size, intensity, and histogram distribution of the lesion. A correlation analysis was conducted to investigate the association of these indicators with other clinical data of interest in this scenario, including C-reactive protein (CRP), white cell count, vesicoureteral reflux, fever, relative perfusion, and the presence of renal sequelae in a 6-month follow-up DMSA scan.
RESULTS: A fully automatic lesion detection and segmentation system was able to successfully classify DMSA-positive from negative scans (AUC=0.92, sensitivity=81%, specificity=94%). The image-computed relative size of the lesion correlated with the presence of fever and with CRP levels (p<0.05), and a measurement derived from the distribution histogram of the lesion obtained significant performance in the detection of permanent renal damage (AUC=0.86, sensitivity=100%, specificity=75%).
CONCLUSIONS: The proposed computational framework for the quantification of structural renal damage from DMSA scans showed promising potential to complement visual diagnosis and non-imaging indicators.
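As an illustration of the kind of image-derived indicator mentioned in the methods, a relative lesion size can be computed from binary masks; the masks below are hypothetical stand-ins, not the paper's segmentation output:

```python
# Relative size of a segmented lesion within the kidney region.
import numpy as np

kidney = np.zeros((64, 64), dtype=bool)
kidney[10:50, 10:50] = True          # hypothetical kidney mask
lesion = np.zeros_like(kidney)
lesion[15:25, 15:30] = True          # hypothetical lesion mask

relative_size = lesion.sum() / kidney.sum()
print(f"lesion covers {relative_size:.1%} of the kidney")
```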
 
Notes: HuPBA; MILAB
Call Number: Admin @ si @ SDE2017; Serial 2842
 

 
Author: Mikkel Thogersen; Sergio Escalera; Jordi Gonzalez; Thomas B. Moeslund
Title: Segmentation of RGB-D Indoor Scenes by Stacking Random Forests and Conditional Random Fields
Type: Journal Article
Year: 2016
Publication: Pattern Recognition Letters (PRL)
Volume: 80
Pages: 208-215
Abstract: This paper proposes a technique for RGB-D scene segmentation using the Multi-class Multi-scale Stacked Sequential Learning (MMSSL) paradigm. Following recent trends in the state of the art, a base classifier uses an initial SLIC segmentation to obtain superpixels, which reduce the amount of data while retaining object boundaries. A series of color and depth features are extracted from the superpixels and used in a Conditional Random Field (CRF) to predict superpixel labels. Furthermore, a Random Forest (RF) classifier using random offset features is also used as an input to the CRF, acting as an initial prediction. As a stacked classifier, another Random Forest acts on a spatial multi-scale decomposition of the CRF confidence map to correct the erroneous labels assigned by the previous classifier. The model is tested on the popular NYU-v2 dataset. The approach shows that simple multi-modal features combined with the power of the MMSSL paradigm can achieve better performance than state-of-the-art results on the same dataset.
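The first stage of this pipeline, SLIC superpixels, can be sketched as follows (scikit-image; the input frame is a random stand-in for NYU-v2 data):

```python
# Compute SLIC superpixels on an RGB image; color/depth features extracted
# per superpixel would then feed the CRF stage described above.
import numpy as np
from skimage.segmentation import slic

rgb = np.random.rand(120, 160, 3)       # stand-in for an NYU-v2 RGB frame
superpixels = slic(rgb, n_segments=200, compactness=10)
print(superpixels.shape, len(np.unique(superpixels)))  # label map, ~200 labels
```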
 
Notes: HuPBA; ISE; MILAB; 600.098; 600.119
Call Number: Admin @ si @ TEG2016; Serial 2843
 

 
Author: Jose Garcia-Rodriguez; Isabelle Guyon; Sergio Escalera; Alexandra Psarrou; Andrew Lewis; Miguel Cazorla
Title: Editorial: Special Issue on Computational Intelligence for Vision and Robotics
Type: Journal Article
Year: 2017
Publication: Neural Computing and Applications
Volume: 28
Issue: 5
Pages: 853-854
Notes: HuPBA; MILAB
Call Number: Admin @ si @ GGE2017; Serial 2845
 

 
Author: Pejman Rasti; Tonis Uiboupin; Sergio Escalera; Gholamreza Anbarjafari
Title: Convolutional Neural Network Super Resolution for Face Recognition in Surveillance Monitoring
Type: Conference Article
Year: 2016
Publication: 9th Conference on Articulated Motion and Deformable Objects (AMDO)
Address: Palma de Mallorca; Spain; July 2016
Conference: AMDO
Notes: HuPBA; MILAB
Call Number: Admin @ si @ RUE2016; Serial 2846
 

 
Author: Dennis H. Lundtoft; Kamal Nasrollahi; Thomas B. Moeslund; Sergio Escalera
Title: Spatiotemporal Facial Super-Pixels for Pain Detection
Type: Conference Article
Year: 2016
Publication: 9th Conference on Articulated Motion and Deformable Objects (AMDO)
Keywords: Facial images; Super-pixels; Spatiotemporal filters; Pain detection
Abstract: Pain detection using facial images is of critical importance in many health applications. Since pain is a spatiotemporal process, recent works on this topic employ facial spatiotemporal features to detect pain, extracting such features from the entire area of the face. In this paper, we show that by employing super-pixels we can divide the face into three regions, in such a way that only one of these regions (about one third of the face) contributes to the pain estimation and the other two can be discarded. Experimental results on the UNBC-McMaster database show that the proposed system using this single region outperforms state-of-the-art systems in detecting no-pain scenarios, while reaching comparable results in detecting weak and severe pain scenarios.
 
Address: Palma de Mallorca; Spain; July 2016
Conference: AMDO
Notes: HuPBA; MILAB; Best student paper award
Call Number: Admin @ si @ LNM2016; Serial 2847
 

 
Author: Mark Philip Philipsen; Anders Jorgensen; Thomas B. Moeslund; Sergio Escalera
Title: RGB-D Segmentation of Poultry Entrails
Type: Conference Article
Year: 2016
Publication: 9th Conference on Articulated Motion and Deformable Objects (AMDO)
Conference: AMDO
Notes: HuPBA; MILAB; Best commercial paper award
Call Number: Admin @ si @ PJM2016; Serial 2848
 

 
Author: Sergio Escalera; Mercedes Torres-Torres; Brais Martinez; Xavier Baro; Hugo Jair Escalante; Isabelle Guyon; Georgios Tzimiropoulos; Ciprian Corneanu; Marc Oliu Simón; Mohammad Ali Bagheri; Michel Valstar
Title: ChaLearn Looking at People and Faces of the World: Face Analysis Workshop and Challenge 2016
Type: Conference Article
Year: 2016
Publication: 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Abstract: We present the 2016 ChaLearn Looking at People and Faces of the World Challenge and Workshop, which ran three competitions on the common theme of face analysis from still images. The first one, Looking at People, addressed age estimation, while the second and third competitions, Faces of the World, addressed accessory classification and smile and gender classification, respectively. We present the two crowd-sourcing methodologies used to collect manual annotations. A custom-built application was used to collect and label data about the apparent age of people (as opposed to their real age), while for the Faces of the World data the citizen-science Zooniverse platform was used. This paper summarizes the three challenges and the data used, as well as the results achieved by the participants of the competitions. Details of the ChaLearn LAP FotW competitions can be found at http://gesture.chalearn.org.
Address: Las Vegas; USA; June 2016
Conference: CVPRW
Notes: HuPBA; MV
Call Number: ETM2016; Serial 2849