
Author: Mariella Dimiccoli
Title: Figure-ground segregation: A fully nonlocal approach
Type: Journal Article
Year: 2016
Publication: Vision Research (VR)
Volume: 126
Pages: 308-317
Keywords: Figure-ground segregation; Nonlocal approach; Directional linear voting; Nonlinear diffusion
Abstract: We present a computational model that computes and integrates, in a nonlocal fashion, several configural cues for automatic figure-ground segregation. Our working hypothesis is that the figural status of each pixel is a nonlocal function of several geometric shape properties and can be estimated without explicitly relying on object boundaries. The methodology is grounded in two elements: multi-directional linear voting and nonlinear diffusion. A first estimate of the figural status of each pixel is obtained through a voting process, in which several differently oriented line-shaped neighborhoods vote to express their belief about the figural status of the pixel. A nonlinear diffusion process is then applied to enforce the coherence of figural status estimates among perceptually homogeneous regions. Computer simulations fit human perception and match the experimental evidence that several cues cooperate in defining figure-ground segregation. The results of this work suggest that figure-ground segregation involves feedback from cells with larger receptive fields in higher visual cortical areas.
Notes: MILAB; Approved: no
Call Number: Admin @ si @ Dim2016b (Serial 2623)
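
The two-stage structure described in this abstract (oriented line-shaped voting, then edge-preserving diffusion) can be sketched compactly. The following is a minimal illustration under stated assumptions, not the paper's model: the configural cue map is taken as given, the rotation-based line filter and all parameters (n_orientations, length, kappa, dt) are placeholder choices.

```python
import numpy as np
from scipy.ndimage import rotate, uniform_filter1d

def directional_voting(cue_map, n_orientations=8, length=31):
    """Each of several differently oriented line-shaped neighborhoods votes
    on the figural status of a pixel; implemented here by rotating the cue
    map and averaging along horizontal lines, then rotating back."""
    votes = np.zeros_like(cue_map, dtype=float)
    for k in range(n_orientations):
        angle = 180.0 * k / n_orientations
        rot = rotate(cue_map, angle, reshape=False, order=1, mode='nearest')
        line_avg = uniform_filter1d(rot, size=length, axis=1)
        votes += rotate(line_avg, -angle, reshape=False, order=1, mode='nearest')
    return votes / n_orientations

def nonlinear_diffusion(figural, image, n_iter=50, kappa=0.05, dt=0.2):
    """Perona-Malik-style diffusion: conductance drops at image edges, so
    figural estimates are smoothed only within homogeneous regions."""
    gy, gx = np.gradient(image)
    c = 1.0 / (1.0 + (gx**2 + gy**2) / kappa**2)  # edge-stopping conductance
    f = figural.astype(float).copy()
    for _ in range(n_iter):
        fy, fx = np.gradient(f)
        f += dt * (np.gradient(c * fx, axis=1) + np.gradient(c * fy, axis=0))
    return f

# figural_map = nonlinear_diffusion(directional_voting(cue_map), image)
```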
 

 
Author: Ciprian Corneanu; Marc Oliu; Jeffrey F. Cohn; Sergio Escalera
Title: Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications
Type: Journal Article
Year: 2016
Publication: IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
Volume: 38
Issue: 8
Pages: 1548-1568
Keywords: Facial expression; affect; emotion recognition; RGB; 3D; thermal; multimodal
Abstract: Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging, and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. We also present the important datasets and the benchmarking of the most influential methods. We conclude with a general discussion about trends, important questions and future lines of research.
Notes: HuPBA; MILAB; Approved: no
Call Number: Admin @ si @ COC2016 (Serial 2718)
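
The taxonomy this survey defines spans a generic pipeline: face detection, registration/cropping, feature extraction, and expression classification. As a rough illustration of that pipeline only (the Haar-cascade detector, raw-pixel features, and linear SVM below are placeholder choices, not methods singled out by the survey):

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# Stage 1: face detection (ships with opencv-python).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def face_features(gray, size=(48, 48)):
    """Detection -> cropping/registration -> simple raw-pixel features."""
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                       # take the first detected face
    crop = cv2.resize(gray[y:y + h, x:x + w], size)
    return crop.astype(np.float32).ravel() / 255.0

# Stage 2: expression classification on top of the extracted features, e.g.
# clf = SVC(kernel='linear').fit(train_features, train_labels)
# prediction = clf.predict([face_features(test_image_gray)])
```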
 

 
Author: Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera
Title: A Genetic-based Subspace Analysis Method for Improving Error-Correcting Output Coding
Type: Journal Article
Year: 2013
Publication: Pattern Recognition (PR)
Volume: 46
Issue: 10
Pages: 2830-2839
Keywords: Error Correcting Output Codes; Evolutionary computation; Multiclass classification; Feature subspace; Ensemble classification
Abstract: Two key factors affecting the performance of Error Correcting Output Codes (ECOC) in multiclass classification problems are the independence of the binary classifiers and the problem-dependent coding design. In this paper, we propose an evolutionary algorithm-based approach to the design of an application-dependent codematrix in the ECOC framework. The central idea of this work is to design a three-dimensional codematrix, where the third dimension is the feature space of the problem domain. To that end, we consider the feature space in the design process of the codematrix, with the aim of improving the independence and accuracy of the binary classifiers. The proposed method takes advantage of basic concepts of ensemble classification, such as classifier diversity, and also benefits from the evolutionary approach for optimizing the three-dimensional codematrix while taking the problem domain into account. We provide experimental results on benchmark datasets from the UCI Machine Learning Repository, as well as on two real multiclass computer vision problems. Both sets of experiments are conducted with two different base learners: neural networks and decision trees. The results show that the proposed method increases classification accuracy in comparison with state-of-the-art ECOC coding techniques.
Publisher: Elsevier
ISSN: 0031-3203
Notes: HuPBA; MILAB; Approved: no
Call Number: Admin @ si @ BGE2013a (Serial 2247)
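
The core idea here, a codematrix whose third dimension assigns each binary classifier its own feature subspace, with the whole structure optimized evolutionarily, can be sketched as follows. This is a toy stand-in: a simple (1+1) mutation hill climb replaces the paper's full genetic algorithm, decision trees serve as base learners, and the fitness function and mutation operators are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_ecoc(X, y, code, masks):
    """Train one dichotomizer per codematrix column, each restricted to
    its own feature subspace (the codematrix's third dimension)."""
    classifiers = []
    for col in range(code.shape[1]):
        keep = code[y, col] != 0              # ternary codes: 0 = class unused
        clf = DecisionTreeClassifier().fit(X[keep][:, masks[col]],
                                           code[y[keep], col])
        classifiers.append(clf)
    return classifiers

def ecoc_predict(X, classifiers, code, masks):
    """Decode by Hamming distance between classifier outputs and codewords."""
    out = np.column_stack([clf.predict(X[:, m])
                           for clf, m in zip(classifiers, masks)])
    dists = ((out[:, None, :] - code[None, :, :]) != 0).sum(axis=2)
    return dists.argmin(axis=1)

def evolve(X, y, Xv, yv, n_classes, n_cols=10, n_iter=50, seed=0):
    """Mutate the codematrix and the per-column feature subspaces,
    keeping changes that raise validation accuracy."""
    rng = np.random.default_rng(seed)
    code = rng.choice([-1, 1], size=(n_classes, n_cols))
    masks = rng.random((n_cols, X.shape[1])) < 0.5
    masks[:, 0] = True                        # crude guard: no empty subspace
    best_acc = 0.0
    for _ in range(n_iter):
        c, m = code.copy(), masks.copy()
        col = rng.integers(n_cols)
        m[col] = rng.random(X.shape[1]) < 0.5  # mutate one column's subspace
        m[col, 0] = True
        c[rng.integers(n_classes), col] *= -1  # flip one code bit
        acc = (ecoc_predict(Xv, fit_ecoc(X, y, c, m), c, m) == yv).mean()
        if acc > best_acc:
            code, masks, best_acc = c, m, acc
    return code, masks, best_acc
```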
 

 
Author: Marc Bolaños; Alvaro Peris; Francisco Casacuberta; Sergi Solera; Petia Radeva
Title: Egocentric video description based on temporally-linked sequences
Type: Journal Article
Year: 2018
Publication: Journal of Visual Communication and Image Representation (JVCIR)
Volume: 50
Pages: 205-216
Keywords: egocentric vision; video description; deep learning; multi-modal learning
Abstract: Egocentric vision consists of acquiring images throughout the day, from a first-person point of view, using wearable cameras. Automatic analysis of this information makes it possible to discover daily patterns that can improve the quality of life of the user. A natural topic that arises in egocentric vision is storytelling, that is, how to understand and tell the story lying behind the pictures.
In this paper, we tackle storytelling as an egocentric sequence description problem. We propose a novel methodology that exploits information from temporally neighboring events, matching the nature of egocentric sequences. Furthermore, we present a new method for multimodal data fusion consisting of a multi-input attention recurrent network. We also release the EDUB-SegDesc dataset, the first dataset for egocentric image sequence description, consisting of 1,339 events with 3,991 descriptions, acquired by 11 people over 55 days. Finally, we show that our proposal outperforms classical attentional encoder-decoder methods for video description.
Notes: MILAB; no proj; Approved: no
Call Number: Admin @ si @ BPC2018 (Serial 3109)
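
The temporally-linked idea, that the description of event t is conditioned on the preceding event and not only on its own frames, might be sketched as a decoder whose initial hidden state is carried over from the previous event's decoder, with attention over the current event's frame features. The GRU cell, single additive attention, and all layer sizes below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TemporallyLinkedDecoder(nn.Module):
    def __init__(self, feat_dim=512, hid=256, vocab=10000, emb=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.attn = nn.Linear(hid + feat_dim, 1)   # additive attention score
        self.rnn = nn.GRUCell(emb + feat_dim, hid)
        self.out = nn.Linear(hid, vocab)

    def forward(self, frames, words, h_prev_event):
        # frames: (T, feat_dim) features of the current event's images
        # words:  (L,) token ids of the description generated so far
        # h_prev_event: (hid,) final state of the preceding event's decoder
        h, logits = h_prev_event, []
        for w in words:
            scores = self.attn(torch.cat(
                [h.expand(frames.size(0), -1), frames], dim=1))    # (T, 1)
            ctx = (torch.softmax(scores, dim=0) * frames).sum(0)   # context
            h = self.rnn(torch.cat([self.embed(w), ctx]).unsqueeze(0),
                         h.unsqueeze(0)).squeeze(0)
            logits.append(self.out(h))
        return torch.stack(logits), h   # h seeds the next event's decoder
```

Passing the returned h into the next event's call is what links the sequence descriptions temporally; a zero vector would reduce this to an ordinary per-event attentional decoder.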
 

 
Author: Alejandro Cartas; Juan Marin; Petia Radeva; Mariella Dimiccoli
Title: Batch-based activity recognition from egocentric photo-streams revisited
Type: Journal Article
Year: 2018
Publication: Pattern Analysis and Applications (PAA)
Volume: 21
Issue: 4
Pages: 953-965
Keywords: Egocentric vision; Lifelogging; Activity recognition; Deep learning; Recurrent neural networks
Abstract: Wearable cameras can gather large amounts of image data that provide rich visual information about the daily activities of the wearer. Motivated by the large number of health applications that could be enabled by the automatic recognition of daily activities, such as lifestyle characterization for habit improvement, context-aware personal assistance and tele-rehabilitation services, we propose a system that classifies 21 daily activities from photo-streams acquired by a wearable photo-camera. Our approach combines the advantages of a late-fusion ensemble strategy relying on convolutional neural networks at the image level with the ability of recurrent neural networks to account for the temporal evolution of high-level features in photo-streams, without relying on event boundaries. The proposed batch-based approach achieved an overall accuracy of 89.85%, outperforming state-of-the-art end-to-end methodologies. These results were achieved on a dataset consisting of 44,902 egocentric pictures from three persons, captured over 26 days on average.
Notes: MILAB; no proj; Approved: no
Call Number: Admin @ si @ CMR2018 (Serial 3186)
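
The batch-based design, per-image CNN features feeding a recurrent layer that labels every photo in a window of the stream without event-boundary segmentation, could look roughly like the sketch below. The ResNet-18 backbone and layer sizes are assumptions, and the paper's late-fusion ensemble of CNNs is collapsed here into a single backbone for brevity.

```python
import torch
import torch.nn as nn
from torchvision import models

class StreamActivityClassifier(nn.Module):
    def __init__(self, n_classes=21, hid=256):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Identity()            # keep the 512-d pooled features
        self.cnn = cnn
        self.lstm = nn.LSTM(512, hid, batch_first=True)
        self.head = nn.Linear(hid, n_classes)

    def forward(self, photos):            # photos: (B, T, 3, H, W)
        b, t = photos.shape[:2]
        feats = self.cnn(photos.flatten(0, 1)).view(b, t, -1)
        temporal, _ = self.lstm(feats)    # temporal evolution of features
        return self.head(temporal)        # one activity logit vector per photo
```

Because the LSTM runs over a sliding window of consecutive photos rather than over pre-segmented events, no boundary detection step is needed, which matches the abstract's claim.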