Author Rain Eric Haamer; Kaustubh Kulkarni; Nasrin Imanpour; Mohammad Ahsanul Haque; Egils Avots; Michelle Breisch; Kamal Nasrollahi; Sergio Escalera; Cagri Ozcinar; Xavier Baro; Ahmad R. Naghsh-Nilchi; Thomas B. Moeslund; Gholamreza Anbarjafari
  Title Changes in Facial Expression as Biometric: A Database and Benchmarks of Identification Type Conference Article
  Year 2018 Publication 8th International Workshop on Human Behavior Understanding
  Abstract Facial dynamics can be considered unique signatures for discrimination between people. They have started to become an important topic since many devices offer the possibility of unlocking via face recognition or verification. In this work, we evaluate the efficacy of the transition frames of emotional video, as compared to the peak-emotion frames, for identification. For experiments with transition frames we extract features from each frame of the video with a fine-tuned VGG-Face Convolutional Neural Network (CNN), together with geometric features from facial landmark points. To model the temporal context of the transition frames we train a Long Short-Term Memory (LSTM) network on the geometric and the CNN features. Furthermore, we employ two fusion strategies: first, an early fusion, in which the geometric and the CNN features are stacked and fed to the LSTM; second, a late fusion, in which the predictions of the LSTMs, trained independently on the two feature types, are stacked and used with a Support Vector Machine (SVM). Experimental results show that the late fusion strategy gives the best results and that the transition frames give better identification results than the peak emotion frames.
  Address Xian; China; May 2018  
  Conference FGW
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ HKI2018 Serial 3118  
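
The late-fusion strategy that the abstract reports as best (independently trained LSTMs whose per-class predictions are stacked and classified with an SVM) can be sketched as follows. This is a minimal sketch assuming scikit-learn and random stand-in predictions, not the authors' implementation; the clip and identity counts are invented.

```python
# Minimal late-fusion sketch: two models each output a probability vector per
# video clip; the vectors are stacked and an SVM is trained on the result.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips, n_ids = 200, 10

# Stand-ins for the two independently trained LSTMs: one on VGG-Face CNN
# features, one on facial-landmark (geometric) features.
probs_cnn = rng.dirichlet(np.ones(n_ids), size=n_clips)
probs_geom = rng.dirichlet(np.ones(n_ids), size=n_clips)
labels = rng.integers(0, n_ids, size=n_clips)

# Late fusion: concatenate the two prediction vectors per clip and train an SVM.
fused = np.hstack([probs_cnn, probs_geom])            # shape (n_clips, 2 * n_ids)
svm = SVC(kernel="linear").fit(fused[:150], labels[:150])
print("held-out accuracy:", svm.score(fused[150:], labels[150:]))
```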
 

 
Author Ciprian Corneanu; Meysam Madadi; Sergio Escalera; Aleix Martinez
  Title Explainable Early Stopping for Action Unit Recognition Type Conference Article
  Year 2020 Publication Faces and Gestures in E-health and welfare workshop
  Pages 693-699
  Abstract A common technique to avoid overfitting when training deep neural networks (DNNs) is to monitor performance on a dedicated validation partition and to stop training as soon as it saturates. This focuses only on what the model does, while completely ignoring what happens inside it. In this work, we open the “black box” of DNNs in order to perform early stopping. We propose a novel theoretical framework that analyses meso-scale patterns in the topology of the functional graph of a network while it trains. Based on it, we decide when the network transitions from learning to overfitting in a more explainable way. We exemplify the benefits of this approach on a state-of-the-art custom DNN that jointly learns local representations and label structure using an ensemble of dedicated subnetworks. We show that it is practically equivalent in performance to early stopping with patience, the standard early stopping algorithm in the literature. This proves beneficial for AU recognition performance and provides new insights into how learning of AUs occurs in DNNs.
  Address Virtual; November 2020  
  Conference FGW
  Notes HUPBA; Approved no  
  Call Number Admin @ si @ CME2020 Serial 3514  
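
The baseline the authors compare against, early stopping with patience, halts training once the validation metric has not improved for a fixed number of epochs. A generic sketch (the training loop and validation losses below are dummy stand-ins, not tied to the paper's models):

```python
# Generic early stopping with patience: stop once the validation loss has
# failed to improve for `patience` consecutive epochs.
def train_with_patience(train_one_epoch, validate, max_epochs=100, patience=5):
    best_loss, stale_epochs = float("inf"), 0
    for epoch in range(max_epochs):
        train_one_epoch()
        val_loss = validate()
        if val_loss < best_loss:
            best_loss, stale_epochs = val_loss, 0
        else:
            stale_epochs += 1
            if stale_epochs >= patience:
                print(f"stopped at epoch {epoch}: {patience} epochs without improvement")
                break
    return best_loss

# Dummy validation curve that improves, then saturates and drifts (overfitting).
losses = iter([1.0, 0.8, 0.6, 0.55, 0.54, 0.56, 0.57, 0.58, 0.59, 0.60, 0.61])
print("best validation loss:", train_with_patience(lambda: None, lambda: next(losses)))
```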
 

 
Author Anna Esposito; Terry Amorese; Nelson Maldonato; Alessandro Vinciarelli; Maria Ines Torres; Sergio Escalera; Gennaro Cordasco
  Title Seniors’ ability to decode differently aged facial emotional expressions Type Conference Article
  Year 2020 Publication Faces and Gestures in E-health and welfare workshop
  Pages 716-722
  Address Virtual; November 2020  
  Conference FGW
  Notes HUPBA Approved no  
  Call Number Admin @ si @ EAM2020 Serial 3515  
 

 
Author Anna Esposito; Italia Cirillo; Antonietta Esposito; Leopoldina Fortunati; Gian Luca Foresti; Sergio Escalera; Nikolaos Bourbakis
  Title Impairments in decoding facial and vocal emotional expressions in high functioning autistic adults and adolescents Type Conference Article
  Year 2020 Publication Faces and Gestures in E-health and welfare workshop
  Pages 667-674
  Address Virtual; November 2020  
  Conference FGW
  Notes HUPBA Approved no  
  Call Number Admin @ si @ ECE2020 Serial 3516  
 

 
Author Bhaskar Chakraborty; Ognjen Rudovic; Jordi Gonzalez
  Title View-Invariant Human-Body Detection with Extension to Human Action Recognition using Component-Wise HMM of Body Parts Type Conference Article
  Year 2008 Publication 8th IEEE International Conference on Automatic Face and Gesture Recognition
  Address Amsterdam; The Netherlands  
  Conference FG
  Notes ISE Approved no  
  Call Number ISE @ ise @ CRG2008 Serial 1113  
 

 
Author Gemma Roig; Xavier Boix; F. de la Torre; Joan Serrat; C. Vilella
  Title Hierarchical CRF with product label spaces for parts-based Models Type Conference Article
  Year 2011 Publication IEEE Conference on Automatic Face and Gesture Recognition
  Pages 657-664
  Keywords Shape; Computational modeling; Principal component analysis; Random variables; Color; Upper bound; Facial features  
  Abstract Non-rigid object detection is a challenging and open research problem in computer vision. It is a critical part of many applications such as image search, surveillance, human-computer interaction and image auto-annotation. Most successful approaches to non-rigid object detection make use of part-based models. In particular, Conditional Random Fields (CRFs) have been successfully embedded into a discriminative part-based model framework due to their effectiveness for learning and inference (usually based on a tree structure). However, CRF-based approaches do not incorporate global constraints and only model pairwise interactions. This is especially important when modeling object classes that may have complex part interactions (e.g. facial features or body articulations), because neglecting them yields an oversimplified model with suboptimal performance. To overcome this limitation, this paper proposes a novel hierarchical CRF (HCRF). The main contribution is to build a hierarchy of part combinations by extending the label set to a hierarchy of product label spaces. In order to keep the inference computation tractable, we propose an effective method to reduce the new label set. We test our method on two applications: facial feature detection on the Multi-PIE database and human pose estimation on the Buffy dataset.
  Address Santa Barbara, CA, USA, 2011  
  Conference FG
  Notes ADAS Approved no  
  Call Number Admin @ si @ RBT2011 Serial 1862  
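
The paper's central construction, extending the label set to product label spaces over part combinations, can be illustrated in miniature; the part names and labels below are invented for the example.

```python
# Toy product label space: a parent node covering two parts takes tuples over
# the children's label sets. The combined set grows multiplicatively, which is
# why the paper proposes pruning it to keep inference tractable.
from itertools import product

eye_labels = ["open", "closed"]
mouth_labels = ["neutral", "smile", "open"]

face_labels = list(product(eye_labels, mouth_labels))
print(len(face_labels), face_labels)  # 6 combined labels: 2 x 3
```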
 

 
Author Julio C. S. Jacques Junior; Xavier Baro; Sergio Escalera
  Title Exploiting feature representations through similarity learning and ranking aggregation for person re-identification Type Conference Article
  Year 2017 Publication 12th IEEE International Conference on Automatic Face and Gesture Recognition
  Abstract Person re-identification has received special attention by the human analysis community in the last few years. To address the challenges in this field, many researchers have proposed different strategies, which basically exploit either cross-view invariant features or cross-view robust metrics. In this work we propose to combine different feature representations through ranking aggregation. Spatial information, which potentially benefits the person matching, is represented using a 2D body model, from which color and texture information are extracted and combined. We also consider contextual information (background and foreground data), automatically extracted via a Deep Decompositional Network, and the usage of Convolutional Neural Network (CNN) features. To describe the matching between images we use the polynomial feature map, also taking into account local and global information. Finally, the Stuart ranking aggregation method is employed to combine complementary ranking lists obtained from different feature representations. Experimental results demonstrate that we improve the state-of-the-art on the VIPeR and PRID450s datasets, achieving 58.77% and 71.56% top-1 rank recognition rate, respectively, as well as obtaining competitive results on the CUHK01 dataset.
  Address Washington; DC; USA; May 2017  
  Conference FG
  Notes HUPBA; 602.143 Approved no  
  Call Number Admin @ si @ JBE2017 Serial 2923  
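
The final step of the pipeline combines complementary ranking lists. The paper uses the Stuart ranking aggregation method; the sketch below substitutes a simpler mean-rank aggregation to show the shape of the computation, with invented similarity scores.

```python
# Combine two rankings of the same gallery by averaging per-entry ranks.
# (Simpler stand-in for Stuart aggregation; scores are invented.)
import numpy as np

scores_color = np.array([[0.9, 0.2, 0.5]])  # query x gallery similarities
scores_cnn = np.array([[0.6, 0.8, 0.1]])

def to_ranks(scores):
    order = np.argsort(-scores, axis=1)       # gallery indices, best first
    ranks = np.empty_like(order)
    np.put_along_axis(ranks, order, np.arange(scores.shape[1])[None, :], axis=1)
    return ranks                              # rank 0 = most similar

mean_rank = (to_ranks(scores_color) + to_ranks(scores_cnn)) / 2.0
print("aggregated ranking:", np.argsort(mean_rank, axis=1))  # best candidate first
```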
 

 
Author Iiris Lusi; Julio C. S. Jacques Junior; Jelena Gorbova; Xavier Baro; Sergio Escalera; Hasan Demirel; Juri Allik; Cagri Ozcinar; Gholamreza Anbarjafari
  Title Joint Challenge on Dominant and Complementary Emotion Recognition Using Micro Emotion Features and Head-Pose Estimation: Databases Type Conference Article
  Year 2017 Publication 12th IEEE International Conference on Automatic Face and Gesture Recognition
  Abstract In this work two databases for the Joint Challenge on Dominant and Complementary Emotion Recognition Using Micro Emotion Features and Head-Pose Estimation are introduced. Head pose estimation paired with detailed emotion recognition has become very important in relation to human-computer interaction. The 3D head pose database, SASE, is acquired with a Microsoft Kinect 2 camera and includes RGB and depth information of different head poses, comprising a total of 30000 frames with annotated markers from 32 male and 18 female subjects. The dominant and complementary emotion database, iCVMEFED, includes 31250 images with different emotions of 115 subjects whose gender distribution is almost uniform, with 5 samples per subject. The emotions are composed of 7 basic emotions plus neutral, defined as complementary and dominant pairs. The emotions associated with the images were labeled with the support of psychologists.
  Address Washington; DC; USA; May 2017  
  Conference FG
  Notes HUPBA; no menciona Approved no  
  Call Number Admin @ si @ LJG2017 Serial 2924  
 

 
Author Christer Loob; Pejman Rasti; Iiris Lusi; Julio C. S. Jacques Junior; Xavier Baro; Sergio Escalera; Tomasz Sapinski; Dorota Kaminska; Gholamreza Anbarjafari
  Title Dominant and Complementary Multi-Emotional Facial Expression Recognition Using C-Support Vector Classification Type Conference Article
  Year 2017 Publication 12th IEEE International Conference on Automatic Face and Gesture Recognition
  Abstract We propose a new facial expression recognition model that introduces 30+ detailed facial expressions recognisable by any artificial intelligence interacting with a human. Throughout this research, we introduce two categories of emotions, namely dominant emotions and complementary emotions. In this paper the complementary emotion is recognised using the eye region if the dominant emotion is angry, fearful or sad, while if the dominant emotion is disgust or happiness the complementary emotion is mainly conveyed by the mouth. In order to verify the tagged dominant and complementary emotions, randomly chosen people voted on the recognised multi-emotional facial expressions. The average voting results show that 73.88% of the voters agree on the correctness of the recognised multi-emotional facial expressions.
  Address Washington; DC; USA; May 2017  
  Conference FG
  Notes HUPBA; no menciona Approved no  
  Call Number Admin @ si @ LRL2017 Serial 2925  
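
C-support vector classification (C-SVC), named in the title, is the standard soft-margin SVM whose C parameter trades margin width against training errors. A minimal sketch with random stand-ins for the eye/mouth-region descriptors (feature dimensions and label set are invented):

```python
# Minimal C-SVC sketch: C controls the soft-margin penalty.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 32))       # stand-in facial-region descriptors
y = rng.integers(0, 7, size=120)     # 7 basic-emotion labels (invented)

clf = SVC(C=1.0, kernel="rbf").fit(X[:100], y[:100])
print("held-out accuracy:", clf.score(X[100:], y[100:]))
```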
 

 
Author Meysam Madadi; Sergio Escalera; Alex Carruesco; Carlos Andujar; Xavier Baro; Jordi Gonzalez
  Title Occlusion Aware Hand Pose Recovery from Sequences of Depth Images Type Conference Article
  Year 2017 Publication 12th IEEE International Conference on Automatic Face and Gesture Recognition
  Abstract State-of-the-art approaches to hand pose estimation from depth images have reported promising results under fairly controlled conditions. In this paper we propose a two-step pipeline for recovering the hand pose from a sequence of depth images. The pipeline has been designed to deal with images taken from any viewpoint and exhibiting a high degree of finger occlusion. In the first step we initialize the hand pose using a part-based model, fitting a set of hand components in the depth images. In the second step we consider temporal data and estimate the parameters of a trained bilinear model consisting of shape and trajectory bases. Results on a synthetic, highly occluded dataset demonstrate that the proposed method outperforms most recent pose recovery approaches, including those based on CNNs.
  Conference FG
  Notes HUPBA; ISE; 602.143; 600.098; 600.119 Approved no  
  Call Number Admin @ si @ MEC2017 Serial 2970  
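
The second step fits a bilinear model with shape and trajectory bases to a pose sequence. A noise-free toy version of that fit, with invented dimensions and random bases (the paper learns its bases from training data):

```python
# Toy bilinear fit: a pose sequence X (time x pose-dims) is modeled as
# A @ C @ B.T, with trajectory bases A and shape bases B held fixed, and the
# per-sequence coefficients C recovered by least squares.
import numpy as np

rng = np.random.default_rng(4)
T, D, kt, ks = 30, 60, 4, 5
A = rng.normal(size=(T, kt))    # trajectory bases (learned offline in the paper)
B = rng.normal(size=(D, ks))    # shape bases
C_true = rng.normal(size=(kt, ks))
X = A @ C_true @ B.T            # synthetic, noise-free observed sequence

C_hat = np.linalg.pinv(A) @ X @ np.linalg.pinv(B.T)
print("coefficients recovered:", np.allclose(C_true, C_hat))  # True here
```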
 

 
Author Maryam Asadi-Aghbolaghi; Albert Clapes; Marco Bellantonio; Hugo Jair Escalante; Victor Ponce; Xavier Baro; Isabelle Guyon; Shohreh Kasaei; Sergio Escalera
  Title A survey on deep learning based approaches for action and gesture recognition in image sequences Type Conference Article
  Year 2017 Publication 12th IEEE International Conference on Automatic Face and Gesture Recognition
  Abstract Interest in action and gesture recognition has grown considerably in recent years. In this paper, we present a survey of current deep learning methodologies for action and gesture recognition in image sequences. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. We review the details of the proposed architectures, fusion strategies, main datasets, and competitions. We summarize and discuss the main works proposed so far, with particular interest in how they treat the temporal dimension of data, discussing their main features and identifying opportunities and challenges for future research.
  Address Washington; USA; May 2017  
  Conference FG
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ ACB2017b Serial 2982  
 

 
Author Eirikur Agustsson; Radu Timofte; Sergio Escalera; Xavier Baro; Isabelle Guyon; Rasmus Rothe
  Title Apparent and real age estimation in still images with deep residual regressors on APPA-REAL database Type Conference Article
  Year 2017 Publication 12th IEEE International Conference on Automatic Face and Gesture Recognition
  Abstract After decades of research, real (biological) age estimation from a single face image has reached maturity thanks to the availability of large public face databases and the impressive accuracies achieved by recently proposed methods. The estimation of “apparent age” is a related task concerning the age perceived by human observers. Significant advances have also been made in this new research direction with the recent Looking At People challenges. In this paper we make several contributions to age estimation research. (i) We introduce APPA-REAL, a large face image database with both real and apparent age annotations. (ii) We study the relationship between real and apparent age. (iii) We develop a residual age regression method to further improve the performance. (iv) We show that real age estimation can be successfully tackled as apparent age estimation followed by an apparent-to-real age residual regression. (v) We graphically reveal the facial regions on which the CNN focuses in order to perform the apparent and real age estimation tasks.
  Address Washington; USA; May 2017
  Conference FG
  Notes HUPBA; no menciona Approved no  
  Call Number Admin @ si @ ATE2017 Serial 3013  
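
Contribution (iv), treating real age as apparent age plus a learned residual, amounts to a two-stage regression. A schematic with scikit-learn on synthetic data (the features, regressor choice, and bias term are assumptions for illustration):

```python
# Residual regression schematic: predict apparent age, then regress the
# real-minus-apparent residual and add it back.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
feats = rng.normal(size=(300, 64))                       # stand-in face features
apparent = 30 + feats[:, 0] * 5 + rng.normal(size=300)   # synthetic apparent ages
real = apparent + 2.0 + rng.normal(scale=0.5, size=300)  # systematic offset

apparent_model = Ridge().fit(feats, apparent)
residual_model = Ridge().fit(feats, real - apparent_model.predict(feats))

real_pred = apparent_model.predict(feats) + residual_model.predict(feats)
print("mean absolute error (real age):", np.abs(real_pred - real).mean())
```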
 

 
Author Mohammad A. Haque; Ruben B. Bautista; Kamal Nasrollahi; Sergio Escalera; Christian B. Laursen; Ramin Irani; Ole K. Andersen; Erika G. Spaich; Kaustubh Kulkarni; Thomas B. Moeslund; Marco Bellantonio; Gholamreza Anbarjafari; Fatemeh Noroozi
  Title Deep Multimodal Pain Recognition: A Database and Comparison of Spatio-Temporal Visual Modalities, Faces and Gestures Type Conference Article
  Year 2018 Publication 13th IEEE Conference on Automatic Face and Gesture Recognition
  Pages 250-257
  Abstract Pain is a symptom of many disorders associated with actual or potential tissue damage in the human body. Managing pain is not only a duty but also highly costly. The most primitive stage of pain management is the assessment of pain. Traditionally it has been accomplished by self-report or visual inspection by experts. However, automatic pain assessment systems based on facial videos are also rapidly evolving due to the need to manage pain in a robust and cost-effective way. Among the different challenges of automatic pain assessment from facial video data, two issues are increasingly prevalent: first, exploiting both spatial and temporal information of the face to assess pain level, and second, incorporating multiple visual modalities to capture complementary face information related to pain. Most works in the literature focus on merely exploiting spatial information of chromatic (RGB) video data in shallow learning scenarios. However, employing deep learning techniques for spatio-temporal analysis considering depth (D) and thermal (T) along with RGB has high potential in this area. In this paper, we present the first state-of-the-art publicly available database, 'Multimodal Intensity Pain (MIntPAIN)', for RGBDT pain level recognition in sequences. We provide first baseline results, including recognition of 5 pain levels, by analyzing independent visual modalities and their fusion with CNN and LSTM models. From the experimental evaluation we observe that fusion of modalities helps to enhance the recognition performance of pain levels in comparison to isolated ones. In particular, the combination of RGB, D, and T in an early fusion fashion achieved the best recognition rate.
  Address Xian; China; May 2018  
  Conference FG
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ HBN2018 Serial 3117  
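
Early fusion of the RGB, depth, and thermal streams, which the abstract reports as the best combination, typically means stacking the modalities along the channel axis before the network sees them. A shape-level sketch (frame sizes and batch are invented):

```python
# Early fusion of RGB-D-T frames: concatenate modalities channel-wise so a
# single CNN (followed by an LSTM in the paper) receives all of them at once.
import numpy as np

rgb = np.zeros((8, 120, 160, 3))      # batch of RGB frames
depth = np.zeros((8, 120, 160, 1))    # aligned depth frames
thermal = np.zeros((8, 120, 160, 1))  # aligned thermal frames

fused = np.concatenate([rgb, depth, thermal], axis=-1)
print(fused.shape)  # (8, 120, 160, 5): a 5-channel input tensor
```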
 

 
Author Julio C. S. Jacques Junior; Cagri Ozcinar; Marina Marjanovic; Xavier Baro; Gholamreza Anbarjafari; Sergio Escalera
  Title On the effect of age perception biases for real age regression Type Conference Article
  Year 2019 Publication 14th IEEE International Conference on Automatic Face and Gesture Recognition
  Abstract Automatic age estimation from facial images represents an important task in computer vision. This paper analyses the effect of gender, age, ethnicity, makeup and expression attributes of faces as sources of bias to improve deep apparent age prediction. Following recent works showing that apparent age labels benefit real age estimation more than direct real-to-real age regression, our main contribution is the integration, in an end-to-end architecture, of face attributes for apparent age prediction with an additional loss for real age regression. Experimental results on the APPA-REAL dataset indicate that the proposed network successfully takes advantage of the adopted attributes to improve both apparent and real age estimation. Our model outperforms a state-of-the-art architecture proposed to separately address apparent and real age regression. Finally, we present preliminary results and discussion of a proof-of-concept application that uses the proposed model to regress the apparent age of an individual based on the gender of an external observer.
  Address Lille; France; May 2019  
  Conference FG
  Notes HuPBA; no proj Approved no  
  Call Number Admin @ si @ JOM2019 Serial 3262  
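
The abstract's "additional loss for real age regression" implies a joint objective: an apparent-age term plus a weighted real-age term. A schematic (the L1 form and the weight are assumptions, not the paper's exact loss):

```python
# Joint objective sketch: apparent-age loss plus a weighted real-age term.
def joint_age_loss(apparent_pred, apparent_gt, real_pred, real_gt, alpha=0.5):
    apparent_loss = abs(apparent_pred - apparent_gt)  # L1, for illustration
    real_loss = abs(real_pred - real_gt)
    return apparent_loss + alpha * real_loss

print(joint_age_loss(27.0, 25.0, 29.0, 30.0))  # 2.0 + 0.5 * 1.0 = 2.5
```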
 

 
Author Daniel Sanchez; Meysam Madadi; Marc Oliu; Sergio Escalera
  Title Multi-task human analysis in still images: 2D/3D pose, depth map, and multi-part segmentation Type Conference Article
  Year 2019 Publication 14th IEEE International Conference on Automatic Face and Gesture Recognition
  Abstract While many individual tasks in the domain of human analysis have recently received an accuracy boost from deep learning approaches, multi-task learning has mostly been ignored due to a lack of data. New synthetic datasets are being released, filling this gap with synthetically generated data. In this work, we analyze four related human analysis tasks in still images in a multi-task scenario by leveraging such datasets. Specifically, we study the correlation of 2D/3D pose estimation, body part segmentation and full-body depth estimation. These tasks are learned via the well-known Stacked Hourglass module such that each of the task-specific streams shares information with the others. The main goal is to analyze how training these four related tasks together can benefit each individual task, for better generalization. Results on the newly released SURREAL dataset show that all four tasks benefit from the multi-task approach, but with different combinations of tasks: while combining all four tasks improves 2D pose estimation the most, 2D pose improves neither 3D pose nor full-body depth estimation. On the other hand, 2D part segmentation can benefit from 2D pose but not from 3D pose. In all cases, as expected, the maximum improvement is achieved on those human body parts that show more variability in terms of spatial distribution, appearance and shape, e.g. wrists and ankles.
  Address Lille; France; May 2019  
  Conference FG
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ SMO2019 Serial 3326  
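
The multi-task setup described in the abstract, task-specific streams sharing information, reduces in its simplest form to a shared representation feeding one head per task. A minimal numpy sketch with invented layer sizes (the paper actually uses Stacked Hourglass modules, not linear layers):

```python
# Shared-trunk, multi-head sketch of the four tasks in the abstract.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(1, 128))             # stand-in image features
W_shared = rng.normal(size=(128, 64))     # shared trunk weights

heads = {                                 # one linear head per task (invented sizes)
    "pose2d": rng.normal(size=(64, 34)),  # 17 joints x (x, y)
    "pose3d": rng.normal(size=(64, 51)),  # 17 joints x (x, y, z)
    "parts": rng.normal(size=(64, 15)),   # part-segmentation logits
    "depth": rng.normal(size=(64, 1)),    # scalar depth summary
}

shared = np.tanh(x @ W_shared)            # representation shared by all tasks
outputs = {task: shared @ W for task, W in heads.items()}
print({task: out.shape for task, out in outputs.items()})
```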