Author: Fatemeh Noroozi; Marina Marjanovic; Angelina Njegus; Sergio Escalera; Gholamreza Anbarjafari
Title: Audio-Visual Emotion Recognition in Video Clips
Type: Journal Article
Year: 2019
Publication: IEEE Transactions on Affective Computing (TAC)
Volume: 10; Issue: 1; Pages: 60-75
Abstract: This paper presents a multimodal emotion recognition system based on the analysis of audio and visual cues. From the audio channel, Mel-frequency cepstral coefficients, filter bank energies, and prosodic features are extracted. For the visual part, two strategies are considered. First, geometric relations between facial landmarks, i.e. distances and angles, are computed. Second, each emotional video is summarized into a reduced set of key-frames, which are learned to visually discriminate between the emotions; to do so, a convolutional neural network is applied to the key-frames summarizing the videos. Finally, the confidence outputs of all classifiers from all modalities define a new feature space from which the final emotion label is predicted, in a late fusion/stacking fashion. Experiments conducted on the SAVEE, eNTERFACE’05, and RML databases show significant performance improvements of the proposed system over current alternatives, defining the state-of-the-art on all three databases.
Address: 1 Jan.-March 2019
Notes: HUPBA; 602.143; 602.133
Approved: no
Call Number: Admin @ si @ NMN2017
Serial: 3011
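
A minimal sketch of the late fusion/stacking step described in the abstract: per-modality confidence outputs are concatenated into a new feature space and a meta-classifier predicts the final emotion label. The logistic-regression meta-classifier, the six-emotion layout, and the random "confidences" standing in for trained audio, facial-geometry, and key-frame CNN models are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-modality confidence outputs (probabilities over six
# emotions) for 100 training clips. In the paper these would come from
# the trained audio, facial-geometry, and key-frame CNN classifiers.
rng = np.random.default_rng(0)
n_clips, n_emotions = 100, 6
audio_conf = rng.dirichlet(np.ones(n_emotions), n_clips)
geom_conf = rng.dirichlet(np.ones(n_emotions), n_clips)
cnn_conf = rng.dirichlet(np.ones(n_emotions), n_clips)
labels = rng.integers(0, n_emotions, n_clips)

# Late fusion / stacking: concatenate the confidences into a new feature
# space and learn a meta-classifier for the final emotion label.
stacked = np.hstack([audio_conf, geom_conf, cnn_conf])   # (100, 18)
meta = LogisticRegression(max_iter=1000).fit(stacked, labels)
print(meta.predict(stacked[:5]))
```
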
Author: Ikechukwu Ofodile; Ahmed Helmi; Albert Clapes; Egils Avots; Kerttu Maria Peensoo; Sandhra Mirella Valdma; Andreas Valdmann; Heli Valtna Lukner; Sergey Omelkov; Sergio Escalera; Cagri Ozcinar; Gholamreza Anbarjafari
Title: Action recognition using single-pixel time-of-flight detection
Type: Journal Article
Year: 2019
Publication: Entropy (ENTROPY)
Volume: 21; Issue: 4; Pages: 414
Keywords: single-pixel single-photon image acquisition; time-of-flight; action recognition
Abstract: Action recognition is a challenging task that plays an important role in many robotic systems, which depend heavily on visual input feeds. However, due to privacy concerns, it is important to find methods that can recognise actions without using a visual feed. In this paper, we propose a concept for detecting actions while preserving the test subject’s privacy. Our proposed method relies only on recording the temporal evolution of light pulses scattered back from the scene. The data trace recorded for one action contains a sequence of one-dimensional arrays of voltage values acquired by a single-pixel detector at a 1 GHz repetition rate. Information about both the distance to the object and its shape is embedded in the traces. We apply machine learning in the form of recurrent neural networks for data analysis and demonstrate successful action recognition. The experimental results show that our proposed method achieves, on average, 96.47% accuracy on the actions walking forward, walking backwards, sitting down, standing up, and waving a hand, using a recurrent neural network.
Notes: HuPBA; no proj
Approved: no
Call Number: Admin @ si @ OHC2019
Serial: 3319
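
A hedged sketch of the recurrent-network classifier the abstract describes: a GRU consumes the sequence of one-dimensional voltage arrays from the single-pixel detector and outputs logits over the five actions. The trace shape (SEQ_LEN, TRACE_DIM) and the single-layer GRU are assumptions; the record does not specify the exact architecture.

```python
import torch
import torch.nn as nn

N_ACTIONS = 5   # walking forward/backward, sitting down, standing up, waving
SEQ_LEN = 200   # light pulses per action trace (assumed)
TRACE_DIM = 64  # voltage samples per pulse return (assumed)

class TraceRNN(nn.Module):
    """RNN over time-of-flight voltage traces (illustrative architecture)."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(TRACE_DIM, 128, batch_first=True)
        self.head = nn.Linear(128, N_ACTIONS)

    def forward(self, x):               # x: (batch, SEQ_LEN, TRACE_DIM)
        _, h = self.rnn(x)              # h: (1, batch, 128), last hidden state
        return self.head(h.squeeze(0))  # logits over the five actions

model = TraceRNN()
dummy = torch.randn(8, SEQ_LEN, TRACE_DIM)  # synthetic batch of traces
print(model(dummy).shape)                   # torch.Size([8, 5])
```
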
Author: Juanjo Rubio; Takahiro Kashiwa; Teera Laiteerapong; Wenlong Deng; Kohei Nagai; Sergio Escalera; Kotaro Nakayama; Yutaka Matsuo; Helmut Prendinger
Title: Multi-class structural damage segmentation using fully convolutional networks
Type: Journal Article
Year: 2019
Publication: Computers in Industry (COMPUTIND)
Volume: 112; Pages: 103121
Keywords: Bridge damage detection; Deep learning; Semantic segmentation
Abstract: Structural Health Monitoring (SHM) has benefited from computer vision and, more recently, deep learning approaches to accurately estimate the state of deterioration of infrastructure. In our work, we test fully convolutional networks (FCNs) on a dataset of bridge deck areas for damage segmentation. We create a dataset for delamination and rebar exposure collected from inspection records of bridges in Niigata Prefecture, Japan. The dataset consists of 734 images with three labels per image, which makes it the largest dataset of bridge deck damage images. This data allows us to estimate the performance of our method based on regions of agreement, which emulates the uncertainty of in-field inspections. We demonstrate the practicality of FCNs for automated semantic segmentation of surface damage. Our model achieves a mean accuracy of 89.7% for delamination and 78.4% for rebar exposure, and a weighted F1 score of 81.9%.
Notes: HuPBA; no proj
Approved: no
Call Number: Admin @ si @ RKL2019
Serial: 3315
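
A minimal sketch of per-pixel damage classification with a fully convolutional network, in the spirit of the paper's FCN setup: convolutional layers produce class scores that are upsampled back to the input resolution. The layer sizes and the three-class layout (background, delamination, rebar exposure) are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Toy fully convolutional segmenter (illustrative, not the paper's net)."""
    def __init__(self, n_classes=3):  # background, delamination, rebar exposure
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution produces per-location class scores.
        self.classify = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        logits = self.classify(self.encode(x))
        # Upsample the score map back to the input resolution.
        return nn.functional.interpolate(
            logits, size=(h, w), mode="bilinear", align_corners=False)

img = torch.randn(1, 3, 256, 256)  # one synthetic bridge deck image
print(TinyFCN()(img).shape)        # torch.Size([1, 3, 256, 256])
```
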
Author: Mohammad Naser Sabet; Pau Buch Cardona; Egils Avots; Kamal Nasrollahi; Sergio Escalera; Thomas B. Moeslund; Gholamreza Anbarjafari
Title: Privacy-Constrained Biometric System for Non-cooperative Users
Type: Journal Article
Year: 2019
Publication: Entropy (ENTROPY)
Volume: 21; Issue: 11; Pages: 1033
Keywords: biometric recognition; multimodal-based human identification; privacy; deep learning
Abstract: With the consolidation of the new data protection regulation paradigm for each individual within the European Union (EU), major biometric technologies are now confronted with many concerns related to user privacy in biometric deployments. When an individual’s biometrics are disclosed, sensitive personal information, such as financial or health data, is at high risk of being misused or compromised. This issue is aggravated in scenarios involving non-cooperative users, such as elderly people residing in care homes, who are unable to interact conveniently and securely with the biometric system. The primary goal of this study is to design a novel database for investigating the problem of automatic people recognition under privacy constraints. To do so, the collected dataset contains the subjects’ hand and foot traits and excludes face biometrics in order to protect their privacy. We carried out extensive simulations using different baseline methods, including deep learning. Simulation results show that, with the spatial features extracted from the subject sequence in individual hand or foot videos, state-of-the-art deep models provide promising recognition performance.
Notes: HuPBA; no proj
Approved: no
Call Number: Admin @ si @ NBA2019
Serial: 3313
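
A hedged sketch of the kind of deep baseline the abstract alludes to: frame-level spatial features extracted from a hand or foot video with a CNN backbone, pooled over time into one subject descriptor for identification. The ResNet-18 backbone (weights=None here so nothing is downloaded) and mean pooling are assumptions, not the paper's exact baselines.

```python
import torch
import torchvision.models as models

# CNN backbone for per-frame spatial features; drop the classification
# head so the network emits 512-d descriptors.
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()
backbone.eval()

video = torch.randn(30, 3, 224, 224)  # 30 synthetic frames of one subject
with torch.no_grad():
    frame_feats = backbone(video)       # (30, 512) per-frame descriptors
subject_feat = frame_feats.mean(dim=0)  # temporal pooling -> (512,)
print(subject_feat.shape)
```
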
Author: Reza Azad; Maryam Asadi-Aghbolaghi; Shohreh Kasaei; Sergio Escalera
Title: Dynamic 3D Hand Gesture Recognition by Learning Weighted Depth Motion Maps
Type: Journal Article
Year: 2019
Publication: IEEE Transactions on Circuits and Systems for Video Technology (TCSVT)
Volume: 29; Issue: 6; Pages: 1729-1740
Keywords: Hand gesture recognition; Multilevel temporal sampling; Weighted depth motion map; Spatio-temporal description; VLAD encoding
Abstract: Hand gesture recognition from sequences of depth maps is a challenging computer vision task because of low inter-class and high intra-class variability, the differing execution rates of each gesture, and the highly articulated nature of the human hand. In this paper, a multilevel temporal sampling (MTS) method is first proposed, based on the motion energy of key-frames of depth sequences. As a result, long, middle, and short sequences are generated that contain the relevant gesture information. MTS increases intra-class similarity while raising inter-class dissimilarity. The weighted depth motion map (WDMM) is then proposed to extract spatio-temporal information from the generated summarized sequences through an accumulated weighted absolute difference of consecutive frames. The histogram of oriented gradients (HOG) and local binary patterns (LBP) are used to extract features from the WDMM. The obtained results define the current state-of-the-art on three public benchmark datasets for 3D hand gesture recognition: MSR Gesture 3D, SKIG, and MSR Action 3D. We also achieve competitive results on the NTU action dataset.
Address: June 2019
Notes: HUPBA; no proj
Approved: no
Call Number: Admin @ si @ AAK2018
Serial: 3213
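
A minimal sketch of the weighted depth motion map (WDMM) idea from the abstract: an accumulated, weighted absolute difference of consecutive depth frames, followed by HOG feature extraction. The linear weighting scheme and all array shapes are assumptions; the paper's exact weighting is not given in this record.

```python
import numpy as np
from skimage.feature import hog

rng = np.random.default_rng(0)
depth_seq = rng.random((20, 120, 160))  # synthetic depth frames (T, H, W)

# WDMM: accumulate weighted absolute differences of consecutive frames.
diffs = np.abs(np.diff(depth_seq, axis=0))      # |frame[t+1] - frame[t]|
weights = np.linspace(0.1, 1.0, len(diffs))     # assumed: later motion weighted more
wdmm = (weights[:, None, None] * diffs).sum(0)  # one (H, W) motion map

# Describe the map with HOG features, as the abstract suggests.
features = hog(wdmm, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
print(features.shape)
```
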