Author: Meysam Madadi; Sergio Escalera; Xavier Baro; Jordi Gonzalez
Title: End-to-end Global to Local CNN Learning for Hand Pose Recovery in Depth data
Type: Journal Article
Year: 2022
Publication: IET Computer Vision
Abbreviated Journal: IETCV
Volume: 16  Issue: 1  Pages: 50-66
Keywords: Computer vision; data acquisition; human computer interaction; learning (artificial intelligence); pose estimation
Abstract: Despite recent advances in 3D pose estimation of human hands, especially thanks to the advent of CNNs and depth cameras, this task is still far from being solved. This is mainly due to the highly non-linear dynamics of fingers, which make hand model training a challenging task. In this paper, we exploit a novel hierarchical tree-like structured CNN, in which branches are trained to become specialized in predefined subsets of hand joints, called local poses. We further fuse local pose features, extracted from hierarchical CNN branches, to learn higher-order dependencies among joints in the final pose by end-to-end training. The loss function is also defined to incorporate appearance and physical constraints on feasible hand motion and deformation. Finally, we introduce a non-rigid data augmentation approach to increase the amount of training depth data. Experimental results suggest that feeding a tree-shaped CNN, specialized in local poses, into a fusion network for modeling joint correlations and dependencies helps to increase the precision of the final estimations, outperforming state-of-the-art results on the NYU and SyntheticHand datasets.
Notes: HUPBA; ISE; 600.098; 600.119
Approved: no
Call Number: Admin @ si @ MEB2022
Serial: 3652
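As a rough illustration of the architecture this abstract describes, the sketch below wires a shared trunk into branch CNNs, each predicting a predefined subset of joints (a local pose), and fuses the branch features in a small network that regresses the global hand pose. The joint grouping, layer sizes, input resolution and 21-joint total are hypothetical placeholders, not the configuration reported in the paper.

# Minimal sketch only: tree-shaped CNN with local-pose branches plus a fusion
# network. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class LocalPoseBranch(nn.Module):
    """One branch of the tree, specialized in a predefined subset of joints."""
    def __init__(self, in_ch, n_joints):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.local_head = nn.Linear(64, n_joints * 3)   # local 3D joint estimate

    def forward(self, x):
        f = self.features(x)
        return f, self.local_head(f)

class TreePoseNet(nn.Module):
    """Shared trunk -> specialized branches -> fusion of local-pose features."""
    def __init__(self, joint_groups=(6, 6, 9)):   # hypothetical split, 21 joints total
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU())
        self.branches = nn.ModuleList(LocalPoseBranch(16, g) for g in joint_groups)
        # fusion network models dependencies among joints in the final pose
        self.fusion = nn.Sequential(
            nn.Linear(64 * len(joint_groups), 256), nn.ReLU(),
            nn.Linear(256, sum(joint_groups) * 3))

    def forward(self, depth):                     # depth: (B, 1, 128, 128)
        t = self.trunk(depth)
        feats, local_poses = zip(*(b(t) for b in self.branches))
        global_pose = self.fusion(torch.cat(feats, dim=1))
        return global_pose, local_poses

global_pose, local_poses = TreePoseNet()(torch.randn(2, 1, 128, 128))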
 

 
Author: Victor M. Campello; Polyxeni Gkontra; Cristian Izquierdo; Carlos Martin-Isla; Alireza Sojoudi; Peter M. Full; Klaus Maier-Hein; Yao Zhang; Zhiqiang He; Jun Ma; Mario Parreno; Alberto Albiol; Fanwei Kong; Shawn C. Shadden; Jorge Corral Acero; Vaanathi Sundaresan; Mina Saber; Mustafa Elattar; Hongwei Li; Bjoern Menze; Firas Khader; Christoph Haarburger; Cian M. Scannell; Mitko Veta; Adam Carscadden; Kumaradevan Punithakumar; Xiao Liu; Sotirios A. Tsaftaris; Xiaoqiong Huang; Xin Yang; Lei Li; Xiahai Zhuang; David Vilades; Martin L. Descalzo; Andrea Guala; Lucia La Mura; Matthias G. Friedrich; Ria Garg; Julie Lebel; Filipe Henriques; Mahir Karakas; Ersin Cavus; Steffen E. Petersen; Sergio Escalera; Santiago Segui; Jose F. Rodriguez Palomares; Karim Lekadir
Title: Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation: The M&Ms Challenge
Type: Journal Article
Year: 2021
Publication: IEEE Transactions on Medical Imaging
Abbreviated Journal: TMI
Volume: 40  Issue: 12  Pages: 3543-3554
Abstract: The emergence of deep learning has considerably advanced the state-of-the-art in cardiac magnetic resonance (CMR) segmentation. Many techniques have been proposed over the last few years, bringing the accuracy of automated segmentation close to human performance. However, these models have been all too often trained and validated using cardiac imaging samples from single clinical centres or homogeneous imaging protocols. This has prevented the development and validation of models that are generalizable across different clinical centres, imaging conditions or scanner vendors. To promote further research and scientific benchmarking in the field of generalizable deep learning for cardiac segmentation, this paper presents the results of the Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation (M&Ms) Challenge, which was recently organized as part of the MICCAI 2020 Conference. A total of 14 teams submitted different solutions to the problem, combining various baseline models, data augmentation strategies, and domain adaptation techniques. The obtained results indicate the importance of intensity-driven data augmentation, as well as the need for further research to improve generalizability towards unseen scanner vendors or new imaging protocols. Furthermore, we present a new resource of 375 heterogeneous CMR datasets acquired by using four different scanner vendors in six hospitals and three different countries (Spain, Canada and Germany), which we provide as open-access for the community to enable future research in the field.
Notes: HUPBA; no proj
Approved: no
Call Number: Admin @ si @ CGI2021
Serial: 3653
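The challenge's main technical finding, the value of intensity-driven data augmentation for cross-vendor generalization, can be illustrated with a minimal sketch: random gamma, contrast and brightness jitter applied to a normalized CMR slice. The parameter ranges below are hypothetical and not taken from any participating method.

# Minimal sketch only: intensity-driven augmentation of a normalized 2D slice.
import numpy as np

def intensity_augment(img, rng=np.random.default_rng()):
    """img: 2D float array scaled to [0, 1] (e.g. one short-axis CMR slice)."""
    gamma = rng.uniform(0.7, 1.5)        # non-linear intensity warp
    contrast = rng.uniform(0.9, 1.1)     # contrast jitter
    brightness = rng.uniform(-0.05, 0.05)  # brightness shift
    out = np.power(img, gamma) * contrast + brightness
    return np.clip(out, 0.0, 1.0)

augmented = intensity_augment(np.random.rand(256, 256).astype(np.float32))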
 

 
Author: Meysam Madadi; Hugo Bertiche; Sergio Escalera
Title: Deep unsupervised 3D human body reconstruction from a sparse set of landmarks
Type: Journal Article
Year: 2021
Publication: International Journal of Computer Vision
Abbreviated Journal: IJCV
Volume: 129  Pages: 2499–2512
Abstract: In this paper we propose the first deep unsupervised approach to human body reconstruction that estimates the body surface from a sparse set of landmarks, called DeepMurf. We apply a denoising autoencoder to estimate missing landmarks. Then we apply an attention model to estimate body joints from landmarks. Finally, a cascading network is applied to regress parameters of a statistical generative model that reconstructs the body. Our set of proposed loss functions allows us to train the network in an unsupervised way. Results on four public datasets show that our approach accurately reconstructs the human body from real-world mocap data.
Notes: HUPBA; no proj
Approved: no
Call Number: Admin @ si @ MBE2021
Serial: 3654
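The three-stage pipeline the abstract outlines can be sketched as follows: a denoising autoencoder imputes missing landmarks, an attention module maps landmarks to body joints, and a regressor predicts the parameters of a statistical body model. Landmark and joint counts, layer widths and the parameter dimensionality are hypothetical placeholders, and this is not the authors' DeepMurf implementation.

# Minimal sketch only: landmark imputation -> joint attention -> parameter regression.
import torch
import torch.nn as nn

N_LMK, N_JNT, N_PARAMS = 53, 24, 82   # hypothetical sizes

class DeepMurfSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # stage 1: denoising autoencoder over flattened 3D landmarks
        self.dae = nn.Sequential(
            nn.Linear(N_LMK * 3, 256), nn.ReLU(),
            nn.Linear(256, N_LMK * 3))
        # stage 2: learnable joint queries attend over the imputed landmarks
        self.joint_queries = nn.Parameter(torch.randn(N_JNT, 16))
        self.lmk_key = nn.Linear(3, 16)
        self.lmk_val = nn.Linear(3, 16)
        # stage 3: regress parameters of a statistical generative body model
        self.regressor = nn.Sequential(
            nn.Linear(N_JNT * 16, 256), nn.ReLU(),
            nn.Linear(256, N_PARAMS))

    def forward(self, landmarks, mask):           # landmarks: (B, N_LMK, 3), mask: (B, N_LMK)
        # zero out missing landmarks, then impute them with the autoencoder
        x = (landmarks * mask.unsqueeze(-1)).flatten(1)
        imputed = self.dae(x).view(-1, N_LMK, 3)
        # attention: each joint query pools over all landmarks
        attn = torch.softmax(
            self.joint_queries @ self.lmk_key(imputed).transpose(1, 2), dim=-1)
        joint_feats = attn @ self.lmk_val(imputed)   # (B, N_JNT, 16)
        return self.regressor(joint_feats.flatten(1))

params = DeepMurfSketch()(torch.randn(2, N_LMK, 3), torch.ones(2, N_LMK))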
 

 
Author: Swathikiran Sudhakaran; Sergio Escalera; Oswald Lanz
Title: Learning to Recognize Actions on Objects in Egocentric Video with Attention Dictionaries
Type: Journal Article
Year: 2021
Publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
Abbreviated Journal: TPAMI
Abstract: We present EgoACO, a deep neural architecture for video action recognition that learns to pool action-context-object descriptors from frame level features by leveraging the verb-noun structure of action labels in egocentric video datasets. The core component of EgoACO is class activation pooling (CAP), a differentiable pooling operation that combines ideas from bilinear pooling for fine-grained recognition and from feature learning for discriminative localization. CAP uses self-attention with a dictionary of learnable weights to pool from the most relevant feature regions. Through CAP, EgoACO learns to decode object and scene context descriptors from video frame features. For temporal modeling in EgoACO, we design a recurrent version of class activation pooling termed Long Short-Term Attention (LSTA). LSTA extends convolutional gated LSTM with built-in spatial attention and a re-designed output gate. Action, object and context descriptors are fused by a multi-head prediction that accounts for the inter-dependencies between noun-verb-action structured labels in egocentric video datasets. EgoACO features built-in visual explanations, helping learning and interpretation. Results on the two largest egocentric action recognition datasets currently available, EPIC-KITCHENS and EGTEA, show that by explicitly decoding action-context-object descriptors, EgoACO achieves state-of-the-art recognition performance.
Notes: HUPBA; no proj
Approved: no
Call Number: Admin @ si @ SEL2021
Serial: 3656
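The idea of pooling with a dictionary of learnable attention weights, as in the class activation pooling the abstract describes, can be sketched as below: each dictionary entry attends over spatial feature regions and pools one descriptor. Dictionary size and feature dimensions are hypothetical assumptions, and this is not the authors' EgoACO code.

# Minimal sketch only: attention pooling over frame features with a learnable dictionary.
import torch
import torch.nn as nn

class DictionaryAttentionPool(nn.Module):
    def __init__(self, channels=512, dict_size=8):
        super().__init__()
        self.dictionary = nn.Parameter(torch.randn(dict_size, channels))

    def forward(self, feat):                      # feat: (B, C, H, W) frame features
        b, c, h, w = feat.shape
        flat = feat.flatten(2)                    # (B, C, H*W)
        scores = self.dictionary @ flat           # (B, K, H*W) attention logits
        attn = torch.softmax(scores / c ** 0.5, dim=-1)
        pooled = attn @ flat.transpose(1, 2)      # (B, K, C) pooled descriptors
        return pooled, attn.view(b, -1, h, w)     # descriptors + spatial attention maps

descriptors, attention_maps = DictionaryAttentionPool()(torch.randn(2, 512, 7, 7))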
 

 
Author: Fatemeh Noroozi; Ciprian Corneanu; Dorota Kamińska; Tomasz Sapiński; Sergio Escalera; Gholamreza Anbarjafari
Title: Survey on Emotional Body Gesture Recognition
Type: Journal Article
Year: 2021
Publication: IEEE Transactions on Affective Computing
Abbreviated Journal: TAC
Volume: 12  Issue: 2  Pages: 505-523
Abstract: Automatic emotion recognition has become a trending research topic in the past decade. While works based on facial expressions or speech abound, recognizing affect from body gestures remains a less explored topic. We present a new comprehensive survey hoping to boost research in the field. We first introduce emotional body gestures as a component of what is commonly known as “body language” and comment on general aspects such as gender differences and culture dependence. We then define a complete framework for automatic emotional body gesture recognition. We introduce person detection and comment on static and dynamic body pose estimation methods in both RGB and 3D. We then review the recent literature on representation learning and emotion recognition from images of emotionally expressive gestures. We also discuss multi-modal approaches that combine speech or face with body gestures for improved emotion recognition. While pre-processing methodologies (e.g. human detection and pose estimation) are nowadays mature technologies fully developed for robust large-scale analysis, we show that for emotion recognition labelled data is scarce, there is no agreement on clearly defined output spaces, and the representations are shallow and largely based on naive geometric descriptors.
Notes: HUPBA; no proj
Approved: no
Call Number: Admin @ si @ NCK2021
Serial: 3657
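The framework the survey lays out (person detection, body pose estimation, feature representation, emotion classification) can be summarized with a toy sketch in which pose estimation is assumed to be handled upstream and hypothetical 2D keypoints are turned into simple geometric features for a generic linear classifier. The label set, feature choices and weights are placeholders, not a method from the survey.

# Minimal sketch only: pose keypoints -> geometric features -> emotion classifier.
import numpy as np

EMOTIONS = ["anger", "joy", "sadness", "fear"]    # placeholder label set

def geometric_features(keypoints):
    """keypoints: (J, 2) array of 2D body joints from an upstream pose estimator."""
    center = keypoints.mean(axis=0)
    spread = np.linalg.norm(keypoints - center, axis=1)   # crude body "openness"
    return np.concatenate([spread, (keypoints - center).ravel()])

def classify(features, weights, bias):
    logits = weights @ features + bias
    return EMOTIONS[int(np.argmax(logits))]

kps = np.random.rand(17, 2)                       # e.g. a COCO-style 17-joint skeleton
feats = geometric_features(kps)
weights = np.random.randn(len(EMOTIONS), feats.size)
print(classify(feats, weights, np.zeros(len(EMOTIONS))))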