Author: Sergio Escalera; Jordi Gonzalez; Xavier Baro; Miguel Reyes; Oscar Lopes; Isabelle Guyon; V. Athitsos; Hugo Jair Escalante
Title: Multi-modal Gesture Recognition Challenge 2013: Dataset and Results
Type: Conference Article
Year: 2013
Publication: 15th ACM International Conference on Multimodal Interaction
Pages: 445-452
Abstract: The recognition of continuous natural gestures is a complex and challenging problem due to the multi-modal nature of the visual cues involved (e.g. finger and lip movements, subtle facial expressions, body pose), as well as technical limitations such as spatial and temporal resolution and unreliable depth cues. To promote research in this field, we organized a challenge on multi-modal gesture recognition. We made available a large video database of 13,858 gestures from a lexicon of 20 Italian gesture categories recorded with a Kinect camera, providing the audio, skeletal model, user mask, RGB and depth images. The focus of the challenge was on user-independent multiple gesture learning. There are no resting positions, and the gestures are performed in continuous sequences lasting 1-2 minutes, containing between 8 and 20 gesture instances each. As a result, the dataset contains around 1,720,800 frames. In addition to the 20 main gesture categories, 'distracter' gestures are included, i.e. additional audio and gestures outside the vocabulary. The final evaluation of the challenge was defined in terms of the Levenshtein edit distance, where the goal was to indicate the true order of gestures within each sequence. 54 international teams participated in the challenge, and outstanding results were obtained by the first-ranked participants.
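The Levenshtein-based scoring mentioned in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (not the organizers' official evaluation code): it computes the edit distance between a predicted sequence of gesture labels and the ground-truth sequence, normalized by the number of true gestures, which is one common way such a score is aggregated.

```python
def levenshtein(pred, truth):
    """Edit distance between two sequences of gesture labels.

    Counts the minimum number of insertions, deletions and
    substitutions needed to turn `pred` into `truth`.
    """
    m, n = len(pred), len(truth)
    # dp[i][j] = distance between pred[:i] and truth[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == truth[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[m][n]

# Hypothetical gesture-label sequences (labels 1..20): the prediction
# misses one gesture and inserts one spurious repetition.
truth = [3, 7, 12, 5, 18, 1]
pred  = [3, 12, 5, 5, 18, 1]
score = levenshtein(pred, truth) / len(truth)  # normalized edit distance
print(round(score, 3))  # 0.333 -> 2 edits over 6 true gestures
```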
 
Address: Sydney, Australia; December 2013
ISBN: 978-1-4503-2129-7
Conference: ICMI
Notes: HUPBA; ISE; 600.063; MV
Approved: no
Call Number: Admin @ si @ EGB2013
Serial: 2373
 

 
Author: Victor Ponce; Sergio Escalera; Xavier Baro
Title: Multi-modal Social Signal Analysis for Predicting Agreement in Conversation Settings
Type: Conference Article
Year: 2013
Publication: 15th ACM International Conference on Multimodal Interaction
Pages: 495-502
Abstract: In this paper we present a non-invasive ambient intelligence framework for the analysis of non-verbal communication in conversational settings. In particular, we apply feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues drawn from the fields of psychology and observational methodology. We test our methodology on data captured in victim-offender mediation scenarios. Using different state-of-the-art classification approaches, our system achieves over 75% recognition accuracy in predicting agreement among the parties involved in the conversations, using the experts' opinions as ground truth.
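The pipeline described in the abstract (behavioral indicators extracted from audio-RGB-depth data, fed to standard classifiers to predict agreement) can be sketched as follows. This is a hypothetical illustration with made-up feature names and synthetic data, not the authors' implementation; scikit-learn's RandomForestClassifier stands in for one of the "state-of-the-art classification approaches" a reader might substitute.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical behavioral indicators per conversation segment
# (names are illustrative, not the paper's exact feature set):
# [speaking_time_ratio, interruption_count, gaze_towards_partner,
#  head_nod_rate, body_lean, voice_energy]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))        # synthetic feature matrix
y = rng.integers(0, 2, size=200)     # 1 = agreement reached, 0 = no agreement

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation accuracy
print(f"Mean accuracy: {scores.mean():.2f}")
```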
Address: Sydney, Australia; December 2013
ISBN: 978-1-4503-2129-7
Conference: ICMI
Notes: HuPBA; MV
Approved: no
Call Number: Admin @ si @ PEB2013
Serial: 2488