Author: Sergio Escalera; David Masip; Eloi Puertas; Petia Radeva; Oriol Pujol
Title: Online Error-Correcting Output Codes
Type: Journal Article
Year: 2011    Publication: Pattern Recognition Letters    Abbreviated Journal: PRL
Volume: 32    Issue: 3    Pages: 458-467
Impact Factor: 1.303 (JCR 2009, rank 54/103); CCIA
Abstract: This article proposes a general extension of the error-correcting output codes (ECOC) framework to the online learning scenario. As a result, the final classifier handles the addition of new classes independently of the base classifier used. In particular, this extension supports the use of both online example-incremental and batch classifiers as base learners. The extension of the traditional problem-independent codings, one-versus-all and one-versus-one, is introduced. Furthermore, two new codings are proposed: an unbalanced online ECOC and a problem-dependent online ECOC. The latter coding technique takes advantage of the problem data to minimize the number of dichotomizers used in the ECOC framework while preserving high accuracy. These techniques are validated in an online setting on 11 data sets from the UCI database and applied to two real machine vision applications: traffic sign recognition and face recognition. The proposed online ECOC techniques thus provide a feasible and robust way of handling new classes with any base classifier.
Publisher: Elsevier    Place of Publication: North Holland
ISSN: 0167-8655
Notes: MILAB; OR; HuPBA; MV    Approved: no
Call Number: Admin @ si @ EMP2011    Serial: 1714
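The abstract above builds on the standard ECOC machinery: a coding matrix whose columns define binary dichotomizers, and a decoding step that assigns the class whose codeword best matches the dichotomizer outputs. As a rough illustration only (not the authors' implementation, and with invented example codewords), the two problem-independent codings mentioned in the abstract and a Hamming-style decoder can be sketched in Python as follows:

    # Minimal ECOC sketch: problem-independent codings and Hamming-style decoding.
    import numpy as np

    def one_vs_all_matrix(n_classes):
        # One column per class: that class is coded +1, all others -1.
        return 2 * np.eye(n_classes) - 1

    def one_vs_one_matrix(n_classes):
        # One column per class pair; 0 means the class is ignored by that dichotomizer.
        pairs = [(i, j) for i in range(n_classes) for j in range(i + 1, n_classes)]
        M = np.zeros((n_classes, len(pairs)))
        for col, (i, j) in enumerate(pairs):
            M[i, col], M[j, col] = 1, -1
        return M

    def decode(outputs, M):
        # Pick the class whose (non-zero) code bits disagree least with the outputs.
        dist = np.sum((M != 0) & (M != np.sign(outputs)), axis=1)
        return int(np.argmin(dist))

    # Invented dichotomizer outputs for a 4-class problem.
    print(decode(np.array([-1, 1, -1, -1]), one_vs_all_matrix(4)))       # -> class 1
    print(decode(np.array([1, 1, 1, -1, -1, 1]), one_vs_one_matrix(4)))  # -> class 0

The online extension described in the paper would additionally grow this matrix as new classes arrive and update the base dichotomizers incrementally; that part is omitted here.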
 

 
Author: Bogdan Raducanu; D. Gatica-Perez
Title: Inferring competitive role patterns in reality TV show through nonverbal analysis
Type: Journal Article
Year: 2012    Publication: Multimedia Tools and Applications    Abbreviated Journal: MTAP
Volume: 56    Issue: 1    Pages: 207-226
Abstract: This paper introduces a new facet of social media, namely that depicting social interaction. More concretely, we address this problem from the perspective of nonverbal behavior-based analysis of competitive meetings. For our study, we made use of “The Apprentice” reality TV show, which features a competition for a real, highly paid corporate job. Our analysis is centered around two tasks regarding a person's role in a meeting: predicting the person with the highest status and predicting the fired candidates. We address this problem by adopting both supervised and unsupervised strategies. The current study was carried out using nonverbal audio cues. Our approach is based only on the nonverbal interaction dynamics during the meeting, without relying on the spoken words. The analysis is based on two types of data: individual and relational measures. Results obtained from the analysis of a full season of the show are promising (up to 85.7% accuracy in the first case and up to 92.8% in the second). Our approach has also been compared with the Influence Model, demonstrating its superiority.
Publisher: Elsevier
ISSN: 1380-7501
Notes: OR; MV    Approved: no
Call Number: BCNPCL @ bcnpcl @ RaG2012    Serial: 1360
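As a toy illustration of the kind of individual nonverbal cue the study builds on (not the authors' code; the participant names and numbers below are hypothetical), ranking participants by accumulated speaking time yields a simple unsupervised predictor of the highest-status person:

    # Unsupervised heuristic: the participant who talks the most is predicted
    # to hold the highest status (speaking time is one individual audio cue).
    def predict_highest_status(speaking_time):
        return max(speaking_time, key=speaking_time.get)

    # Hypothetical accumulated speaking time (seconds) for one meeting.
    meeting = {"candidate_A": 312.0, "candidate_B": 455.5, "project_manager": 698.2}
    print(predict_highest_status(meeting))  # -> "project_manager"

The paper combines such individual measures with relational ones (who interacts with whom) and also evaluates supervised classifiers; this sketch covers only the simplest case.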
 

 
Author: Sergio Escalera; R. M. Martinez; Jordi Vitria; Petia Radeva; Maria Teresa Anguera
Title: Deteccion automatica de la dominancia en conversaciones diadicas [Automatic detection of dominance in dyadic conversations]
Type: Journal Article
Year: 2010    Publication: Escritos de Psicologia    Abbreviated Journal: EP
Volume: 3    Issue: 2    Pages: 41-45
Keywords: Dominance detection; Non-verbal communication; Visual features
Abstract: Dominance refers to the level of influence that a person has in a conversation. Dominance is an important research area in social psychology, but the problem of its automatic estimation is a very recent topic in the contexts of social and wearable computing. In this paper, we focus on dominance detection from visual cues. We estimate the correlation among observers when categorizing the dominant people in a set of face-to-face conversations. Different dominance indicators from gestural communication are defined, manually annotated, and compared to the observers' opinion. Moreover, these indicators are automatically extracted from video sequences and learnt using binary classifiers. Results from the three analyses showed a high correlation and allow the categorization of dominant people in public discussion video sequences.
ISSN: 1989-3809
Notes: HUPBA; OR; MILAB; MV    Approved: no
Call Number: BCNPCL @ bcnpcl @ EMV2010    Serial: 1315
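A minimal sketch of the learning step described above, assuming synthetic per-person gestural indicators and an off-the-shelf SVM as the binary classifier (the paper does not prescribe this particular implementation):

    # Binary dominance detection from (synthetic) visual activity indicators.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Each row: hypothetical [gesture_rate, body_activity, face_activity] for one speaker.
    X = np.array([
        [0.8, 0.6, 0.7],
        [0.2, 0.3, 0.4],
        [0.9, 0.7, 0.5],
        [0.1, 0.2, 0.3],
        [0.7, 0.8, 0.6],
        [0.3, 0.1, 0.2],
    ])
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = labelled dominant by the observers

    clf = SVC(kernel="rbf", gamma="scale")
    print(cross_val_score(clf, X, y, cv=3).mean())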
 

 
Author: Sergio Escalera; Oriol Pujol; Petia Radeva; Jordi Vitria; Maria Teresa Anguera
Title: Automatic Detection of Dominance and Expected Interest
Type: Journal Article
Year: 2010    Publication: EURASIP Journal on Advances in Signal Processing    Abbreviated Journal: EURASIPJ
Pages: 12    Article ID: 491819
Abstract: Social Signal Processing is an emergent area of research that focuses on the analysis of social constructs. Dominance and interest are two of these social constructs. Dominance refers to the level of influence a person has in a conversation. Interest, when referred to in terms of group interactions, can be defined as the degree of engagement that the members of a group collectively display during their interaction. In this paper, we argue that, using only behavioral motion information, we are able to predict the interest of observers when looking at face-to-face interactions, as well as the dominant people. First, we propose a simple set of movement-based features from body, face, and mouth activity in order to define a higher-level set of interaction indicators. The considered indicators are manually annotated by observers. Based on the opinions obtained, we define an automatic binary dominance detection problem and a multiclass interest quantification problem. The Error-Correcting Output Codes framework is used to learn to rank the perceived observers' interest in face-to-face interactions, while AdaBoost is used to solve the dominance detection problem. The automatic system shows good correlation between the automatic categorization results and the manual ranking made by the observers in both the dominance and interest detection problems.
ISSN: 1110-8657
Notes: OR; MILAB; HUPBA; MV    Approved: no
Call Number: BCNPCL @ bcnpcl @ EPR2010d    Serial: 1283
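The two learning problems named in the abstract can be mocked up with off-the-shelf components; the following is a sketch under stated assumptions (synthetic features and labels, scikit-learn's AdaBoost and ECOC wrappers rather than the authors' own implementations):

    # Binary dominance detection with AdaBoost and multiclass interest
    # quantification with an ECOC wrapper, on synthetic movement features.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.multiclass import OutputCodeClassifier
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 5))                          # body/face/mouth activity features (synthetic)
    y_dominance = (X[:, 0] + X[:, 1] > 0).astype(int)     # binary dominance label
    y_interest = np.digitize(X[:, 2], [-0.5, 0.5])        # 3-level interest label

    dominance = AdaBoostClassifier(n_estimators=50).fit(X, y_dominance)
    interest = OutputCodeClassifier(DecisionTreeClassifier(max_depth=3),
                                    code_size=2.0, random_state=0).fit(X, y_interest)

    print(dominance.score(X, y_dominance), interest.score(X, y_interest))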
 

 
Author: Fernando Vilariño; Panagiota Spyridonos; Fosca De Iorio; Jordi Vitria; Fernando Azpiroz; Petia Radeva
Title: Intestinal Motility Assessment With Video Capsule Endoscopy: Automatic Annotation of Phasic Intestinal Contractions
Type: Journal Article
Year: 2010    Publication: IEEE Transactions on Medical Imaging    Abbreviated Journal: TMI
Volume: 29    Issue: 2    Pages: 246-259
Abstract: Intestinal motility assessment with video capsule endoscopy arises as a novel and challenging clinical fieldwork. This technique is based on the analysis of the patterns of intestinal contractions shown in a video provided by an ingestible capsule with a wireless micro-camera. The manual labeling of all the motility events requires a large amount of time for offline screening in search of findings with low prevalence, which currently makes this procedure impractical. In this paper, we propose a machine learning system to automatically detect the phasic intestinal contractions in video capsule endoscopy, turning a useful but previously infeasible clinical routine into a feasible clinical procedure. Our proposal is based on a sequential design that involves the analysis of textural, color, and blob features together with SVM classifiers. Our approach tackles the reduction of the class imbalance in the data and allows the inclusion of domain knowledge as new stages in the cascade. We present a detailed analysis, both quantitative and qualitative, by providing several measures of performance and an assessment study of interobserver variability. Our system performs at 70% sensitivity for individual detection, whilst obtaining contraction-density patterns equivalent to those of the experts.
Corporate Author: IEEE
ISSN: 0278-0062
Area: 800
Notes: MILAB; MV; OR; SIAI    Approved: no
Call Number: BCNPCL @ bcnpcl @ VSD2010; IAM @ iam @ VSI2010    Serial: 1281
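A schematic two-stage version of the cascade idea described above, with synthetic features and labels and scikit-learn SVMs (not the published system):

    # Two-stage SVM cascade: stage 1 cheaply rejects most non-contraction frames,
    # which reduces the class imbalance seen by the more expensive stage 2.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 8))             # textural/color/blob features (synthetic)
    y = (rng.random(500) < 0.05).astype(int)  # ~5% positives: strongly imbalanced

    stage1 = SVC(kernel="linear", class_weight="balanced").fit(X, y)
    keep = stage1.predict(X) == 1             # candidates forwarded to stage 2

    final = np.zeros_like(y)
    if keep.sum() > 1 and len(np.unique(y[keep])) > 1:
        stage2 = SVC(kernel="rbf", class_weight="balanced").fit(X[keep], y[keep])
        final[keep] = stage2.predict(X[keep])
    else:
        final = stage1.predict(X)

    sensitivity = final[y == 1].mean() if (y == 1).any() else 0.0
    print(f"sensitivity on the synthetic data: {sensitivity:.2f}")

In the published system, each cascade stage would also be a place to inject domain knowledge; here the stages are identical SVMs for brevity.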