Author: Bogdan Raducanu; Jordi Vitria
Title: Online Nonparametric Discriminant Analysis for Incremental Subspace Learning and Recognition
Type: Journal Article
Year: 2008
Publication: Pattern Analysis and Applications. Special Issue: Non-Parametric Distance-Based Classification Techniques and Their Applications
Volume: 11
Issue: 3-4
Pages: 259–268
Publisher: Springer
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ RaV2008c
Serial: 997
 

 
Author: Bogdan Raducanu; Jordi Vitria
Title: Face Recognition by Artificial Vision Systems: A Cognitive Perspective
Type: Journal Article
Year: 2008
Publication: International Journal of Pattern Recognition and Artificial Intelligence
Abbreviated Journal: IJPRAI
Volume: 22
Issue: 5
Pages: 899–913
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ RaV2008b
Serial: 1007
 

 
Author: Fadi Dornaika; Bogdan Raducanu
Title: Facial Expression Recognition for HCI Applications
Type: Book Chapter
Year: 2008
Publication: Encyclopedia of Artificial Intelligence
Volume: II
Pages: 625–631
Publisher: IGI-Global
Editor: Rabuñal
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DoR2008c
Serial: 1034
 

 
Author: Fadi Dornaika; Bogdan Raducanu
Title: 3D Face Pose Detection and Tracking Using Monocular Videos: Tool and Application
Type: Journal Article
Year: 2008
Publication: IEEE Transactions on Systems, Man and Cybernetics, Part B
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DoR2008d
Serial: 1109
 

 
Author: Bogdan Raducanu; Jordi Vitria; D. Gatica-Perez
Title: You are Fired! Nonverbal Role Analysis in Competitive Meetings
Type: Conference Article
Year: 2009
Publication: IEEE International Conference on Audio, Speech and Signal Processing
Pages: 1949–1952
Abstract: This paper addresses the problem of social interaction analysis in competitive meetings using nonverbal cues. For our study, we made use of “The Apprentice” reality TV show, which features a competition for a real, highly paid corporate job. Our analysis is centered around two tasks regarding a person's role in a meeting: predicting the person with the highest status and predicting the fired candidates. The current study was carried out using nonverbal audio cues. Results obtained from the analysis of a full season of the show, representing around 90 minutes of audio data, are very promising (up to 85.7% accuracy in the first case and up to 92.8% in the second). Our approach is based only on the nonverbal interaction dynamics during the meeting, without relying on the spoken words.
Address: Taipei, Taiwan
ISSN: 1520-6149
ISBN: 978-1-4244-2353-8
Conference: ICASSP
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ RVG2009
Serial: 1154
 

 
Author: David Masip; Agata Lapedriza; Jordi Vitria
Title: Boosted Online Learning for Face Recognition
Type: Journal Article
Year: 2009
Publication: IEEE Transactions on Systems, Man and Cybernetics, Part B
Abbreviated Journal: TSMCB
Volume: 39
Issue: 2
Pages: 530–538
Abstract: Face recognition applications commonly suffer from three main drawbacks: a reduced training set, information lying in high-dimensional subspaces, and the need to incorporate new people to recognize. In the recent literature, the extension of a face classifier to include new people in the model has been addressed with online feature extraction techniques, the most successful being extensions of principal component analysis or linear discriminant analysis. In this paper, a new online boosting algorithm is introduced: a face recognition method that extends a boosting-based classifier by adding new classes while avoiding the need to retrain the classifier each time a new person joins the system. The classifier is learned using the multitask learning principle, where multiple verification tasks are trained together sharing the same feature space. New classes are added by taking advantage of the previously learned structure, so that adding them is not computationally demanding. The proposal has been experimentally validated on two facial data sets by comparing our approach with current state-of-the-art techniques. The results show that the proposed online boosting algorithm achieves better final accuracy. In addition, the global performance does not decrease drastically even when the number of classes of the base problem is multiplied by eight.
(An illustrative code sketch follows this record.)
ISSN: 1083-4419
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ MLV2009
Serial: 1155
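The sketch below is not the paper's boosting/multitask algorithm; it is only a generic Python illustration of the idea the abstract describes: a shared feature space is learned once, each known person gets a binary verifier, and enrolling a new person trains one additional verifier instead of retraining the whole classifier. The feature extractor, data and identities are placeholders.

    # Illustrative only: per-class verifiers over a shared, fixed feature space.
    # This is NOT the paper's boosting algorithm, just the "add a class without
    # retraining the others" idea in its simplest form.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_base = rng.normal(size=(200, 1024))      # placeholder face feature vectors
    y_base = rng.integers(0, 5, size=200)      # five initial identities (placeholder labels)

    shared = PCA(n_components=32).fit(X_base)  # shared feature space, learned once
    Z_base = shared.transform(X_base)

    # one binary "is this person k?" verifier per known identity
    verifiers = {k: LogisticRegression(max_iter=1000).fit(Z_base, (y_base == k).astype(int))
                 for k in range(5)}

    def add_identity(k, X_new, X_background):
        """Enrol a new person: train one extra verifier, reuse the shared space."""
        Z = shared.transform(np.vstack([X_new, X_background]))
        t = np.r_[np.ones(len(X_new)), np.zeros(len(X_background))]
        verifiers[k] = LogisticRegression(max_iter=1000).fit(Z, t)

    def identify(x):
        z = shared.transform(x.reshape(1, -1))
        return max(verifiers, key=lambda k: verifiers[k].decision_function(z)[0])

    add_identity(5, rng.normal(size=(20, 1024)), X_base[:50])
    print(identify(rng.normal(size=1024)))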
 

 
Author: D. Jayagopi; Bogdan Raducanu; D. Gatica-Perez
Title: Characterizing conversational group dynamics using nonverbal behaviour
Type: Conference Article
Year: 2009
Publication: 10th IEEE International Conference on Multimedia and Expo
Pages: 370–373
Abstract: This paper addresses the novel problem of characterizing conversational group dynamics. It is well documented in social psychology that group dynamics differ depending on the objectives of the group; for example, a competitive meeting has a different objective from that of a collaborative meeting. We propose a method to characterize group dynamics based on a joint description of the group members' aggregated acoustic nonverbal behaviour, and use it to classify two meeting datasets (one cooperative and the other competitive). We use 4.5 hours of real behavioural multi-party data and show that our methodology can achieve a classification rate of up to 100%.
(An illustrative code sketch follows this record.)
Address: New York, USA
ISSN: 1945-7871
ISBN: 978-1-4244-4290-4
Conference: ICME
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ JRG2009
Serial: 1217
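The Python sketch below is not the paper's feature set or data; it only illustrates the general recipe the abstract outlines: aggregate each meeting's nonverbal audio activity into a fixed-length group descriptor and classify cooperative vs. competitive meetings with an SVM. All arrays are synthetic placeholders.

    # Illustrative sketch: classify a meeting as cooperative vs. competitive
    # from aggregated nonverbal audio cues (placeholder features and data).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def meeting_features(speaking, frame_s=1.0):
        """speaking: (n_participants, n_frames) binary voice-activity matrix."""
        total = speaking.sum(axis=1) * frame_s           # speaking time per person
        share = total / max(total.sum(), 1e-9)           # speaking-time distribution
        overlap = (speaking.sum(axis=0) > 1).mean()      # fraction of overlapped speech
        silence = (speaking.sum(axis=0) == 0).mean()     # fraction of group silence
        return np.r_[np.sort(share)[::-1], overlap, silence]

    rng = np.random.default_rng(0)
    # placeholder corpus: 40 synthetic 4-person meetings, 0 = cooperative, 1 = competitive
    X = np.array([meeting_features(rng.integers(0, 2, size=(4, 600))) for _ in range(40)])
    y = rng.integers(0, 2, size=40)

    print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())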
 

 
Author: Fadi Dornaika; Bogdan Raducanu
Title: Three-Dimensional Face Pose Detection and Tracking Using Monocular Videos: Tool and Application
Type: Journal Article
Year: 2009
Publication: IEEE Transactions on Systems, Man and Cybernetics, Part B
Abbreviated Journal: TSMCB
Volume: 39
Issue: 4
Pages: 935–944
Abstract: Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low-quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods (initialization and tracking) to enhance the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation, such as telepresence, virtual reality, and video games, can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
(An illustrative code sketch follows this record.)
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DoR2009a
Serial: 1218
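The abstract mentions steering a robot camera from the estimated head pose. The Python sketch below illustrates only the last step of such a pipeline: mapping an estimated yaw/pitch to clipped pan/tilt commands. The actuator limits and the send_pan_tilt function are hypothetical placeholders; the paper's tracker and the real AIBO interface are not reproduced here.

    # Illustrative mapping from an estimated 3D head pose to camera pan/tilt
    # commands. The robot interface (send_pan_tilt) and the limits are
    # hypothetical placeholders, not AIBO's real API or values.
    import math

    PAN_LIMIT_DEG = 90.0    # assumed actuator limits
    TILT_LIMIT_DEG = 45.0

    def clip(value, limit):
        return max(-limit, min(limit, value))

    def head_pose_to_camera_command(yaw_rad, pitch_rad, gain=1.0):
        """Imitate the user's head pose: convert yaw/pitch (radians) into
        clipped pan/tilt angles in degrees."""
        pan = clip(gain * math.degrees(yaw_rad), PAN_LIMIT_DEG)
        tilt = clip(gain * math.degrees(pitch_rad), TILT_LIMIT_DEG)
        return pan, tilt

    def send_pan_tilt(pan_deg, tilt_deg):
        # placeholder for a real robot / active-camera interface
        print(f"pan={pan_deg:.1f} deg, tilt={tilt_deg:.1f} deg")

    # example: the user looks 30 degrees to the left and slightly down
    send_pan_tilt(*head_pose_to_camera_command(math.radians(30), math.radians(-10)))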
 

 
Author: Fadi Dornaika; Bogdan Raducanu
Title: Simultaneous 3D face pose and person-specific shape estimation from a single image using a holistic approach
Type: Conference Article
Year: 2009
Publication: IEEE Workshop on Applications of Computer Vision
Abstract: This paper presents a new approach for the simultaneous estimation of the 3D pose and specific shape of a previously unseen face from a single image. The face pose is not limited to a frontal view. We describe a holistic approach based on a deformable 3D model and a learned statistical facial texture model. Rather than obtaining a person-specific facial surface, the goal of this work is to compute the person-specific 3D face shape in terms of a few control parameters that are used by many applications. The proposed holistic approach estimates the 3D pose parameters as well as the face shape control parameters by registering the warped texture to a statistical face texture, which is carried out by a stochastic and genetic optimizer. The proposed approach has several features that make it very attractive: (i) it uses a single grey-scale image, (ii) it is person-independent, (iii) it is featureless (no facial feature extraction is required), and (iv) its learning stage is easy. The proposed approach lends itself nicely to 3D face tracking and face gesture recognition in monocular videos. We describe extensive experiments that show the feasibility and robustness of the proposed approach.
Address: Utah, USA
ISSN: 1550-5790
ISBN: 978-1-4244-5497-6
Conference: WACV
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DoR2009b
Serial: 1256
 

 
Author: Bogdan Raducanu; Fadi Dornaika
Title: Natural Facial Expression Recognition Using Dynamic and Static Schemes
Type: Conference Article
Year: 2009
Publication: 5th International Symposium on Visual Computing
Volume: 5875
Pages: 730–739
Abstract: Affective computing is at the core of a new paradigm in HCI and AI represented by human-centered computing. Within this paradigm, it is expected that machines will be endowed with perceiving capabilities, making them aware of the user's affective state. This paper addresses the problem of facial expression recognition from monocular video sequences. We propose a dynamic facial expression recognition scheme, which proves to be very efficient, and compare it with several static schemes adopting different magnitudes of facial expression. We provide performance evaluations using Linear Discriminant Analysis (LDA), Nonparametric Discriminant Analysis (NDA), and Support Vector Machines (SVM), as well as evaluations on arbitrary test video sequences.
(An illustrative code sketch follows this record.)
Address: Las Vegas, USA
Publisher: Springer Berlin Heidelberg
Abbreviated Series Title: LNCS
ISSN: 0302-9743
ISBN: 978-3-642-10330-8
Conference: ISVC
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ RaD2009
Serial: 1257
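As a loose illustration of the static evaluation protocol mentioned in the abstract, the Python sketch below cross-validates LDA and an SVM on placeholder feature vectors (NDA is omitted because it has no standard scikit-learn implementation). The data, feature dimensionality and class count are assumptions, not the paper's.

    # Illustrative comparison of static classifiers on facial-expression
    # feature vectors; all data are random placeholders.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 12))      # e.g. facial-action / landmark features
    y = rng.integers(0, 6, size=300)    # six expression classes (assumption)

    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("SVM", SVC(kernel="rbf", C=1.0))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: {acc:.3f}")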
 

 
Author: Agata Lapedriza
Title: Multitask Learning Techniques for Automatic Face Classification
Type: Book Whole
Year: 2009
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: Automatic face classification is currently a popular research area in computer vision. It involves several subproblems, such as subject recognition, gender classification and subject verification.

Current systems for automatic face classification need a large amount of training data to learn a task robustly. However, collecting labeled data is usually difficult, so research on methods that can learn from a small training set is essential.

The dependency on abundant training data is not so evident in human learning. We are able to learn from a very small number of examples because we additionally use prior knowledge when learning a new task: we frequently find patterns and analogies from other domains and reuse them in new situations, or exploit training data from other experiences.

In computer science, multitask learning is a machine learning approach that studies this idea of knowledge transfer among different tasks in order to overcome the effects of the small sample size problem.

This thesis explores, proposes and tests multitask learning methods developed specifically for face classification. It also presents two further contributions to the small sample size problem outside the multitask learning context: a method to extract external face features, to be used as an additional information source in automatic face classification, and an empirical study of the most suitable face image resolution for automatic subject recognition.
Address: Barcelona (Spain)
Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey
Editor: Jordi Vitria; David Masip
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ Lap2009
Serial: 1263
 

 
Author: Bogdan Raducanu; Jordi Vitria; Ales Leonardis
Title: Online pattern recognition and machine learning techniques for computer-vision: Theory and applications
Type: Journal Article
Year: 2010
Publication: Image and Vision Computing
Abbreviated Journal: IMAVIS
Volume: 28
Issue: 7
Pages: 1063–1064
Abstract: (Editorial for the Special Issue on online pattern recognition and machine learning techniques.) In real life, visual learning is supposed to be a continuous process. This paradigm has also found its way into artificial vision systems. There is an increasing trend in pattern recognition represented by online learning approaches, which aim at continuously updating the data representation as new information arrives. Starting with a minimal dataset, the initial knowledge is expanded by incorporating incoming instances, which may not have been available or foreseen at the system's design stage. An interesting characteristic of this strategy is that the training and testing phases take place simultaneously. Given the increasing interest in this subject, the aim of this special issue is to be a landmark event in the development of online learning techniques and their applications, with the hope that it will capture the interest of a wider audience and attract even more researchers. We received 19 contributions, of which 9 have been accepted for publication after the usual peer review process.
(An illustrative code sketch follows this record.)
Publisher: Elsevier
ISSN: 0262-8856
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ RVL2010
Serial: 1280
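A minimal Python sketch of the online-learning paradigm the editorial describes, in which training and testing take place simultaneously: each incoming instance is first used for prediction and then for an incremental update (test-then-train, also called prequential evaluation). The synthetic data stream and the choice of scikit-learn's SGDClassifier are assumptions for illustration.

    # Minimal "test-then-train" (prequential) online learning loop, as a
    # generic illustration of the paradigm; the data stream is a placeholder.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    classes = np.array([0, 1])
    model = SGDClassifier()

    correct = seen = 0
    for t in range(1000):
        x = rng.normal(size=(1, 10))                        # incoming instance
        y = np.array([int(x[0, 0] + 0.1 * rng.normal() > 0)])
        if seen > 0:                                        # test first ...
            correct += int(model.predict(x)[0] == y[0])
        model.partial_fit(x, y, classes=classes)            # ... then train on it
        seen += 1

    print(f"prequential accuracy: {correct / (seen - 1):.3f}")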
 

 
Author: Mario Rojas; David Masip; A. Todorov; Jordi Vitria
Title: Automatic Point-based Facial Trait Judgments Evaluation
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 2715–2720
Abstract: Humans constantly evaluate the personalities of other people from their faces. Facial trait judgments have been studied in psychology and have been shown to influence important social outcomes of our lives, such as election outcomes and social relationships. Recent work on textual descriptions of faces has shown that trait judgments are highly correlated. Furthermore, behavioral studies suggest that two orthogonal dimensions, valence and dominance, can describe the basis of human judgments from faces. In this paper, we use a corpus of behavioral data of judgments on different trait dimensions to automatically learn a trait predictor from facial pixel images. We study whether trait evaluations performed by humans can be learned using machine learning classifiers and then used in automatic evaluations of new facial images. Experiments performed using local point-based descriptors show promising results in the evaluation of the main traits.
(An illustrative code sketch follows this record.)
Address: San Francisco CA, USA
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ RMT2010
Serial: 1282
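The sketch below illustrates, with placeholder data, the kind of learning problem the abstract describes: predicting human trait ratings from local point-based face descriptors. Ridge regression is a generic stand-in, not the method used in the paper, and the descriptors and ratings are random placeholders.

    # Illustrative sketch: regress human trait ratings from local point-based
    # face descriptors (all data are random placeholders).
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    n_faces, n_points, dim = 200, 20, 8
    X = rng.normal(size=(n_faces, n_points * dim))   # descriptors at facial landmarks
    trait_ratings = rng.normal(size=n_faces)         # placeholder behavioural ratings

    pred = cross_val_predict(Ridge(alpha=1.0), X, trait_ratings, cv=5)
    print("correlation with human ratings:", np.corrcoef(pred, trait_ratings)[0, 1])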
 

 
Author: Fadi Dornaika; Bogdan Raducanu
Title: Single Snapshot 3D Head Pose Initialization for Tracking in Human Robot Interaction Scenario
Type: Conference Article
Year: 2010
Publication: 1st International Workshop on Computer Vision for Human-Robot Interaction
Pages: 32–39
Keywords: 1st International Workshop on Computer Vision for Human-Robot Interaction, in conjunction with IEEE CVPR 2010
Abstract: This paper presents an automatic 3D head pose initialization scheme for a real-time face tracker, with application to human-robot interaction. It has two main contributions. First, we propose an automatic 3D head pose and person-specific face shape estimation based on a 3D deformable model, which serves to initialize our real-time 3D face tracker. What makes this contribution very attractive is that the initialization step can cope with faces under arbitrary pose, so it is not limited to near-frontal views. Second, this framework is used to develop an application in which the orientation of an AIBO's camera is controlled through imitation of the user's head pose. In our scenario, the application is used to build panoramic images from overlapping snapshots. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
Address: San Francisco, CA, USA; June 2010
ISSN: 2160-7508
ISBN: 978-1-4244-7029-7
Conference: CVPRW
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DoR2010a
Serial: 1309
 

 
Author: Bogdan Raducanu; D. Gatica-Perez
Title: Inferring competitive role patterns in reality TV show through nonverbal analysis
Type: Journal Article
Year: 2012
Publication: Multimedia Tools and Applications
Abbreviated Journal: MTAP
Volume: 56
Issue: 1
Pages: 207-226
Abstract: This paper introduces a new facet of social media, namely that depicting social interaction. More concretely, we address this problem from the perspective of nonverbal behavior-based analysis of competitive meetings. For our study, we made use of “The Apprentice” reality TV show, which features a competition for a real, highly paid corporate job. Our analysis is centered around two tasks regarding a person's role in a meeting: predicting the person with the highest status, and predicting the fired candidates. We address this problem by adopting both supervised and unsupervised strategies. The current study was carried out using nonverbal audio cues, and our approach is based only on the nonverbal interaction dynamics during the meeting, without relying on the spoken words. The analysis is based on two types of data: individual and relational measures. Results obtained from the analysis of a full season of the show are promising (up to 85.7% accuracy in the first case and up to 92.8% in the second). Our approach has been compared with the Influence Model, demonstrating its superiority.
(An illustrative code sketch follows this record.)
Publisher: Elsevier
ISSN: 1380-7501
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ RaG2012
Serial: 1360
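As a rough illustration of the unsupervised side of the abstract (ranking participants by individual nonverbal measures), the Python sketch below computes speaking time and turn counts from a placeholder voice-activity matrix and predicts the highest-status participant as the most dominant one. The measures and data are generic assumptions, not the paper's exact features.

    # Illustrative, unsupervised-style sketch: rank meeting participants by
    # simple individual nonverbal measures and predict the highest-status one.
    import numpy as np

    rng = np.random.default_rng(0)
    speaking = rng.integers(0, 2, size=(5, 1200))    # 5 participants, 1 s voice-activity frames

    speaking_time = speaking.sum(axis=1)                              # total speaking time
    turns = np.maximum(np.diff(speaking, axis=1), 0).sum(axis=1)      # number of speaking turns

    # combine normalised measures into a crude dominance score
    score = speaking_time / speaking_time.sum() + turns / max(turns.sum(), 1)
    print("predicted highest-status participant:", int(np.argmax(score)))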