Author: Fadi Dornaika; Bogdan Raducanu
Title: 3D Face Pose Detection and Tracking Using Monocular Videos: Tool and Application
Type: Journal
Year: 2008
Publication: IEEE Transactions on Systems, Man and Cybernetics (Part B) (IEEE)
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DoR2008d
Serial: 1109
 

 
Author: Xavier Baro; Sergio Escalera; Jordi Vitria; Oriol Pujol; Petia Radeva
Title: Traffic Sign Recognition Using Evolutionary Adaboost Detection and Forest-ECOC Classification
Type: Journal Article
Year: 2009
Publication: IEEE Transactions on Intelligent Transportation Systems
Abbreviated Journal: TITS
Volume: 10
Issue: 1
Pages: 113–126
ISSN: 1524-9050
Abstract: The high variability of sign appearance in uncontrolled environments has made the detection and classification of road signs a challenging problem in computer vision. In this paper, we introduce a novel approach to the detection and classification of traffic signs. Detection is based on a cascade of boosted detectors trained with a novel evolutionary version of Adaboost, which allows the use of large feature spaces. Classification is defined as a multiclass categorization problem: a battery of classifiers is trained to split classes in an Error-Correcting Output Code (ECOC) framework. We propose an ECOC design based on a forest of optimal tree structures embedded in the ECOC matrix. The system offers high performance and better accuracy than state-of-the-art strategies, and is potentially more robust to noise, affine deformation, partial occlusion, and reduced illumination.
Notes: OR;MILAB;HuPBA;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ BEV2008
Serial: 1116
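The ECOC framework named in the abstract above can be illustrated with a minimal decoding sketch. The codewords below are hypothetical (this is not the paper's Forest-ECOC design): each row is one class's codeword, each column corresponds to one binary classifier, and a sample is assigned to the class whose codeword is nearest in Hamming distance to the classifiers' outputs.

```python
import numpy as np

# Hypothetical 4-class coding matrix: rows are class codewords,
# columns are the target labels for three binary classifiers.
CODEWORDS = np.array([
    [ 1,  1,  1],   # class 0
    [ 1, -1, -1],   # class 1
    [-1,  1, -1],   # class 2
    [-1, -1,  1],   # class 3
])

def ecoc_decode(binary_outputs):
    """Return the class whose codeword is nearest (Hamming) to the outputs."""
    outputs = np.sign(binary_outputs)               # binarize real-valued scores
    distances = np.sum(CODEWORDS != outputs, axis=1)  # Hamming distance per class
    return int(np.argmin(distances))

print(ecoc_decode([1.0, -0.8, -0.3]))  # signs match class 1's codeword -> 1
```

With longer codewords (more binary classifiers), the minimum Hamming distance between rows grows, which is what gives ECOC its tolerance to individual classifier errors.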
 

 
Author: David Masip; Agata Lapedriza; Jordi Vitria
Title: Boosted Online Learning for Face Recognition
Type: Journal Article
Year: 2009
Publication: IEEE Transactions on Systems, Man and Cybernetics part B
Abbreviated Journal: TSMCB
Volume: 39
Issue: 2
Pages: 530–538
ISSN: 1083-4419
Abstract: Face recognition applications commonly suffer from three main drawbacks: a reduced training set, information lying in high-dimensional subspaces, and the need to incorporate new people to recognize. In the recent literature, extending a face classifier to include new people has been addressed with online feature extraction techniques, the most successful of which extend principal component analysis or linear discriminant analysis. In this paper, a new online boosting algorithm is introduced: a face recognition method that extends a boosting-based classifier by adding new classes while avoiding the need to retrain the classifier each time a new person joins the system. The classifier is learned using the multitask learning principle, where multiple verification tasks are trained together sharing the same feature space. New classes are added by taking advantage of the previously learned structure, so that adding them is not computationally demanding. The proposal has been experimentally validated on two facial data sets by comparing our approach with current state-of-the-art techniques. The results show that the proposed online boosting algorithm achieves better final accuracy. In addition, global performance does not decrease drastically even when the number of classes of the base problem is multiplied by eight.
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ MLV2009
Serial: 1155
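The key property in the abstract above — adding a class without retraining the existing model — can be illustrated with a toy that is deliberately much simpler than the paper's boosted multitask classifier: a nearest-class-mean classifier over a fixed shared feature space, where registering a new person only stores one mean vector (all names and values here are hypothetical).

```python
import numpy as np

# Toy illustration (NOT the paper's method): with a fixed shared feature
# space, a nearest-class-mean classifier adds a class in O(n_samples)
# without touching any previously learned class.
class NearestClassMean:
    def __init__(self):
        self.means = {}  # class label -> mean feature vector

    def add_class(self, label, features):
        # Registering a new person is just one mean computation.
        self.means[label] = np.mean(features, axis=0)

    def predict(self, x):
        # Assign to the class whose stored mean is closest in Euclidean distance.
        return min(self.means, key=lambda c: np.linalg.norm(x - self.means[c]))

clf = NearestClassMean()
clf.add_class("alice", np.array([[0.0, 0.0], [0.2, 0.1]]))
clf.add_class("bob", np.array([[1.0, 1.0], [0.9, 1.1]]))
print(clf.predict(np.array([0.1, 0.0])))  # -> "alice"
```

The paper's contribution is to obtain an analogous incremental property for a far stronger boosting-based classifier by reusing the shared feature structure learned across verification tasks.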
 

 
Author: Fadi Dornaika; Bogdan Raducanu
Title: Three-Dimensional Face Pose Detection and Tracking Using Monocular Videos: Tool and Application
Type: Journal Article
Year: 2009
Publication: IEEE Transactions on Systems, Man and Cybernetics part B
Abbreviated Journal: TSMCB
Volume: 39
Issue: 4
Pages: 935–944
Abstract: Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences, which can be provided by low-quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods (initialization and tracking) to enhance the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation, such as telepresence, virtual reality, and video games, can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
Notes: OR;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DoR2009a
Serial: 1218
 

 
Author: Fosca De Iorio; C. Malagelada; Fernando Azpiroz; M. Maluenda; C. Violanti; Laura Igual; Jordi Vitria; Juan R. Malagelada
Title: Intestinal motor activity, endoluminal motion and transit
Type: Journal Article
Year: 2009
Publication: Neurogastroenterology & Motility
Abbreviated Journal: NEUMOT
Volume: 21
Issue: 12
Pages: 1264–e119
Abstract: A programme for the evaluation of intestinal motility has recently been developed, based on endoluminal image analysis using computer vision methodology and machine learning techniques. Our aim was to determine the effect of intestinal muscle inhibition on wall motion, dynamics of luminal content and transit in the small bowel. Fourteen healthy subjects ingested the endoscopic capsule (Pillcam, Given Imaging) in fasting conditions. Seven of them received glucagon (4.8 µg kg⁻¹ bolus followed by a 9.6 µg kg⁻¹ h⁻¹ infusion over 1 h) and in the other seven, fasting activity was recorded, as controls. This dose of glucagon has previously been shown to inhibit both tonic and phasic intestinal motor activity. Endoluminal image and displacement were analyzed by means of a computer vision programme specifically developed for the evaluation of muscular activity (contractile and non-contractile patterns), intestinal contents, endoluminal motion and transit. Thirty-minute periods before, during and after glucagon infusion were analyzed and compared with equivalent periods in controls. No differences were found in the parameters measured during the baseline (pretest) periods when comparing glucagon and control experiments. During glucagon infusion, there was a significant reduction in contractile activity (0.2 ± 0.1 vs 4.2 ± 0.9 luminal closures per min, P < 0.05; 0.4 ± 0.1 vs 3.4 ± 1.2% of images with radial wrinkles, P < 0.05) and a significant reduction of endoluminal motion (82 ± 9 vs 21 ± 10% of static images, P < 0.05). Endoluminal image analysis, by means of computer vision and machine learning techniques, can reliably detect reduced intestinal muscle activity and motion.
Notes: OR;MILAB;MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DMA2009
Serial: 1251