Author Bogdan Raducanu; Jordi Vitria; D. Gatica-Perez
  Title You are Fired! Nonverbal Role Analysis in Competitive Meetings Type Conference Article
  Year 2009 Publication IEEE International Conference on Audio, Speech and Signal Processing Abbreviated Journal  
  Volume Issue Pages 1949–1952  
  Keywords  
  Abstract This paper addresses the problem of social interaction analysis in competitive meetings, using nonverbal cues. For our study, we made use of “The Apprentice” reality TV show, which features a competition for a real, highly paid corporate job. Our analysis is centered around two tasks regarding a person's role in a meeting: predicting the person with the highest status and predicting the fired candidates. The current study was carried out using nonverbal audio cues. Results obtained from the analysis of a full season of the show, representing around 90 minutes of audio data, are very promising (up to 85.7% accuracy in the first case and up to 92.8% in the second). Our approach is based only on the nonverbal interaction dynamics during the meeting, without relying on the spoken words.
  Address Taipei, Taiwan  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1520-6149 ISBN 978-1-4244-2353-8 Medium
  Area Expedition Conference ICASSP  
  Notes OR;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ RVG2009 Serial 1154  
 
Author Jose Manuel Alvarez; Ferran Diego; Joan Serrat; Antonio Lopez
  Title Automatic Ground-truthing using video registration for on-board detection algorithms Type Conference Article
  Year 2009 Publication 16th IEEE International Conference on Image Processing Abbreviated Journal  
  Volume Issue Pages 4389 - 4392  
  Keywords  
  Abstract Ground-truth data is essential for the objective evaluation of object detection methods in computer vision. Many works claim their method is robust, but support the claim with experiments that are not quantitatively assessed against any ground-truth. This is one of the main obstacles to properly evaluating and comparing such methods. One of the main reasons is that creating an extensive and representative ground-truth is very time consuming, especially in the case of video sequences, where thousands of frames have to be labelled. Could such a ground-truth be generated, at least in part, automatically? Though it may seem a contradictory question, we show that this is possible for video sequences recorded from a moving camera. The key idea is transferring existing frame segmentations from a reference sequence into another video sequence recorded at a different time on the same track, possibly under different ambient lighting. We have carried out experiments on several video sequence pairs and quantitatively assessed the precision of the transformed ground-truth, which proves that our approach is not only feasible but also quite accurate.
  Address Cairo, Egypt  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1522-4880 ISBN 978-1-4244-5653-6 Medium
  Area Expedition Conference ICIP  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ ADS2009 Serial 1201  
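
As an illustration of the label-transfer idea summarized in the abstract above, the following minimal sketch warps a ground-truth label mask from a reference frame into the corresponding frame of a second sequence. It assumes the registration has already been estimated as a single 3x3 homography H (the paper's video registration is more general) and uses OpenCV's warpPerspective with nearest-neighbour interpolation so the transferred labels stay discrete; transfer_labels is a name introduced here for illustration only.

import cv2
import numpy as np

def transfer_labels(label_mask, H, target_shape):
    # label_mask: integer label image from the reference sequence.
    # H: 3x3 homography mapping the reference frame onto the target frame (assumed given).
    # target_shape: (height, width) of the target frame.
    h, w = target_shape
    return cv2.warpPerspective(label_mask, H, (w, h), flags=cv2.INTER_NEAREST)

# Hypothetical usage: an identity mapping leaves the mask unchanged.
mask = np.zeros((480, 640), dtype=np.uint8)
warped = transfer_labels(mask, np.eye(3), mask.shape)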
 
Author Angel Sappa; Mohammad Rouhani
  Title Efficient Distance Estimation for Fitting Implicit Quadric Surfaces Type Conference Article
  Year 2009 Publication 16th IEEE International Conference on Image Processing Abbreviated Journal  
  Volume Issue Pages 3521–3524  
  Keywords  
  Abstract This paper presents a novel approach for estimating the shortest Euclidean distance from a given point to the corresponding implicit quadric fitting surface. It first estimates the orthogonal orientation to the surface from the given point; the shortest distance is then estimated directly by intersecting the implicit surface with a line passing through the given point along the estimated orthogonal orientation. The proposed orthogonal distance estimation is obtained easily, without increasing computational complexity; hence it can be used in error-minimization surface fitting frameworks. Comparisons of the proposed metric with previous approaches are provided, showing improvements both in CPU time and in the accuracy of the obtained results. Surfaces fitted using the proposed geometric distance estimation and state-of-the-art metrics are presented to show the viability of the proposed approach.
  Address Cairo, Egypt  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1522-4880 ISBN 978-1-4244-5653-6 Medium
  Area Expedition Conference ICIP  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ SaR2009 Serial 1232  
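
A rough sketch of the distance estimation described in the abstract above (not the authors' exact formulation): for a quadric written as f(x) = x^T A x + b^T x + c, the orthogonal orientation at a query point p is approximated by the gradient of f at p, and the distance is the smallest-magnitude root of the quadratic obtained by restricting f to the line through p along that direction. All names below are illustrative.

import numpy as np

def quadric_line_distance(A, b, c, p):
    # f(x) = x^T A x + b^T x + c = 0, with A symmetric; p is the query point.
    p = np.asarray(p, dtype=float)
    grad = 2.0 * A @ p + b                       # gradient of f at p
    n = grad / np.linalg.norm(grad)              # estimated orthogonal orientation
    # f(p + t*n) = (n^T A n) t^2 + (2 p^T A n + b^T n) t + f(p)
    qa = n @ A @ n
    qb = 2.0 * p @ A @ n + b @ n
    qc = p @ A @ p + b @ p + c
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0:
        return None                              # the line does not meet the surface
    if abs(qa) < 1e-12:
        return abs(qc / qb)                      # degenerate (linear) case
    t1 = (-qb + np.sqrt(disc)) / (2.0 * qa)
    t2 = (-qb - np.sqrt(disc)) / (2.0 * qa)
    return min(abs(t1), abs(t2))

# Unit sphere x^2 + y^2 + z^2 - 1 = 0, query point (2, 0, 0): the distance is 1.
print(quadric_line_distance(np.eye(3), np.zeros(3), -1.0, [2.0, 0.0, 0.0]))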
 
Author Carlo Gatta; Petia Radeva
  Title Bilateral Enhancers Type Conference Article
  Year 2009 Publication 16th IEEE International Conference on Image Processing Abbreviated Journal  
  Volume Issue Pages 3161-3165  
  Keywords  
  Abstract Ten years ago the concept of bilateral filtering (BF) became popular in the image processing community. The core idea is to blend the effect of a spatial filter, e.g. the Gaussian filter, with the effect of a filter that acts on image values. The two filters act on orthogonal domains of a picture: the 2D lattice of the image support and the intensity (or color) domain. The BF approach is an intuitive way to blend these two filters, giving rise to algorithms that perform difficult tasks with a relatively simple design. In this paper we extend the concept of BF, proposing bilateral enhancers (BE). We show how to design proper functions to obtain edge-preserving smoothing and selective sharpening. Moreover, we show that the proposed algorithm can perform edge-preserving smoothing and selective sharpening simultaneously in a single filtering pass.
  Address Cairo, Egypt  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1522-4880 ISBN 978-1-4244-5653-6 Medium
  Area Expedition Conference ICIP  
  Notes MILAB Approved no  
  Call Number BCNPCL @ bcnpcl @ GaR2009b Serial 1243  
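
To make the bilateral-filtering idea in the abstract above concrete, here is a minimal, unoptimized sketch of the classical bilateral filter that the paper builds on (not the proposed bilateral enhancers themselves): each output pixel is a normalized average weighted by a spatial Gaussian and a range (intensity) Gaussian. Function and parameter names are illustrative.

import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    # img: 2-D grayscale array with values in [0, 1].
    img = img.astype(float)
    h, w = img.shape
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))            # 2D-lattice weight
    padded = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((patch - img[i, j])**2) / (2.0 * sigma_r**2))  # intensity-domain weight
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out

The bilateral enhancers of the paper are obtained by designing different functions on these two domains, so that edge-preserving smoothing and selective sharpening can be performed in the same pass.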
 
Author Xavier Baro; Sergio Escalera; Jordi Vitria; Oriol Pujol; Petia Radeva
  Title Traffic Sign Recognition Using Evolutionary Adaboost Detection and Forest-ECOC Classification Type Journal Article
  Year 2009 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS  
  Volume 10 Issue 1 Pages 113–126  
  Keywords  
  Abstract The high variability of sign appearance in uncontrolled environments has made the detection and classification of road signs a challenging problem in computer vision. In this paper, we introduce a novel approach for the detection and classification of traffic signs. Detection is based on a cascade of boosted detectors, trained with a novel evolutionary version of Adaboost, which allows the use of large feature spaces. Classification is defined as a multiclass categorization problem. A battery of classifiers is trained to split classes in an Error-Correcting Output Code (ECOC) framework. We propose an ECOC design through a forest of optimal tree structures that are embedded in the ECOC matrix. The novel system offers high performance and better accuracy than state-of-the-art strategies, and is potentially more robust to noise, affine deformation, partial occlusions, and reduced illumination.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1524-9050 ISBN Medium
  Area Expedition Conference  
  Notes OR;MILAB;HuPBA;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ BEV2008 Serial 1116  
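
A minimal sketch of the generic ECOC decoding step referred to in the abstract above (plain Hamming-style decoding, not the paper's Forest-ECOC design): every class is assigned a codeword in a ternary coding matrix, the binary classifiers produce a prediction vector for a test sample, and the class with the closest codeword is returned. The coding matrix below is made up for illustration.

import numpy as np

# Illustrative ternary ECOC coding matrix: 4 classes x 5 binary problems.
# +1/-1 place a class on one side of a dichotomy; 0 means the class is ignored.
M = np.array([
    [+1, +1, -1,  0, +1],
    [-1, +1,  0, +1, -1],
    [+1, -1, +1, -1,  0],
    [-1, -1, +1, +1, +1],
])

def ecoc_decode(predictions, coding_matrix):
    # predictions: vector of +1/-1 outputs of the trained binary classifiers.
    preds = np.asarray(predictions)
    # Distance to each codeword, counting disagreements only at non-zero positions.
    dist = np.sum((coding_matrix != 0) & (coding_matrix != preds), axis=1)
    return int(np.argmin(dist))

print(ecoc_decode([+1, -1, +1, -1, +1], M))   # -> 2 (the third codeword matches exactly)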
 
Author S. Chanda; Umapada Pal; Oriol Ramos Terrades
  Title Word-Wise Thai and Roman Script Identification Type Journal
  Year 2009 Publication ACM Transactions on Asian Language Information Processing Abbreviated Journal TALIP  
  Volume 8 Issue 3 Pages 1-21  
  Keywords  
  Abstract In some Thai documents, a single text line of a printed page may contain words in both Thai and Roman scripts. For Optical Character Recognition (OCR) of such a page it is better to first identify the Thai and Roman script portions and then apply the individual OCR systems of the respective scripts to the identified portions. In this article, an SVM-based method is proposed for word-wise identification of printed Roman and Thai scripts within a single line of a document page. The document is first segmented into lines, and the lines are then segmented into character groups (words). In the proposed scheme, we identify the script of a character group by combining different character features obtained from structural shape, profile behavior, component overlapping information, topological properties, and the water reservoir concept. In experiments on 10,000 words, the proposed scheme obtained a script identification accuracy of 99.62%.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1530-0226 ISBN Medium
  Area Expedition Conference  
  Notes DAG Approved no  
  Call Number Admin @ si @ CPR2009f Serial 1869  
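
The word-wise classification step described above can be sketched with a generic SVM pipeline. This is not the authors' system: extract_word_features is a hypothetical placeholder for the structural-shape, profile, component-overlap, topological and water-reservoir features that the paper combines, and the label encoding is assumed.

import numpy as np
from sklearn.svm import SVC

def train_script_classifier(word_images, labels, extract_word_features):
    # labels: 0 = Thai, 1 = Roman (assumed encoding).
    X = np.vstack([extract_word_features(w) for w in word_images])
    clf = SVC(kernel='rbf', C=10.0, gamma='scale')
    return clf.fit(X, np.asarray(labels))

def predict_script(clf, word_image, extract_word_features):
    # Returns 0 (Thai) or 1 (Roman) for a segmented word image.
    return int(clf.predict(extract_word_features(word_image).reshape(1, -1))[0])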
 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell
  Title Top-Down Color Attention for Object Recognition Type Conference Article
  Year 2009 Publication 12th International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 979 - 986  
  Keywords  
  Abstract Generally, the bag-of-words image representation follows a bottom-up paradigm. The subsequent stages of the process: feature detection, feature description, vocabulary construction and image representation, are performed independently of the intended object classes to be detected. In such a framework, combining multiple cues such as shape and color often provides below-expected results. This paper presents a novel method for recognizing object categories from multiple cues by separating the shape and color cues. Color is used to guide attention by means of a top-down, category-specific attention map. The color attention map is then further deployed to modulate the shape features, taking more features from regions within an image that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets containing varied importance of both cues, namely Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that on all three data sets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information.
  Address Kyoto, Japan  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1550-5499 ISBN 978-1-4244-4420-5 Medium
  Area Expedition Conference ICCV  
  Notes CIC Approved no  
  Call Number CAT @ cat @ SWV2009 Serial 1196  
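
The attention-weighted histogram construction described above can be sketched as follows. The sketch assumes the shape features have already been quantized into visual-word indices and that a per-feature, category-specific colour-attention weight in [0, 1] has been computed; names are illustrative.

import numpy as np

def color_attention_histogram(word_indices, attention_weights, vocabulary_size):
    # word_indices: visual-word index of each local shape feature in the image.
    # attention_weights: category-specific colour attention for each feature.
    hist = np.zeros(vocabulary_size, dtype=float)
    for w, a in zip(word_indices, attention_weights):
        hist[w] += a                    # features in likely object regions contribute more
    total = hist.sum()
    return hist / total if total > 0 else hist

One such histogram is built per category, giving the category-specific image representation mentioned in the abstract.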
 
Author Ivan Huerta; Michael Holte; Thomas B. Moeslund; Jordi Gonzalez
  Title Detection and Removal of Chromatic Moving Shadows in Surveillance Scenarios Type Conference Article
  Year 2009 Publication 12th International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 1499 - 1506  
  Keywords  
  Abstract Segmentation in the surveillance domain has to deal with shadows to avoid distortions when detecting moving objects. Most segmentation approaches dealing with shadow detection are typically restricted to penumbra shadows; such techniques therefore cannot cope well with umbra shadows, which are consequently detected as part of the moving objects. In this paper we present a novel technique based on gradient and colour models for separating chromatic moving cast shadows from detected moving objects. Firstly, both a chromatic invariant colour cone model and an invariant gradient model are built to perform automatic segmentation while detecting potential shadows. In a second step, regions corresponding to potential shadows are grouped by considering “a bluish effect” and an edge partitioning. Lastly, (i) temporal similarities between textures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for all potential shadow regions in order to finally identify umbra shadows. Unlike other approaches, our method does not make any a priori assumptions about camera location, surface geometries, surface textures, shapes and types of shadows, objects, or background. Experimental results show the performance and accuracy of our approach on different shadowed materials and under different illumination conditions.
  Address Kyoto, Japan  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1550-5499 ISBN 978-1-4244-4420-5 Medium
  Area Expedition Conference ICCV  
  Notes Approved no  
  Call Number ISE @ ise @ HHM2009 Serial 1213  
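
The brightness and chrominance distortions analysed in the last step above can be sketched with a standard colour-cone formulation. This is a simplification of the paper's invariant cone and gradient models, and the function name is illustrative.

import numpy as np

def cone_distortions(pixel_rgb, background_rgb):
    # Brightness distortion: projection of the observed colour onto the background
    # colour axis. Chromaticity distortion: distance of the colour to that axis.
    p = np.asarray(pixel_rgb, dtype=float)
    b = np.asarray(background_rgb, dtype=float)
    alpha = np.dot(p, b) / np.dot(b, b)     # ~1 for unchanged pixels, < 1 in shadow
    cd = np.linalg.norm(p - alpha * b)      # small when only the brightness changed
    return alpha, cd

# A pixel at half the background brightness keeps its chromaticity:
print(cone_distortions([60, 40, 20], [120, 80, 40]))   # -> (0.5, 0.0)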
 
Author Fadi Dornaika; Bogdan Raducanu
  Title Simultaneous 3D face pose and person-specific shape estimation from a single image using a holistic approach Type Conference Article
  Year 2009 Publication IEEE Workshop on Applications of Computer Vision Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract This paper presents a new approach for the simultaneous estimation of the 3D pose and specific shape of a previously unseen face from a single image. The face pose is not limited to a frontal view. We describe a holistic approach based on a deformable 3D model and a learned statistical facial texture model. Rather than obtaining a person-specific facial surface, the goal of this work is to compute person-specific 3D face shape in terms of a few control parameters that are used by many applications. The proposed holistic approach estimates the 3D pose parameters as well as the face shape control parameters by registering the warped texture to a statistical face texture, which is carried out by a stochastic and genetic optimizer. The proposed approach has several features that make it very attractive: (i) it uses a single grey-scale image, (ii) it is person-independent, (iii) it is featureless (no facial feature extraction is required), and (iv) its learning stage is easy. The proposed approach lends itself nicely to 3D face tracking and face gesture recognition in monocular videos. We describe extensive experiments that show the feasibility and robustness of the proposed approach.  
  Address Utah, USA  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1550-5790 ISBN 978-1-4244-5497-6 Medium
  Area Expedition Conference WACV  
  Notes OR;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ DoR2009b Serial 1256  
 
Author Sergio Escalera; Oriol Pujol; J. Mauri; Petia Radeva
  Title Intravascular Ultrasound Tissue Characterization with Sub-class Error-Correcting Output Codes Type Journal Article
  Year 2009 Publication Journal of Signal Processing Systems Abbreviated Journal  
  Volume 55 Issue 1-3 Pages 35–47  
  Keywords  
  Abstract Intravascular ultrasound (IVUS) represents a powerful imaging technique to explore coronary vessels and to study their morphology and histologic properties. In this paper, we characterize different tissues based on radial frequency, texture-based, and combined features. To deal with the classification of multiple tissues, we require robust multi-class learning techniques. In this sense, error-correcting output codes (ECOC) have been shown to robustly combine binary classifiers to solve multi-class problems. In this context, we propose a strategy to model multi-class classification tasks using sub-class information in the ECOC framework. The new strategy splits the classes into different sub-sets according to the applied base classifier. Complex IVUS data sets containing overlapping data are learnt by splitting the original set of classes into sub-classes and embedding the binary problems in a problem-dependent ECOC design. The method automatically characterizes different tissues, showing performance improvements over state-of-the-art ECOC techniques for different base classifiers. Furthermore, the combination of RF and texture-based features also shows improvements over state-of-the-art approaches.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1939-8018 ISBN Medium
  Area Expedition Conference  
  Notes MILAB;HuPBA Approved no  
  Call Number BCNPCL @ bcnpcl @ EPM2009 Serial 1258  
 
Author D. Jayagopi; Bogdan Raducanu; D. Gatica-Perez
  Title Characterizing conversational group dynamics using nonverbal behaviour Type Conference Article
  Year 2009 Publication 10th IEEE International Conference on Multimedia and Expo Abbreviated Journal  
  Volume Issue Pages 370–373  
  Keywords  
  Abstract This paper addresses the novel problem of characterizing conversational group dynamics. It is well documented in social psychology that group dynamics differ depending on the objectives of the group. For example, a competitive meeting has a different objective from that of a collaborative meeting. We propose a method to characterize group dynamics based on a joint description of the group members' aggregated acoustic nonverbal behaviour, and use it to classify two meeting datasets (one cooperative and the other competitive). We use 4.5 hours of real behavioural multi-party data and show that our methodology can achieve a classification rate of up to 100%.
  Address New York, USA  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1945-7871 ISBN 978-1-4244-4290-4 Medium
  Area Expedition Conference ICME  
  Notes OR;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ JRG2009 Serial 1217  
 
Author Sergio Escalera; Eloi Puertas; Petia Radeva; Oriol Pujol
  Title Multimodal laughter recognition in video conversations Type Conference Article
  Year 2009 Publication 2nd IEEE Workshop on CVPR for Human communicative Behavior analysis Abbreviated Journal  
  Volume Issue Pages 110–115  
  Keywords  
  Abstract Laughter detection is an important area of interest in the Affective Computing and Human-Computer Interaction fields. In this paper, we propose a multi-modal methodology based on the fusion of audio and visual cues to deal with the laughter recognition problem in face-to-face conversations. The audio features are extracted from the spectrogram, and the video features are obtained by estimating the degree of mouth movement and using a smile and laughter classifier. Finally, the multi-modal cues are included in a sequential classifier. Results on videos from the public discussion blog of the New York Times show that both types of features perform better when considered together by the classifier. Moreover, the sequential methodology is shown to significantly outperform an Adaboost classifier.
  Address Miami (USA)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2160-7508 ISBN 978-1-4244-3994-2 Medium
  Area Expedition Conference CVPR  
  Notes MILAB;HuPBA Approved no  
  Call Number BCNPCL @ bcnpcl @ EPR2009c Serial 1188  
 
Author Sergio Escalera; R. M. Martinez; Jordi Vitria; Petia Radeva; Maria Teresa Anguera
  Title Dominance Detection in Face-to-face Conversations Type Conference Article
  Year 2009 Publication 2nd IEEE Workshop on CVPR for Human communicative Behavior analysis Abbreviated Journal  
  Volume Issue Pages 97–102  
  Keywords  
  Abstract Dominance refers to the level of influence a person has in a conversation. Dominance is an important research area in social psychology, but the problem of its automatic estimation is a very recent topic in the contexts of social and wearable computing. In this paper, we focus on dominance detection from visual cues. We estimate the correlation among observers by categorizing the dominant people in a set of face-to-face conversations. Different dominance indicators from gestural communication are defined, manually annotated, and compared to the observers' opinions. Moreover, the considered indicators are automatically extracted from video sequences and learnt by using binary classifiers. Results from the three analyses show a high correlation and allow the categorization of dominant people in public discussion video sequences.
  Address Miami, USA  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2160-7508 ISBN 978-1-4244-3994-2 Medium
  Area Expedition Conference CVPR  
  Notes HuPBA; OR; MILAB;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ EMV2009 Serial 1227  
 
Author Enric Marti; Debora Gil; Marc Vivet; Carme Julia
  Title Aprendizaje Basado en Proyectos en la asignatura de Gráficos por Computador en Ingeniería Informática. Balance de cuatro años de experiencia [Project-Based Learning in the Computer Graphics course of the Computer Engineering degree: taking stock of four years of experience] Type Miscellaneous
  Year 2009 Publication 15th Jornadas de Enseñanza Universitaria de la Informatica Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Barcelona, Spain Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume 1 Series Issue Edition  
  ISSN ISBN 978-84-692-2758-9 Medium
  Area Expedition Conference JENUI  
  Notes IAM;ADAS Approved no  
  Call Number IAM @ iam @ MGV2009 Serial 1596  