Author Marçal Rusiñol; Josep Llados
Title Logo Spotting by a Bag-of-words Approach for Document Categorization
Type Conference Article
Year 2009
Publication 10th International Conference on Document Analysis and Recognition
Pages 111–115
Abstract In this paper we present a method for document categorization which processes incoming document images such as invoices or receipts. The categorization of these document images is done in terms of the presence of a certain graphical logo detected without segmentation. The graphical logos are described by a set of local features and the categorization of the documents is performed using a bag-of-words model. Spatial coherence rules are added to reinforce the correct category hypothesis, aiming also to spot the logo inside the document image. Experiments which demonstrate the effectiveness of this system on a large set of real data are presented.
Address Barcelona, Spain
ISSN 1520-5363
ISBN 978-1-4244-4500-4
Conference ICDAR
Notes DAG
Approved no
Call Number DAG @ dag @ RuL2009b
Serial 1179

Author Sergio Escalera; Xavier Baro; Jordi Vitria; Petia Radeva
Title Text Detection in Urban Scenes (video sample)
Type Conference Article
Year 2009
Publication 12th International Conference of the Catalan Association for Artificial Intelligence
Volume 202
Pages 35–44
Abstract Text detection in urban scenes is a hard task due to the high variability of text appearance: different text fonts, changes in the point of view, or partial occlusion are just a few problems. Text detection can be especially suited for georeferencing businesses, navigation, tourist assistance, or for helping visually impaired people. In this paper, we propose a general methodology to deal with the problem of text detection in outdoor scenes. The method is based on learning spatial information of gradient-based features and Census Transform images using a cascade of classifiers. The method is applied in the context of Mobile Mapping systems, where a mobile vehicle captures urban image sequences. Moreover, a cover data set is presented and tested with the new methodology. The results show high accuracy when detecting multi-linear text regions with high variability of appearance, while preserving a low false alarm rate compared to classical approaches.
Address Cardona, Spain
ISBN 978-1-60750-061-2
Conference CCIA
Notes OR;MILAB;HuPBA;MV
Approved no
Call Number BCNPCL @ bcnpcl @ EBV2009
Serial 1181

Author Sergio Escalera; Oriol Pujol; Petia Radeva; Jordi Vitria
Title Measuring Interest of Human Dyadic Interactions
Type Conference Article
Year 2009
Publication 12th International Conference of the Catalan Association for Artificial Intelligence
Volume 202
Pages 45–54
Abstract In this paper, we argue that, using only behavioural motion information, we are able to predict the interest of observers when looking at face-to-face interactions. We propose a set of movement-related features from body, face, and mouth activity in order to define a set of higher-level interaction features, such as stress, activity, speaking engagement, and corporal engagement. The Error-Correcting Output Codes framework with an Adaboost base classifier is used to learn to rank the perceived observer's interest in face-to-face interactions. The automatic system shows good correlation between the automatic categorization results and the manual ranking made by the observers. In particular, the learning system shows that stress features have a high predictive power for ranking the interest of observers when looking at face-to-face interactions.
Address Cardona, Spain
ISBN 978-1-60750-061-2
Conference CCIA
Notes OR;MILAB;HuPBA;MV
Approved no
Call Number BCNPCL @ bcnpcl @ EPR2009b
Serial 1182

Author Xavier Baro; Sergio Escalera; Petia Radeva; Jordi Vitria
Title Generic Object Recognition in Urban Image Databases
Type Conference Article
Year 2009
Publication 12th International Conference of the Catalan Association for Artificial Intelligence
Volume 202
Pages 27–34
Abstract In this paper we propose the construction of a visual content layer which describes the visual appearance of geographic locations in a city. We captured, by means of a Mobile Mapping system, a huge set of georeferenced images (>500K) which cover the whole city of Barcelona. For each image, hundreds of region descriptions are computed off-line and described as a hash code. All this information is extracted without an object of reference, which makes it possible to search for any type of object using its visual appearance. A new Visual Content layer is built over Google Maps, allowing the object recognition information to be organized and fused with other content, like satellite images, street maps, and business locations.
Address Cardona, Spain
ISBN 978-1-60750-061-2
Conference CCIA
Notes OR;MILAB;HuPBA;MV
Approved no
Call Number BCNPCL @ bcnpcl @ VER2009
Serial 1183

Author Sergio Escalera; Alicia Fornes; Oriol Pujol; Alberto Escudero; Petia Radeva
Title Circular Blurred Shape Model for Symbol Spotting in Documents
Type Conference Article
Year 2009
Publication 16th IEEE International Conference on Image Processing
Pages 1985–1988
Abstract The symbol spotting problem requires feature extraction strategies able to generalize from training samples and to localize the target object while discarding most of the image. In the case of document analysis, symbol spotting techniques have to deal with a high variability of symbols' appearance. In this paper, we propose the Circular Blurred Shape Model descriptor. Feature extraction is performed by capturing the spatial arrangement of significant object characteristics in a correlogram structure. Shape information from objects is shared among correlogram regions, making the descriptor tolerant to irregular deformations. Descriptors are learnt using a cascade of classifiers with Adaboost as the base classifier. Finally, symbol spotting is performed by means of a windowing strategy using the learnt cascade over plan and old musical score documents. Spotting and multi-class categorization results show better performance compared with state-of-the-art descriptors.
Address Cairo, Egypt
ISBN 978-1-4244-5653-6
Conference ICIP
Notes MILAB;HuPBA;DAG
Approved no
Call Number BCNPCL @ bcnpcl @ EFP2009b
Serial 1184

Author Sergio Escalera; Eloi Puertas; Petia Radeva; Oriol Pujol
Title Multimodal laughter recognition in video conversations
Type Conference Article
Year 2009
Publication 2nd IEEE Workshop on CVPR for Human Communicative Behavior Analysis
Pages 110–115
Abstract Laughter detection is an important area of interest in the Affective Computing and Human-Computer Interaction fields. In this paper, we propose a multi-modal methodology based on the fusion of audio and visual cues to deal with the laughter recognition problem in face-to-face conversations. The audio features are extracted from the spectrogram, and the video features are obtained by estimating the degree of mouth movement and using a smile and laughter classifier. Finally, the multi-modal cues are included in a sequential classifier. Results on videos from the public discussion blog of the New York Times show that both types of features perform better when considered together by the classifier. Moreover, the sequential methodology significantly outperforms the results obtained by an Adaboost classifier.
Address Miami, USA
ISSN 2160-7508
ISBN 978-1-4244-3994-2
Conference CVPR
Notes MILAB;HuPBA
Approved no
Call Number BCNPCL @ bcnpcl @ EPR2009c
Serial 1188

Author Xavier Baro; Sergio Escalera; Petia Radeva; Jordi Vitria
Title Visual Content Layer for Scalable Recognition in Urban Image Databases, Internet Multimedia Search and Mining
Type Conference Article
Year 2009
Publication 10th IEEE International Conference on Multimedia and Expo
Pages 1616–1619
Abstract Rich online map interaction represents a useful tool for getting multimedia information related to physical places. With this type of system, users can automatically compute the optimal route for a trip or look for entertainment places or hotels near their actual position. Standard maps are defined as a fusion of layers, where each one contains specific data such as height, streets, or a particular business location. In this paper we propose the construction of a visual content layer which describes the visual appearance of geographic locations in a city. We captured, by means of a Mobile Mapping system, a huge set of georeferenced images (>500K) which cover the whole city of Barcelona. For each image, hundreds of region descriptions are computed off-line and described as a hash code. This allows an efficient and scalable way of accessing maps by visual content.
Address New York, USA
ISBN 978-1-4244-4291-1
Conference ICME
Notes OR;MILAB;HuPBA;MV
Approved no
Call Number BCNPCL @ bcnpcl @ BER2009
Serial 1189

Author Sergio Escalera; Oriol Pujol; Petia Radeva
Title Recoding Error-Correcting Output Codes
Type Conference Article
Year 2009
Publication 8th International Workshop on Multiple Classifier Systems
Volume 5519
Pages 11–21
Abstract One of the most widely applied techniques to deal with multi-class categorization problems is the pairwise voting procedure. Recently, this classical approach has been embedded in the Error-Correcting Output Codes (ECOC) framework. This framework is based on a coding step, where a set of binary problems are learnt and coded in a matrix, and a decoding step, where a new sample is tested and classified according to a comparison with the positions of the coded matrix. In this paper, we present a novel approach to redefine, without retraining and in a problem-dependent way, the one-versus-one coding matrix so that the new coded information increases the generalization capability of the system. Moreover, the final classification can be tuned with the inclusion of a weighting matrix in the decoding step. The approach has been validated on several UCI Machine Learning repository data sets and two real multi-class problems: traffic sign and face categorization. The results show that performance improvements are obtained when comparing the new approach to one of the best ECOC designs (one-versus-one). Furthermore, the novel methodology obtains at least the same performance as the one-versus-one ECOC design.
Address Reykjavik, Iceland
Publisher Springer Berlin Heidelberg
ISSN 0302-9743
ISBN 978-3-642-02325-5
Conference MCS
Notes MILAB;HuPBA
Approved no
Call Number BCNPCL @ bcnpcl @ EPR2009d
Serial 1190

Author Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell
Title Top-Down Color Attention for Object Recognition
Type Conference Article
Year 2009
Publication 12th International Conference on Computer Vision
Pages 979–986
Abstract Generally, the bag-of-words based image representation follows a bottom-up paradigm. The subsequent stages of the process (feature detection, feature description, vocabulary construction and image representation) are performed independently of the intended object classes to be detected. In such a framework, combining multiple cues such as shape and color often provides below-expected results. This paper presents a novel method for recognizing object categories when using multiple cues by separating the shape and color cues. Color is used to guide attention by means of a top-down category-specific attention map. The color attention map is then further deployed to modulate the shape features by taking more features from regions within an image that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets containing varied importance of both cues, namely Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that on all three data sets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information.
Address Kyoto, Japan
ISSN 1550-5499
ISBN 978-1-4244-4420-5
Conference ICCV
Notes CIC
Approved no
Call Number CAT @ cat @ SWV2009
Serial 1196

Author Arjan Gijsenij; Theo Gevers; Joost Van de Weijer
Title Physics-based Edge Evaluation for Improved Color Constancy
Type Conference Article
Year 2009
Publication 22nd IEEE Conference on Computer Vision and Pattern Recognition
Pages 581–588
Abstract Edge-based color constancy makes use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as shadow, geometry, material and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation.
Address Miami, USA
ISSN 1063-6919
ISBN 978-1-4244-3992-8
Conference CVPR
Notes CAT;ISE
Approved no
Call Number CAT @ cat @ GGW2009
Serial 1197

Author Jose Manuel Alvarez; Ferran Diego; Joan Serrat; Antonio Lopez
Title Automatic Ground-truthing using Video Registration for On-board Detection Algorithms
Type Conference Article
Year 2009
Publication 16th IEEE International Conference on Image Processing
Pages 4389–4392
Abstract Ground-truth data is essential for the objective evaluation of object detection methods in computer vision. Many works claim their method is robust, but they support it with experiments which are not quantitatively assessed with regard to some ground-truth. This is one of the main obstacles to properly evaluating and comparing such methods. One of the main reasons is that creating an extensive and representative ground-truth is very time-consuming, especially in the case of video sequences, where thousands of frames have to be labelled. Could such a ground-truth be generated, at least in part, automatically? Though it may seem a contradictory question, we show that this is possible for the case of video sequences recorded from a moving camera. The key idea is transferring existing frame segmentations from a reference sequence into another video sequence recorded at a different time on the same track, possibly under different ambient lighting. We have carried out experiments on several video sequence pairs and quantitatively assessed the precision of the transformed ground-truth, which prove that our approach is not only feasible but also quite accurate.
Address Cairo, Egypt
ISSN 1522-4880
ISBN 978-1-4244-5653-6
Conference ICIP
Notes ADAS
Approved no
Call Number ADAS @ adas @ ADS2009
Serial 1201

Author Francesco Ciompi; Oriol Pujol; Oriol Rodriguez-Leor; Angel Serrano; J. Mauri; Petia Radeva
Title On in-vitro and in-vivo IVUS data fusion
Type Conference Article
Year 2009
Publication 12th International Conference of the Catalan Association for Artificial Intelligence
Volume 202
Pages 147–156
Abstract The design and validation of an automatic plaque characterization technique based on Intravascular Ultrasound (IVUS) usually requires a data ground-truth. The histological analysis of post-mortem coronary arteries is commonly assumed to be the state-of-the-art process for the extraction of a reliable data set of atherosclerotic plaques. Unfortunately, the amount of data provided by this technique is usually small, due to the difficulty of collecting post-mortem cases and to tissue spoiling phenomena during histological analysis. In this paper we tackle the process of fusing in-vivo and in-vitro IVUS data, starting with the analysis of recently proposed approaches for the creation of an enhanced IVUS data set; furthermore, we propose a new approach, named pLDS, based on semi-supervised learning with a data selection criterion. The enhanced data set obtained by each of the analyzed approaches is used to train a classifier for tissue characterization purposes. Finally, the discriminative power of each classifier is quantitatively assessed and compared by classifying a data set of validated in-vitro IVUS data.
Address Cardona, Spain
ISBN 978-1-60750-061-2
Conference CCIA
Notes MILAB;HuPBA
Approved no
Call Number BCNPCL @ bcnpcl @ CPR2009d
Serial 1204

Author Nicola Bellotto; Eric Sommerlade; Ben Benfold; Charles Bibby; I. Reid; Daniel Roth; Luc Van Gool; Carles Fernandez; Jordi Gonzalez
Title A Distributed Camera System for Multi-Resolution Surveillance
Type Conference Article
Year 2009
Publication 3rd ACM/IEEE International Conference on Distributed Smart Cameras
DOI 10.1109/ICDSC.2009.5289413
Abstract We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communications and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database. Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines whether active zoom cameras should be dispatched to observe a particular target, and this message is effected by writing demands into another database table. We show results from a real implementation of the system comprising one static camera overviewing the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its feasibility for multi-camera intelligent surveillance systems.
Address Como, Italy
Conference ICDSC
Approved no
Call Number ISE @ ise @ BSB2009
Serial 1205

Author Mikhail Mozerov; Ariel Amato; Xavier Roca
Title Occlusion Handling in Trinocular Stereo using Composite Disparity Space Image
Type Conference Article
Year 2009
Publication 19th International Conference on Computer Graphics and Vision
Pages 69–73
Abstract In this paper we propose a method that improves occlusion handling in stereo matching using trinocular stereo. The main idea is based on the assumption that any region occluded in one matched stereo pair (middle-left images) is in general not occluded in the opposite matched pair (middle-right images). The two disparity space images (DSI) can then be merged into one composite DSI. The proposed integration differs from the known approach that uses a cumulative cost. A dense disparity map is obtained with a global optimization algorithm using the proposed composite DSI. The experimental results are evaluated on the Middlebury data set, showing high performance of the proposed algorithm, especially in the occluded regions. One of the top positions in the ranking on the Middlebury website confirms that the performance of our method is competitive with the best stereo matching methods.
Address Moscow, Russia
ISBN 978-5-317-02975-3
Conference GRAPHICON
Notes ISE
Approved no
Call Number ISE @ ise @ MAR2009b
Serial 1207

Author Ivan Huerta; Michael Holte; Thomas B. Moeslund; Jordi Gonzalez
Title Detection and Removal of Chromatic Moving Shadows in Surveillance Scenarios
Type Conference Article
Year 2009
Publication 12th International Conference on Computer Vision
Pages 1499–1506
Abstract Segmentation in the surveillance domain has to deal with shadows to avoid distortions when detecting moving objects. Most segmentation approaches dealing with shadow detection are typically restricted to penumbra shadows. Therefore, such techniques cannot cope well with umbra shadows, which are consequently usually detected as part of moving objects. In this paper we present a novel technique based on gradient and colour models for separating chromatic moving cast shadows from detected moving objects. Firstly, both a chromatic invariant colour cone model and an invariant gradient model are built to perform automatic segmentation while detecting potential shadows. In a second step, regions corresponding to potential shadows are grouped by considering “a bluish effect” and an edge partitioning. Lastly, (i) temporal similarities between textures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for all potential shadow regions in order to finally identify umbra shadows. Unlike other approaches, our method does not make any a priori assumptions about camera location, surface geometries, surface textures, shapes and types of shadows, objects, and background. Experimental results show the performance and accuracy of our approach for different shadowed materials and illumination conditions.
Address Kyoto, Japan
ISSN 1550-5499
ISBN 978-1-4244-4420-5
Conference ICCV
Approved no
Call Number ISE @ ise @ HHM2009
Serial 1213