Author Eduard Vazquez; Ramon Baldrich
  Title Non-supervised goodness measure for image segmentation Type Conference Article
  Year 2010 Publication Proceedings of The CREATE 2010 Conference Abbreviated Journal  
  Volume Issue Pages 334–335  
  Keywords  
  Abstract  
  Address Gjovik, Norway  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CREATE  
  Notes CIC Approved no  
  Call Number CAT @ cat @ VaB2010 Serial 1299  
 

 
Author Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
  Title 3D Scene Priors for Road Detection Type Conference Article
  Year 2010 Publication 23rd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 57–64  
  Keywords road detection  
  Abstract Vision-based road detection is important in different areas of computer vision such as autonomous driving, car collision warning and pedestrian crossing detection. However, current vision-based road detection methods are usually based on low-level features and they assume structured roads, road homogeneity, and uniform lighting conditions. Therefore, in this paper, contextual 3D information is used in addition to low-level cues. Low-level photometric invariant cues are derived from the appearance of roads. Contextual cues used include horizon lines, vanishing points, 3D scene layout and 3D road stages. Moreover, temporal road cues are included. All these cues are sensitive to different imaging conditions and hence are considered weak cues. Therefore, they are combined to improve the overall performance of the algorithm. To this end, the low-level, contextual and temporal cues are combined in a Bayesian framework to classify road sequences. Large-scale experiments on road sequences show that the road detection method is robust to varying imaging conditions, road types, and scenarios (tunnels, urban and highway). Further, using the combined cues outperforms all other individual cues. Finally, the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods. (See the illustrative sketch after this record.)
  Address San Francisco; CA; USA; June 2010  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1063-6919 ISBN 978-1-4244-6984-0 Medium  
  Area Expedition Conference CVPR  
  Notes ADAS;ISE Approved no  
  Call Number ADAS @ adas @ AGL2010a Serial 1302  
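The record above combines low-level, contextual and temporal road cues in a Bayesian framework. Below is a minimal sketch of that idea under a naive conditional-independence assumption with a uniform road prior; the cue maps and their names are hypothetical placeholders, not the authors' actual cues or implementation.

```python
import numpy as np

def fuse_weak_cues(cue_probs):
    """Combine per-pixel P(road | cue_i) maps with a naive-Bayes (logit-sum) rule,
    assuming conditionally independent cues and a uniform road prior."""
    eps = 1e-6
    logit_sum = np.zeros_like(cue_probs[0], dtype=float)
    for p in cue_probs:
        p = np.clip(p, eps, 1.0 - eps)
        logit_sum += np.log(p) - np.log(1.0 - p)
    return 1.0 / (1.0 + np.exp(-logit_sum))

# Toy usage: three hypothetical weak-cue maps for a small image.
h, w = 4, 4
color_cue = np.full((h, w), 0.70)     # photometric-invariant appearance cue
horizon_cue = np.full((h, w), 0.60)   # pixels below an estimated horizon line
temporal_cue = np.full((h, w), 0.55)  # agreement with the previous frame
posterior = fuse_weak_cues([color_cue, horizon_cue, temporal_cue])
road_mask = posterior > 0.5
```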
 

 
Author Mohammad Rouhani; Angel Sappa
  Title Relaxing the 3L Algorithm for an Accurate Implicit Polynomial Fitting Type Conference Article
  Year 2010 Publication 23rd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 3066-3072  
  Keywords  
  Abstract This paper presents a novel method to increase the accuracy of linear fitting of implicit polynomials. The proposed method is based on the 3L algorithm philosophy. The novelty lies in the relaxation of the additional constraints already imposed by the 3L algorithm. Hence, the accuracy of the final solution is increased due to the proper adjustment of the expected values in the aforementioned additional constraints. Although iterative, the proposed approach solves the fitting problem within a linear framework, which is independent of threshold tuning. Experimental results, both in 2D and 3D, showing improvements in the accuracy of the fitting are presented. Comparisons are provided with both state-of-the-art algorithms and a geometry-based (non-linear) fitting, which is used as ground truth. (See the illustrative sketch after this record.)
  Address San Francisco; CA; USA; June 2010  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1063-6919 ISBN 978-1-4244-6984-0 Medium  
  Area Expedition Conference CVPR  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ RoS2010a Serial 1303  
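The 3L idea referenced above turns implicit polynomial fitting into a linear least-squares problem by constraining the data points to the zero level set and two offset layers to small non-zero values. The sketch below illustrates this for a degree-2 polynomial in 2D; the inner/outer layers are built by crudely scaling the points about their centroid, which is a simplification of the 3L construction and not the relaxation scheme proposed in the paper.

```python
import numpy as np

def monomials_deg2(x, y):
    # Monomial basis [1, x, y, x^2, xy, y^2] of a degree-2 implicit polynomial.
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)

def fit_implicit_3l(points, c=0.05):
    """3L-style linear fit: data points are constrained to f = 0 while an inner
    and an outer offset layer are constrained to -c and +c respectively."""
    ctr = points.mean(axis=0)
    inner = ctr + (points - ctr) * (1.0 - c)   # crude inward layer (assumption)
    outer = ctr + (points - ctr) * (1.0 + c)   # crude outward layer (assumption)
    X = np.vstack([points, inner, outer])
    A = monomials_deg2(X[:, 0], X[:, 1])
    b = np.concatenate([np.zeros(len(points)),
                        -c * np.ones(len(points)),
                        +c * np.ones(len(points))])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef   # f(x, y) = coef . [1, x, y, x^2, xy, y^2]

# Toy usage: noisy samples of a unit circle.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t)] + 0.01 * np.random.randn(200, 2)
coef = fit_implicit_3l(pts)
```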
 

 
Author Javier Marin; David Vazquez; David Geronimo; Antonio Lopez
  Title Learning Appearance in Virtual Scenarios for Pedestrian Detection Type Conference Article
  Year 2010 Publication 23rd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 137–144  
  Keywords Pedestrian Detection; Domain Adaptation  
  Abstract Detecting pedestrians in images is a key functionality to avoid vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? (Fig. 1). Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then learn appearance-based pedestrian classifiers using HOG and linear SVM. We test such classifiers on a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking. This dataset contains real-world images acquired from a moving car. The obtained result is compared with that of a classifier learnt from samples coming from real images. The comparison reveals that, although the virtual samples were not specially selected, both virtual- and real-based training give rise to classifiers of similar performance. (See the illustrative sketch after this record.)
  Address San Francisco; CA; USA; June 2010  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title Learning Appearance in Virtual Scenarios for Pedestrian Detection  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1063-6919 ISBN 978-1-4244-6984-0 Medium  
  Area Expedition Conference CVPR  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ MVG2010 Serial 1304  
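The classifiers above are learnt with HOG features and a linear SVM. A minimal sketch of that training pipeline with scikit-image and scikit-learn follows; the window size, HOG parameters, regularization constant and the random stand-in data are assumptions, not the settings or data used in the paper.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_descriptor(window):
    """HOG descriptor of a 128x64 grayscale window (Dalal-Triggs-style settings)."""
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

# Random stand-ins for cropped pedestrian / background windows.
rng = np.random.default_rng(0)
pos = rng.random((20, 128, 64))   # pedestrian windows (placeholder)
neg = rng.random((20, 128, 64))   # background windows (placeholder)
X = np.array([hog_descriptor(w) for w in np.concatenate([pos, neg])])
y = np.array([1] * len(pos) + [0] * len(neg))

clf = LinearSVC(C=0.01).fit(X, y)       # linear SVM pedestrian classifier
score = clf.decision_function(X[:1])    # detection score for one window
```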
 

 
Author David Augusto Rojas; Joost Van de Weijer; Theo Gevers
  Title Color Edge Saliency Boosting using Natural Image Statistics Type Conference Article
  Year 2010 Publication 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science Abbreviated Journal  
  Volume Issue Pages 228–234  
  Keywords  
  Abstract State-of-the-art methods for image matching, content-based retrieval and recognition use local features. Most of these still exploit only the luminance information for detection. The color saliency boosting algorithm has provided an efficient method to exploit the saliency of color edges based on information theory. However, during the design of this algorithm, some issues were not addressed in depth: (1) the method ignored the underlying distribution of derivatives in natural images; (2) the dependence of the information content of color-boosted edges on their spatial derivatives has not been quantitatively established; (3) to evaluate the luminance and color contributions to edge saliency, a parameter gradually balancing both contributions is required.
We introduce a novel algorithm, based on the principles of independent component analysis, which models the first-order derivatives of color natural images by a generalized Gaussian distribution. Furthermore, using this probability model we show that for images with a Laplacian distribution, which is a particular case of the generalized Gaussian distribution, the magnitudes of color-boosted edges reflect their corresponding information content. In order to evaluate the impact of color edge saliency in real-world applications, we introduce an extension of the Laplacian-of-Gaussian detector to color, and its performance for image matching is evaluated. Our experiments show that our approach provides more discriminative regions in comparison with the original detector. (See the illustrative sketch after this record.)
 
  Address Joensuu, Finland  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 9781617388897 Medium  
  Area Expedition Conference CGIV/MCS  
  Notes ISE Approved no  
  Call Number CAT @ cat @ RWG2010 Serial 1306  
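Color saliency boosting rescales color derivatives so that statistically rare (information-rich) chromatic edges are amplified. The sketch below whitens first-order color derivatives with their empirical covariance, a PCA-based stand-in for the information-theoretic boosting transform and the generalized-Gaussian modelling discussed in the paper.

```python
import numpy as np
from scipy import ndimage

def boosted_color_edges(img):
    """Saliency-boosted edge strength for an RGB float image of shape (H, W, 3).

    First-order derivatives of the three channels are decorrelated and
    variance-normalized (whitened); the boosted edge magnitude is the norm of
    the whitened derivative vector.
    """
    dx = np.stack([ndimage.sobel(img[..., c], axis=1) for c in range(3)], axis=-1)
    dy = np.stack([ndimage.sobel(img[..., c], axis=0) for c in range(3)], axis=-1)
    d = np.concatenate([dx.reshape(-1, 3), dy.reshape(-1, 3)], axis=0)
    cov = np.cov(d, rowvar=False) + 1e-8 * np.eye(3)
    evals, evecs = np.linalg.eigh(cov)
    whiten = evecs / np.sqrt(evals)      # scale each decorrelated axis to unit variance
    wx = dx.reshape(-1, 3) @ whiten
    wy = dy.reshape(-1, 3) @ whiten
    mag = np.sqrt((wx ** 2 + wy ** 2).sum(axis=1))
    return mag.reshape(img.shape[:2])

# Toy usage on a random image.
edges = boosted_color_edges(np.random.rand(64, 64, 3))
```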
 

 
Author Jaime Moreno; Xavier Otazu; Maria Vanrell
  Title Local Perceptual Weighting in JPEG2000 for Color Images Type Conference Article
  Year 2010 Publication 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science Abbreviated Journal  
  Volume Issue Pages 255–260  
  Keywords  
  Abstract The aim of this work is to explain how to apply perceptual concepts to define a perceptual pre-quantizer and to improve the JPEG2000 compressor. The approach consists of quantizing wavelet transform coefficients using some properties of human visual system behavior. Noise is fatal to image compression performance, because it is both annoying for the observer and consumes excessive bandwidth when the imagery is transmitted. Perceptual pre-quantization reduces imperceptible details and thus improves both the visual impression and the transmission properties. The comparison between JPEG2000 without and with perceptual pre-quantization shows that the latter is less favorable in PSNR, but the recovered image is more strongly compressed at the same or even better visual quality, measured with a weighted PSNR. Perceptual criteria were taken from CIWaM (Chromatic Induction Wavelet Model). (See the illustrative sketch after this record.)
  Address Joensuu, Finland  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 9781617388897 Medium  
  Area Expedition Conference CGIV/MCS  
  Notes CIC Approved no  
  Call Number CAT @ cat @ MOV2010a Serial 1307  
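The evaluation above relies on a weighted PSNR in which perceptually important pixels count more than others. Below is a minimal sketch of a generic weighted PSNR; the uniform weight map in the usage example is only a placeholder, not the CIWaM-derived weighting used in the paper.

```python
import numpy as np

def weighted_psnr(reference, distorted, weights, peak=255.0):
    """PSNR (dB) with a per-pixel weight map; with uniform weights this reduces
    to the ordinary PSNR."""
    w = weights / weights.sum()
    wmse = np.sum(w * (reference.astype(float) - distorted.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / wmse)

# Toy usage with a flat weight map (placeholder for a perceptual weight map).
ref = np.random.randint(0, 256, (32, 32)).astype(float)
dist = ref + np.random.normal(0.0, 2.0, ref.shape)
print(weighted_psnr(ref, dist, np.ones_like(ref)))
```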
 

 
Author Jaime Moreno; Xavier Otazu; Maria Vanrell
  Title Contribution of CIWaM in JPEG2000 Quantization for Color Images Type Conference Article
  Year 2010 Publication Proceedings of The CREATE 2010 Conference Abbreviated Journal  
  Volume Issue Pages 132–136  
  Keywords  
  Abstract The aim of this work is to explain how to apply perceptual concepts to define a perceptual pre-quantizer and to improve the JPEG2000 compressor. The approach consists of quantizing wavelet transform coefficients using some properties of human visual system behavior. Noise is fatal to image compression performance, because it is both annoying for the observer and consumes excessive bandwidth when the imagery is transmitted. Perceptual pre-quantization reduces imperceptible details and thus improves both the visual impression and the transmission properties. The comparison between JPEG2000 without and with perceptual pre-quantization shows that the latter is less favorable in PSNR, but the recovered image is more strongly compressed at the same or even better visual quality, measured with a weighted PSNR. Perceptual criteria were taken from CIWaM (Chromatic Induction Wavelet Model).
  Address Gjovik, Norway
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CREATE  
  Notes CIC Approved no  
  Call Number CAT @ cat @ MOV2010b Serial 1308  
 

 
Author Fadi Dornaika; Bogdan Raducanu
  Title Single Snapshot 3D Head Pose Initialization for Tracking in Human Robot Interaction Scenario Type Conference Article
  Year 2010 Publication 1st International Workshop on Computer Vision for Human-Robot Interaction Abbreviated Journal  
  Volume Issue Pages 32–39  
  Keywords 1st International Workshop on Computer Vision for Human-Robot Interaction, in conjunction with IEEE CVPR 2010  
  Abstract This paper presents an automatic 3D head pose initialization scheme for a real-time face tracker with application to human-robot interaction. It has two main contributions. First, we propose an automatic 3D head pose and person-specific face shape estimation based on a 3D deformable model. The proposed approach serves to initialize our real-time 3D face tracker. What makes this contribution very attractive is that the initialization step can cope with faces under arbitrary pose, so it is not limited to near-frontal views. Second, the previous framework is used to develop an application in which the orientation of an AIBO's camera can be controlled through imitation of the user's head pose. In our scenario, this application is used to build panoramic images from overlapping snapshots. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
 
  Address San Francisco; CA; USA; June 2010  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2160-7508 ISBN 978-1-4244-7029-7 Medium  
  Area Expedition Conference CVPRW  
  Notes OR;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ DoR2010a Serial 1309  
 

 
Author Bogdan Raducanu; Fadi Dornaika
  Title Dynamic Facial Expression Recognition Using Laplacian Eigenmaps-Based Manifold Learning Type Conference Article
  Year 2010 Publication IEEE International Conference on Robotics and Automation Abbreviated Journal  
  Volume Issue Pages 156–161  
  Keywords  
  Abstract In this paper, we propose an integrated framework for tracking, modelling and recognition of facial expressions. The main contributions are: (i) a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker; (ii) the complexity of the non-linear facial expression space is modelled through a manifold whose structure is learned using Laplacian Eigenmaps, and the projected facial expressions are afterwards recognized with a Nearest Neighbor classifier; (iii) with the proposed approach, we developed an application for an AIBO robot in which it mirrors the perceived facial expression. (See the illustrative sketch after this record.)
  Address Anchorage; AK; USA
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1050-4729 ISBN 978-1-4244-5038-1 Medium  
  Area Expedition Conference ICRA  
  Notes OR; MV Approved no  
  Call Number BCNPCL @ bcnpcl @ RaD2010 Serial 1310  
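The recognition step above projects expressions onto a manifold learned with Laplacian Eigenmaps and classifies them with a Nearest Neighbor rule. A minimal sketch using scikit-learn's SpectralEmbedding (an implementation of Laplacian Eigenmaps) follows; the feature vectors, label set and embedding dimension are placeholders, and a real tracker would additionally need an out-of-sample extension to project unseen frames.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 30))            # placeholder facial-action feature vectors
y = rng.integers(0, 6, size=200)     # placeholder labels for six expressions

# Laplacian Eigenmaps: learn a low-dimensional manifold of the expression space.
# SpectralEmbedding has no transform() for unseen samples, hence the note above.
embedding = SpectralEmbedding(n_components=5, n_neighbors=10).fit_transform(X)

train, test = np.arange(150), np.arange(150, 200)
knn = KNeighborsClassifier(n_neighbors=1).fit(embedding[train], y[train])
accuracy = knn.score(embedding[test], y[test])
```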
 

 
Author David Aldavert; Arnau Ramisa; Ramon Lopez de Mantaras; Ricardo Toledo
  Title Fast and Robust Object Segmentation with the Integral Linear Classifier Type Conference Article
  Year 2010 Publication 23rd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 1046–1053  
  Keywords  
  Abstract We propose an efficient method, built on the popular Bag of Features approach, that obtains robust multiclass pixel-level object segmentation of an image in less than 500 ms, with results comparable to or better than most state-of-the-art methods. We introduce the Integral Linear Classifier (ILC), which can readily obtain the classification score for any image sub-window with only 6 additions and 1 product by fusing the accumulation and classification steps in a single operation. In order to design a method as efficient as possible, our building blocks are carefully selected from the quickest in the state of the art. More precisely, we evaluate the performance of three popular local descriptors, which can be computed very efficiently using integral images, and two fast quantization methods: the Hierarchical K-Means and the Extremely Randomized Forest. Finally, we explore the utility of adding spatial bins to the Bag of Features histograms and of cascade classifiers to improve the obtained segmentation. Our method is compared to the state of the art on the difficult Graz-02 and PASCAL 2007 Segmentation Challenge datasets. (See the illustrative sketch after this record.)
  Address San Francisco; CA; USA; June 2010  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1063-6919 ISBN 978-1-4244-6984-0 Medium  
  Area Expedition Conference CVPR  
  Notes ADAS Approved no  
  Call Number Admin @ si @ ARL2010a Serial 1311  
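The Integral Linear Classifier above fuses accumulation and classification: each pixel contributes the linear-classifier weight of its visual word, these contributions are accumulated in an integral image, and the score of any sub-window is then recovered with a few additions. The sketch below illustrates the mechanism; the codebook assignments, weights and bias are random placeholders rather than a trained model, and the exact operation count in the paper also covers the bias and normalization terms.

```python
import numpy as np

def integral_image(values):
    """Summed-area table padded with a zero top row and left column."""
    ii = np.zeros((values.shape[0] + 1, values.shape[1] + 1))
    ii[1:, 1:] = values.cumsum(axis=0).cumsum(axis=1)
    return ii

def window_score(ii, top, left, bottom, right, bias=0.0):
    """Linear classification score of the window [top:bottom, left:right),
    recovered from the integral image with four lookups plus the bias."""
    s = ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]
    return s + bias

# Toy usage: per-pixel visual-word indices and per-word linear weights.
rng = np.random.default_rng(0)
words = rng.integers(0, 100, size=(120, 160))   # codebook index of each pixel
w = rng.normal(size=100)                        # linear classifier weight per word
ii = integral_image(w[words])                   # integral of per-pixel scores
print(window_score(ii, 10, 20, 74, 68, bias=-0.3) > 0)
```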
 

 
Author Michal Drozdzal; Laura Igual; Petia Radeva; Jordi Vitria; C. Malagelada; Fernando Azpiroz
  Title Aligning Endoluminal Scene Sequences in Wireless Capsule Endoscopy Type Conference Article
  Year 2010 Publication IEEE Computer Society Workshop on Mathematical Methods in Biomedical Image Analysis Abbreviated Journal  
  Volume Issue Pages 117–124  
  Keywords  
  Abstract Intestinal motility analysis is an important examination in the detection of various intestinal malfunctions. One of the big challenges of automatic motility analysis is how to compare sequences of images and extract dynamic patterns while taking into account the high deformability of the intestine wall as well as the capsule motion. From a clinical point of view, the ability to align endoluminal scene sequences will help to find regions of similar intestinal activity and in this way will provide valuable information on intestinal motility problems. This work, for the first time, addresses the problem of aligning endoluminal sequences taking into account the motion and structure of the intestine. To describe motility in the sequence, we propose different descriptors based on the Sift Flow algorithm, namely: (1) Histograms of Sift Flow Directions to describe the flow course, (2) Sift Descriptors to represent the intestine structure in the image, and (3) Sift Flow Magnitude to quantify intestine deformation. We show that the merge of all three descriptors provides robust information for sequence description in terms of motility. Moreover, we develop a novel methodology to rank the intestinal sequences based on expert feedback about the relevance of the results. The experimental results show that the selected descriptors are useful for alignment and similarity description, and the proposed method allows the analysis of the WCE. (See the illustrative sketch after this record.)
  Address San Francisco; CA; USA; June 2010  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2160-7508 ISBN 978-1-4244-7029-7 Medium  
  Area Expedition Conference MMBIA  
  Notes OR;MILAB;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ DIR2010 Serial 1316  
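Two of the descriptors above are a histogram of flow directions and a flow-magnitude statistic. Since SIFT Flow is not available in standard libraries, the sketch below uses OpenCV's Farneback dense optical flow as a stand-in to show how such descriptors can be assembled; the parameters and synthetic frames are assumptions.

```python
import cv2
import numpy as np

def flow_descriptors(prev_gray, next_gray, n_bins=8):
    """Histogram of dense-flow directions (magnitude-weighted) and mean flow
    magnitude between two grayscale frames; Farneback flow stands in for SIFT Flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # angle in radians
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    hist = hist / (hist.sum() + 1e-8)        # direction histogram (flow course)
    return hist, float(mag.mean())           # plus a deformation statistic

# Toy usage with synthetic frames simulating a small horizontal shift.
f0 = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
f1 = np.roll(f0, 2, axis=1)
direction_hist, mean_magnitude = flow_descriptors(f0, f1)
```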
 

 
Author Wenjuan Gong; Andrew Bagdanov; Xavier Roca; Jordi Gonzalez
  Title Automatic Key Pose Selection for 3D Human Action Recognition Type Conference Article
  Year 2010 Publication 6th International Conference on Articulated Motion and Deformable Objects Abbreviated Journal  
  Volume 6169 Issue Pages 290–299  
  Keywords  
  Abstract This article describes a novel approach to the modeling of human actions in 3D. The method we propose is based on a “bag of poses” model that represents human actions as histograms of key-pose occurrences over the course of a video sequence. Actions are first represented as 3D poses using a sequence of 36 direction cosines corresponding to the angles that 12 joints form with the world coordinate frame in an articulated human body model. These pose representations are then projected to three-dimensional, action-specific principal eigenspaces which we refer to as aSpaces. We introduce a method for key-pose selection based on a local-motion energy optimization criterion and we show that this method is more stable and more resistant to noisy data than other key-pose selection criteria for action recognition. (See the illustrative sketch after this record.)
  Address  
  Corporate Author Thesis  
  Publisher Springer Verlag Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-14060-0 Medium  
  Area Expedition Conference AMDO  
  Notes ISE Approved no  
  Call Number DAG @ dag @ GBR2010 Serial 1317  
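The “bag of poses” model above encodes each frame by direction cosines of body segments and summarizes an action as a histogram of key-pose occurrences. The sketch below shows the general shape of that pipeline; it uses k-means centroids as key poses instead of the paper's local-motion-energy criterion, skips the aSpaces projection, and assumes a skeleton whose consecutive joints form bones.

```python
import numpy as np
from sklearn.cluster import KMeans

def direction_cosines(joints):
    """Direction cosines of each bone vector with the world axes.

    joints: (J, 3) array of 3D joint positions, ordered so that consecutive
    joints form a bone (a simplifying assumption about the skeleton layout).
    Returns a flat vector with 3 cosines per bone.
    """
    bones = np.diff(joints, axis=0)
    bones = bones / (np.linalg.norm(bones, axis=1, keepdims=True) + 1e-8)
    return bones.ravel()   # cosine with the x, y and z axes for every bone

# Toy sequence: 120 frames of a 13-joint skeleton (12 bones -> 36 cosines).
rng = np.random.default_rng(0)
poses = np.array([direction_cosines(rng.random((13, 3))) for _ in range(120)])

# Key poses via k-means (stand-in for the energy-based selection in the paper).
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(poses)
labels = kmeans.predict(poses)
bag_of_poses = np.bincount(labels, minlength=8) / len(labels)   # action histogram
```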
 

 
Author Albert Gordo; Alicia Fornes; Ernest Valveny; Josep Llados
  Title A Bag of Notes Approach to Writer Identification in Old Handwritten Music Scores Type Conference Article
  Year 2010 Publication 9th IAPR International Workshop on Document Analysis Systems Abbreviated Journal  
  Volume Issue Pages 247–254  
  Keywords  
  Abstract Determining the authorship of a document, namely writer identification, can be an important source of information for document categorization. Contrary to text documents, the identification of the writer of graphical documents is still a challenge. In this paper we present a robust approach for writer identification in a particular kind of graphical documents, old music scores. This approach adapts the bag of visual terms method for coping with graphic documents. The identification is performed only using the graphical music notation. For this purpose, we generate a graphic vocabulary without recognizing any music symbols, and consequently, avoiding the difficulties in the recognition of hand-drawn symbols in old and degraded documents. The proposed method has been tested on a database of old music scores from the 17th to 19th centuries, achieving very high identification rates.  
  Address Boston; USA;  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-60558-773-8 Medium  
  Area Expedition Conference DAS  
  Notes DAG Approved no  
  Call Number DAG @ dag @ GFV2010 Serial 1320  
 

 
Author Alicia Fornes; Josep Llados
  Title A Symbol-dependent Writer Identification Approach in Old Handwritten Music Scores Type Conference Article
  Year 2010 Publication 12th International Conference on Frontiers in Handwriting Recognition Abbreviated Journal  
  Volume Issue Pages 634 - 639  
  Keywords  
  Abstract Writer identification consists of determining the writer of a piece of handwriting from a set of writers. In this paper we introduce a symbol-dependent approach for identifying the writer of old music scores, which is based on two symbol recognition methods. The main idea is to use the Blurred Shape Model descriptor and a DTW-based method for detecting, recognizing and describing the music clefs and notes. The proposed approach has been evaluated on a database of old music scores, achieving very high writer identification rates. (See the illustrative sketch after this record.)
  Address Kolkata, India
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4244-8353-2 Medium  
  Area Expedition Conference ICFHR  
  Notes DAG Approved no  
  Call Number DAG @ dag @ FoL2010 Serial 1321  
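One of the two symbol recognition methods above is DTW-based matching of symbol feature sequences. Below is a standard dynamic time warping distance between two sequences of feature vectors; the actual features used for clefs and notes (and the Blurred Shape Model descriptor) are not reproduced here, and the random sequences are placeholders.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between sequences a (n, d) and b (m, d)
    with a Euclidean local cost, normalized by the sequence lengths."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

# Toy usage: compare two column-wise feature sequences of handwritten symbols.
s1 = np.random.rand(40, 5)
s2 = np.random.rand(55, 5)
print(dtw_distance(s1, s2))
```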
 

 
Author C. Alejandro Parraga; Ramon Baldrich; Maria Vanrell
  Title Accurate Mapping of Natural Scenes Radiance to Cone Activation Space: A New Image Dataset Type Conference Article
  Year 2010 Publication 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science Abbreviated Journal  
  Volume Issue Pages 50–57  
  Keywords  
  Abstract The characterization of trichromatic cameras is usually done in terms of a device-independent color space, such as the CIE 1931 XYZ space. This is indeed convenient since it allows the testing of results against colorimetric measures. We have characterized our camera to represent human cone activation by mapping the camera sensor's (RGB) responses to human (LMS) ones through a polynomial transformation, which can be “customized” according to the types of scenes we want to represent. Here we present a method to test the accuracy of the camera measurements and a study of how the choice of training reflectances for the polynomial may alter the results. (See the illustrative sketch after this record.)
  Address Joensuu, Finland  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 9781617388897 Medium  
  Area Expedition Conference CGIV/MCS  
  Notes CIC Approved no  
  Call Number CAT @ cat @ PBV2010a Serial 1322  
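The characterization above maps camera RGB responses to cone LMS activations through a polynomial transformation fitted on training reflectances. A minimal least-squares sketch with a second-order polynomial follows; the basis, the training pairs and their size are assumptions, not the calibration data or polynomial order used in the paper.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of camera RGB responses:
    [R, G, B, R^2, G^2, B^2, RG, RB, GB, 1]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b, r * r, g * g, b * b,
                     r * g, r * b, g * b, np.ones_like(r)], axis=1)

def fit_rgb_to_lms(rgb_train, lms_train):
    """Least-squares polynomial mapping from camera RGB to cone LMS activations."""
    A = poly_features(rgb_train)
    M, *_ = np.linalg.lstsq(A, lms_train, rcond=None)   # (10, 3) coefficient matrix
    return M

# Toy usage: random stand-ins for measured training patches.
rng = np.random.default_rng(0)
rgb = rng.random((100, 3))              # camera responses to training reflectances
lms = rgb @ rng.random((3, 3))          # placeholder for measured LMS values
M = fit_rgb_to_lms(rgb, lms)
lms_pred = poly_features(rgb) @ M       # predicted cone activations
```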