Author Sergio Escalera; Alicia Fornes; Oriol Pujol; Josep Llados; Petia Radeva
Title Circular Blurred Shape Model for Multiclass Symbol Recognition Type Journal Article
Year 2011 Publication IEEE Transactions on Systems, Man and Cybernetics (Part B) (IEEE) Abbreviated Journal TSMCB
Volume 41 Issue 2 Pages 497-506
Keywords
Abstract In this paper, we propose a circular blurred shape model descriptor to deal with the problem of symbol detection and classification as a particular case of object recognition. The feature extraction is performed by capturing the spatial arrangement of significant object characteristics in a correlogram structure. The shape information from objects is shared among correlogram regions, where a prior blurring degree defines the level of distortion allowed in the symbol, making the descriptor tolerant to irregular deformations. Moreover, the descriptor is rotation invariant by definition. We validate the effectiveness of the proposed descriptor in both the multiclass symbol recognition and symbol detection domains. In order to perform the symbol detection, the descriptors are learned using a cascade of classifiers. In the case of multiclass categorization, the new feature space is learned using a set of binary classifiers which are embedded in an error-correcting output code design. The results over four symbol data sets show the significant improvements of the proposed descriptor compared to the state-of-the-art descriptors. In particular, the results are even more significant in those cases where the symbols suffer from elastic deformations.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1083-4419 ISBN Medium
Area Expedition Conference
Notes MILAB; DAG; HuPBA Approved no
Call Number Admin @ si @ EFP2011 Serial 1784
Permanent link to this record
 

 
Author Arjan Gijsenij; Theo Gevers
Title Color Constancy Using Natural Image Statistics and Scene Semantics Type Journal Article
Year 2011 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 33 Issue 4 Pages 687-698
Keywords
Abstract Existing color constancy methods are all based on specific assumptions such as the spatial and spectral characteristics of images. As a consequence, no algorithm can be considered as universal. However, with the large variety of available methods, the question is how to select the method that performs best for a specific image. To achieve selection and combining of color constancy algorithms, in this paper natural image statistics are used to identify the most important characteristics of color images. Then, based on these image characteristics, the proper color constancy algorithm (or best combination of algorithms) is selected for a specific image. To capture the image characteristics, the Weibull parameterization (e.g., grain size and contrast) is used. It is shown that the Weibull parameterization is related to the image attributes to which the used color constancy methods are sensitive. An MoG-classifier is used to learn the correlation and weighting between the Weibull-parameters and the image attributes (number of edges, amount of texture, and SNR). The output of the classifier is the selection of the best performing color constancy method for a certain image. Experimental results show a large improvement over state-of-the-art single algorithms. On a data set consisting of more than 11,000 images, an increase in color constancy performance up to 20 percent (median angular error) can be obtained compared to the best-performing single algorithm. Further, it is shown that for certain scene categories, one specific color constancy algorithm can be used instead of the classifier considering several algorithms.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ GiG2011 Serial 1724
Permanent link to this record
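The Weibull parameterization mentioned in this abstract summarizes an image's edge-magnitude distribution by a shape parameter (grain size) and a scale parameter (contrast). A minimal NumPy sketch of a maximum-likelihood Weibull fit, using the standard fixed-point update for the shape parameter (the paper's exact parameterization of natural image statistics may differ):

```python
import numpy as np

def fit_weibull(x, iters=100):
    # maximum-likelihood fit of Weibull shape k and scale lam
    x = np.asarray(x, dtype=float)
    x = x[x > 0]          # Weibull support is x > 0
    lx = np.log(x)
    k = 1.0
    for _ in range(iters):
        # standard fixed-point update: 1/k = sum(x^k ln x)/sum(x^k) - mean(ln x)
        xk = x ** k
        k = 1.0 / (np.sum(xk * lx) / np.sum(xk) - lx.mean())
    lam = np.mean(x ** k) ** (1.0 / k)
    return k, lam
```

Applied to an image's gradient magnitudes, `k` and `lam` play the role of the grain-size and contrast statistics that the MoG classifier would be trained on.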
 

 
Author Koen E.A. van de Sande; Theo Gevers; Cees G.M. Snoek
Title Empowering Visual Categorization with the GPU Type Journal Article
Year 2011 Publication IEEE Transactions on Multimedia Abbreviated Journal TMM
Volume 13 Issue 1 Pages 60-70
Keywords
Abstract Visual categorization is important to manage large collections of digital images and video, where textual meta-data is often incomplete or simply unavailable. The bag-of-words model has become the most powerful method for visual categorization of images and video. Despite its high accuracy, a severe drawback of this model is its high computational cost. As the trend to increase computational power in newer CPU and GPU architectures is to increase their level of parallelism, exploiting this parallelism becomes an important direction to handle the computational cost of the bag-of-words approach. When optimizing a system based on the bag-of-words approach, the goal is to minimize the time it takes to process batches of images. Additionally, we also consider power usage as an evaluation metric. In this paper, we analyze the bag-of-words model for visual categorization in terms of computational cost and identify two major bottlenecks: the quantization step and the classification step. We address these two bottlenecks by proposing two efficient algorithms for quantization and classification by exploiting the GPU hardware and the CUDA parallel programming model. The algorithms are designed to (1) keep categorization accuracy intact, (2) decompose the problem and (3) give the same numerical results. In the experiments on large scale datasets it is shown that, by using a parallel implementation on the Geforce GTX260 GPU, classifying unseen images is 4.8 times faster than a quad-core CPU version on the Core i7 920, while giving the exact same numerical results. In addition, we show how the algorithms can be generalized to other applications, such as text retrieval and video retrieval. Moreover, when the obtained speedup is used to process extra video frames in a video retrieval benchmark, the accuracy of visual categorization is improved by 29%.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ SGS2011b Serial 1729
Permanent link to this record
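The quantization bottleneck identified in this abstract is, at its core, a nearest-centroid assignment of local descriptors to visual words. A CPU-side NumPy sketch (the paper's GPU version implements the same computation in CUDA; the function names here are illustrative):

```python
import numpy as np

def quantize(descriptors, codebook):
    # squared Euclidean distance via ||d - c||^2 = ||d||^2 - 2 d.c + ||c||^2;
    # this matrix-multiply decomposition is what maps well onto GPU hardware
    d2 = ((descriptors ** 2).sum(1)[:, None]
          - 2.0 * descriptors @ codebook.T
          + (codebook ** 2).sum(1)[None, :])
    words = d2.argmin(axis=1)
    # normalized bag-of-words histogram over the codebook
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Because the distance computation reduces to one large matrix product, batching many descriptors at once is what yields the speedups the paper reports.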
 

 
Author Maria Salamo; Sergio Escalera
Title Increasing Retrieval Quality in Conversational Recommenders Type Journal Article
Year 2011 Publication IEEE Transactions on Knowledge and Data Engineering Abbreviated Journal TKDE
Volume 99 Issue Pages 1-1
Keywords
Abstract A major task of research in conversational recommender systems is personalization. Critiquing is a common and powerful form of feedback, where a user can express her feature preferences by applying a series of directional critiques over the recommendations instead of providing specific preference values. Incremental Critiquing is a conversational recommender system that uses critiquing as feedback to efficiently personalize products. The expectation is that in each cycle the system retrieves the products that best satisfy the user’s soft product preferences from a minimal information input. In this paper, we present a novel technique that increases retrieval quality based on a combination of compatibility and similarity scores. Under the hypothesis that a user learns during the recommendation process, we propose two novel exponential reinforcement learning approaches for compatibility that take into account both the instant at which the user makes a critique and the number of satisfied critiques. Moreover, we consider that the impact of features on the similarity differs according to the preferences manifested by the user. We propose a global weighting approach that uses a common weight for nearest cases in order to focus on groups of relevant products. We show that our methodology significantly improves recommendation efficiency in four data sets of different sizes in terms of session length in comparison with state-of-the-art approaches. Moreover, our recommender shows higher robustness against noisy user data when compared to classical approaches.
Address
Corporate Author Thesis
Publisher IEEE Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1041-4347 ISBN Medium
Area Expedition Conference
Notes MILAB; HuPBA Approved no
Call Number Admin @ si @ SaE2011 Serial 1713
Permanent link to this record
 

 
Author Jose Manuel Alvarez; Antonio Lopez
Title Road Detection Based on Illuminant Invariance Type Journal Article
Year 2011 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS
Volume 12 Issue 1 Pages 184-193
Keywords road detection
Abstract By using an onboard camera, it is possible to detect the free road surface ahead of the ego-vehicle. Road detection is of high relevance for autonomous driving, road departure warning, and supporting driver-assistance systems such as vehicle and pedestrian detection. The key for vision-based road detection is the ability to classify image pixels as belonging or not to the road surface. Identifying road pixels is a major challenge due to the intraclass variability caused by lighting conditions. A particularly difficult scenario appears when the road surface has both shadowed and nonshadowed areas. Accordingly, we propose a novel approach to vision-based road detection that is robust to shadows. The novelty of our approach relies on using a shadow-invariant feature space combined with a model-based classifier. The model is built online to improve the adaptability of the algorithm to the current lighting and the presence of other vehicles in the scene. The proposed algorithm works in still images and does not depend on either road shape or temporal restrictions. Quantitative and qualitative experiments on real-world road sequences with heavy traffic and shadows show that the method is robust to shadows and lighting variations. Moreover, the proposed method provides the highest performance when compared with hue-saturation-intensity (HSI)-based algorithms.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number ADAS @ adas @ AlL2011 Serial 1456
Permanent link to this record
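Shadow-invariant feature spaces of the kind this abstract relies on are commonly obtained by projecting per-pixel log-chromaticities along a camera-dependent direction, so that pixels of one surface under sun and shadow map to similar values. A hedged sketch of that projection (the calibration angle `theta` is camera-specific and assumed known; this is the generic technique, not necessarily the authors' exact formulation):

```python
import numpy as np

def illuminant_invariant(rgb, theta):
    # rgb: H x W x 3 array of positive floats; theta in radians
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    x = np.log(r / g)  # log-chromaticity coordinates
    y = np.log(b / g)
    # project onto the direction along which illuminant changes cancel
    return x * np.cos(theta) + y * np.sin(theta)
```

Road pixels can then be classified in this 1-D space against a model built online from a seed region ahead of the vehicle, as the abstract describes.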
 

 
Author Fadi Dornaika; Jose Manuel Alvarez; Angel Sappa; Antonio Lopez
Title A New Framework for Stereo Sensor Pose through Road Segmentation and Registration Type Journal Article
Year 2011 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS
Volume 12 Issue 4 Pages 954-966
Keywords road detection
Abstract This paper proposes a new framework for real-time estimation of the onboard stereo head's position and orientation relative to the road surface, which is required for any advanced driver-assistance application. This framework can be used with all road types: highways, urban, etc. Unlike existing works that rely on feature extraction in either the image domain or 3-D space, we propose a framework that directly estimates the unknown parameters from the stream of stereo pairs' brightness. The proposed approach consists of two stages that are invoked for every stereo frame. The first stage segments the road region in one monocular view. The second stage estimates the camera pose using a featureless registration between the segmented monocular road region and the other view in the stereo pair. This paper has two main contributions. The first contribution combines a road segmentation algorithm with a registration technique to estimate the online stereo camera pose. The second contribution solves the registration using a featureless method, which is carried out using two different optimization techniques: 1) the differential evolution algorithm and 2) the Levenberg-Marquardt (LM) algorithm. We provide experiments and evaluations of performance. The results presented show the validity of our proposed framework.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1524-9050 ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number Admin @ si @ DAS2011; ADAS @ adas @ das2011a Serial 1833
Permanent link to this record
 

 
Author Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
Title Video Alignment for Change Detection Type Journal Article
Year 2011 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 20 Issue 7 Pages 1858-1869
Keywords video alignment
Abstract In this work, we address the problem of aligning two video sequences. Such alignment refers to synchronization, i.e., the establishment of temporal correspondence between frames of the first and second video, followed by spatial registration of all the temporally corresponding frames. Video synchronization and alignment have been attempted before, but most often in the relatively simple cases of fixed or rigidly attached cameras and simultaneous acquisition. In addition, restrictive assumptions have been applied, including linear time correspondence or the knowledge of the complete trajectories of corresponding scene points; to some extent, these assumptions limit the practical applicability of any solutions developed. We intend to solve the more general problem of aligning video sequences recorded by independently moving cameras that follow similar trajectories, based only on the fusion of image intensity and GPS information. The novelty of our approach is to pose the synchronization as a MAP inference problem on a Bayesian network including the observations from these two sensor types, which have been shown to be complementary. Alignment results are presented in the context of videos recorded from vehicles driving along the same track at different times, for different road types. In addition, we explore two applications of the proposed video alignment method, both based on change detection between aligned videos. One is the detection of vehicles, which could be of use in ADAS. The other is online difference spotting in videos of surveillance rounds.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; IF Approved no
Call Number DPS 2011; ADAS @ adas @ dps2011 Serial 1705
Permanent link to this record
 

 
Author Ariel Amato; Mikhail Mozerov; Andrew Bagdanov; Jordi Gonzalez
Title Accurate Moving Cast Shadow Suppression Based on Local Color Constancy Detection Type Journal Article
Year 2011 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 20 Issue 10 Pages 2954-2966
Keywords
Abstract This paper describes a novel framework for detection and suppression of properly shadowed regions for most possible scenarios occurring in real video sequences. Our approach requires no prior knowledge about the scene, nor is it restricted to specific scene structures. Furthermore, the technique can detect both achromatic and chromatic shadows even in the presence of camouflage that occurs when foreground regions are very similar in color to shadowed regions. The method exploits local color constancy properties due to reflectance suppression over shadowed regions. To detect shadowed regions in a scene, the values of the background image are divided by values of the current frame in the RGB color space. We show how this luminance ratio can be used to identify segments with low gradient constancy, which in turn distinguish shadows from foreground. Experimental results on a collection of publicly available datasets illustrate the superior performance of our method compared with the most sophisticated, state-of-the-art shadow detection algorithms. These results show that our approach is robust and accurate over a broad range of shadow types and challenging video conditions.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ AMB2011 Serial 1716
Permanent link to this record
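The core cue described in this abstract, dividing the background model by the current frame, can be sketched as follows; the thresholds are illustrative choices, not the paper's values:

```python
import numpy as np

def shadow_candidates(background, frame, low=1.05, high=2.5):
    # luminance ratio: > 1 where the current frame is darker than the background
    ratio = background.astype(float) / np.maximum(frame.astype(float), 1e-6)
    r = ratio.mean(axis=2)  # average the per-channel RGB ratios
    # candidate shadow pixels: darker than background, but not opaque-object dark
    return (r > low) & (r < high)
</n```

The paper then examines local gradient constancy of this ratio inside candidate regions to separate true shadows from dark foreground objects.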
 

 
Author Arjan Gijsenij; Theo Gevers; Joost Van de Weijer
Title Computational Color Constancy: Survey and Experiments Type Journal Article
Year 2011 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 20 Issue 9 Pages 2475-2489
Keywords computational color constancy;computer vision application;gamut-based method;learning-based method;static method;colour vision;computer vision;image colour analysis;learning (artificial intelligence);lighting
Abstract Computational color constancy is a fundamental prerequisite for many computer vision applications. This paper presents a survey of many recent developments and state-of-the-art methods. Several criteria are proposed that are used to assess the approaches. A taxonomy of existing algorithms is proposed and methods are separated into three groups: static methods, gamut-based methods, and learning-based methods. Further, the experimental setup is discussed, including an overview of publicly available data sets. Finally, various freely available methods, of which some are considered to be state-of-the-art, are evaluated on two data sets.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ISE; CIC Approved no
Call Number Admin @ si @ GGW2011 Serial 1717
Permanent link to this record
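As an example of the "static methods" group in the survey's taxonomy, the classic Grey-World algorithm estimates the illuminant as the average image color and divides it out. A minimal sketch:

```python
import numpy as np

def grey_world(img):
    # Grey-World assumption: the average reflectance in a scene is achromatic,
    # so the mean image color estimates the illuminant (up to scale)
    e = img.reshape(-1, 3).mean(axis=0)
    return e / np.linalg.norm(e)

def correct(img, e):
    # von Kries-style diagonal correction toward a neutral illuminant
    return img / (e * np.sqrt(3.0))
```

Gamut-based and learning-based methods replace the fixed Grey-World assumption with constraints learned from data, which is exactly the axis along which the survey organizes the field.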
 

 
Author Jose Seabra; Francesco Ciompi; Oriol Pujol; J. Mauri; Petia Radeva; Joao Sanchez
Title Rayleigh Mixture Model for Plaque Characterization in Intravascular Ultrasound Type Journal Article
Year 2011 Publication IEEE Transactions on Biomedical Engineering Abbreviated Journal TBME
Volume 58 Issue 5 Pages 1314-1324
Keywords
Abstract Vulnerable plaques are the major cause of carotid and coronary vascular problems, such as heart attack or stroke. A correct modeling of plaque echomorphology and composition can help the identification of such lesions. The Rayleigh distribution is widely used to describe (nearly) homogeneous areas in ultrasound images. Since plaques may contain tissues with heterogeneous regions, more complex distributions depending on multiple parameters are usually needed, such as Rice, K or Nakagami distributions. In such cases, the problem formulation becomes more complex, and the optimization procedure to estimate the plaque echomorphology is more difficult. Here, we propose to model the tissue echomorphology by means of a mixture of Rayleigh distributions, known as the Rayleigh mixture model (RMM). The problem formulation is still simple, but its ability to describe complex textural patterns is very powerful. In this paper, we present a method for the automatic estimation of the RMM parameters by means of the expectation maximization algorithm, which aims at characterizing tissue echomorphology in ultrasound (US). The performance of the proposed model is evaluated with a database of in vitro intravascular US cases. We show that the mixture coefficients and Rayleigh parameters explicitly derived from the mixture model are able to accurately describe different plaque types and to significantly improve the characterization performance of an already existing methodology.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; HuPBA Approved no
Call Number Admin @ si @ SCP2011 Serial 1712
Permanent link to this record
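The RMM estimation via expectation maximization described in this abstract is attractive partly because the M-step is analytic: the weighted Rayleigh maximum-likelihood estimate has closed form. A NumPy sketch (initialization and iteration count are illustrative choices, not the paper's):

```python
import numpy as np

def rayleigh_pdf(x, s2):
    # Rayleigh density with parameter sigma^2 = s2
    return (x / s2) * np.exp(-x ** 2 / (2.0 * s2))

def fit_rmm(x, K=2, iters=200):
    # spread the initial sigma^2 values across the data's quantiles
    s2 = np.quantile(x, np.linspace(0.2, 0.8, K)) ** 2 / 2.0
    w = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        p = w * rayleigh_pdf(x[:, None], s2[None, :])
        g = p / p.sum(axis=1, keepdims=True)
        # M-step: closed-form weighted Rayleigh maximum likelihood
        nk = g.sum(axis=0)
        w = nk / len(x)
        s2 = (g * x[:, None] ** 2).sum(axis=0) / (2.0 * nk)
    return w, np.sqrt(s2)
```

The fitted mixture weights and Rayleigh parameters are precisely the per-region descriptors the paper feeds to the plaque classifier.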
 

 
Author Eduard Vazquez; Ramon Baldrich; Joost Van de Weijer; Maria Vanrell
Title Describing Reflectances for Colour Segmentation Robust to Shadows, Highlights and Textures Type Journal Article
Year 2011 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 33 Issue 5 Pages 917-930
Keywords
Abstract The segmentation of a single material reflectance is a challenging problem due to the considerable variation in image measurements caused by the geometry of the object, shadows, and specularities. The combination of these effects has been modeled by the dichromatic reflection model. However, the application of the model to real-world images is limited due to unknown acquisition parameters and compression artifacts. In this paper, we present a robust model for the shape of a single material reflectance in histogram space. The method is based on a multilocal creaseness analysis of the histogram which results in a set of ridges representing the material reflectances. The segmentation method derived from these ridges is robust to shadows, shading, specularities, and texture in real-world images. We further complete the method by incorporating prior knowledge from image statistics, and we incorporate spatial coherence by using multiscale color contrast information. Results obtained show that our method clearly outperforms state-of-the-art segmentation methods on a widely used segmentation benchmark, having as a main characteristic its excellent performance in the presence of shadows and highlights at low computational cost.
Address Los Alamitos; CA; USA;
Corporate Author Thesis
Publisher IEEE Computer Society Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ VBW2011 Serial 1715
Permanent link to this record
 

 
Author Mario Rojas; David Masip; Jordi Vitria
Title Predicting Dominance Judgements Automatically: A Machine Learning Approach. Type Conference Article
Year 2011 Publication IEEE International Workshop on Social Behavior Analysis Abbreviated Journal
Volume Issue Pages 939-944
Keywords
Abstract The amount of multimodal devices that surround us is growing everyday. In this context, human interaction and communication have become a focus of attention and a hot topic of research. A crucial element in human relations is the evaluation of individuals with respect to facial traits, what is called a first impression. Studies based on appearance have suggested that personality can be expressed by appearance and the observer may use such information to form judgments. In the context of rapid facial evaluation, certain personality traits seem to have a more pronounced effect on the relations and perceptions inside groups. The perception of dominance has been shown to be an active part of social roles at different stages of life, and even play a part in mate selection. The aim of this paper is to study to what extent this information is learnable from the point of view of computer science. Specifically we intend to determine if judgments of dominance can be learned by machine learning techniques. We implement two different descriptors in order to assess this. The first is the histogram of oriented gradients (HOG), and the second is a probabilistic appearance descriptor based on the frequencies of grouped binary tests. State-of-the-art classification rules validate the performance of both descriptors with respect to the prediction task. Experimental results show that machine learning techniques can predict judgments of dominance rather accurately (accuracies up to 90%) and that the HOG descriptor may appropriately characterize the information necessary for such a task.
Address Santa Barbara, CA
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4244-9140-7 Medium
Area Expedition Conference SBA
Notes OR; MV Approved no
Call Number Admin @ si @ RMV2011b Serial 1760
Permanent link to this record
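The first descriptor used in the paper, the histogram of oriented gradients (HOG), pools magnitude-weighted gradient orientations over local cells. A minimal per-cell NumPy sketch (block normalization, part of the full HOG pipeline, is omitted for brevity):

```python
import numpy as np

def hog(img, cell=8, bins=9):
    # image gradients (np.gradient returns d/d-row, d/d-col)
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientations
    bin_idx = (ang / (180.0 / bins)).astype(int) % bins
    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    out = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            # magnitude-weighted orientation histogram for this cell
            out[i, j] = np.bincount(bin_idx[sl].ravel(),
                                    weights=mag[sl].ravel(), minlength=bins)
    # per-cell L2 normalization (full HOG also normalizes over blocks)
    return out / np.maximum(np.linalg.norm(out, axis=2, keepdims=True), 1e-12)
```

Flattening `out` yields the feature vector that a classifier, such as those validated in the paper, would consume.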
 

 
Author Jaime Moreno; Xavier Otazu
Title Image compression algorithm based on Hilbert scanning of embedded quadTrees: an introduction of the Hi-SET coder Type Conference Article
Year 2011 Publication IEEE International Conference on Multimedia and Expo Abbreviated Journal
Volume Issue Pages 1-6
Keywords
Abstract In this work we present an effective and computationally simple algorithm for image compression based on Hilbert Scanning of Embedded quadTrees (Hi-SET). It represents an image as an embedded bitstream along a fractal function. Embedding is an important feature of modern image compression algorithms; Salomon [1, p. 614] notes that another, perhaps unique, feature is achieving the best quality for the number of bits input to the decoder at any point during decoding. Hi-SET also possesses this feature. Furthermore, the coder is based on a quadtree partition strategy that, applied to image transforms such as the discrete cosine or wavelet transform, yields energy clustering in both frequency and space. The coding algorithm is composed of three general steps and uses just a list of significant pixels. The implementation of the proposed coder covers gray-scale and color image compression. Hi-SET compressed images are, on average, 6.20 dB better than those obtained by other compression techniques based on Hilbert scanning. Moreover, Hi-SET improves image quality by 1.39 dB and 1.00 dB for gray-scale and color compression, respectively, when compared with the JPEG2000 coder.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1945-7871 ISBN 978-1-61284-348-3 Medium
Area Expedition Conference ICME
Notes CIC Approved no
Call Number Admin @ si @ MoO2011a Serial 2176
Permanent link to this record
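Hilbert scanning, on which Hi-SET relies, visits pixels in an order that keeps spatially close pixels close in the bitstream. The standard iterative distance-to-coordinate conversion (the generic textbook routine, not the paper's code):

```python
def d2xy(order, d):
    # map distance d along the Hilbert curve to (x, y) on a 2^order grid
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:          # rotate/reflect the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Scanning wavelet or DCT coefficients in this order is what lets the quadtree coder group significant energy into contiguous runs.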
 

 
Author Patricia Marquez; Debora Gil; Aura Hernandez-Sabate
Title A Confidence Measure for Assessing Optical Flow Accuracy in the Absence of Ground Truth Type Conference Article
Year 2011 Publication IEEE International Conference on Computer Vision – Workshops Abbreviated Journal
Volume Issue Pages 2042-2049
Keywords
Abstract Optical flow is a valuable tool for motion analysis in autonomous navigation systems. A reliable application requires determining the accuracy of the computed optical flow. This is a main challenge given the absence of ground truth in real-world sequences. This paper introduces a measure of optical flow accuracy for Lucas-Kanade-based flows in terms of the numerical stability of the data term. We call this measure the optical flow condition number. A statistical analysis over ground-truth data shows a good correlation between the condition number and the optical flow error. Experiments on driving sequences illustrate its potential for autonomous navigation systems.
Address
Corporate Author Thesis
Publisher IEEE Place of Publication Barcelona (Spain) Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCVW
Notes IAM; ADAS Approved no
Call Number IAM @ iam @ MGH2011 Serial 1682
Permanent link to this record
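In Lucas-Kanade, the numerical stability of the data term is governed by the windowed structure tensor, so the proposed confidence can be sketched as its eigenvalue ratio (an illustrative reading of the abstract, not necessarily the authors' exact definition):

```python
import numpy as np

def lk_condition(Ix, Iy):
    # structure tensor of the Lucas-Kanade normal equations over one window
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    lo, hi = np.linalg.eigvalsh(A)  # ascending eigenvalues
    # large values flag ill-conditioned windows (e.g. the aperture problem)
    return hi / max(lo, 1e-12)
```

Windows with gradients in only one direction yield a near-singular tensor and a huge condition number, which is why the measure correlates with flow error without needing ground truth.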
 

 
Author Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga
Title Saliency Estimation Using a Non-Parametric Low-Level Vision Model Type Conference Article
Year 2011 Publication IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 433-440
Keywords Gaussian mixture model;ad hoc parameter selection;center-surround inhibition windows;center-surround mechanism;color appearance model;convolution;eye-fixation data;human vision;innate spatial pooling mechanism;inverse wavelet transform;low-level visual front-end;nonparametric low-level vision model;saliency estimation;saliency map;scale integration;scale-weighted center-surround response;scale-weighting function;visual task;Gaussian processes;biology;biology computing;colour vision;computer vision;visual perception;wavelet transforms
Abstract Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.
Address Colorado Springs
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN 978-1-4577-0394-2 Medium
Area Expedition Conference CVPR
Notes CIC Approved no
Call Number Admin @ si @ MVO2011 Serial 1757
Permanent link to this record
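The center-surround mechanism common to such saliency models can be illustrated with a difference-of-Gaussians response; this NumPy-only sketch stands in for the paper's wavelet-based, psychophysically weighted version, and the two sigmas are illustrative:

```python
import numpy as np

def gaussian_blur(img, sigma):
    # separable Gaussian filter with edge padding
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, radius, mode='edge')
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, tmp)

def dog_saliency(img, center=1.0, surround=4.0):
    # center-surround response: fine minus coarse Gaussian (difference of Gaussians)
    return np.abs(gaussian_blur(img, center) - gaussian_blur(img, surround))
```

The paper's contribution is precisely to replace such ad-hoc center and surround sizes with parameters fitted to psychophysical and eye-fixation data.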