Author Marc Serra; Olivier Penacchio; Robert Benavente; Maria Vanrell
Title Names and Shades of Color for Intrinsic Image Estimation Type Conference Article
Year 2012 Publication 25th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 278-285  
Keywords  
Abstract In recent years, intrinsic image decomposition has gained attention. Most state-of-the-art methods are based on the assumption that reflectance changes come along with strong image edges. Recently, user intervention in the recovery problem has proved to be a remarkable source of improvement. In this paper, we propose a novel approach that aims to overcome the shortcomings of pure edge-based methods by introducing strong surface descriptors, such as the color-name descriptor, which introduces high-level considerations resembling top-down intervention. We also use a second surface descriptor, termed color-shade, which allows us to include physical considerations derived from the image formation model, capturing gradual color surface variations. Both color cues are combined by means of a Markov Random Field. The method is quantitatively tested on the MIT ground-truth dataset using different error metrics, achieving state-of-the-art performance.
Address Providence, Rhode Island  
Corporate Author Thesis  
Publisher IEEE Xplore Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1063-6919 ISBN 978-1-4673-1226-4 Medium  
Area Expedition Conference CVPR  
Notes CIC Approved no  
Call Number Admin @ si @ SPB2012 Serial 2026  
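The color-name descriptor in the abstract above maps image colors to the eleven basic color terms. A minimal sketch of the idea, assuming nearest-prototype assignment in RGB; the prototype values and the hard assignment are illustrative assumptions, not the paper's learned, fuzzy color-name model:

```python
import numpy as np

# Illustrative RGB prototypes for the 11 basic color terms (assumed values,
# not the learned color-name model used in the paper).
PROTOTYPES = {
    "black":  (0, 0, 0),      "white":  (255, 255, 255),
    "red":    (200, 30, 30),  "green":  (30, 160, 60),
    "blue":   (40, 60, 200),  "yellow": (230, 220, 40),
    "orange": (240, 140, 20), "pink":   (250, 170, 190),
    "purple": (130, 50, 160), "brown":  (120, 70, 30),
    "grey":   (128, 128, 128),
}

def color_names(pixels):
    """Assign each RGB pixel (N x 3 array) to its nearest color-name prototype."""
    names = list(PROTOTYPES)
    protos = np.array([PROTOTYPES[n] for n in names], dtype=float)   # 11 x 3
    d = np.linalg.norm(pixels[:, None, :] - protos[None, :, :], axis=2)  # N x 11
    return [names[i] for i in d.argmin(axis=1)]
```

A soft (fuzzy) version of this assignment, as in the paper, would return the full distribution over the eleven terms instead of the arg-min.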
 

 
Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell; Antonio Lopez
Title Color Attributes for Object Detection Type Conference Article
Year 2012 Publication 25th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 3306-3313  
Keywords pedestrian detection  
Abstract State-of-the-art object detectors typically use shape information as a low-level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yield suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and, when combined with traditional shape features, provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods.
 
Address Providence, Rhode Island, USA
Corporate Author Thesis  
Publisher IEEE Xplore Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1063-6919 ISBN 978-1-4673-1226-4 Medium  
Area Expedition Conference CVPR  
Notes ADAS; CIC; Approved no  
Call Number Admin @ si @ KRW2012 Serial 1935  
 

 
Author Arjan Gijsenij; Theo Gevers; Joost Van de Weijer
Title Improving Color Constancy by Photometric Edge Weighting Type Journal Article
Year 2012 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 34 Issue 5 Pages 918-929  
Keywords  
Abstract Edge-based color constancy methods make use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as material, shadow and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation. Therefore, in this paper, an extensive analysis is provided of the influence of different edge types on the performance of edge-based color constancy methods. First, an edge-based taxonomy is presented, classifying edge types based on their photometric properties (e.g. material, shadow-geometry and highlight edges). Then, a performance evaluation of edge-based color constancy is provided using these different edge types. From this performance evaluation it is derived that specular and shadow edge types are more valuable than material edges for the estimation of the illuminant. To this end, the (iterative) weighted Grey-Edge algorithm is proposed, in which these edge types are more emphasized for the estimation of the illuminant. Images that are recorded under controlled circumstances demonstrate that the proposed iterative weighted Grey-Edge algorithm based on highlights reduces the median angular error by approximately 25%. In an uncontrolled environment, improvements in angular error up to 11% are obtained with respect to regular edge-based color constancy.
Address Los Alamitos, CA, USA
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 0162-8828 ISBN Medium  
Area Expedition Conference  
Notes CIC;ISE Approved no  
Call Number Admin @ si @ GGW2012 Serial 1850  
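The weighted Grey-Edge estimator described in the abstract can be sketched as follows. This is a minimal, non-iterative version in which the photometric weight map (e.g. one that emphasizes specular edges) is assumed to be given; computing that map from edge classification is the contribution of the paper and is not reproduced here.

```python
import numpy as np

def weighted_grey_edge(img, weights, p=1.0):
    """Estimate the illuminant colour from weighted image derivatives.

    img     : H x W x 3 linear RGB image
    weights : H x W map emphasising certain photometric edge types
              (assumed given; e.g. higher on specular edges)
    p       : Minkowski norm of the Grey-Edge family
    Returns a unit-length RGB illuminant estimate.
    """
    e = np.empty(3)
    for c in range(3):
        gy, gx = np.gradient(img[..., c])          # image derivatives
        mag = np.hypot(gx, gy)                     # edge strength
        e[c] = np.sum(weights * mag ** p) ** (1.0 / p)
    return e / np.linalg.norm(e)
```

With `weights` set to all ones this reduces to the standard Grey-Edge estimator.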
 

 
Author Arjan Gijsenij; Theo Gevers; Joost Van de Weijer
Title Computational Color Constancy: Survey and Experiments Type Journal Article
Year 2011 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP  
Volume 20 Issue 9 Pages 2475-2489  
Keywords computational color constancy;computer vision application;gamut-based method;learning-based method;static method;colour vision;computer vision;image colour analysis;learning (artificial intelligence);lighting  
Abstract Computational color constancy is a fundamental prerequisite for many computer vision applications. This paper presents a survey of many recent developments and state-of-the-art methods. Several criteria are proposed that are used to assess the approaches. A taxonomy of existing algorithms is proposed and methods are separated into three groups: static methods, gamut-based methods and learning-based methods. Further, the experimental setup is discussed, including an overview of publicly available data sets. Finally, various freely available methods, of which some are considered to be state-of-the-art, are evaluated on two data sets.
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1057-7149 ISBN Medium  
Area Expedition Conference  
Notes ISE;CIC Approved no  
Call Number Admin @ si @ GGW2011 Serial 1717  
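The "static methods" group in the survey includes Grey-World, White-Patch (max-RGB) and Shades-of-Grey, which can all be written as a single Minkowski-norm formula. A compact sketch (the default p=6 is a commonly used value, not a recommendation from this survey):

```python
import numpy as np

def shades_of_grey(img, p=6):
    """Static illuminant estimate via the Minkowski p-norm of each channel.

    p = 1   -> Grey-World (channel means)
    p -> inf -> White-Patch / max-RGB (channel maxima)
    img is an H x W x 3 linear RGB image; returns a unit RGB estimate.
    """
    flat = img.reshape(-1, 3).astype(float)
    if np.isinf(p):
        e = flat.max(axis=0)
    else:
        e = np.mean(flat ** p, axis=0) ** (1.0 / p)
    return e / np.linalg.norm(e)
```

The estimate is then used to correct the image, typically by a von Kries-style per-channel scaling.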
 

 
Author Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga
Title Saliency Estimation Using a Non-Parametric Low-Level Vision Model Type Conference Article
Year 2011 Publication IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 433-440  
Keywords Gaussian mixture model;ad hoc parameter selection;center-surround inhibition windows;center-surround mechanism;color appearance model;convolution;eye-fixation data;human vision;innate spatial pooling mechanism;inverse wavelet transform;low-level visual front-end;nonparametric low-level vision model;saliency estimation;saliency map;scale integration;scale-weighted center-surround response;scale-weighting function;visual task;Gaussian processes;biology;biology computing;colour vision;computer vision;visual perception;wavelet transforms  
Abstract Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.  
Address Colorado Springs  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1063-6919 ISBN 978-1-4577-0394-2 Medium  
Area Expedition Conference CVPR  
Notes CIC Approved no  
Call Number Admin @ si @ MVO2011 Serial 1757  
 

 
Author Shida Beigpour; Joost Van de Weijer
Title Object Recoloring Based on Intrinsic Image Estimation Type Conference Article
Year 2011 Publication 13th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 327 - 334  
Keywords  
Abstract Object recoloring is one of the most popular photo-editing tasks. The problem of object recoloring is highly under-constrained, and existing recoloring methods limit their application to objects lit by a white illuminant. Application of these methods to real-world scenes lit by colored illuminants, multiple illuminants, or interreflections, results in unrealistic recoloring of objects. In this paper, we focus on the recoloring of single-colored objects presegmented from their background. The single-color constraint allows us to fit a more comprehensive physical model to the object. We demonstrate that this permits us to perform realistic recoloring of objects lit by non-white illuminants, and multiple illuminants. Moreover, the model allows for more realistic handling of illuminant alteration of the scene. Recoloring results captured by uncalibrated cameras demonstrate that the proposed framework obtains realistic recoloring for complex natural images. Furthermore we use the model to transfer color between objects and show that the results are more realistic than existing color transfer methods.  
Address Barcelona  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1550-5499 ISBN 978-1-4577-1101-5 Medium  
Area Expedition Conference ICCV  
Notes CIC Approved no  
Call Number Admin @ si @ BeW2011 Serial 1781  
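The physical reasoning in the abstract builds on the dichromatic reflection model. A toy recoloring routine illustrates the idea in the simplest case the paper explicitly goes beyond (white illuminant, body reflection only); it is a sketch of the principle, not the paper's full model for colored or multiple illuminants:

```python
import numpy as np

def recolor(obj_rgb, new_rgb):
    """Toy recoloring under a body-only dichromatic model with a white
    illuminant: I = shading * R. Estimates per-pixel shading as the
    projection onto the current body colour, then reapplies it to the
    new body colour.

    obj_rgb : N x 3 pixels of a presegmented single-coloured object
    new_rgb : target body reflectance, length 3
    """
    obj = np.asarray(obj_rgb, float)
    r = obj.mean(axis=0)
    r = r / np.linalg.norm(r)                # current body colour direction
    shading = obj @ r                        # per-pixel body magnitude
    new = np.asarray(new_rgb, float)
    new = new / np.linalg.norm(new)
    return shading[:, None] * new[None, :]   # recoloured pixels
```

The paper's contribution is handling the terms this sketch drops: specular reflection and non-white, possibly multiple, illuminants.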
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell
Title Modulating Shape Features by Color Attention for Object Recognition Type Journal Article
Year 2012 Publication International Journal of Computer Vision Abbreviated Journal IJCV  
Volume 98 Issue 1 Pages 49-64  
Keywords  
Abstract Bag-of-words based image representation is a successful approach for object recognition. Generally, the subsequent stages of the process: feature detection, feature description, vocabulary construction and image representation, are performed independently of the intended object classes to be detected. In such a framework, it was found that the combination of different image cues, such as shape and color, often obtains below-expected results. This paper presents a novel method for recognizing object categories when using multiple cues by separately processing the shape and color cues and combining them by modulating the shape features by category-specific color attention. Color is used to compute bottom-up and top-down attention maps. Subsequently, these color attention maps are used to modulate the weights of the shape features. In regions with higher attention, shape features are given more weight than in regions with low attention. We compare our approach with existing methods that combine color and shape cues on five data sets containing varied importance of both cues, namely, Soccer (color predominance), Flower (color and shape parity), PASCAL VOC 2007 and 2009 (shape predominance) and Caltech-101 (color co-interference). The experiments clearly demonstrate that in all five data sets our proposed framework significantly outperforms existing methods for combining color and shape information.
Address  
Corporate Author Thesis  
Publisher Springer Netherlands Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 0920-5691 ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ KWV2012 Serial 1864  
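The modulation idea in the abstract, weighting shape words by color attention, can be sketched as an attention-weighted bag-of-words histogram. How the attention map itself is computed from bottom-up and top-down color cues is the paper's contribution; here the per-keypoint attention values are simply assumed given:

```python
import numpy as np

def attention_bow(shape_words, attention, vocab_size):
    """Class-specific bag-of-words histogram in which each shape word's
    vote is weighted by the colour attention at its location.

    shape_words : length-N array of visual-word indices
    attention   : length-N array of per-keypoint attention weights
                  (assumed given, e.g. from a class-specific colour model)
    """
    hist = np.zeros(vocab_size)
    np.add.at(hist, shape_words, attention)   # attention-weighted votes
    s = hist.sum()
    return hist / s if s > 0 else hist
```

With uniform attention this reduces to the standard (unmodulated) bag-of-words histogram.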
 

 
Author C. Alejandro Parraga; Robert Benavente; Maria Vanrell; Ramon Baldrich
Title Psychophysical measurements to model inter-colour regions of colour-naming space Type Journal Article
Year 2009 Publication Journal of Imaging Science and Technology Abbreviated Journal  
Volume 53 Issue 3 Pages 031106 (8 pages)  
Keywords image processing; Analysis  
Abstract JCR Impact Factor 2009: 0.391
In this paper, we present a fuzzy set of parametric functions which segment the CIELab space into eleven regions corresponding to the group of common universal categories present in all evolved languages, as identified by anthropologists and linguists. The set of functions is intended to model a color-name assignment task by humans and differs from other models in its emphasis on the inter-color boundary regions, which were explicitly measured by means of a psychophysics experiment. In our particular implementation, the CIELab space was segmented into eleven color categories using a Triple Sigmoid as the fuzzy-set basis, whose parameters are included in this paper. The model's parameters were adjusted according to the psychophysical results of a yes/no discrimination paradigm where observers had to choose (English) names for isoluminant colors belonging to regions in-between neighboring categories. These colors were presented on a calibrated CRT monitor (14-bit x 3 precision). The experimental results show that inter-color boundary regions are much less defined than expected, and color samples other than those near the most representative ones are needed to define the position and shape of the boundaries between categories. The extended set of model parameters is given as a table.
 
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number CAT @ cat @ PBV2009 Serial 1157  
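The fuzzy inter-color boundaries measured in the paper are modelled with sigmoid-shaped membership functions. A one-dimensional sketch of the building block; the paper's Triple Sigmoid composes several such sigmoids over the 3D CIELab space, and the parameter values here (`t0`, `beta`) are hypothetical, not the fitted ones:

```python
import numpy as np

def boundary_membership(t, t0=0.0, beta=10.0):
    """Fuzzy membership of two neighbouring colour categories along a 1D
    cut through colour space, modelled with a single sigmoid.

    t0   : boundary position (hypothetical value)
    beta : boundary steepness (hypothetical value)
    Returns (membership_A, membership_B); the two memberships sum to 1.
    """
    m_b = 1.0 / (1.0 + np.exp(-beta * (t - t0)))
    return 1.0 - m_b, m_b
```

A sharp boundary corresponds to large `beta`; the experimental finding that boundary regions are "much less defined than expected" corresponds to smaller fitted steepness values.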
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell
Title Portmanteau Vocabularies for Multi-Cue Image Representation Type Conference Article
Year 2011 Publication 25th Annual Conference on Neural Information Processing Systems Abbreviated Journal  
Volume Issue Pages  
Keywords  
Abstract We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent, joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau words is compact, has the cue-binding property, and supports individual weighting of cues in the final image representation. State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation.
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference NIPS  
Notes CIC Approved no  
Call Number Admin @ si @ KWB2011 Serial 1865  
 

 
Author Olivier Penacchio
Title Mixed Hodge Structures and Equivariant Sheaves on the Projective Plane Type Journal Article
Year 2011 Publication Mathematische Nachrichten Abbreviated Journal MN  
Volume 284 Issue 4 Pages 526-542  
Keywords Mixed Hodge structures, equivariant sheaves, MSC (2010) Primary: 14C30, Secondary: 14F05, 14M25  
Abstract We describe an equivalence of categories between the category of mixed Hodge structures and a category of equivariant vector bundles on a toric model of the complex projective plane which verify some semistability condition. We then apply this correspondence to define an invariant which generalizes the notion of R-split mixed Hodge structure and give calculations for the first cohomology group of possibly non-smooth or non-complete curves of genus 0 and 1. Finally, we describe some extension groups of mixed Hodge structures in terms of equivariant extensions of coherent sheaves. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
Address  
Corporate Author Thesis  
Publisher WILEY-VCH Verlag Place of Publication Editor R. Mennicken  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1522-2616 ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ Pen2011 Serial 1721  
 

 
Author Naila Murray; Luca Marchesotti; Florent Perronnin
Title Learning to Rank Images using Semantic and Aesthetic Labels Type Conference Article
Year 2012 Publication 23rd British Machine Vision Conference Abbreviated Journal  
Volume Issue Pages 110.1-110.10  
Keywords  
Abstract Most works on image retrieval from text queries have addressed the problem of retrieving semantically relevant images. However, the ability to assess the aesthetic quality of an image is an increasingly important differentiating factor for search engines. In this work, given a semantic query, we are interested in retrieving images which are semantically relevant and score highly in terms of aesthetics/visual quality. We use large-margin classifiers and rankers to learn statistical models capable of ordering images based on aesthetic and semantic information. In particular, we compare two families of approaches: while the first one attempts to learn a single ranker which takes into account both semantic and aesthetic information, the second one learns separate semantic and aesthetic models. We carry out a quantitative and qualitative evaluation on a recently-published large-scale dataset and we show that the second family of techniques significantly outperforms the first one.
Address Guildford, UK
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN 1-901725-46-4 Medium  
Area Expedition Conference BMVC  
Notes CIC Approved no  
Call Number Admin @ si @ MMP2012b Serial 2027  
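The large-margin rankers compared in the abstract order images by a learned linear scoring function. A minimal pairwise hinge-loss ranker sketches the idea; it is a stand-in for illustration, not the authors' implementation or feature set:

```python
import numpy as np

def train_ranker(X, y, epochs=100, lr=0.1, margin=1.0):
    """Learn a linear scoring function w.x from pairwise preferences.

    X : N x D feature matrix
    y : length-N relevance/aesthetic scores
    Every pair (i, j) with y[i] > y[j] should satisfy
    w.x_i >= w.x_j + margin; violated pairs get a hinge-loss update.
    """
    w = np.zeros(X.shape[1])
    pairs = [(i, j) for i in range(len(y)) for j in range(len(y)) if y[i] > y[j]]
    for _ in range(epochs):
        for i, j in pairs:
            if w @ (X[i] - X[j]) < margin:     # margin violated
                w += lr * (X[i] - X[j])        # hinge-loss gradient step
    return w
```

Training one such ranker on joint semantic-plus-aesthetic labels versus combining two separately trained models is exactly the comparison the paper carries out.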
 

 
Author Xavier Boix; Josep M. Gonfaus; Joost Van de Weijer; Andrew Bagdanov; Joan Serrat; Jordi Gonzalez
Title Harmony Potentials: Fusing Global and Local Scale for Semantic Image Segmentation Type Journal Article
Year 2012 Publication International Journal of Computer Vision Abbreviated Journal IJCV  
Volume 96 Issue 1 Pages 83-102  
Keywords  
Abstract The Hierarchical Conditional Random Field (HCRF) model has been successfully applied to a number of image labeling problems, including image segmentation. However, existing HCRF models of image segmentation do not allow multiple classes to be assigned to a single region, which limits their ability to incorporate contextual information across multiple scales. At higher scales in the image, this representation yields an oversimplified model, since multiple classes can be reasonably expected to appear within large regions. This simplified model particularly limits the impact of information at higher scales. Since class-label information at these scales is usually more reliable than at lower, noisier scales, neglecting this information is undesirable. To address these issues, we propose a new consistency potential for image labeling problems, which we call the harmony potential. It can encode any possible combination of labels, penalizing only unlikely combinations of classes. We also propose an effective sampling strategy over this expanded label set that renders tractable the underlying optimization problem. Our approach obtains state-of-the-art results on two challenging, standard benchmark datasets for semantic image segmentation: PASCAL VOC 2010, and MSRC-21.
 
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 0920-5691 ISBN Medium  
Area Expedition Conference  
Notes ISE;CIC;ADAS Approved no  
Call Number Admin @ si @ BGW2012 Serial 1718  
 

 
Author Graham D. Finlayson; Javier Vazquez; Sabine Süsstrunk; Maria Vanrell
Title Spectral sharpening by spherical sampling Type Journal Article
Year 2012 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A  
Volume 29 Issue 7 Pages 1199-1210  
Keywords  
Abstract There are many works in color that assume illumination change can be modeled by multiplying sensor responses by individual scaling factors. The early research in this area is sometimes grouped under the heading “von Kries adaptation”: the scaling factors are applied to the cone responses. In more recent studies, both in psychophysics and in computational analysis, it has been proposed that scaling factors should be applied to linear combinations of the cones that have narrower support: they should be applied to the so-called “sharp sensors.” In this paper, we generalize the computational approach to spectral sharpening in three important ways. First, we introduce spherical sampling as a tool that allows us to enumerate in a principled way all linear combinations of the cones. This allows us to, second, find the optimal sharp sensors that minimize a variety of error measures including CIE Delta E (previous work on spectral sharpening minimized RMS) and color ratio stability. Lastly, we extend the spherical sampling paradigm to the multispectral case. Here the objective is to model the interaction of light and surface in terms of color signal spectra. Spherical sampling is shown to improve on the state of the art.  
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1084-7529 ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ FVS2012 Serial 2000  
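Spherical sampling, as described in the abstract, enumerates candidate linear combinations of the cone sensors as unit vectors on the sphere. A near-uniform Fibonacci sampler sketches the enumeration step; the error measures used to score each candidate (CIE Delta E, color-ratio stability, RMS) are omitted here:

```python
import numpy as np

def sphere_samples(n):
    """Near-uniform Fibonacci sampling of n unit vectors on S^2.

    Each sample is a candidate linear combination of the three cone
    sensors; spectral sharpening by spherical sampling scores every
    candidate with the chosen error measure and keeps the best
    (the scoring function is omitted in this sketch).
    """
    k = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * k        # golden-angle longitudes
    z = 1.0 - 2.0 * (k + 0.5) / n                 # uniform in height
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
```

Because the sampling is near-uniform, the enumeration covers all sensor combinations in a principled way rather than depending on a particular parameterization.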
 

 
Author Xavier Otazu; Olivier Penacchio; Laura Dempere-Marco
Title An investigation into plausible neural mechanisms related to the CIWaM computational model for brightness induction Type Conference Article
Year 2012 Publication 2nd Joint AVA / BMVA Meeting on Biological and Machine Vision Abbreviated Journal  
Volume Issue Pages  
Keywords  
Abstract Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. From a purely computational perspective, we built a low-level computational model (CIWaM) of early sensory processing based on multi-resolution wavelets, with the aim of replicating brightness and colour (Otazu et al., 2010, Journal of Vision, 10(12):5) induction effects. Furthermore, we successfully used the CIWaM architecture to define a computational saliency model (Murray et al., 2011, CVPR, 433-440; Vanrell et al., submitted to AVA/BMVA'12). From a biological perspective, neurophysiological evidence suggests that perceived brightness information may be explicitly represented in V1. In this work we investigate possible neural mechanisms that offer a plausible explanation for such effects. To this end, we consider the model by Z. Li (Li, 1999, Network: Comput. Neural Syst., 10, 187-212), which is based on biological data and focuses on the part of V1 responsible for contextual influences, namely, layer 2-3 pyramidal cells, interneurons, and horizontal intracortical connections. This model has proven to account for phenomena such as visual saliency, which shares with brightness induction the relevant effect of contextual influences (the ones modelled by CIWaM). In the proposed model, the input to the network is derived from a complete multiscale and multiorientation wavelet decomposition taken from the computational model (CIWaM). This model successfully accounts for well-known psychophysical effects (among them: the White's and modified White's effects, the Todorovic, Chevreul, and achromatic ring patterns, and grating induction effects) for static contexts, and also for brightness induction in dynamic contexts defined by modulating the luminance of surrounding areas. From a methodological point of view, we conclude that the results obtained by the computational model (CIWaM) are compatible with the ones obtained by the neurodynamical model proposed here.
 
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference AVA
Notes CIC Approved no  
Call Number Admin @ si @ OPD2012a Serial 2132  
 

 
Author Naila Murray; Luca Marchesotti; Florent Perronnin
Title AVA: A Large-Scale Database for Aesthetic Visual Analysis Type Conference Article
Year 2012 Publication 25th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 2408-2415  
Keywords  
Abstract With the ever-expanding volume of visual content available, the ability to organize and navigate such content by aesthetic preference is becoming increasingly important. While still in its nascent stage, research into computational models of aesthetic preference already shows great potential. However, to advance research, realistic, diverse and challenging databases are needed. To this end, we introduce a new large-scale database for conducting Aesthetic Visual Analysis: AVA. It contains over 250,000 images along with a rich variety of meta-data, including a large number of aesthetic scores for each image, semantic labels for over 60 categories, as well as labels related to photographic style. We show the advantages of AVA with respect to existing databases in terms of scale, diversity, and heterogeneity of annotations. We then describe several key insights into aesthetic preference afforded by AVA. Finally, we demonstrate, through three applications, how the large scale of AVA can be leveraged to improve performance on existing preference tasks.
Address Providence, Rhode Island
Corporate Author Thesis  
Publisher IEEE Xplore Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1063-6919 ISBN 978-1-4673-1226-4 Medium  
Area Expedition Conference CVPR  
Notes CIC Approved no  
Call Number Admin @ si @ MMP2012a Serial 2025  
 

 
Author Eduard Vazquez; Ramon Baldrich; Joost Van de Weijer; Maria Vanrell
Title Describing Reflectances for Colour Segmentation Robust to Shadows, Highlights and Textures Type Journal Article
Year 2011 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
Volume 33 Issue 5 Pages 917-930  
Keywords  
Abstract The segmentation of a single material reflectance is a challenging problem due to the considerable variation in image measurements caused by the geometry of the object, shadows, and specularities. The combination of these effects has been modeled by the dichromatic reflection model. However, the application of the model to real-world images is limited due to unknown acquisition parameters and compression artifacts. In this paper, we present a robust model for the shape of a single material reflectance in histogram space. The method is based on a multilocal creaseness analysis of the histogram, which results in a set of ridges representing the material reflectances. The segmentation method derived from these ridges is robust to shadows, shading, specularities, and texture in real-world images. We further complete the method by incorporating prior knowledge from image statistics, and incorporate spatial coherence by using multiscale color contrast information. Results obtained show that our method clearly outperforms state-of-the-art segmentation methods on a widely used segmentation benchmark, having as a main characteristic its excellent performance in the presence of shadows and highlights at low computational cost.
Address Los Alamitos, CA, USA
Corporate Author Thesis  
Publisher IEEE Computer Society Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 0162-8828 ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ VBW2011 Serial 1715  
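The ridges-in-histogram-space idea can be illustrated in one dimension: smooth the color histogram and take its local maxima as candidate reflectance modes. This is a toy analogue only; the paper's multilocal creaseness analysis operates on the full multi-dimensional color histogram:

```python
import numpy as np

def ridge_modes(hist):
    """Return indices of local maxima in a lightly smoothed 1D histogram,
    a toy analogue of finding reflectance ridges in histogram space."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    h = np.convolve(hist, kernel, mode="same")       # light smoothing
    left = np.r_[-np.inf, h[:-1]]                    # left neighbours
    right = np.r_[h[1:], -np.inf]                    # right neighbours
    return np.flatnonzero((h > left) & (h >= right))
```

In the full method each pixel would then be assigned to the ridge its color falls under, which is what makes the segmentation tolerant to the shadow and highlight variations that spread a single reflectance across the histogram.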
 

 
Author Jordi Roca; Maria Vanrell; C. Alejandro Parraga
Title What is constant in colour constancy? Type Conference Article
Year 2012 Publication 6th European Conference on Colour in Graphics, Imaging and Vision Abbreviated Journal  
Volume Issue Pages 337-343  
Keywords  
Abstract Color constancy refers to the ability of the human visual system to stabilize the color appearance of surfaces under an illuminant change. In this work we studied how the interrelations among nine colors are perceived under illuminant changes, particularly whether they remain stable across 10 different conditions (5 illuminants and 2 backgrounds). To do so, we used a paradigm that measures several colors under an immersive state of adaptation. From our measures we defined a perceptual structure descriptor that is up to 87% stable over all conditions, suggesting that color category features could be used to predict color constancy. This is in agreement with previous results on the stability of border categories [1,2] and with computational color constancy algorithms [3] for estimating the scene illuminant.
 
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN 9781622767014 Medium  
Area Expedition Conference CGIV  
Notes CIC Approved no  
Call Number RVP2012 Serial 2189  
 

 
Author Xavier Otazu; C. Alejandro Parraga; Maria Vanrell
Title Towards a unified chromatic induction model Type Journal Article
Year 2010 Publication Journal of Vision Abbreviated Journal VSS  
Volume 10 Issue 12:5 Pages 1-24  
Keywords Visual system; Color induction; Wavelet transform  
Abstract In a previous work (X. Otazu, M. Vanrell, & C. A. Párraga, 2008b), we showed how several brightness induction effects can be predicted using a simple multiresolution wavelet model (BIWaM). Here we present a new model for chromatic induction processes (termed Chromatic Induction Wavelet Model or CIWaM), which is also implemented on a multiresolution framework and based on similar assumptions related to the spatial frequency and the contrast surround energy of the stimulus. The CIWaM can be interpreted as a very simple extension of the BIWaM to the chromatic channels, which in our case are defined in the MacLeod-Boynton (lsY) color space. This new model allows us to unify both chromatic assimilation and chromatic contrast effects in a single mathematical formulation. The predictions of the CIWaM were tested by means of several color and brightness induction experiments, which showed an acceptable agreement between model predictions and psychophysical data.  
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number CAT @ cat @ OPV2010 Serial 1450  