Author Susana Alvarez; Anna Salvatella; Maria Vanrell; Xavier Otazu
Title Perceptual color texture codebooks for retrieving in highly diverse texture datasets Type Conference Article
Year 2010 Publication 20th International Conference on Pattern Recognition Abbreviated Journal  
Volume Issue Pages 866–869  
Keywords  
Abstract Color and texture are visual cues of different nature, and their integration into a useful visual descriptor is not an obvious step. One way to combine both features is to compute texture descriptors independently on each color channel. A second way is to integrate the features at the descriptor level, in which case the problem of normalizing both cues arises. Significant progress in object recognition in recent years has come from the bag-of-words framework, which again deals with the problem of feature combination through the definition of vocabularies of visual words. Inspired by this framework, here we present perceptual textons that allow us to fuse color and texture at the level of p-blobs, which is our feature detection step. Feature representation is based on two uniform spaces representing the attributes of the p-blobs. The low dimensionality of these texton spaces allows us to bypass the usual problems of previous approaches: firstly, there is no need for normalization between cues; and secondly, vocabularies are obtained directly from the perceptual properties of the texton spaces without any learning step. Our proposal improves on the current state of the art in color-texture descriptors in an image retrieval experiment over a highly diverse texture dataset from Corel.  
Address Istanbul (Turkey)  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1051-4651 ISBN 978-1-4244-7542-1 Medium  
Area Expedition Conference ICPR  
Notes CIC Approved no  
Call Number CAT @ cat @ ASV2010b Serial 1426  

Author Xavier Otazu; C. Alejandro Parraga; Maria Vanrell
Title Towards a unified chromatic induction model Type Journal Article
Year 2010 Publication Journal of Vision Abbreviated Journal VSS  
Volume 10 Issue 12:5 Pages 1-24  
Keywords Visual system; Color induction; Wavelet transform  
Abstract In a previous work (X. Otazu, M. Vanrell, & C. A. Párraga, 2008b), we showed how several brightness induction effects can be predicted using a simple multiresolution wavelet model (BIWaM). Here we present a new model for chromatic induction processes (termed Chromatic Induction Wavelet Model or CIWaM), which is also implemented on a multiresolution framework and based on similar assumptions related to the spatial frequency and the contrast surround energy of the stimulus. The CIWaM can be interpreted as a very simple extension of the BIWaM to the chromatic channels, which in our case are defined in the MacLeod-Boynton (lsY) color space. This new model allows us to unify both chromatic assimilation and chromatic contrast effects in a single mathematical formulation. The predictions of the CIWaM were tested by means of several color and brightness induction experiments, which showed an acceptable agreement between model predictions and psychophysical data.  
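The multiresolution pipeline the abstract describes (decompose, reweight each scale, invert) can be sketched in a few lines. This is an illustrative stand-in only: it uses repeated box smoothing in 1-D instead of the paper's wavelet transform, and all function names are ours, not the model's:

```python
import numpy as np

def band_decompose(signal, n_scales=4):
    """Split a 1-D signal into band-pass planes plus a low-pass residual
    via repeated box smoothing (a stand-in for the wavelet transform)."""
    planes, current = [], signal.astype(float)
    for s in range(n_scales):
        width = 2 ** (s + 1) + 1
        smooth = np.convolve(current, np.ones(width) / width, mode="same")
        planes.append(current - smooth)   # detail lost at this scale
        current = smooth
    return planes, current

def induction_reconstruct(planes, residual, weights):
    """Reweight each scale (the role of the model's spatial-frequency
    weighting) and invert the decomposition by summation."""
    out = residual.copy()
    for plane, w in zip(planes, weights):
        out = out + w * plane
    return out

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
planes, residual = band_decompose(signal)
recon = induction_reconstruct(planes, residual, weights=[1.0] * len(planes))
```

With unit weights the decomposition telescopes and the reconstruction is exact; non-unit weights amplify or attenuate individual frequency bands, which is the mechanism by which such a model can produce assimilation and contrast effects.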
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number CAT @ cat @ OPV2010 Serial 1450  

Author Enric Marti; Jordi Rocarias; Ricardo Toledo
Title Caront: flexible management of student groups in a course, and activities over groups. A new assessment activity Type Miscellaneous
Year 2008 Publication V Jornades d’Innovació Docent Abbreviated Journal  
Volume Issue Pages  
Keywords  
Abstract  
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes IAM;RV;CIC;ADAS Approved no  
Call Number IAM @ iam @ MRT2008a Serial 1617  

Author Ernest Valveny; Ricardo Toledo; Ramon Baldrich; Enric Marti
Title Combining recognition-based and segmentation-based approaches for graphic symbol recognition using deformable template matching Type Conference Article
Year 2002 Publication Proceedings of the Second IASTED International Conference on Visualization, Imaging and Image Processing (VIIP 2002) Abbreviated Journal  
Volume Issue Pages 502–507  
Keywords  
Abstract  
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes DAG;RV;CAT;IAM;CIC;ADAS Approved no  
Call Number IAM @ iam @ VTB2002 Serial 1660  

Author Eduard Vazquez; Ramon Baldrich; Joost Van de Weijer; Maria Vanrell
Title Describing Reflectances for Colour Segmentation Robust to Shadows, Highlights and Textures Type Journal Article
Year 2011 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
Volume 33 Issue 5 Pages 917-930  
Keywords  
Abstract The segmentation of a single material reflectance is a challenging problem due to the considerable variation in image measurements caused by the geometry of the object, shadows, and specularities. The combination of these effects has been modeled by the dichromatic reflection model. However, the application of the model to real-world images is limited due to unknown acquisition parameters and compression artifacts. In this paper, we present a robust model for the shape of a single material reflectance in histogram space. The method is based on a multilocal creaseness analysis of the histogram, which results in a set of ridges representing the material reflectances. The segmentation method derived from these ridges is robust to shadows, shading, specularities, and texture in real-world images. We further complete the method by incorporating prior knowledge from image statistics, and incorporate spatial coherence by using multiscale color contrast information. Results obtained show that our method clearly outperforms state-of-the-art segmentation methods on a widely used segmentation benchmark, its main strength being its excellent performance in the presence of shadows and highlights at low computational cost.  
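The histogram-ridge idea can be caricatured in 1-D: find the modes of a smoothed intensity histogram and assign each pixel to its nearest mode. This is a much-simplified stand-in for the paper's multilocal creaseness analysis; names and parameters are ours:

```python
import numpy as np

def histogram_modes(values, bins=64, smooth=3):
    """Modes (1-D 'ridges') of a smoothed histogram; a much-simplified
    stand-in for the paper's multilocal creaseness analysis."""
    hist, edges = np.histogram(values, bins=bins)
    hist = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.array([centers[i] for i in range(1, bins - 1)
                     if hist[i - 1] <= hist[i] > hist[i + 1] and hist[i] > 0])

def segment_by_mode(values, modes):
    """Label each value with the index of its nearest mode."""
    return np.argmin(np.abs(values[:, None] - modes[None, :]), axis=1)

# two 'materials' whose measured intensity is spread by shading and noise
rng = np.random.default_rng(0)
values = np.concatenate([rng.normal(0.3, 0.05, 500), rng.normal(0.7, 0.05, 500)])
modes = histogram_modes(values)
labels = segment_by_mode(values, modes)
```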
Address Los Alamitos; CA; USA;  
Corporate Author Thesis  
Publisher IEEE Computer Society Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 0162-8828 ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ VBW2011 Serial 1715  

Author Arjan Gijsenij; Theo Gevers; Joost Van de Weijer
Title Computational Color Constancy: Survey and Experiments Type Journal Article
Year 2011 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP  
Volume 20 Issue 9 Pages 2475-2489  
Keywords computational color constancy;computer vision application;gamut-based method;learning-based method;static method;colour vision;computer vision;image colour analysis;learning (artificial intelligence);lighting  
Abstract Computational color constancy is a fundamental prerequisite for many computer vision applications. This paper presents a survey of many recent developments and state-of-the-art methods. Several criteria are proposed that are used to assess the approaches. A taxonomy of existing algorithms is proposed and methods are separated into three groups: static methods, gamut-based methods and learning-based methods. Further, the experimental setup is discussed, including an overview of publicly available data sets. Finally, various freely available methods, of which some are considered to be state-of-the-art, are evaluated on two data sets.  
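For context, the simplest of the "static methods" this survey covers, Grey-World, fits in a few lines. A NumPy sketch under the assumption of a single global illuminant; the function names are ours:

```python
import numpy as np

def grey_world(image):
    """Grey-World: assume the average scene reflectance is achromatic,
    so the mean RGB of the image, normalised, estimates the illuminant."""
    e = image.reshape(-1, 3).mean(axis=0)
    return e / np.linalg.norm(e)

def correct(image, illuminant):
    """Von Kries style diagonal correction towards a neutral illuminant
    (a perfectly white unit illuminant is (1,1,1)/sqrt(3))."""
    return image / (illuminant * np.sqrt(3))

# a grey-world scene rendered under a reddish illuminant
rng = np.random.default_rng(1)
reflectance = rng.uniform(0.2, 0.8, (32, 32, 1)) * np.ones((1, 1, 3))
light = np.array([1.0, 0.8, 0.6])
image = reflectance * light
est = grey_world(image)
balanced = corrected = correct(image, est)
```

On this synthetic scene the Grey-World assumption holds exactly, so the estimate recovers the illuminant direction and the corrected image is achromatic.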
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1057-7149 ISBN Medium  
Area Expedition Conference  
Notes ISE;CIC Approved no  
Call Number Admin @ si @ GGW2011 Serial 1717  

Author Xavier Boix; Josep M. Gonfaus; Joost Van de Weijer; Andrew Bagdanov; Joan Serrat; Jordi Gonzalez
Title Harmony Potentials: Fusing Global and Local Scale for Semantic Image Segmentation Type Journal Article
Year 2012 Publication International Journal of Computer Vision Abbreviated Journal IJCV  
Volume 96 Issue 1 Pages 83-102  
Keywords  
Abstract The Hierarchical Conditional Random Field (HCRF) model has been successfully applied to a number of image labeling problems, including image segmentation. However, existing HCRF models of image segmentation do not allow multiple classes to be assigned to a single region, which limits their ability to incorporate contextual information across multiple scales. At higher scales in the image, this representation yields an oversimplified model, since multiple classes can reasonably be expected to appear within large regions. This simplified model particularly limits the impact of information at higher scales. Since class-label information at these scales is usually more reliable than at lower, noisier scales, neglecting this information is undesirable. To address these issues, we propose a new consistency potential for image labeling problems, which we call the harmony potential. It can encode any possible combination of labels, penalizing only unlikely combinations of classes. We also propose an effective sampling strategy over this expanded label set that renders the underlying optimization problem tractable. Our approach obtains state-of-the-art results on two challenging, standard benchmark datasets for semantic image segmentation: PASCAL VOC 2010 and MSRC-21.
 
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 0920-5691 ISBN Medium  
Area Expedition Conference  
Notes ISE;CIC;ADAS Approved no  
Call Number Admin @ si @ BGW2012 Serial 1718  

Author Olivier Penacchio; C. Alejandro Parraga
Title What is the best criterion for an efficient design of retinal photoreceptor mosaics? Type Journal Article
Year 2011 Publication Perception Abbreviated Journal PER  
Volume 40 Issue Pages 197  
Keywords  
Abstract The proportions of L, M and S photoreceptors in the primate retina are arguably determined by evolutionary pressure and the statistics of the visual environment. Two information theory-based approaches have been recently proposed for explaining the asymmetrical spatial densities of photoreceptors in humans. In the first approach Garrigan et al (2010 PLoS ONE 6 e1000677), a model for computing the information transmitted by cone arrays which considers the differential blurring produced by the long-wavelength accommodation of the eye’s lens is proposed. Their results explain the sparsity of S-cones but the optimum depends weakly on the L:M cone ratio. In the second approach (Penacchio et al, 2010 Perception 39 ECVP Supplement, 101), we show that human cone arrays make the visual representation scale-invariant, allowing the total entropy of the signal to be preserved while decreasing individual neurons’ entropy in further retinotopic representations. This criterion provides a thorough description of the distribution of L:M cone ratios and does not depend on differential blurring of the signal by the lens. Here, we investigate the similarities and differences of both approaches when applied to the same database. Our results support a 2-criteria optimization in the space of cone ratios whose components are arguably important and mostly unrelated.
[This work was partially funded by projects TIN2010-21771-C02-1 and Consolider-Ingenio 2010-CSD2007-00018 from the Spanish MICINN. CAP was funded by grant RYC-2007-00484]
 
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ PeP2011a Serial 1719  

Author C. Alejandro Parraga; Olivier Penacchio; Maria Vanrell
Title Retinal Filtering Matches Natural Image Statistics at Low Luminance Levels Type Journal Article
Year 2011 Publication Perception Abbreviated Journal PER  
Volume 40 Issue Pages 96  
Keywords  
Abstract The assumption that the retina’s main objective is to provide a minimum-entropy representation to higher visual areas (ie the efficient coding principle) allows retinal filtering in space–time and colour to be predicted (Atick, 1992 Network 3 213–251). This is achieved by considering the power spectra of natural images (which are proportional to 1/f²) and the suppression of retinal and image noise. However, most studies consider images within a limited range of lighting conditions (eg near noon), whereas the visual system’s spatial filtering depends on light intensity and the spatiochromatic properties of natural scenes depend on the time of day. Here, we explore whether the dependence of visual spatial filtering on luminance matches the changes in the power spectrum of natural scenes at different times of the day. Using human cone-activation based naturalistic stimuli (from the Barcelona Calibrated Images Database), we show that for a range of luminance levels the shape of the retinal CSF reflects the slope of the power spectrum at low spatial frequencies. Accordingly, the retina implements the filtering which best decorrelates the input signal at every luminance level. This result is in line with the body of work that places efficient coding as a guiding neural principle.  
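The power-spectrum slope that the abstract ties to retinal filtering is straightforward to estimate numerically: fit log power against log frequency. A NumPy sketch, with a synthetic 1/f-amplitude signal standing in for a natural-image luminance profile:

```python
import numpy as np

def spectrum_slope(signal):
    """Least-squares slope of log power vs log frequency for a 1-D signal;
    natural images typically give a slope near -2 (power ~ 1/f^2)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size)
    keep = freqs > 0                        # drop the DC bin
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)
    return slope

# synthesise a signal with an exact 1/f amplitude spectrum (power ~ 1/f^2)
n = 1024
rng = np.random.default_rng(2)
freqs = np.fft.rfftfreq(n)
spec = np.zeros(freqs.size, dtype=complex)
spec[1:] = (1.0 / freqs[1:]) * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size - 1))
spec[-1] = abs(spec[-1])                    # Nyquist bin must be real for irfft
signal = np.fft.irfft(spec, n)
slope = spectrum_slope(signal)
```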
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ PPV2011 Serial 1720  

Author Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga
Title Saliency Estimation Using a Non-Parametric Low-Level Vision Model Type Conference Article
Year 2011 Publication IEEE conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 433-440  
Keywords Gaussian mixture model;ad hoc parameter selection;center-surround inhibition windows;center-surround mechanism;color appearance model;convolution;eye-fixation data;human vision;innate spatial pooling mechanism;inverse wavelet transform;low-level visual front-end;nonparametric low-level vision model;saliency estimation;saliency map;scale integration;scale-weighted center-surround response;scale-weighting function;visual task;Gaussian processes;biology;biology computing;colour vision;computer vision;visual perception;wavelet transforms  
Abstract Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.  
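The three steps the abstract lists (filtering, a center-surround mechanism, pooling into a saliency map) can be caricatured as follows. Box filters stand in for the paper's wavelet-based, psychophysically optimized weighting; this is purely illustrative and the names are ours:

```python
import numpy as np

def blur(img, width):
    """Separable box filter along both axes (zero padding at borders)."""
    k = np.ones(width) / width
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def saliency(img, scales=(3, 7, 15)):
    """Sum of |centre - surround| responses across scales; a bare-bones
    stand-in for the paper's scale-weighted centre-surround responses."""
    return sum(np.abs(blur(img, s) - blur(img, 3 * s)) for s in scales)

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0            # one conspicuous patch
smap = saliency(img)
y, x = np.unravel_index(np.argmax(smap), smap.shape)
```

Even this crude version peaks on the lone patch; the paper's contribution is precisely in replacing the ad-hoc pieces (filter bank, scale weighting, surround sizes) with principled, data-driven ones.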
Address Colorado Springs  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1063-6919 ISBN 978-1-4577-0394-2 Medium  
Area Expedition Conference CVPR  
Notes CIC Approved no  
Call Number Admin @ si @ MVO2011 Serial 1757  

Author Jordi Roca; A.Owen; G.Jordan; Y.Ling; C. Alejandro Parraga; A.Hurlbert
Title Inter-individual Variations in Color Naming and the Structure of 3D Color Space Type Abstract
Year 2011 Publication Journal of Vision Abbreviated Journal VSS  
Volume 12 Issue 2 Pages 166  
Keywords  
Abstract 36.307
Many everyday behavioural uses of color vision depend on color naming ability, which is neither measured nor predicted by most standardized tests of color vision, for either normal or anomalous color vision. Here we demonstrate a new method to quantify color naming ability by deriving a compact computational description of individual 3D color spaces. Methods: Individual observers underwent standardized color vision diagnostic tests (including anomaloscope testing) and a series of custom-made color naming tasks using 500 distinct color samples, either CRT stimuli (“light”-based) or Munsell chips (“surface”-based), with both forced- and free-choice color naming paradigms. For each subject, we defined his/her color solid as the set of 3D convex hulls computed for each basic color category from the relevant collection of categorised points in perceptually uniform CIELAB space. From the parameters of the convex hulls, we derived several indices to characterise the 3D structure of the color solid and its inter-individual variations. Using a reference group of 25 normal trichromats (NT), we defined the degree of normality for the shape, location and overlap of each color region, and the extent of “light”-“surface” agreement. Results: Certain features of color perception emerge from analysis of the average NT color solid, e.g.: (1) the white category is slightly shifted towards blue; and (2) the variability in category border location across NT subjects is asymmetric across color space, with least variability in the blue/green region. Comparisons between individual and average NT indices reveal specific naming “deficits”, e.g.: (1) Category volumes for white, green, brown and grey are expanded for anomalous trichromats and dichromats; and (2) the focal structure of color space is disrupted more in protanopia than other forms of anomalous color vision. 
The indices both capture the structure of subjective color spaces and allow us to quantify inter-individual differences in color naming ability.
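The per-category convex hulls described above reduce, in two dimensions, to a textbook construction. A self-contained stand-in (monotone chain plus shoelace area, in place of the paper's 3-D hulls in CIELAB space; names are ours):

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain: 2-D convex hull of a point set, a 2-D
    stand-in for the 3-D hulls fitted to each colour category."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    pts = sorted(map(tuple, points))
    def half_hull(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half_hull(pts), half_hull(pts[::-1])
    return np.array(lower[:-1] + upper[:-1])

def hull_area(hull):
    """Shoelace formula: the 2-D analogue of a category 'volume' index."""
    x, y = hull[:, 0], hull[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# unit square plus an interior point: the hull ignores the interior point
category_samples = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
hull = convex_hull(category_samples)
area = hull_area(hull)
```

Indices such as the category "volume" comparisons in the abstract are then just such hull measures computed per colour term and per observer.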
 
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1534-7362 ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ ROJ2011 Serial 1758  

Author C. Alejandro Parraga; Jordi Roca; Maria Vanrell
Title Do Basic Colors Influence Chromatic Adaptation? Type Journal Article
Year 2011 Publication Journal of Vision Abbreviated Journal VSS  
Volume 11 Issue 11 Pages 85  
Keywords  
Abstract Color constancy (the ability to perceive colors as relatively stable under different illuminants) is the result of several mechanisms spread across different neural levels and responding to several visual scene cues. It is usually measured by estimating the perceived color of a grey patch under an illuminant change. In this work, we ask whether chromatic adaptation (without a reference white or grey) could be driven by certain colors, specifically those corresponding to the universal color terms proposed by Berlin and Kay (1969). To this end we have developed a new psychophysical paradigm in which subjects adjust the color of a test patch (in CIELab space) to match their memory of the best example of a given color chosen from the universal terms list (grey, red, green, blue, yellow, purple, pink, orange and brown). The test patch is embedded inside a Mondrian image and presented on a calibrated CRT screen inside a dark cabin. All subjects were trained to “recall” their most exemplary colors reliably from memory and asked to always produce the same basic colors when required under several adaptation conditions. These include achromatic and colored Mondrian backgrounds, under a simulated D65 illuminant and several colored illuminants. A set of basic colors was measured for each subject under neutral conditions (achromatic background and D65 illuminant) and used as a “reference” for the rest of the experiment. The colors adjusted by the subjects in each adaptation condition were compared to the reference colors under the corresponding illuminant, and a “constancy index” was obtained for each of them. Our results show that for some colors the constancy index was better than for grey. The set of best-adapted colors in each condition was common to a majority of subjects and was dependent on the chromaticity of the illuminant and the chromatic background considered.  
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1534-7362 ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ PRV2011 Serial 1759  

Author Noha Elfiky; Fahad Shahbaz Khan; Joost Van de Weijer; Jordi Gonzalez
Title Discriminative Compact Pyramids for Object and Scene Recognition Type Journal Article
Year 2012 Publication Pattern Recognition Abbreviated Journal PR  
Volume 45 Issue 4 Pages 1627-1636  
Keywords  
Abstract Spatial pyramids have been successfully applied to incorporating spatial information into bag-of-words based image representation. However, a major drawback is that it leads to high dimensional image representations. In this paper, we present a novel framework for obtaining compact pyramid representation. First, we investigate the usage of the divisive information theoretic feature clustering (DITC) algorithm in creating a compact pyramid representation. In many cases this method allows us to reduce the size of a high dimensional pyramid representation up to an order of magnitude with little or no loss in accuracy. Furthermore, comparison to clustering based on agglomerative information bottleneck (AIB) shows that our method obtains superior results at significantly lower computational costs. Moreover, we investigate the optimal combination of multiple features in the context of our compact pyramid representation. Finally, experiments show that the method can obtain state-of-the-art results on several challenging data sets.  
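The spatial pyramid representation whose dimensionality this paper reduces is itself simple to construct. A NumPy sketch over a precomputed visual-word label map; the names and grid choices are ours:

```python
import numpy as np

def spatial_pyramid(word_map, vocab_size, levels=2):
    """Concatenate visual-word histograms over 1x1, 2x2 and 4x4 grids:
    the (high-dimensional) pyramid representation the paper compresses."""
    h, w = word_map.shape
    feats = []
    for level in range(levels + 1):
        cells = 2 ** level
        for i in range(cells):
            for j in range(cells):
                block = word_map[i * h // cells:(i + 1) * h // cells,
                                 j * w // cells:(j + 1) * w // cells]
                feats.append(np.bincount(block.ravel(), minlength=vocab_size))
    return np.concatenate(feats)

rng = np.random.default_rng(3)
word_map = rng.integers(0, 10, (16, 16))   # precomputed visual-word labels
feature = spatial_pyramid(word_map, vocab_size=10)
```

With 21 cells per image, the feature length is 21 times the vocabulary size, which is exactly why compact (e.g. DITC-clustered) vocabularies pay off.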
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 0031-3203 ISBN Medium  
Area Expedition Conference  
Notes ISE; CAT;CIC Approved no  
Call Number Admin @ si @ EKW2012 Serial 1807  

Author Shida Beigpour; Joost Van de Weijer
Title Object Recoloring Based on Intrinsic Image Estimation Type Conference Article
Year 2011 Publication 13th IEEE International Conference in Computer Vision Abbreviated Journal  
Volume Issue Pages 327 - 334  
Keywords  
Abstract Object recoloring is one of the most popular photo-editing tasks. The problem of object recoloring is highly under-constrained, and existing recoloring methods limit their application to objects lit by a white illuminant. Application of these methods to real-world scenes lit by colored illuminants, multiple illuminants, or interreflections, results in unrealistic recoloring of objects. In this paper, we focus on the recoloring of single-colored objects presegmented from their background. The single-color constraint allows us to fit a more comprehensive physical model to the object. We demonstrate that this permits us to perform realistic recoloring of objects lit by non-white illuminants, and multiple illuminants. Moreover, the model allows for more realistic handling of illuminant alteration of the scene. Recoloring results captured by uncalibrated cameras demonstrate that the proposed framework obtains realistic recoloring for complex natural images. Furthermore we use the model to transfer color between objects and show that the results are more realistic than existing color transfer methods.  
Address Barcelona  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1550-5499 ISBN 978-1-4577-1101-5 Medium  
Area Expedition Conference ICCV  
Notes CIC Approved no  
Call Number Admin @ si @ BeW2011 Serial 1781  

Author Susana Alvarez; Anna Salvatella; Maria Vanrell; Xavier Otazu
Title Low-dimensional and Comprehensive Color Texture Description Type Journal Article
Year 2012 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU  
Volume 116 Issue I Pages 54-67  
Keywords  
Abstract Image retrieval can be addressed by combining standard descriptors, such as those of MPEG-7, which are defined independently for each visual cue (e.g. SCD or CLD for color, HTD for texture, or EHD for edges). A common problem is combining similarities coming from descriptors that represent different concepts in different spaces. In this paper we propose a color texture description that bypasses this problem by its inherent definition. It is based on a low-dimensional space with 6 perceptual axes. Texture is described in a 3D space derived from a direct implementation of the original Julesz’s Texton theory, and color is described in a 3D perceptual space. This early fusion through the blob concept in these two bounded spaces avoids the problem and allows us to derive a sparse color-texture descriptor that achieves performance similar to MPEG-7 in image retrieval. Moreover, our descriptor presents comprehensive qualities, since it can also be applied to segmentation or browsing: (a) a dense image representation is defined from the descriptor, showing a reasonable performance in locating texture patterns included in complex images; and (b) a vocabulary of basic terms is derived to build an intermediate-level descriptor in natural language, improving browsing by bridging the semantic gap.
 
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1077-3142 ISBN Medium  
Area Expedition Conference  
Notes CAT;CIC Approved no  
Call Number Admin @ si @ ASV2012 Serial 1827  

Author Arjan Gijsenij; Theo Gevers; Joost Van de Weijer
Title Improving Color Constancy by Photometric Edge Weighting Type Journal Article
Year 2012 Publication IEEE Transaction on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
Volume 34 Issue 5 Pages 918-929  
Keywords  
Abstract Edge-based color constancy methods make use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as material, shadow and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation. Therefore, in this paper, an extensive analysis is provided of the influence of different edge types on the performance of edge-based color constancy methods. First, an edge-based taxonomy is presented, classifying edge types based on their photometric properties (e.g. material, shadow-geometry and highlights). Then, a performance evaluation of edge-based color constancy is provided using these different edge types. From this performance evaluation it is derived that specular and shadow edge types are more valuable than material edges for the estimation of the illuminant. To this end, the (iterative) weighted Grey-Edge algorithm is proposed, in which these edge types are more emphasized for the estimation of the illuminant. Images recorded under controlled circumstances demonstrate that the proposed iterative weighted Grey-Edge algorithm based on highlights reduces the median angular error by approximately 25%. In an uncontrolled environment, improvements in angular error up to 11% are obtained with respect to regular edge-based color constancy.  
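The unweighted Grey-Edge estimator that the paper extends can be sketched directly. A NumPy sketch; the `weights` hook is our illustration of where per-pixel edge-type weights would enter, not the paper's exact scheme:

```python
import numpy as np

def grey_edge(image, weights=None):
    """Grey-Edge: assume the average image derivative is achromatic, so the
    mean absolute x-derivative per channel, normalised, estimates the
    illuminant colour. `weights` (per-pixel) marks where edge-type weights
    would enter in a weighted variant (our illustration only)."""
    dx = np.abs(np.diff(image, axis=1))
    if weights is not None:
        dx = dx * weights[:, : dx.shape[1], None]
    e = dx.reshape(-1, 3).mean(axis=0)
    return e / np.linalg.norm(e)

# achromatic reflectance edges rendered under a warm illuminant
rng = np.random.default_rng(4)
reflectance = rng.uniform(0, 1, (32, 32, 1)) * np.ones((1, 1, 3))
light = np.array([1.0, 0.7, 0.5])
est = grey_edge(reflectance * light)
```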
Address Los Alamitos; CA; USA;  
Corporate Author Thesis  
Publisher Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 0162-8828 ISBN Medium  
Area Expedition Conference  
Notes CIC;ISE Approved no  
Call Number Admin @ si @ GGW2012 Serial 1850  

Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell; Antonio Lopez
Title Color Attributes for Object Detection Type Conference Article
Year 2012 Publication 25th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 3306-3313  
Keywords pedestrian detection  
Abstract State-of-the-art object detectors typically use shape information as a low-level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yield suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and, when combined with traditional shape features, provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods.
 
Address Providence; Rhode Island; USA;  
Corporate Author Thesis  
Publisher IEEE Xplore Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1063-6919 ISBN 978-1-4673-1226-4 Medium  
Area Expedition Conference CVPR  
Notes ADAS; CIC; Approved no  
Call Number Admin @ si @ KRW2012 Serial 1935  

Author Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell
Title Modulating Shape Features by Color Attention for Object Recognition Type Journal Article
Year 2012 Publication International Journal of Computer Vision Abbreviated Journal IJCV  
Volume 98 Issue 1 Pages 49-64  
Keywords  
Abstract Bag-of-words based image representation is a successful approach for object recognition. Generally, the subsequent stages of the process (feature detection, feature description, vocabulary construction and image representation) are performed independently of the intended object classes to be detected. In such a framework, it was found that the combination of different image cues, such as shape and color, often obtains below-expected results. This paper presents a novel method for recognizing object categories when using multiple cues, by separately processing the shape and color cues and combining them by modulating the shape features by category-specific color attention. Color is used to compute bottom-up and top-down attention maps. Subsequently, these color attention maps are used to modulate the weights of the shape features. In regions with higher attention, shape features are given more weight than in regions with low attention. We compare our approach with existing methods that combine color and shape cues on five data sets containing varied importance of both cues, namely, Soccer (color predominance), Flower (color and shape parity), PASCAL VOC 2007 and 2009 (shape predominance) and Caltech-101 (color co-interference). The experiments clearly demonstrate that in all five data sets our proposed framework significantly outperforms existing methods for combining color and shape information.  
Address  
Corporate Author Thesis  
Publisher Springer Netherlands Place of Publication Editor (up)  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 0920-5691 ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ KWV2012 Serial 1864  