Author C. Alejandro Parraga; Olivier Penacchio; Maria Vanrell
Title Retinal Filtering Matches Natural Image Statistics at Low Luminance Levels Type Journal Article
Year 2011 Publication Perception Abbreviated Journal PER
Volume 40 Issue Pages 96
Keywords
Abstract The assumption that the retina’s main objective is to provide a minimum-entropy representation to higher visual areas (i.e. the efficient coding principle) makes it possible to predict retinal filtering in space-time and colour (Atick, 1992 Network 3 213–251). This is achieved by considering the power spectra of natural images (which are proportional to 1/f²) and the suppression of retinal and image noise. However, most studies consider images within a limited range of lighting conditions (e.g. near noon), whereas the visual system’s spatial filtering depends on light intensity and the spatiochromatic properties of natural scenes depend on the time of day. Here, we explore whether the dependence of visual spatial filtering on luminance matches the changes in the power spectrum of natural scenes at different times of the day. Using human cone-activation based naturalistic stimuli (from the Barcelona Calibrated Images Database), we show that for a range of luminance levels, the shape of the retinal CSF reflects the slope of the power spectrum at low spatial frequencies. Accordingly, the retina implements the filtering which best decorrelates the input signal at every luminance level. This result is in line with the body of work that places efficient coding as a guiding neural principle.
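Below is a minimal sketch, assuming a greyscale luminance image loaded as a NumPy array, of how the low-frequency slope of a natural image's power spectrum could be estimated; the abstract relates this slope to the shape of the retinal CSF. The frequency cutoff and the test input are illustrative assumptions, not values from the paper.

    # Sketch: fit log(power) against log(radial frequency) over low frequencies.
    # Natural scenes typically give a slope near -2 (power ~ 1/f^2).
    import numpy as np

    def power_spectrum_slope(img, f_max=0.1):
        img = img - img.mean()
        power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
        fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
        fr = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)   # radial frequency (cycles/pixel)
        low = (fr > 0) & (fr < f_max)                        # keep low spatial frequencies only
        slope, _ = np.polyfit(np.log(fr[low]), np.log(power[low]), 1)
        return slope

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        print(power_spectrum_slope(rng.standard_normal((256, 256))))  # white noise: slope ~ 0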
Address
Corporate Author Thesis
Publisher Place of Publication (down) Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ PPV2011 Serial 1720
 

 
Author Olivier Penacchio
Title Mixed Hodge Structures and Equivariant Sheaves on the Projective Plane Type Journal Article
Year 2011 Publication Mathematische Nachrichten Abbreviated Journal MN
Volume 284 Issue 4 Pages 526-542
Keywords Mixed Hodge structures, equivariant sheaves, MSC (2010) Primary: 14C30, Secondary: 14F05, 14M25
Abstract We describe an equivalence of categories between the category of mixed Hodge structures and a category of equivariant vector bundles on a toric model of the complex projective plane which satisfy a semistability condition. We then apply this correspondence to define an invariant which generalizes the notion of R-split mixed Hodge structure, and give calculations for the first cohomology group of possibly non-smooth or non-complete curves of genus 0 and 1. Finally, we describe some extension groups of mixed Hodge structures in terms of equivariant extensions of coherent sheaves. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
Address
Corporate Author Thesis
Publisher WILEY-VCH Verlag Place of Publication (down) Editor R. Mennicken
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1522-2616 ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ Pen2011 Serial 1721
 

 
Author Maria Vanrell; Naila Murray; Robert Benavente; C. Alejandro Parraga; Xavier Otazu; Ramon Baldrich
Title Perception Based Representations for Computational Colour Type Conference Article
Year 2011 Publication 3rd International Workshop on Computational Color Imaging Abbreviated Journal
Volume 6626 Issue Pages 16-30
Keywords colour perception, induction, naming, psychophysical data, saliency, segmentation
Abstract The perceived colour of a stimulus depends on multiple factors stemming either from the context of the stimulus or from idiosyncrasies of the observer. The complexity involved in combining these multiple effects is the main reason for the gap between the classical calibrated colour spaces of colour science and the colour representations used in computer vision, where colour is just one more visual cue immersed in a digital image in which surfaces, shadows and illuminants interact seemingly out of control. With the aim of advancing a few steps towards bridging this gap, we present some results on computational representations of colour for computer vision. They have been developed by introducing perceptual considerations derived from the interaction of the colour of a point with its context. We show some techniques to represent the colour of a point influenced by assimilation and contrast effects due to the image surround, and we show some results on how colour saliency can be derived in real images. We outline a model for the automatic assignment of colour names to image points trained directly on psychophysical data. Finally, we show how colour segments can be perceptually grouped in the image by imposing shading coherence in the colour space.
Address Milan, Italy
Corporate Author Thesis
Publisher Springer-Verlag Place of Publication (down) Editor Raimondo Schettini, Shoji Tominaga, Alain Trémeau
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN 978-3-642-20403-6 Medium
Area Expedition Conference CCIW
Notes CIC Approved no
Call Number Admin @ si @ VMB2011 Serial 1733
 

 
Author Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga
Title Saliency Estimation Using a Non-Parametric Low-Level Vision Model Type Conference Article
Year 2011 Publication IEEE conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 433-440
Keywords Gaussian mixture model; ad hoc parameter selection; center-surround inhibition windows; center-surround mechanism; color appearance model; convolution; eye-fixation data; human vision; innate spatial pooling mechanism; inverse wavelet transform; low-level visual front-end; nonparametric low-level vision model; saliency estimation; saliency map; scale integration; scale-weighted center-surround response; scale-weighting function; visual task; Gaussian processes; biology; biology computing; colour vision; computer vision; visual perception; wavelet transforms
Abstract Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.
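As a reading aid, here is a toy sketch of the pipeline the abstract outlines: decompose a channel with a wavelet transform, weight the detail coefficients by a simple center-surround ratio, and rebuild a saliency map with the inverse transform. The wavelet, window size and weighting function are assumptions; the model's actual ECSF weighting, colour channels and GMM-trained window sizes are not reproduced here.

    # Toy saliency sketch (not the model's exact ECSF weighting).
    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def toy_saliency(channel, wavelet="db4", levels=4, surround=9):
        coeffs = pywt.wavedec2(channel, wavelet, level=levels)
        weighted = [np.zeros_like(coeffs[0])]                # discard the approximation band
        for detail in coeffs[1:]:                            # coarse-to-fine detail bands
            band = []
            for c in detail:
                energy = c ** 2
                surround_energy = uniform_filter(energy, size=surround) + 1e-8
                band.append(c * energy / surround_energy)    # center-surround weighting
            weighted.append(tuple(band))
        return np.abs(pywt.waverec2(weighted, wavelet))      # inverse wavelet transform

    if __name__ == "__main__":
        img = np.zeros((128, 128)); img[60:68, 60:68] = 1.0  # an isolated, salient patch
        print(toy_saliency(img).max())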
Address Colorado Springs
Corporate Author Thesis
Publisher Place of Publication (down) Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN 978-1-4577-0394-2 Medium
Area Expedition Conference CVPR
Notes CIC Approved no
Call Number Admin @ si @ MVO2011 Serial 1757
 

 
Author Jordi Roca; A.Owen; G.Jordan; Y.Ling; C. Alejandro Parraga; A.Hurlbert
Title Inter-individual Variations in Color Naming and the Structure of 3D Color Space Type Abstract
Year 2011 Publication Journal of Vision Abbreviated Journal VSS
Volume 12 Issue 2 Pages 166
Keywords
Abstract 36.307
Many everyday behavioural uses of color vision depend on color naming ability, which is neither measured nor predicted by most standardized tests of color vision, for either normal or anomalous color vision. Here we demonstrate a new method to quantify color naming ability by deriving a compact computational description of individual 3D color spaces. Methods: Individual observers underwent standardized color vision diagnostic tests (including anomaloscope testing) and a series of custom-made color naming tasks using 500 distinct color samples, either CRT stimuli (“light”-based) or Munsell chips (“surface”-based), with both forced- and free-choice color naming paradigms. For each subject, we defined his/her color solid as the set of 3D convex hulls computed for each basic color category from the relevant collection of categorised points in perceptually uniform CIELAB space. From the parameters of the convex hulls, we derived several indices to characterise the 3D structure of the color solid and its inter-individual variations. Using a reference group of 25 normal trichromats (NT), we defined the degree of normality for the shape, location and overlap of each color region, and the extent of “light”-“surface” agreement. Results: Certain features of color perception emerge from analysis of the average NT color solid, e.g.: (1) the white category is slightly shifted towards blue; and (2) the variability in category border location across NT subjects is asymmetric across color space, with least variability in the blue/green region. Comparisons between individual and average NT indices reveal specific naming “deficits”, e.g.: (1) Category volumes for white, green, brown and grey are expanded for anomalous trichromats and dichromats; and (2) the focal structure of color space is disrupted more in protanopia than other forms of anomalous color vision. The indices both capture the structure of subjective color spaces and allow us to quantify inter-individual differences in color naming ability.
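A minimal sketch of the core computation described above: grouping an observer's categorised CIELAB samples into 3D convex hulls and reading off per-category volumes and centroids. The toy data and category names are assumptions; the indices the study derives from the hull parameters are not reproduced.

    # Sketch: colour categories as 3D convex hulls in CIELAB.
    import numpy as np
    from scipy.spatial import ConvexHull

    def category_hulls(lab_points, labels):
        """Return {category: (hull volume, centroid of hull vertices)}."""
        hulls = {}
        labels = np.asarray(labels)
        for cat in set(labels):
            pts = lab_points[labels == cat]
            if len(pts) < 4:                     # need at least 4 non-coplanar points in 3D
                continue
            hull = ConvexHull(pts)
            hulls[cat] = (hull.volume, pts[hull.vertices].mean(axis=0))
        return hulls

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        lab = np.vstack([rng.normal([60, 20, 30], 5, (50, 3)),    # toy "orange"-ish samples
                         rng.normal([50, -40, 40], 5, (50, 3))])  # toy "green"-ish samples
        labels = ["orange"] * 50 + ["green"] * 50
        for name, (vol, cen) in category_hulls(lab, labels).items():
            print(name, round(vol, 1), np.round(cen, 1))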
Address
Corporate Author Thesis
Publisher Place of Publication (down) Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1534-7362 ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ ROJ2011 Serial 1758
 

 
Author C. Alejandro Parraga; Jordi Roca; Maria Vanrell
Title Do Basic Colors Influence Chromatic Adaptation? Type Journal Article
Year 2011 Publication Journal of Vision Abbreviated Journal VSS
Volume 11 Issue 11 Pages 85
Keywords
Abstract Color constancy (the ability to perceive colors as relatively stable under different illuminants) is the result of several mechanisms spread across different neural levels and responding to several visual scene cues. It is usually measured by estimating the perceived color of a grey patch under an illuminant change. In this work, we ask whether chromatic adaptation (without a reference white or grey) could be driven by certain colors, specifically those corresponding to the universal color terms proposed by Berlin and Kay (1969). To this end we have developed a new psychophysical paradigm in which subjects adjust the color of a test patch (in CIELab space) to match their memory of the best example of a given color chosen from the universal terms list (grey, red, green, blue, yellow, purple, pink, orange and brown). The test patch is embedded inside a Mondrian image and presented on a calibrated CRT screen inside a dark cabin. All subjects were trained to “recall” their most exemplary colors reliably from memory and were asked to always produce the same basic colors when required under several adaptation conditions. These included achromatic and colored Mondrian backgrounds, under a simulated D65 illuminant and several colored illuminants. A set of basic colors was measured for each subject under neutral conditions (achromatic background and D65 illuminant) and used as a “reference” for the rest of the experiment. The colors adjusted by the subjects in each adaptation condition were compared to the reference colors under the corresponding illuminant and a “constancy index” was obtained for each of them. Our results show that for some colors the constancy index was better than for grey. The set of best-adapted colors in each condition was common to a majority of subjects and depended on the chromaticity of the illuminant and the chromatic background considered.
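The abstract does not spell out how the "constancy index" is computed, so the sketch below uses a common Brunswik-ratio-style definition as an assumption: 1 means the adjusted color coincides with the reference color rendered under the test illuminant, 0 means no adaptation at all.

    # Sketch of an assumed constancy index for a memory-match experiment.
    import numpy as np

    def constancy_index(match_lab, reference_lab, reference_under_illuminant_lab):
        """1 = perfect constancy, 0 = no adaptation (Brunswik-ratio style)."""
        shift = np.linalg.norm(reference_under_illuminant_lab - reference_lab)
        error = np.linalg.norm(match_lab - reference_under_illuminant_lab)
        return 1.0 - error / shift if shift > 0 else np.nan

    if __name__ == "__main__":
        ref = np.array([55.0, 10.0, 25.0])       # colour set under neutral D65 conditions
        ref_ill = np.array([55.0, 22.0, 18.0])   # same colour rendered under the test illuminant
        match = np.array([55.0, 20.0, 19.0])     # observer's adjustment under adaptation
        print(round(constancy_index(match, ref, ref_ill), 2))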
Address
Corporate Author Thesis
Publisher Place of Publication (down) Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1534-7362 ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ PRV2011 Serial 1759
 

 
Author Noha Elfiky; Fahad Shahbaz Khan; Joost Van de Weijer; Jordi Gonzalez
Title Discriminative Compact Pyramids for Object and Scene Recognition Type Journal Article
Year 2012 Publication Pattern Recognition Abbreviated Journal PR
Volume 45 Issue 4 Pages 1627-1636
Keywords
Abstract Spatial pyramids have been successfully applied to incorporate spatial information into bag-of-words based image representations. However, a major drawback is that they lead to high-dimensional image representations. In this paper, we present a novel framework for obtaining compact pyramid representations. First, we investigate the use of the divisive information-theoretic feature clustering (DITC) algorithm in creating a compact pyramid representation. In many cases this method allows us to reduce the size of a high-dimensional pyramid representation by up to an order of magnitude with little or no loss in accuracy. Furthermore, comparison to clustering based on the agglomerative information bottleneck (AIB) shows that our method obtains superior results at significantly lower computational cost. Moreover, we investigate the optimal combination of multiple features in the context of our compact pyramid representation. Finally, experiments show that the method can obtain state-of-the-art results on several challenging data sets.
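A minimal sketch of the two ingredients the abstract combines: building a spatial-pyramid bag-of-words histogram, then shrinking it by merging visual words into a smaller set of clusters. The grid levels and the word-to-cluster map are placeholders; the paper obtains that map with DITC, which is not implemented here.

    # Sketch: spatial pyramid histogram + compression by merging word bins.
    import numpy as np

    def spatial_pyramid(word_ids, xs, ys, vocab_size, grids=(1, 2, 4)):
        """Concatenate per-cell word histograms for each pyramid grid."""
        hists = []
        for g in grids:                                      # g x g grid of cells
            cx = np.minimum((xs * g).astype(int), g - 1)
            cy = np.minimum((ys * g).astype(int), g - 1)
            for i in range(g):
                for j in range(g):
                    in_cell = word_ids[(cx == i) & (cy == j)]
                    hists.append(np.bincount(in_cell, minlength=vocab_size))
        return np.concatenate(hists)

    def compress(pyramid, vocab_size, word_to_cluster, n_clusters):
        """Sum the bins of words that share a cluster, cell by cell."""
        cells = pyramid.reshape(-1, vocab_size)
        out = np.zeros((cells.shape[0], n_clusters))
        for w, c in enumerate(word_to_cluster):
            out[:, c] += cells[:, w]
        return out.ravel()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        words = rng.integers(0, 100, 500)                    # toy visual-word assignments
        xs, ys = rng.random(500), rng.random(500)            # normalised keypoint positions
        pyr = spatial_pyramid(words, xs, ys, vocab_size=100)
        small = compress(pyr, 100, rng.integers(0, 10, 100), n_clusters=10)
        print(pyr.size, "->", small.size)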
Address
Corporate Author Thesis
Publisher Place of Publication (down) Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0031-3203 ISBN Medium
Area Expedition Conference
Notes ISE; CAT;CIC Approved no
Call Number Admin @ si @ EKW2012 Serial 1807
 

 
Author Joost Van de Weijer; Shida Beigpour
Title The Dichromatic Reflection Model: Future Research Directions and Applications Type Conference Article
Year 2011 Publication International Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications Abbreviated Journal
Volume Issue Pages
Keywords dblp
Abstract The dichromatic reflection model (DRM) predicts that color distributions form a parallelogram in color space, whose shape is defined by the body reflectance and the illuminant color. In this paper we review the assumptions which led to the DRM and briefly recall two of its main application domains: color image segmentation and photometric invariant feature computation. After introducing the model we discuss several limitations of the theory, especially those which arise when working on real-world, uncalibrated images. In addition, we summarize recent extensions of the model which allow it to handle more complicated light interactions. Finally, we suggest some future research directions which would further extend its applicability.
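For concreteness, here is a small sketch, under the usual neutral-interface assumption, of how DRM pixel colors can be generated and why they span a parallelogram: every pixel is a non-negative combination of the diffuse (body reflectance times illuminant) color and the illuminant color, so the centred pixel cloud has rank two. The reflectance and illuminant values are arbitrary examples.

    # Sketch: pixels of one object under the dichromatic reflection model.
    import numpy as np

    def drm_pixels(body_rgb, illuminant_rgb, n=1000, seed=0):
        rng = np.random.default_rng(seed)
        m_b = rng.uniform(0.2, 1.0, n)               # body (diffuse) shading factor
        m_s = rng.uniform(0.0, 0.6, n)               # specular (interface) factor
        diffuse = body_rgb * illuminant_rgb          # per-channel product (neutral interface)
        return m_b[:, None] * diffuse + m_s[:, None] * illuminant_rgb

    if __name__ == "__main__":
        pixels = drm_pixels(body_rgb=np.array([0.7, 0.2, 0.1]),
                            illuminant_rgb=np.array([1.0, 0.9, 0.8]))
        # The colour cloud spans a plane (the DRM parallelogram): rank 2.
        print(np.linalg.matrix_rank(pixels - pixels.mean(0), tol=1e-6))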
Address Algarve, Portugal
Corporate Author Thesis
Publisher SciTePress Place of Publication (down) Editor Mestetskiy, Leonid and Braz, José
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-989-8425-47-8 Medium
Area Expedition Conference VISIGRAPP
Notes CIC Approved no
Call Number Admin @ si @ WeB2011 Serial 1778
 

 
Author Shida Beigpour; Joost Van de Weijer
Title Object Recoloring Based on Intrinsic Image Estimation Type Conference Article
Year 2011 Publication 13th IEEE International Conference in Computer Vision Abbreviated Journal
Volume Issue Pages 327 - 334
Keywords
Abstract Object recoloring is one of the most popular photo-editing tasks. The problem of object recoloring is highly under-constrained, and existing recoloring methods limit their application to objects lit by a white illuminant. Application of these methods to real-world scenes lit by colored illuminants, multiple illuminants, or interreflections, results in unrealistic recoloring of objects. In this paper, we focus on the recoloring of single-colored objects presegmented from their background. The single-color constraint allows us to fit a more comprehensive physical model to the object. We demonstrate that this permits us to perform realistic recoloring of objects lit by non-white illuminants, and multiple illuminants. Moreover, the model allows for more realistic handling of illuminant alteration of the scene. Recoloring results captured by uncalibrated cameras demonstrate that the proposed framework obtains realistic recoloring for complex natural images. Furthermore we use the model to transfer color between objects and show that the results are more realistic than existing color transfer methods.
Address Barcelona
Corporate Author Thesis
Publisher Place of Publication (down) Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1550-5499 ISBN 978-1-4577-1101-5 Medium
Area Expedition Conference ICCV
Notes CIC Approved no
Call Number Admin @ si @ BeW2011 Serial 1781
 

 
Author Javier Vazquez
Title Colour Constancy in Natural Images Through Colour Naming and Sensor Sharpening Type Book Whole
Year 2011 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Colour is derived from three physical properties: incident light, object reflectance and sensor sensitivities. Incident light varies under natural conditions; hence, recovering the scene illuminant is an important issue in computational colour. One way to deal with this problem under calibrated conditions is to follow three steps: 1) building a narrow-band sensor basis to accomplish the diagonal model, 2) building a feasible set of illuminants, and 3) defining criteria to select the best illuminant. In this work we focus on colour constancy for natural images by introducing perceptual criteria in the first and third stages.
To deal with the illuminant selection step, we hypothesise that basic colour categories can be used as anchor categories to recover the best illuminant. These colour names are related to the way the human visual system has evolved to encode relevant natural colour statistics. Therefore the recovered image provides the best representation of the scene labelled with the basic colour terms. We demonstrate with several experiments that this selection criterion achieves current state-of-the-art results in computational colour constancy. In addition, we show psychophysically that the angular error usually used in colour constancy does not correlate with human preferences, and we propose a new perceptual colour constancy evaluation.
The implementation of this selection criterion relies strongly on the use of a diagonal model for illuminant change. Consequently, the second contribution focuses on building an appropriate narrow-band sensor basis to represent natural images. We propose to use the spectral sharpening technique to compute a unique narrow-band basis optimised to represent a large set of natural reflectances under natural illuminants, given in the basis of the human cones. The proposed sensors allow predicting unique hues and the World Colour Survey data independently of the illuminant by using a compact singularity function. Additionally, we studied different families of sharp sensors to minimise different perceptual measures; this study led us to extend the spherical sampling procedure from 3D to 6D.
Several research lines remain open. One natural extension would be to measure the effects of using the computed sharp sensors on the category hypothesis, while another might be to add spatial contextual information to improve the category hypothesis. Finally, much work still needs to be done to explore how individual sensors can be adjusted to the colours in a scene.
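A minimal sketch of the diagonal (von Kries-style) illuminant change on which the selection criterion relies, with an optional change to a sharpened sensor basis first. The sharpening matrix and illuminant values are placeholders, not the ones computed in the thesis.

    # Sketch: diagonal-model illuminant correction, optionally in a sharpened basis.
    import numpy as np

    def sharpen(image, T):
        """Move an H x W x 3 image to a sharpened sensor basis: rho' = T @ rho."""
        return image @ T.T

    def diagonal_correct(image, illuminant_est, illuminant_canonical=(1.0, 1.0, 1.0)):
        """Per-channel von Kries scaling: rho_c -> (e_canonical_c / e_estimated_c) * rho_c."""
        scale = np.asarray(illuminant_canonical) / np.asarray(illuminant_est)
        return image * scale                      # broadcasts over H x W x 3

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.random((4, 4, 3))               # toy sensor responses
        est = np.array([0.9, 1.0, 0.6])           # assumed (bluish-deficient) illuminant estimate
        print(diagonal_correct(img, est).shape)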
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication (down) Editor Maria Vanrell;Graham D. Finlayson
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ Vaz2011a Serial 1785
 

 
Author Jaime Moreno
Title Perceptual Criteria on Image Compression Type Book Whole
Year 2011 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Nowadays, digital images are used in many areas of everyday life, but they tend to be big. This increasing amount of information leads us to the problem of image data storage. For example, it is common to represent a color pixel as a 24-bit number, where the red, green and blue channels employ 8 bits each. In consequence, this kind of color pixel can specify one of 2^24 ≈ 16.78 million colors, and an image at a resolution of 512 × 512 that allocates 24 bits per pixel occupies 786,432 bytes. That is why image compression is important. An important feature of image compression is that it can be lossy or lossless: a compressed image is acceptable provided the losses of image information are not perceived by the eye, and it is possible to assume that a portion of this information is redundant. Lossless image compression is defined as decoding exactly the same image that was encoded. Lossy image compression needs to identify two features inside the image: the redundancy and the irrelevancy of the information. Thus, lossy compression modifies the image data in such a way that, when they are encoded and decoded, the recovered image is similar enough to the original one. How similar the recovered image is to the original is defined prior to the compression process, and it depends on the implementation to be performed. In lossy compression, current image compression schemes remove information considered irrelevant by using mathematical criteria. One of the problems of these schemes is that although the numerical quality of the compressed image is low, it can show a high visual image quality, i.e. it does not show many visible artifacts. This is because the mathematical criteria used to remove information do not take into account whether the removed information would be perceived by the Human Visual System. Therefore, the aim of an image compression scheme designed to obtain images that do not show artifacts, even though their numerical quality can be low, is to eliminate the information that is not visible to the Human Visual System. Hence, this Ph.D. thesis proposes to exploit the visual redundancy existing in an image by reducing those features that are imperceptible to the Human Visual System. First, we define an image quality assessment which is highly correlated with psychophysical experiments performed by human observers. The proposed CwPSNR metric weights the well-known PSNR by using a particular perceptual low-level model of the Human Visual System, the Chromatic Induction Wavelet Model (CIWaM). Second, we propose an image compression algorithm (called Hi-SET) which exploits the high correlation and self-similarity of pixels in a given area or neighborhood by means of a fractal function. Hi-SET possesses the main features of modern image compressors, that is, it is an embedded coder, which allows progressive transmission. Third, we propose a perceptual quantizer (½SQ), which is a modification of the uniform scalar quantizer. The ½SQ is applied to a pixel set in a certain wavelet sub-band, that is, a global quantization. Unlike this, the proposed modification performs a local, pixel-by-pixel forward and inverse quantization, introducing into this process a perceptual distortion which depends on the surrounding spatial information of the pixel. Combining the ½SQ method with the Hi-SET image compressor, we define a perceptual image compressor, called ©SET.
Finally, a coding method for Region of Interest areas is presented, ½GBbBShift, which perceptually weights pixels inside these areas and maintains only the most important perceivable features in the rest of the image. Results presented in this thesis show that CwPSNR is the best-ranked image quality method when applied to the most common image compression distortions, such as JPEG and JPEG2000. CwPSNR shows the best correlation with the judgement of human observers, based on the results of psychophysical experiments obtained for relevant image quality databases such as TID2008, LIVE, CSIQ and IVC. Furthermore, the Hi-SET coder obtains better results, both in compression ratio and in perceptual image quality, than the JPEG2000 coder and other coders that use a Hilbert fractal for image compression. Hence, when the proposed perceptual quantization is introduced into the Hi-SET coder, our compressor improves its numerical and perceptual efficiency. When the ½GBbBShift method applied to Hi-SET is compared against the MaxShift method applied to the JPEG2000 standard and to Hi-SET, the images coded by our ROI method obtain the best results when the overall image quality is estimated. Both the proposed perceptual quantization and the ½GBbBShift method are general algorithms that can be applied to other wavelet-based image compression algorithms such as JPEG2000, SPIHT or SPECK.
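Two pieces of the abstract lend themselves to a short worked example: the storage arithmetic (512 × 512 pixels at 24 bits per pixel is 786,432 bytes) and the plain PSNR that CwPSNR reweights. The sketch below covers only these; the perceptual CIWaM-based weighting itself is not reproduced.

    # Sketch: raw storage size and plain PSNR (the metric CwPSNR builds on).
    import numpy as np

    def raw_size_bytes(width, height, bits_per_pixel=24):
        return width * height * bits_per_pixel // 8

    def psnr(original, decoded, peak=255.0):
        mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    if __name__ == "__main__":
        print(raw_size_bytes(512, 512))          # 786432 bytes, as in the abstract
        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
        print(round(psnr(img, noisy), 1))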
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication (down) Editor Xavier Otazu
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-938351-3-2 Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ Mor2011 Serial 1786
 

 
Author Susana Alvarez; Anna Salvatella; Maria Vanrell; Xavier Otazu
Title Low-dimensional and Comprehensive Color Texture Description Type Journal Article
Year 2012 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU
Volume 116 Issue I Pages 54-67
Keywords
Abstract Image retrieval can be addressed by combining standard descriptors, such as those of MPEG-7, which are defined independently for each visual cue (e.g. SCD or CLD for color, HTD for texture or EHD for edges). A common problem is combining similarities coming from descriptors that represent different concepts in different spaces. In this paper we propose a color texture description that bypasses this problem by its inherent definition. It is based on a low-dimensional space with six perceptual axes. Texture is described in a 3D space derived from a direct implementation of the original Julesz's Texton theory, and color is described in a 3D perceptual space. This early fusion, through the blob concept in these two bounded spaces, avoids the problem and allows us to derive a sparse color-texture descriptor that achieves performance similar to MPEG-7 in image retrieval. Moreover, our descriptor has comprehensive qualities, since it can also be applied to segmentation or browsing: (a) a dense image representation is defined from the descriptor, showing reasonable performance in locating texture patterns included in complex images; and (b) a vocabulary of basic terms is derived to build an intermediate-level descriptor in natural language, improving browsing by bridging the semantic gap.
Address
Corporate Author Thesis
Publisher Place of Publication (down) Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1077-3142 ISBN Medium
Area Expedition Conference
Notes CAT;CIC Approved no
Call Number Admin @ si @ ASV2012 Serial 1827
 

 
Author Eduard Vazquez
Title Unsupervised image segmentation based on material reflectance description and saliency Type Book Whole
Year 2011 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Image segmentation aims to partition an image into a set of non-overlapping regions, called segments. Despite the simplicity of the definition, image segmentation arises as a very complex problem in all its stages. The definition of a segment is still unclear. When a human is asked to perform a segmentation, this person segments at different levels of abstraction: some segments might be a single, well-defined texture, whereas others correspond to an object in the scene which might include multiple textures and colors. For this reason, segmentation is divided into bottom-up segmentation and top-down segmentation. Bottom-up segmentation is problem independent, that is, focused on general properties of the images such as textures or illumination. Top-down segmentation is a problem-dependent approach which looks for specific entities in the scene, such as known objects. This work is focused on bottom-up segmentation. Beginning from an analysis of the shortcomings of current methods, we propose an approach called RAD. Our approach overcomes the main shortcomings of those methods which use the physics of light to perform the segmentation. RAD is a topological approach which describes a single-material reflectance. Afterwards, we cope with one of the main problems in image segmentation: unsupervised adaptability to image content. To yield an unsupervised method, we use a model of saliency also presented in this thesis. It computes the saliency of the chromatic transitions of an image by means of a statistical analysis of the image derivatives. This saliency method is used to build our final segmentation approach, spRAD, an unsupervised segmentation method. Our saliency approach has been validated both with a psychophysical experiment and computationally, outperforming a state-of-the-art saliency method. spRAD also outperforms state-of-the-art segmentation techniques, as results obtained with a widely used segmentation dataset show.
Address
Corporate Author Thesis Ph.D. thesis
Publisher Place of Publication (down) Editor Ramon Baldrich
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ Vaz2011b Serial 1835
 

 
Author Fahad Shahbaz Khan
Title Coloring bag-of-words based image representations Type Book Whole
Year 2011 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Put succinctly, the bag-of-words based image representation is the most successful approach for object and scene recognition. Within the bag-of-words framework, the optimal fusion of multiple cues, such as shape, texture and color, still remains an active research domain. There exist two main approaches to combining color and shape information within the bag-of-words framework. The first approach, called early fusion, fuses color and shape at the feature level, as a result of which a joint color-shape vocabulary is produced. The second approach, called late fusion, concatenates the histogram representations of color and shape, obtained independently. In the first part of this thesis, we analyze the theoretical implications of both early and late feature fusion. We demonstrate that both approaches are suboptimal for a subset of object categories. Consequently, we propose a novel method for recognizing object categories when using multiple cues that separately processes the shape and color cues and combines them by modulating the shape features by category-specific color attention. Color is used to compute bottom-up and top-down attention maps. Subsequently, the color attention maps are used to modulate the weights of the shape features: shape features are given more weight in regions with higher attention and vice versa. The approach is tested on several benchmark object recognition data sets and the results clearly demonstrate the effectiveness of our proposed method. In the second part of the thesis, we investigate the problem of obtaining compact spatial pyramid representations for object and scene recognition. Spatial pyramids have been successfully applied to incorporate spatial information into bag-of-words based image representations. However, a major drawback of spatial pyramids is that they lead to high-dimensional image representations. We present a novel framework for obtaining compact pyramid representations. The approach reduces the size of a high-dimensional pyramid representation by up to an order of magnitude without any significant reduction in accuracy. Moreover, we also investigate the optimal combination of multiple features, such as color and shape, within the context of our compact pyramid representation. Finally, we describe a novel technique to build discriminative visual words from multiple cues learned independently from training images. To this end, we use an information-theoretic vocabulary compression technique to find discriminative combinations of visual cues; the resulting visual vocabulary is compact, has the cue-binding property, and supports individual weighting of cues in the final image representation. The approach is tested on standard object recognition data sets and the results obtained clearly demonstrate the effectiveness of our approach.
Address
Corporate Author Thesis Ph.D. thesis
Publisher Place of Publication (down) Editor Joost Van de Weijer;Maria Vanrell
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ Kha2011 Serial 1838
 

 
Author Arjan Gijsenij; Theo Gevers; Joost Van de Weijer
Title Improving Color Constancy by Photometric Edge Weighting Type Journal Article
Year 2012 Publication IEEE Transaction on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 34 Issue 5 Pages 918-929
Keywords
Abstract Edge-based color constancy methods make use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as material, shadow and highlight edges, and these different edge types may have a distinctive influence on the performance of the illuminant estimation. Therefore, in this paper, an extensive analysis is provided of the influence of different edge types on the performance of edge-based color constancy methods. First, an edge-based taxonomy is presented, classifying edge types based on their photometric properties (e.g. material, shadow-geometry and highlights). Then, a performance evaluation of edge-based color constancy is provided using these different edge types. From this performance evaluation it is derived that specular and shadow edge types are more valuable than material edges for the estimation of the illuminant. To this end, the (iterative) weighted Grey-Edge algorithm is proposed, in which these edge types are given more emphasis in the estimation of the illuminant. Images recorded under controlled circumstances demonstrate that the proposed iterative weighted Grey-Edge algorithm based on highlights reduces the median angular error by approximately 25%. In an uncontrolled environment, improvements in angular error of up to 11% are obtained with respect to regular edge-based color constancy.
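The Grey-Edge family the paper builds on is simple enough to sketch: the illuminant estimate is a Minkowski norm of the (Gaussian-smoothed) image derivatives taken per channel, and the weighted variant multiplies the derivative magnitudes by a per-pixel weight map that emphasises, for instance, specular edges. The Minkowski order, smoothing scale and test image below are typical placeholder choices, not the paper's settings.

    # Sketch: (weighted) Grey-Edge illuminant estimation.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def weighted_grey_edge(image, p=6, sigma=2.0, weights=None):
        """image: H x W x 3 float array. Returns a unit-norm illuminant estimate."""
        est = np.zeros(3)
        for c in range(3):
            ch = image[..., c]
            dx = gaussian_filter1d(ch, sigma, axis=1, order=1)   # smoothed x-derivative
            dy = gaussian_filter1d(ch, sigma, axis=0, order=1)   # smoothed y-derivative
            grad = np.sqrt(dx ** 2 + dy ** 2)
            if weights is not None:
                grad = grad * weights            # emphasise e.g. specular/shadow edges
            est[c] = np.mean(grad ** p) ** (1.0 / p)             # Minkowski norm per channel
        return est / np.linalg.norm(est)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.random((64, 64, 3)) * np.array([1.0, 0.8, 0.6])  # synthetic reddish cast
        print(np.round(weighted_grey_edge(img), 3))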
Address Los Alamitos; CA; USA;
Corporate Author Thesis
Publisher Place of Publication (down) Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes CIC;ISE Approved no
Call Number Admin @ si @ GGW2012 Serial 1850
 

 
Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell; Antonio Lopez
Title Color Attributes for Object Detection Type Conference Article
Year 2012 Publication 25th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 3306-3313
Keywords pedestrian detection
Abstract State-of-the-art object detectors typically use shape information as a low-level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yield suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and, when combined with traditional shape features, provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and the results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods.
Address Providence; Rhode Island; USA;
Corporate Author Thesis
Publisher IEEE Xplore Place of Publication (down) Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN 978-1-4673-1226-4 Medium
Area Expedition Conference CVPR
Notes ADAS; CIC; Approved no
Call Number Admin @ si @ KRW2012 Serial 1935
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell
Title Modulating Shape Features by Color Attention for Object Recognition Type Journal Article
Year 2012 Publication International Journal of Computer Vision Abbreviated Journal IJCV
Volume 98 Issue 1 Pages 49-64
Keywords
Abstract Bag-of-words based image representation is a successful approach for object recognition. Generally, the subsequent stages of the process: feature detection, feature description, vocabulary construction and image representation, are performed independently of the intended object classes to be detected. In such a framework, it was found that the combination of different image cues, such as shape and color, often obtains below-expected results. This paper presents a novel method for recognizing object categories when using multiple cues by separately processing the shape and color cues and combining them by modulating the shape features by category-specific color attention. Color is used to compute bottom-up and top-down attention maps. Subsequently, these color attention maps are used to modulate the weights of the shape features. In regions with higher attention shape features are given more weight than in regions with low attention. We compare our approach with existing methods that combine color and shape cues on five data sets containing varied importance of both cues, namely, Soccer (color predominance), Flower (color and shape parity), PASCAL VOC 2007 and 2009 (shape predominance) and Caltech-101 (color co-interference). The experiments clearly demonstrate that in all five data sets our proposed framework significantly outperforms existing methods for combining color and shape information.
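A minimal sketch of the modulation step described above: each local feature's contribution to the shape histogram is scaled by the color attention at that feature, here approximated by a class-conditional probability of the feature's color word. The vocabulary sizes and the attention model are assumptions; the combination of bottom-up and top-down attention maps used in the paper is not reproduced.

    # Sketch: color attention modulating a shape bag-of-words histogram.
    import numpy as np

    def attention_modulated_histogram(shape_words, color_words, p_color_given_class,
                                      shape_vocab_size):
        """Each local shape word votes with a weight given by the color attention
        at its location, yielding one class-specific shape histogram."""
        attention = p_color_given_class[color_words]      # attention weight per local feature
        hist = np.zeros(shape_vocab_size)
        np.add.at(hist, shape_words, attention)           # weighted vote per shape word
        total = hist.sum()
        return hist / total if total > 0 else hist

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        shape_words = rng.integers(0, 200, 1000)          # toy shape-word assignments
        color_words = rng.integers(0, 11, 1000)           # e.g. 11 color-name words
        p_color = rng.dirichlet(np.ones(11))              # assumed class color model
        print(attention_modulated_histogram(shape_words, color_words, p_color, 200).sum())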
Address
Corporate Author Thesis
Publisher Springer Netherlands Place of Publication (down) Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0920-5691 ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ KWV2012 Serial 1864
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell
Title Portmanteau Vocabularies for Multi-Cue Image Representation Type Conference Article
Year 2011 Publication 25th Annual Conference on Neural Information Processing Systems Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent, joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau words is compact, has the cue-binding property, and supports individual weighting of cues in the final image representation. State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation.
Address
Corporate Author Thesis
Publisher Place of Publication (down) Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference NIPS
Notes CIC Approved no
Call Number Admin @ si @ KWB2011 Serial 1865