Author Hassan Ahmed Sial; S. Sancho; Ramon Baldrich; Robert Benavente; Maria Vanrell
Title Color-based data augmentation for Reflectance Estimation Type Conference Article
Year 2018 Publication 26th Color Imaging Conference Abbreviated Journal  
Volume Issue Pages 284-289  
Keywords  
Abstract Deep convolutional architectures have proven to be successful frameworks for solving generic computer vision problems. The estimation of intrinsic reflectance from a single image is, however, not yet a solved problem. Encoder-decoder architectures are a perfect approach for pixel-wise reflectance estimation, although they usually suffer from the lack of large datasets. The lack of data can be partially solved with data augmentation, but the usual techniques focus on geometric changes, which do not help for reflectance estimation. In this paper we propose a color-based data augmentation technique that extends the training data by increasing the variability of chromaticity. Rotations in the red-green/blue-yellow plane of an opponent color space enlarge the training set in a coherent and sound way that improves the network's generalization capability for reflectance estimation. We perform experiments on the Sintel dataset showing that our color-based augmentation increases performance and overcomes one of the state-of-the-art methods.
Address Vancouver; November 2018  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference CIC
Notes CIC Approved no  
Call Number Admin @ si @ SSB2018a Serial 3129  
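A minimal sketch of the kind of chromatic rotation the abstract above describes, assuming a standard orthonormal RGB-to-opponent transform and that the same rotation is applied to the input image and its ground-truth reflectance; the exact opponent space and augmentation protocol of the paper may differ.

import numpy as np

# Orthonormal RGB -> opponent transform: O1 ~ red-green, O2 ~ blue-yellow, O3 ~ intensity.
# Because the matrix is orthonormal, its transpose inverts it.
M = np.array([[1/np.sqrt(2), -1/np.sqrt(2),  0.0],
              [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
              [1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)]])

def rotate_chromaticity(rgb, angle_deg):
    """Rotate the chromatic (O1, O2) plane of an RGB image by angle_deg degrees.
    rgb: float array of shape (H, W, 3) with values in [0, 1]."""
    h, w, _ = rgb.shape
    opp = rgb.reshape(-1, 3) @ M.T                     # to opponent space
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    opp[:, :2] = opp[:, :2] @ rot.T                    # rotate red-green / blue-yellow plane
    out = opp @ M                                      # back to RGB (orthonormal: inverse == transpose)
    return np.clip(out.reshape(h, w, 3), 0.0, 1.0)     # rotated colors may leave the RGB gamut

# Hypothetical usage: one augmented training pair per sampled angle.
# img_aug  = rotate_chromaticity(img, 30)
# refl_aug = rotate_chromaticity(reflectance_gt, 30)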
 

 
Author Robert Benavente; C. Alejandro Parraga; Maria Vanrell
Title La influencia del contexto en la definicion de las fronteras entre las categorias cromaticas Type Conference Article
Year 2010 Publication 9th Congreso Nacional del Color Abbreviated Journal  
Volume Issue Pages 92–95  
Keywords Color categorization; Color appearance; Influence of context; Mondrian patterns; Parametric models
Abstract In this paper we present the results of a color categorization experiment in which the samples were presented on a multicolored background (a Mondrian) to simulate the effects of context. The results are compared with those of a previous experiment which, using a different paradigm, determined the category boundaries without taking context into account. The analysis of the results shows that the boundaries obtained in the in-context experiment exhibit less confusion than those obtained in the experiment without context.
Address Alicante (Spain)  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN 978-84-9717-144-1 Medium  
Area Expedition Conference CNC
Notes CIC Approved no  
Call Number CAT @ cat @ BPV2010 Serial 1327  
 

 
Author Robert Benavente; Maria Vanrell
Title Parametrizacion del Espacio de Categorias de Color Type Miscellaneous
Year 2007 Publication Proceedings del VIII Congreso Nacional del Color Abbreviated Journal  
Volume Issue Pages 77–78  
Keywords  
Abstract  
Address Madrid (Spain)  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference CNC’07
Notes CAT;CIC Approved no  
Call Number CAT @ cat @ BeV2007 Serial 905  
 

 
Author Jaime Moreno; Xavier Otazu; Maria Vanrell
Title Contribution of CIWaM in JPEG2000 Quantization for Color Images Type Conference Article
Year 2010 Publication Proceedings of The CREATE 2010 Conference Abbreviated Journal  
Volume Issue Pages 132–136  
Keywords  
Abstract The aim of this work is to explain how to apply perceptual concepts to define a perceptual pre-quantizer and improve the JPEG2000 compressor. The approach consists of quantizing the wavelet transform coefficients using some behavioral properties of the human visual system. Noise is fatal to image compression performance because it is both annoying for the observer and consumes excessive bandwidth when the imagery is transmitted. Perceptual pre-quantization reduces imperceptible details and thus improves both the visual impression and the transmission properties. The comparison between JPEG2000 without and with perceptual pre-quantization shows that the latter is not favorable in PSNR, but the recovered image is more compressed at the same or even better visual quality when measured with a weighted PSNR. Perceptual criteria were taken from CIWaM (Chromatic Induction Wavelet Model).
Address Gjovik (Norway)  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference CREATE
Notes CIC Approved no  
Call Number CAT @ cat @ MOV2010b Serial 1308  
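As a rough illustration of the pre-quantization idea summarized above: quantize wavelet detail coefficients with a scale-dependent step before handing the image to a standard JPEG2000 encoder. The wavelet, step size and level weighting below are placeholders and do not reproduce the actual CIWaM weighting.

import numpy as np
import pywt

def perceptual_prequantize(channel, levels=4, step=8.0):
    """Toy perceptual pre-quantizer for one image channel (float array, 0-255)."""
    coeffs = pywt.wavedec2(channel, 'db4', level=levels)
    # Placeholder perceptual weights: coeffs[1:] runs from coarse to fine scales,
    # so the finest (least visible) details get the coarsest quantization step.
    weights = [2.0 ** i for i in range(levels)]
    out = [coeffs[0]]                                   # keep the approximation band untouched
    for (cH, cV, cD), w in zip(coeffs[1:], weights):
        q = step * w
        out.append(tuple(np.round(c / q) * q for c in (cH, cV, cD)))
    return np.clip(pywt.waverec2(out, 'db4'), 0, 255)

# The pre-quantized channels would then be compressed with an off-the-shelf
# JPEG2000 encoder (e.g. OpenJPEG) at the chosen rate.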
 

 
Author Javier Vazquez; Maria Vanrell; Robert Benavente
Title Color names as a constraint for Computer Vision problems Type Conference Article
Year 2010 Publication Proceedings of The CREATE 2010 Conference Abbreviated Journal  
Volume Issue Pages 324–328  
Keywords  
Abstract Computer vision problems are usually ill-posed. Constraining the gamut of possible solutions is therefore a necessary step. Many constraints for different problems have been developed over the years. In this paper, we present a different way of constraining some of these problems: the use of color names. In particular, we focus on segmentation, representation and constancy.
Address Gjovik (Norway)  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference CREATE
Notes CIC Approved no  
Call Number CAT @ cat @ VVB2010 Serial 1328  
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell
Title Who Painted this Painting? Type Conference Article
Year 2010 Publication Proceedings of The CREATE 2010 Conference Abbreviated Journal  
Volume Issue Pages 329–333  
Keywords  
Abstract  
Address Gjovik (Norway)  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference CREATE
Notes CIC Approved no  
Call Number CAT @ cat @ KWV2010 Serial 1329  
 

 
Author Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga
Title Saliency Estimation Using a Non-Parametric Low-Level Vision Model Type Conference Article
Year 2011 Publication IEEE conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 433-440  
Keywords Gaussian mixture model;ad hoc parameter selection;center-surround inhibition windows;center-surround mechanism;color appearance model;convolution;eye-fixation data;human vision;innate spatial pooling mechanism;inverse wavelet transform;low-level visual front-end;nonparametric low-level vision model;saliency estimation;saliency map;scale integration;scale-weighted center-surround response;scale-weighting function;visual task;Gaussian processes;biology;biology computing;colour vision;computer vision;visual perception;wavelet transforms  
Abstract Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.  
Address Colorado Springs  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1063-6919 ISBN 978-1-4577-0394-2 Medium  
Area Expedition Conference CVPR
Notes CIC Approved no  
Call Number Admin @ si @ MVO2011 Serial 1757  
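A heavily reduced sketch of the pipeline the abstract describes: per-channel wavelet decomposition, a center-surround response on each detail band, a scale weighting, and scale integration through the inverse wavelet transform. The window sizes and the Gaussian scale weighting below are placeholders for the trained ECSF and the GMM-derived surround sizes, which are not reproduced here.

import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def saliency_map(channel, levels=5, center=3, surround=13):
    """Simplified saliency for one opponent channel (2D float array)."""
    coeffs = pywt.wavedec2(channel, 'haar', level=levels)
    weighted = [np.zeros_like(coeffs[0])]               # discard the approximation band
    for s, bands in enumerate(coeffs[1:], start=1):
        ecsf = np.exp(-0.5 * ((s - levels / 2.0) / 1.5) ** 2)   # placeholder scale weight
        new_bands = []
        for c in bands:
            energy = c ** 2
            cs = uniform_filter(energy, center) / (uniform_filter(energy, surround) + 1e-9)
            new_bands.append(ecsf * cs * c)              # boost coefficients with high relative contrast
        weighted.append(tuple(new_bands))
    sal = np.abs(pywt.waverec2(weighted, 'haar'))        # integrate scales in the image domain
    return sal / (sal.max() + 1e-9)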
 

 
Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell; Antonio Lopez
Title Color Attributes for Object Detection Type Conference Article
Year 2012 Publication 25th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 3306-3313  
Keywords pedestrian detection  
Abstract State-of-the-art object detectors typically use shape information as a low-level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yield suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and when combined with traditional shape features provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and the results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods.
Address Providence; Rhode Island; USA;  
Corporate Author Thesis  
Publisher IEEE Xplore Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1063-6919 ISBN 978-1-4673-1226-4 Medium  
Area Expedition Conference CVPR
Notes ADAS; CIC; Approved no  
Call Number Admin @ si @ KRW2012 Serial 1935  
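One simple way to picture the combination of color attributes with shape features described above: an 11-bin color-name histogram per detection window appended to the window's shape descriptor. The hard nearest-prototype naming below is only a stand-in for a color-name model learned from real-world images, and the paper's detector may integrate the attributes differently.

import numpy as np

# Eleven rough RGB prototypes standing in for the learned color-name model
# (black, blue, brown, grey, green, orange, pink, purple, red, white, yellow).
PROTOTYPES = np.array([[0, 0, 0], [0, 0, 255], [139, 69, 19], [128, 128, 128],
                       [0, 128, 0], [255, 165, 0], [255, 192, 203], [128, 0, 128],
                       [255, 0, 0], [255, 255, 255], [255, 255, 0]], dtype=float)

def color_attribute_histogram(window_rgb):
    """11-bin color-name histogram for a detection window (H, W, 3 array)."""
    px = window_rgb.reshape(-1, 3).astype(float)
    dist = ((px[:, None, :] - PROTOTYPES[None, :, :]) ** 2).sum(axis=-1)
    hist = np.bincount(dist.argmin(axis=1), minlength=11).astype(float)
    return hist / (hist.sum() + 1e-9)

def window_descriptor(window_rgb, shape_feature):
    """Append the compact color attribute histogram to an existing shape
    descriptor (e.g. a HOG vector) computed for the same window."""
    return np.concatenate([shape_feature, color_attribute_histogram(window_rgb)])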
 

 
Author Marc Serra; Olivier Penacchio; Robert Benavente; Maria Vanrell
Title Names and Shades of Color for Intrinsic Image Estimation Type Conference Article
Year 2012 Publication 25th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 278-285  
Keywords  
Abstract In recent years, intrinsic image decomposition has gained attention. Most of the state-of-the-art methods are based on the assumption that reflectance changes come along with strong image edges. Recently, user intervention in the recovery problem has proved to be a remarkable source of improvement. In this paper, we propose a novel approach that aims to overcome the shortcomings of pure edge-based methods by introducing strong surface descriptors, such as the color-name descriptor, which introduces high-level considerations resembling top-down intervention. We also use a second surface descriptor, termed color-shade, which allows us to include physical considerations derived from the image formation model, capturing gradual color surface variations. Both color cues are combined by means of a Markov Random Field. The method is quantitatively tested on the MIT ground truth dataset using different error metrics, achieving state-of-the-art performance.
Address Providence, Rhode Island  
Corporate Author Thesis  
Publisher IEEE Xplore Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1063-6919 ISBN 978-1-4673-1226-4 Medium  
Area Expedition Conference CVPR
Notes CIC Approved no  
Call Number Admin @ si @ SPB2012 Serial 2026  
 

 
Author Marc Serra; Olivier Penacchio; Robert Benavente; Maria Vanrell; Dimitris Samaras
Title The Photometry of Intrinsic Images Type Conference Article
Year 2014 Publication 27th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 1494-1501  
Keywords  
Abstract Intrinsic characterization of scenes is often the best way to overcome the illumination variability artifacts that complicate most computer vision problems, from 3D reconstruction to object or material recognition. This paper examines the deficiency of existing intrinsic image models to accurately account for the effects of illuminant color and sensor characteristics in the estimation of intrinsic images and presents a generic framework which incorporates insights from color constancy research into the intrinsic image decomposition problem. The proposed mathematical formulation includes information about the color of the illuminant and the effects of the camera sensors, both of which modify the observed color of the reflectance of the objects in the scene during the acquisition process. By modeling these effects, we get a “truly intrinsic” reflectance image, which we call absolute reflectance, which is invariant to changes of illuminant or camera sensors. This model allows us to represent a wide range of intrinsic image decompositions depending on the specific assumptions on the geometric properties of the scene configuration and the spectral properties of the light source and the acquisition system, thus unifying previous models in a single general framework. We demonstrate that even partial information about sensors significantly improves the estimated reflectance images, thus making our method applicable to a wide range of sensors. We validate our general intrinsic image framework experimentally with both synthetic data and natural images.
Address Columbus; Ohio; USA; June 2014  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference CVPR
Notes CIC; 600.052; 600.051; 600.074 Approved no  
Call Number Admin @ si @ SPB2014 Serial 2506  
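For reference, a standard Lambertian image-formation model of the kind the abstract builds on (the paper's exact formulation is not reproduced here), with shading g(x), illuminant spectrum E(lambda), surface reflectance R(x, lambda) and sensor sensitivities S_c(lambda):

I_c(x) = g(x) \int_{\omega} E(\lambda)\, R(x,\lambda)\, S_c(\lambda)\, d\lambda, \qquad c \in \{R, G, B\}

With narrow-band sensors, S_c(\lambda) \approx \delta(\lambda - \lambda_c), this collapses to

I_c(x) \approx g(x)\, e_c\, r_c(x), \qquad e_c = E(\lambda_c), \quad r_c(x) = R(x, \lambda_c)

Dividing out an estimate of e_c (illuminant color and sensor response) leaves a shading-reflectance decomposition in which the reflectance no longer depends on the acquisition conditions, close in spirit to what the abstract calls absolute reflectance.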
 

 
Author Jordi Roca; C. Alejandro Parraga; Maria Vanrell
Title Categorical Focal Colours are Structurally Invariant Under Illuminant Changes Type Conference Article
Year 2011 Publication European Conference on Visual Perception Abbreviated Journal  
Volume Issue Pages 196  
Keywords  
Abstract The visual system perceives the colour of surfaces as approximately constant under changes of illumination. In this work, we investigate how stable the perception of categorical "focal" colours and their interrelations is under varying illuminants and simple chromatic backgrounds. It has been proposed that the best examples of colour categories across languages cluster in small regions of the colour space and are restricted to a set of 11 basic terms (Kay and Regier, 2003 Proceedings of the National Academy of Sciences of the USA 100 9085-9089). Following this, we developed a psychophysical paradigm that exploits the ability of subjects to reliably reproduce the most representative examples of each category, adjusting multiple test patches embedded in a coloured Mondrian. The experiment was run on a CRT monitor (inside a dark room) under various simulated illuminants. We modelled the recorded data for each subject and adaptation state as a 3D interconnected structure (graph) in Lab space. The graph nodes were the subject's focal colours at each adaptation state. The model allowed us to obtain a better distance measure between focal structures under different illuminants. We found that perceptual focal structures tend to be preserved better than the structures of the physical "ideal" colours under illuminant changes.
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Perception 40 Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference ECVP
Notes CIC Approved no  
Call Number Admin @ si @ RPV2011 Serial 1867  
 

 
Author Ivet Rafegas; Maria Vanrell
Title Colour Visual Coding in trained Deep Neural Networks Type Abstract
Year 2016 Publication European Conference on Visual Perception Abbreviated Journal  
Volume Issue Pages  
Keywords  
Abstract  
Address Barcelona; Spain; August 2016  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference ECVP
Notes CIC Approved no  
Call Number Admin @ si @ RaV2016b Serial 2895  
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell
Title Top-Down Color Attention for Object Recognition Type Conference Article
Year 2009 Publication 12th International Conference on Computer Vision Abbreviated Journal  
Volume Issue Pages 979 - 986  
Keywords  
Abstract Generally, the bag-of-words image representation follows a bottom-up paradigm. The subsequent stages of the process (feature detection, feature description, vocabulary construction and image representation) are performed independently of the intended object classes to be detected. In such a framework, combining multiple cues such as shape and color often provides below-expected results. This paper presents a novel method for recognizing object categories when using multiple cues by separating the shape and color cues. Color is used to guide attention by means of a top-down, category-specific attention map. The color attention map is then further deployed to modulate the shape features by taking more features from regions within an image that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets containing varied importance of both cues, namely, Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that in all three data sets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information.
Address Kyoto, Japan  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1550-5499 ISBN 978-1-4244-4420-5 Medium  
Area Expedition Conference ICCV
Notes CIC Approved no  
Call Number CAT @ cat @ SWV2009 Serial 1196  
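The core of the attention mechanism summarized above can be pictured as attention-weighted bag-of-words voting. In the illustrative sketch below, the function names and the way the attention map is obtained (a per-pixel probability of the category given the local color) are assumptions rather than the paper's exact formulation.

import numpy as np

def attention_weighted_histogram(shape_words, word_locations, attention_map, vocab_size):
    """Category-specific bag-of-words histogram: each quantized shape feature votes
    with a weight read from a top-down color attention map.

    shape_words:    (N,) visual-word indices of the detected shape features.
    word_locations: (N, 2) integer (row, col) positions of those features.
    attention_map:  (H, W) map of p(category | color at pixel) for one category."""
    weights = attention_map[word_locations[:, 0], word_locations[:, 1]]
    hist = np.zeros(vocab_size)
    np.add.at(hist, shape_words, weights)               # attention-modulated soft voting
    return hist / (hist.sum() + 1e-9)

# One such histogram is built per category, using that category's color attention
# map, and a classifier is trained on the resulting category-specific histograms.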
 

 
Author Ivet Rafegas; Maria Vanrell
Title Color representation in CNNs: parallelisms with biological vision Type Conference Article
Year 2017 Publication ICCV Workshop on Mutual Benefits of Cognitive and Computer Vision Abbreviated Journal  
Volume Issue Pages  
Keywords  
Abstract Convolutional Neural Networks (CNNs) trained for object recognition tasks present representational capabilities approaching those of primate visual systems [1]. This provides a computational framework to explore how image features are efficiently represented. Here, we dissect a trained CNN [2] to study how color is represented. We use a classical methodology from physiology, measuring the selectivity index of individual neurons to specific features. We use ImageNet dataset [20] images and synthetic versions of them to quantify the color tuning properties of artificial neurons and provide a classification of the network population. We conclude that there are three main levels of color representation, showing some parallelisms with biological visual systems: (a) a decomposition in a circular hue space to represent single color regions, with a wider hue sampling beyond the first layer (V2); (b) the emergence of opponent low-dimensional spaces in early stages to represent color edges (V1); and (c) a strong entanglement between color and shape patterns representing object parts (e.g. the wheel of a car), object shapes (e.g. faces) or object-surround configurations (e.g. blue sky surrounding an object) in deeper layers (V4 or IT).
Address Venice; Italy; October 2017  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference ICCV-MBCC
Notes CIC; 600.087; 600.051 Approved no  
Call Number Admin @ si @ RaV2017 Serial 2984  
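The neuron-level analysis in the abstract relies on a selectivity index. A minimal sketch of one such index is given below, assuming it compares a unit's responses to its most activating images with its responses to grayscale copies of the same images; the definition actually used in the paper may differ.

import numpy as np

def color_selectivity_index(activations_color, activations_gray):
    """Illustrative color selectivity of a single unit.

    activations_color: responses to the unit's top-activating image patches.
    activations_gray:  responses to grayscale versions of the same patches.
    Near 1: the response collapses without color (color-selective unit).
    Near 0: the unit is essentially color-invariant."""
    a_c = float(np.mean(activations_color))
    a_g = float(np.mean(activations_gray))
    return max(0.0, (a_c - a_g) / (a_c + 1e-9))

# Hypothetical usage: for each convolutional unit, collect its strongest activations
# over ImageNet images, recompute them on grayscale copies, and classify the
# population of units by thresholding the index.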