C. Alejandro Parraga, Olivier Penacchio, & Maria Vanrell. (2011). Retinal Filtering Matches Natural Image Statistics at Low Luminance Levels. PER - Perception, 40, 96.
Abstract: The assumption that the retina’s main objective is to provide a minimum-entropy representation to higher visual areas (i.e., the efficient coding principle) makes it possible to predict retinal filtering in space–time and colour (Atick, 1992 Network 3 213–251). This is achieved by considering the power spectra of natural images (which are proportional to 1/f²) and the suppression of retinal and image noise. However, most studies consider images within a limited range of lighting conditions (e.g., near noon), whereas the visual system’s spatial filtering depends on light intensity and the spatiochromatic properties of natural scenes depend on the time of day. Here, we explore whether the dependence of visual spatial filtering on luminance matches the changes in the power spectrum of natural scenes at different times of the day. Using human cone-activation based naturalistic stimuli (from the Barcelona Calibrated Images Database), we show that, for a range of luminance levels, the shape of the retinal CSF reflects the slope of the power spectrum at low spatial frequencies. Accordingly, the retina implements the filtering which best decorrelates the input signal at every luminance level. This result is in line with the body of work that places efficient coding as a guiding neural principle.
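As an illustration of the statistic this abstract relies on, the sketch below estimates the low-spatial-frequency slope of an image's power spectrum (the quantity the retinal CSF is said to track). This is a minimal NumPy example, not code from the paper; the bin count and the cutoff `f_max` are arbitrary choices.

```python
import numpy as np

def low_freq_power_slope(img, f_max=0.1, n_bins=30):
    """Fit the log-log slope of the radially averaged power spectrum
    of a 2-D grayscale image over low spatial frequencies; natural
    scenes typically give ~1/f^2, i.e. a slope near -2."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    # radial frequency of each FFT bin, in cycles per pixel
    r = np.hypot((y - h // 2) / h, (x - w // 2) / w)
    keep = (r > 0) & (r < f_max)              # low frequencies, DC excluded
    bins = np.digitize(r[keep], np.linspace(0, f_max, n_bins))
    freqs, radial = [], []
    for b in range(1, n_bins):
        sel = bins == b
        if sel.any():
            freqs.append(r[keep][sel].mean())
            radial.append(power[keep][sel].mean())
    slope, _ = np.polyfit(np.log(freqs), np.log(radial), 1)
    return slope
```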
|
Joost Van de Weijer, Robert Benavente, Maria Vanrell, Cordelia Schmid, Ramon Baldrich, Jacob Verbeek, et al. (2012). Color Naming. In Theo Gevers, Arjan Gijsenij, Joost Van de Weijer, & Jan-Mark Geusebroek (Eds.), Color in Computer Vision: Fundamentals and Applications (pp. 287–317). John Wiley & Sons, Ltd.
|
Ivet Rafegas, & Maria Vanrell. (2016). Color spaces emerging from deep convolutional networks. In 24th Color and Imaging Conference (pp. 225–230).
Award for the best interactive session.
Abstract: Defining color spaces that provide a good encoding of spatio-chromatic properties of color surfaces is an open problem in color science [8, 22]. Related to this, in computer vision the fusion of color with local image features has been studied and evaluated [16]. In human vision research, the cells which are selective to specific color hues along the visual pathway are also a focus of attention [7, 14]. In line with these research aims, in this paper we study how color is encoded in a deep Convolutional Neural Network (CNN) that has been trained on more than one million natural images for object recognition. These convolutional nets achieve impressive performance in computer vision and rival representations in the human brain. In this paper we explore how color is represented in a CNN architecture, which can give some intuition about efficient spatio-chromatic representations. In convolutional layers the activation of a neuron is related to a spatial filter that combines spatio-chromatic representations, and we use an inverted version of it to explore its properties. Using a series of unsupervised methods we classify different types of neurons depending on the color axes they define, and we propose an index of color selectivity of a neuron. We estimate the main color axes that emerge from this trained net and we show that color selectivity of neurons decreases from early to deeper layers.
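The abstract mentions an index of colour selectivity for individual units. The exact definition is in the paper; the sketch below shows one plausible variant (an assumption, not the authors' formula), comparing a unit's summed activation on colour images against greyscale copies of the same images.

```python
import torch

def color_selectivity(neuron_response, images):
    """Hypothetical selectivity index: 0 when a unit responds equally
    to colour and greyscale inputs, approaching 1 when its response
    collapses without colour. `neuron_response(batch)` is assumed to
    return one scalar activation per image for the unit under study."""
    gray = images.mean(dim=1, keepdim=True).repeat(1, 3, 1, 1)
    a_color = neuron_response(images).clamp(min=0).sum()
    a_gray = neuron_response(gray).clamp(min=0).sum()
    return float((1 - a_gray / a_color.clamp(min=1e-8)).clamp(0, 1))
```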
|
Ivet Rafegas, & Maria Vanrell. (2016). Colour Visual Coding in trained Deep Neural Networks. In European Conference on Visual Perception.
|
Ivet Rafegas, & Maria Vanrell. (2017). Color representation in CNNs: parallelisms with biological vision. In ICCV Workshop on Mutual Benefits of Cognitive and Computer Vision.
Abstract: Convolutional Neural Networks (CNNs) trained for object recognition tasks present representational capabilities approaching those of primate visual systems [1]. This provides a computational framework to explore how image features are efficiently represented. Here, we dissect a trained CNN [2] to study how color is represented. We use a classical methodology from physiology: measuring the selectivity index of individual neurons to specific features. We use ImageNet Dataset [20] images and synthetic versions of them to quantify the color tuning properties of artificial neurons and provide a classification of the network population. We conclude three main levels of color representation showing some parallelisms with biological visual systems: (a) a decomposition in a circular hue space to represent single color regions, with a wider hue sampling beyond the first layer (V2); (b) the emergence of opponent low-dimensional spaces in early stages to represent color edges (V1); and (c) a strong entanglement between color and shape patterns representing object parts (e.g. wheel of a car), object shapes (e.g. faces) or object-surround configurations (e.g. blue sky surrounding an object) in deeper layers (V4 or IT).
|
Hassan Ahmed Sial, Ramon Baldrich, Maria Vanrell, & Dimitris Samaras. (2020). Light Direction and Color Estimation from Single Image with Deep Regression. In London Imaging Meeting.
Abstract: We present a method to estimate the direction and color of the scene light source from a single image. Our method is based on two main ideas: (a) we use a new synthetic dataset with strong shadow effects with similar constraints to the SID dataset; (b) we define a deep architecture trained on the mentioned dataset to estimate the direction and color of the scene light source. Apart from showing good performance on synthetic images, we additionally propose a preliminary procedure to obtain light positions of the Multi-Illumination dataset, and, in this way, we also prove that our trained model achieves good performance when it is applied to real scenes.
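The abstract does not detail the architecture; the following PyTorch sketch only illustrates the two-output design it describes, with a normalized 3-vector for light direction and an RGB triplet for light colour. Layer sizes and the loss are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightRegressor(nn.Module):
    """Toy two-headed regressor for light direction and colour."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.dir_head = nn.Linear(64, 3)   # unit vector toward the light
        self.col_head = nn.Linear(64, 3)   # RGB colour of the light

    def forward(self, x):
        f = self.encoder(x)
        direction = F.normalize(self.dir_head(f), dim=1)
        color = torch.sigmoid(self.col_head(f))
        return direction, color

# A natural loss for the direction head is angular:
# loss_dir = (1 - (pred_dir * true_dir).sum(dim=1)).mean()
```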
|
Sagnik Das, Hassan Ahmed Sial, Ke Ma, Ramon Baldrich, Maria Vanrell, & Dimitris Samaras. (2020). Intrinsic Decomposition of Document Images In-the-Wild. In 31st British Machine Vision Conference.
Abstract: Automatic document content processing is affected by artifacts caused by the shape of the paper and by non-uniform, diversely colored lighting conditions. Fully supervised methods on real data are impractical due to the large amount of data needed. Hence, current state-of-the-art deep learning models are trained on fully or partially synthetic images. However, document shadow or shading removal results still suffer because: (a) prior methods rely on uniformity of local color statistics, which limits their application to real scenarios with complex document shapes and textures, and (b) synthetic or hybrid datasets with non-realistic, simulated lighting conditions are used to train the models. In this paper we tackle these problems with our two main contributions. First, a physically constrained learning-based method that directly estimates document reflectance based on intrinsic image formation and generalizes to challenging illumination conditions. Second, a new dataset that clearly improves on previous synthetic ones by adding a large range of realistic shading and diverse multi-illuminant conditions, uniquely customized to deal with documents in-the-wild. The proposed architecture works in two steps. First, a white balancing module neutralizes the color of the illumination on the input image. Based on the proposed multi-illuminant dataset, we achieve good white balancing in very difficult conditions. Second, the shading separation module accurately disentangles the shading and paper material in a self-supervised manner, where only the synthetic texture is used as a weak training signal (obviating the need for very costly ground truth with disentangled versions of shading and reflectance). The proposed approach leads to significant generalization of document reflectance estimation in real scenes with challenging illumination. We extensively evaluate on the real benchmark datasets available for intrinsic image decomposition and document shadow removal tasks. Our reflectance estimation scheme, when used as a pre-processing step of an OCR pipeline, shows a 21% improvement in character error rate (CER), thus proving its practical applicability. The data and code will be available at: https://github.com/cvlab-stonybrook/DocIIW.
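To make the two-step pipeline concrete, here is a hedged sketch of the inference path under the classic intrinsic model I = R · S; the names `wb_net` and `shading_net` are placeholders for the paper's two trained modules, not its actual API.

```python
import torch

def document_reflectance(img, wb_net, shading_net):
    """img: (B, 3, H, W) in [0, 1]. Step 1 removes the illuminant
    colour; step 2 predicts a shading map and divides it out,
    leaving the paper reflectance (texture and ink)."""
    illum = wb_net(img)                           # (B, 3) illuminant RGB
    balanced = img / illum.view(-1, 3, 1, 1).clamp(min=1e-4)
    shading = shading_net(balanced)               # (B, 1, H, W) shading map
    reflectance = balanced / shading.clamp(min=1e-4)
    return reflectance.clamp(0.0, 1.0)
```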
|
Fahad Shahbaz Khan, Joost Van de Weijer, & Maria Vanrell. (2009). Top-Down Color Attention for Object Recognition. In 12th International Conference on Computer Vision (pp. 979–986).
Abstract: Generally, the bag-of-words based image representation follows a bottom-up paradigm. The subsequent stages of the process (feature detection, feature description, vocabulary construction and image representation) are performed independently of the intended object classes to be detected. In such a framework, combining multiple cues such as shape and color often provides below-expected results. This paper presents a novel method for recognizing object categories when using multiple cues by separating the shape and color cues. Color is used to guide attention by means of a top-down, category-specific attention map. The color attention map is then further deployed to modulate the shape features by taking more features from regions within an image that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets containing varied importance of both cues, namely Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that in all three data sets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information.
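A minimal sketch of the modulation step the abstract describes: each local shape feature's vote into the bag-of-words histogram is weighted by a top-down colour attention value for the class of interest. Array names and the table `color_class_prob` are illustrative assumptions.

```python
import numpy as np

def class_specific_histogram(word_ids, color_ids, color_class_prob,
                             vocab_size, class_idx):
    """word_ids: (N,) visual-word index of each local shape feature;
    color_ids: (N,) colour-word index of the same patches;
    color_class_prob: (n_colors, n_classes) top-down attention table.
    Returns one attention-weighted histogram for the given class."""
    attention = color_class_prob[color_ids, class_idx]        # (N,)
    hist = np.bincount(word_ids, weights=attention, minlength=vocab_size)
    return hist / max(hist.sum(), 1e-8)
```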
|
Maria Vanrell, Naila Murray, Robert Benavente, C. Alejandro Parraga, Xavier Otazu, & Ramon Baldrich. (2011). Perception Based Representations for Computational Colour. In Raimondo Schettini, Shoji Tominaga, & Alain Trémeau (Eds.), 3rd International Workshop on Computational Color Imaging (Vol. 6626, pp. 16–30). LNCS. Springer-Verlag.
Abstract: The perceived colour of a stimulus depends on multiple factors stemming either from the context of the stimulus or from idiosyncrasies of the observer. The complexity involved in combining these multiple effects is the main reason for the gap between the classical calibrated colour spaces of colour science and the colour representations used in computer vision, where colour is just one more visual cue immersed in a digital image in which surfaces, shadows and illuminants interact seemingly out of control. With the aim of advancing a few steps towards bridging this gap, we present some results on computational representations of colour for computer vision. They have been developed by introducing perceptual considerations derived from the interaction of the colour of a point with its context. We show some techniques to represent the colour of a point influenced by assimilation and contrast effects due to the image surround, and we show some results on how colour saliency can be derived in real images. We outline a model for automatic assignment of colour names to image points directly trained on psychophysical data. We show how colour segments can be perceptually grouped in the image by imposing shading coherence in the colour space.
Keywords: colour perception, induction, naming, psychophysical data, saliency, segmentation
|
Robert Benavente, C. Alejandro Parraga, & Maria Vanrell. (2010). La influencia del contexto en la definición de las fronteras entre las categorías cromáticas [The influence of context on the definition of the boundaries between chromatic categories]. In 9th Congreso Nacional del Color (pp. 92–95).
Abstract: In this paper we present the results of a color categorization experiment in which the samples were presented on a multicolored (Mondrian) background in order to simulate the effects of context. The results are compared with those of a previous experiment which, using a different paradigm, determined the boundaries without taking context into account. The analysis of the results shows that the boundaries obtained in the in-context experiment present less confusion than those obtained in the experiment without context.
Keywords: color categorization; color appearance; context influence; Mondrian patterns; parametric models
|
Javier Vazquez, Maria Vanrell, Ramon Baldrich, & Francesc Tous. (2012). Color Constancy by Category Correlation. TIP - IEEE Transactions on Image Processing, 21(4), 1997–2007.
Abstract: Finding color representations which are stable under illuminant changes is still an open problem in computer vision. Until now, most approaches have been based on physical constraints or statistical assumptions derived from the scene, while very little attention has been paid to the effects that the selected illuminants have on the final color image representation. The novelty of this work is to propose perceptual constraints that are computed on the corrected images. We define the category hypothesis, which weights the set of feasible illuminants according to their ability to map the corrected image onto specific colors. Here we choose these colors as the universal color categories related to basic linguistic terms, which have been psychophysically measured. These color categories encode natural color statistics, and their relevance across different cultures is indicated by the fact that they have received a common color name. From this category hypothesis we propose a fast implementation that allows the sampling of a large set of illuminants. Experiments prove that our method rivals current state-of-the-art performance without the need for training algorithmic parameters. Additionally, the method can be used as a framework to insert top-down information from other sources, thus opening further research directions in solving for color constancy.
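The category hypothesis lends itself to a very small search loop. The sketch below is our reading of the idea, not the paper's implementation: each candidate illuminant is applied as a von Kries diagonal correction and scored by how much of the corrected image falls inside nameable colour categories; `in_category` stands in for the psychophysically measured category regions.

```python
import numpy as np

def pick_illuminant(pixels, candidates, in_category):
    """pixels: (N, 3) linear RGB; candidates: (M, 3) illuminant RGBs;
    in_category(rgb) -> (N,) bool membership in any colour-name region.
    Returns the illuminant whose correction best hits the categories."""
    scores = []
    for illum in candidates:
        corrected = pixels / np.maximum(illum, 1e-8)   # diagonal correction
        scores.append(in_category(corrected).mean())
    return candidates[int(np.argmax(scores))]
```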
|
Marc Serra, Olivier Penacchio, Robert Benavente, & Maria Vanrell. (2012). Names and Shades of Color for Intrinsic Image Estimation. In 25th IEEE Conference on Computer Vision and Pattern Recognition (pp. 278–285). IEEE Xplore.
Abstract: In recent years, intrinsic image decomposition has gained attention. Most state-of-the-art methods are based on the assumption that reflectance changes come along with strong image edges. Recently, user intervention in the recovery problem has proved to be a remarkable source of improvement. In this paper, we propose a novel approach that aims to overcome the shortcomings of pure edge-based methods by introducing strong surface descriptors, such as the color-name descriptor, which introduces high-level considerations resembling top-down intervention. We also use a second surface descriptor, termed color-shade, which allows us to include physical considerations derived from the image formation model, capturing gradual color surface variations. Both color cues are combined by means of a Markov Random Field. The method is quantitatively tested on the MIT ground truth dataset using different error metrics, achieving state-of-the-art performance.
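The abstract combines the two colour cues with a Markov Random Field. As a generic illustration (not the paper's energy or solver), the sketch below runs iterated conditional modes (ICM) on a grid MRF with per-pixel unary costs and a Potts smoothness term.

```python
import numpy as np

def icm(unary, lam=0.5, n_iters=5):
    """unary: (H, W, K) cost of each of K labels per pixel; lam weighs
    a Potts penalty against disagreeing 4-neighbours. Returns labels."""
    H, W, K = unary.shape
    labels = unary.argmin(axis=2)
    for _ in range(n_iters):
        for y in range(H):
            for x in range(W):
                cost = unary[y, x].copy()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < H and 0 <= nx < W:
                        cost += lam * (np.arange(K) != labels[ny, nx])
                labels[y, x] = cost.argmin()
    return labels
```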
|
Fahad Shahbaz Khan, Muhammad Anwer Rao, Joost Van de Weijer, Andrew Bagdanov, Maria Vanrell, & Antonio Lopez. (2012). Color Attributes for Object Detection. In 25th IEEE Conference on Computer Vision and Pattern Recognition (pp. 3306–3313). IEEE Xplore.
Abstract: State-of-the-art object detectors typically use shape information as a low-level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yield suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and, when combined with traditional shape features, provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and the results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods. A hedged sketch of what such a compact colour attribute can look like follows below.
Keywords: pedestrian detection
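As referenced in the abstract above, one plausible form of a compact colour attribute: pixels of a detection window are quantised into RGB bins and mapped through a colour-name probability table (the 11 basic colour terms), then averaged into one 11-d vector that can be concatenated with shape features such as HOG. The lookup table here is assumed to be learned elsewhere, as in colour-naming work.

```python
import numpy as np

def color_attribute(window_rgb, name_prob, bins=8):
    """window_rgb: (N, 3) pixels in [0, 1]; name_prob: (bins**3, 11)
    table of colour-name probabilities per RGB bin (assumed given).
    Returns an 11-d colour attribute for the detection window."""
    q = np.clip((window_rgb * bins).astype(int), 0, bins - 1)
    idx = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]   # flat bin index
    return name_prob[idx].mean(axis=0)
```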
|
Naila Murray, Maria Vanrell, Xavier Otazu, & C. Alejandro Parraga. (2011). Saliency Estimation Using a Non-Parametric Low-Level Vision Model. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 433–440).
Abstract: Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.
Keywords: Gaussian mixture model; ad hoc parameter selection; center-surround inhibition windows; center-surround mechanism; color appearance model; convolution; eye-fixation data; human vision; innate spatial pooling mechanism; inverse wavelet transform; low-level visual front-end; nonparametric low-level vision model; saliency estimation; saliency map; scale integration; scale-weighted center-surround response; scale-weighting function; visual task; Gaussian processes; biology; biology computing; colour vision; computer vision; visual perception; wavelet transforms
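For intuition, a rough sketch of the pipeline the abstract describes (wavelet decomposition, per-scale centre-surround response, scale weighting, inverse transform to integrate scales). It assumes PyWavelets; the crude surround estimate and the Gaussian stand-in for the paper's ECSF weighting are our own placeholders.

```python
import numpy as np
import pywt

def saliency_map(channel, levels=4):
    """channel: 2-D array (e.g. an opponent-colour channel).
    Weight each wavelet detail band by a centre-surround response
    and a scale weight, then reconstruct to integrate scales."""
    coeffs = pywt.wavedec2(channel, 'db4', level=levels)
    out = [np.zeros_like(coeffs[0])]          # drop the approximation band
    for lvl, bands in enumerate(coeffs[1:], start=1):
        weight = np.exp(-((lvl - levels / 2.0) ** 2))  # ECSF stand-in
        new_bands = []
        for band in bands:                    # horizontal, vertical, diagonal
            center = np.abs(band)
            surround = center.mean() + 1e-8   # crude surround estimate
            new_bands.append(band * (center / (center + surround)) * weight)
        out.append(tuple(new_bands))
    sal = np.abs(pywt.waverec2(out, 'db4'))
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```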
|