Records | |||||
---|---|---|---|---|---|
Author | C. Alejandro Parraga; Olivier Penacchio; Maria Vanrell | ||||
Title | Retinal Filtering Matches Natural Image Statistics at Low Luminance Levels | Type | Journal Article | ||
Year | 2011 | Publication | Perception | Abbreviated Journal | PER |
Volume | 40 | Issue | Pages | 96 | |
Keywords | |||||
Abstract | The assumption that the retina’s main objective is to provide a minimum-entropy representation to higher visual areas (ie the efficient coding principle) makes it possible to predict retinal filtering in space–time and colour (Atick, 1992 Network 3 213–251). This is achieved by considering the power spectra of natural images (which are proportional to 1/f²) and the suppression of retinal and image noise. However, most studies consider images within a limited range of lighting conditions (eg near noon), whereas the visual system’s spatial filtering depends on light intensity and the spatiochromatic properties of natural scenes depend on the time of day. Here, we explore whether the dependence of visual spatial filtering on luminance matches the changes in the power spectrum of natural scenes at different times of the day. Using human cone-activation-based naturalistic stimuli (from the Barcelona Calibrated Images Database), we show that, for a range of luminance levels, the shape of the retinal CSF reflects the slope of the power spectrum at low spatial frequencies. Accordingly, the retina implements the filtering which best decorrelates the input signal at every luminance level. This result is in line with the body of work that places efficient coding as a guiding neural principle. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ PPV2011 | Serial | 1720 | ||
Permanent link to this record | |||||
Author | Joost Van de Weijer; Robert Benavente; Maria Vanrell; Cordelia Schmid; Ramon Baldrich; Jacob Verbeek; Diane Larlus | ||||
Title | Color Naming | Type | Book Chapter | ||
Year | 2012 | Publication | Color in Computer Vision: Fundamentals and Applications | Abbreviated Journal | |
Volume | Issue | 17 | Pages | 287-317 | |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | John Wiley & Sons, Ltd. | Place of Publication | Editor | Theo Gevers; Arjan Gijsenij; Joost Van de Weijer; Jan-Mark Geusebroek |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ WBV2012 | Serial | 2063 | ||
Permanent link to this record | |||||
Author | Ivet Rafegas; Maria Vanrell | ||||
Title | Color spaces emerging from deep convolutional networks | Type | Conference Article | ||
Year | 2016 | Publication | 24th Color and Imaging Conference | Abbreviated Journal | |
Volume | Issue | Pages | 225-230 | ||
Keywords | |||||
Abstract | Award for the best interactive session
Defining color spaces that provide a good encoding of spatio-chromatic properties of color surfaces is an open problem in color science [8, 22]. Related to this, in computer vision the fusion of color with local image features has been studied and evaluated [16]. In human vision research, the cells that are selective to specific color hues along the visual pathway are also a focus of attention [7, 14]. In line with these research aims, in this paper we study how color is encoded in a deep Convolutional Neural Network (CNN) that has been trained on more than one million natural images for object recognition. These convolutional nets achieve impressive performance in computer vision and rival the representations in the human brain. In this paper we explore how color is represented in a CNN architecture, which can give some intuition about efficient spatio-chromatic representations. In convolutional layers the activation of a neuron is related to a spatial filter that combines spatio-chromatic representations. We use an inverted version of it to explore its properties. Using a series of unsupervised methods we classify different types of neurons depending on the color axes they define, and we propose an index of color selectivity of a neuron. We estimate the main color axes that emerge from this trained net and we show that the color selectivity of neurons decreases from early to deeper layers. |
||||
Address | San Diego; USA; November 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CIC | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ RaV2016a | Serial | 2894 | ||
Permanent link to this record | |||||
Author | Ivet Rafegas; Maria Vanrell | ||||
Title | Colour Visual Coding in trained Deep Neural Networks | Type | Abstract | ||
Year | 2016 | Publication | European Conference on Visual Perception | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Barcelona; Spain; August 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECVP | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ RaV2016b | Serial | 2895 | ||
Permanent link to this record | |||||
Author | Ivet Rafegas; Maria Vanrell | ||||
Title | Color representation in CNNs: parallelisms with biological vision | Type | Conference Article | ||
Year | 2017 | Publication | ICCV Workshop on Mutual Benefits of Cognitive and Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Convolutional Neural Networks (CNNs) trained for object recognition tasks present representational capabilities approaching those of primate visual systems [1]. This provides a computational framework to explore how image features
are efficiently represented. Here, we dissect a trained CNN [2] to study how color is represented. We use a classical methodology from physiology: measuring the selectivity index of individual neurons to specific features. We use ImageNet dataset [20] images and synthetic versions of them to quantify the color tuning properties of artificial neurons and provide a classification of the network population. We identify three main levels of color representation showing some parallelisms with biological visual systems: (a) a decomposition in a circular hue space to represent single color regions with a wider hue sampling beyond the first layer (V2); (b) the emergence of opponent low-dimensional spaces in early stages to represent color edges (V1); and (c) a strong entanglement between color and shape patterns representing object parts (e.g. the wheel of a car), object shapes (e.g. faces) or object-surround configurations (e.g. blue sky surrounding an object) in deeper layers (V4 or IT). |
||||
Address | Venice; Italy; October 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV-MBCC | ||
Notes | CIC; 600.087; 600.051 | Approved | no | ||
Call Number | Admin @ si @ RaV2017 | Serial | 2984 | ||
Permanent link to this record | |||||
Author | Hassan Ahmed Sial; Ramon Baldrich; Maria Vanrell; Dimitris Samaras | ||||
Title | Light Direction and Color Estimation from Single Image with Deep Regression | Type | Conference Article | ||
Year | 2020 | Publication | London Imaging Conference | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | We present a method to estimate the direction and color of the scene light source from a single image. Our method is based on two main ideas: (a) we use a new synthetic dataset with strong shadow effects with similar constraints to the SID dataset; (b) we define a deep architecture trained on the mentioned dataset to estimate the direction and color of the scene light source. Apart from showing good performance on synthetic images, we additionally propose a preliminary procedure to obtain light positions of the Multi-Illumination dataset, and, in this way, we also prove that our trained model achieves good performance when it is applied to real scenes. | ||||
Address | Virtual; September 2020 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | LIM | ||
Notes | CIC; 600.118; 600.140 | Approved | no | ||
Call Number | Admin @ si @ SBV2020 | Serial | 3460 | ||
Permanent link to this record | |||||
Author | Sagnik Das; Hassan Ahmed Sial; Ke Ma; Ramon Baldrich; Maria Vanrell; Dimitris Samaras | ||||
Title | Intrinsic Decomposition of Document Images In-the-Wild | Type | Conference Article | ||
Year | 2020 | Publication | 31st British Machine Vision Conference | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Automatic document content processing is affected by artifacts caused by the shape
of the paper and by non-uniform, diversely colored lighting conditions. Fully supervised methods on real data are impossible due to the large amount of data needed. Hence, the current state-of-the-art deep learning models are trained on fully or partially synthetic images. However, document shadow or shading removal results still suffer because: (a) prior methods rely on the uniformity of local color statistics, which limits their application to real scenarios with complex document shapes and textures, and (b) synthetic or hybrid datasets with non-realistic, simulated lighting conditions are used to train the models. In this paper we tackle these problems with our two main contributions. First, a physically constrained learning-based method that directly estimates document reflectance based on intrinsic image formation, which generalizes to challenging illumination conditions. Second, a new dataset that clearly improves on previous synthetic ones by adding a large range of realistic shading and diverse multi-illuminant conditions, uniquely customized to deal with documents in the wild. The proposed architecture works in two steps. First, a white-balancing module neutralizes the color of the illumination on the input image. Based on the proposed multi-illuminant dataset, we achieve good white balancing in very difficult conditions. Second, the shading separation module accurately disentangles the shading and paper material in a self-supervised manner, where only the synthetic texture is used as a weak training signal (obviating the need for very costly ground truth with disentangled versions of shading and reflectance). The proposed approach leads to significant generalization of document reflectance estimation in real scenes with challenging illumination. We extensively evaluate on the real benchmark datasets available for intrinsic image decomposition and document shadow removal tasks.
Our reflectance estimation scheme, when used as a pre-processing step of an OCR pipeline, shows a 21% improvement in character error rate (CER), thus proving its practical applicability. The data and code will be available at: https://github.com/cvlab-stonybrook/DocIIW. |
||||
Address | Virtual; September 2020 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | BMVC | ||
Notes | CIC; 600.087; 600.140; 600.118 | Approved | no | ||
Call Number | Admin @ si @ DSM2020 | Serial | 3461 | ||
Permanent link to this record | |||||
Author | Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell | ||||
Title | Top-Down Color Attention for Object Recognition | Type | Conference Article | ||
Year | 2009 | Publication | 12th International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 979 - 986 | ||
Keywords | |||||
Abstract | Generally, the bag-of-words based image representation follows a bottom-up paradigm. The subsequent stages of the process (feature detection, feature description, vocabulary construction and image representation) are performed independently of the intended object classes to be detected. In such a framework, combining multiple cues such as shape and color often provides below-expected results. This paper presents a novel method for recognizing object categories when using multiple cues by separating the shape and color cues. Color is used to guide attention by means of a top-down, category-specific attention map. The color attention map is then further deployed to modulate the shape features by taking more features from regions within an image that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets containing varied importance of both cues, namely Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that on all three data sets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information. | ||||
Address | Kyoto, Japan | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1550-5499 | ISBN | 978-1-4244-4420-5 | Medium | |
Area | Expedition | Conference | ICCV | ||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ SWV2009 | Serial | 1196 | ||
Permanent link to this record | |||||
Author | Maria Vanrell; Naila Murray; Robert Benavente; C. Alejandro Parraga; Xavier Otazu; Ramon Baldrich | ||||
Title | Perception Based Representations for Computational Colour | Type | Conference Article | ||
Year | 2011 | Publication | 3rd International Workshop on Computational Color Imaging | Abbreviated Journal | |
Volume | 6626 | Issue | Pages | 16-30 | |
Keywords | colour perception, induction, naming, psychophysical data, saliency, segmentation | ||||
Abstract | The perceived colour of a stimulus depends on multiple factors stemming either from the context of the stimulus or from idiosyncrasies of the observer. The complexity involved in combining these multiple effects is the main reason for the gap between the classical calibrated colour spaces of colour science and the colour representations used in computer vision, where colour is just one more visual cue immersed in a digital image in which surfaces, shadows and illuminants interact seemingly out of control. With the aim of advancing a few steps towards bridging this gap, we present some results on computational representations of colour for computer vision. They have been developed by introducing perceptual considerations derived from the interaction of the colour of a point with its context. We show some techniques to represent the colour of a point influenced by assimilation and contrast effects due to the image surround, and we show some results on how colour saliency can be derived in real images. We outline a model for the automatic assignment of colour names to image points trained directly on psychophysical data. We show how colour segments can be perceptually grouped in the image by imposing shading coherence in the colour space. | ||||
Address | Milan, Italy | ||||
Corporate Author | Thesis | ||||
Publisher | Springer-Verlag | Place of Publication | Editor | Raimondo Schettini; Shoji Tominaga; Alain Trémeau |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-642-20403-6 | Medium | ||
Area | Expedition | Conference | CCIW | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ VMB2011 | Serial | 1733 | ||
Permanent link to this record | |||||
Author | Robert Benavente; C. Alejandro Parraga; Maria Vanrell | ||||
Title | The influence of context on the definition of the boundaries between chromatic categories | Type | Conference Article | ||
Year | 2010 | Publication | 9th Congreso Nacional del Color | Abbreviated Journal | |
Volume | Issue | Pages | 92–95 | ||
Keywords | Colour categorization; Colour appearance; Context influence; Mondrian patterns; Parametric models | ||||
Abstract | In this paper we present the results of a colour categorization experiment in which the samples were presented on a multicoloured (Mondrian) background to simulate the effects of context. The results are compared with those of a previous experiment which, using a different paradigm, determined the boundaries without taking context into account. The analysis of the results shows that the boundaries obtained in the in-context experiment present less confusion than those obtained in the experiment without context. | ||||
Address | Alicante (Spain) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-9717-144-1 | Medium | ||
Area | Expedition | Conference | CNC | ||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ BPV2010 | Serial | 1327 | ||
Permanent link to this record | |||||
Author | Javier Vazquez; Maria Vanrell; Ramon Baldrich; Francesc Tous | ||||
Title | Color Constancy by Category Correlation | Type | Journal Article | ||
Year | 2012 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 21 | Issue | 4 | Pages | 1997-2007 |
Keywords | |||||
Abstract | Finding color representations which are stable under illuminant changes is still an open problem in computer vision. Until now, most approaches have been based on physical constraints or statistical assumptions derived from the scene, while very little attention has been paid to the effects that the selected illuminants have
on the final color image representation. The novelty of this work is to propose perceptual constraints that are computed on the corrected images. We define the category hypothesis, which weights the set of feasible illuminants according to their ability to map the corrected image onto specific colors. Here we choose these colors as the universal color categories related to basic linguistic terms, which have been psychophysically measured. These color categories encode natural color statistics, and their relevance across different cultures is indicated by the fact that they have received a common color name. From this category hypothesis we propose a fast implementation that allows the sampling of a large set of illuminants. Experiments prove that our method rivals current state-of-the-art performance without the need for training algorithmic parameters. Additionally, the method can be used as a framework to insert top-down information from other sources, thus opening further research directions in solving for color constancy. |
||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1057-7149 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ VVB2012 | Serial | 1999 | ||
Permanent link to this record | |||||
Author | Marc Serra; Olivier Penacchio; Robert Benavente; Maria Vanrell | ||||
Title | Names and Shades of Color for Intrinsic Image Estimation | Type | Conference Article | ||
Year | 2012 | Publication | 25th IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 278-285 | ||
Keywords | |||||
Abstract | In recent years, intrinsic image decomposition has gained attention. Most of the state-of-the-art methods are based on the assumption that reflectance changes come along with strong image edges. Recently, user intervention in the recovery problem has proved to be a remarkable source of improvement. In this paper, we propose a novel approach that aims to overcome the shortcomings of pure edge-based methods by introducing strong surface descriptors, such as the color-name descriptor, which introduces high-level considerations resembling top-down intervention. We also use a second surface descriptor, termed color-shade, which allows us to include physical considerations derived from the image formation model, capturing gradual color surface variations. Both color cues are combined by means of a Markov Random Field. The method is quantitatively tested on the MIT ground truth dataset using different error metrics, achieving state-of-the-art performance. | ||||
Address | Providence, Rhode Island | ||||
Corporate Author | Thesis | ||||
Publisher | IEEE Xplore | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1063-6919 | ISBN | 978-1-4673-1226-4 | Medium | |
Area | Expedition | Conference | CVPR | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ SPB2012 | Serial | 2026 | ||
Permanent link to this record | |||||
Author | Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell; Antonio Lopez | ||||
Title | Color Attributes for Object Detection | Type | Conference Article | ||
Year | 2012 | Publication | 25th IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 3306-3313 | ||
Keywords | pedestrian detection | ||||
Abstract | State-of-the-art object detectors typically use shape information as a low-level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification,
leads to a significant drop in performance for object detection. Moreover, such approaches also yield suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and, when combined with traditional shape features, provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and the results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods. |
||||
Address | Providence; Rhode Island; USA; | ||||
Corporate Author | Thesis | ||||
Publisher | IEEE Xplore | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1063-6919 | ISBN | 978-1-4673-1226-4 | Medium | |
Area | Expedition | Conference | CVPR | ||
Notes | ADAS; CIC | Approved | no | ||
Call Number | Admin @ si @ KRW2012 | Serial | 1935 | ||
Permanent link to this record | |||||
Author | Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga | ||||
Title | Saliency Estimation Using a Non-Parametric Low-Level Vision Model | Type | Conference Article | ||
Year | 2011 | Publication | IEEE conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 433-440 | ||
Keywords | Gaussian mixture model; ad hoc parameter selection; center-surround inhibition windows; center-surround mechanism; color appearance model; convolution; eye-fixation data; human vision; innate spatial pooling mechanism; inverse wavelet transform; low-level visual front-end; non-parametric low-level vision model; saliency estimation; saliency map; scale integration; scale-weighted center-surround response; scale-weighting function; visual task; Gaussian processes; biology; biology computing; colour vision; computer vision; visual perception; wavelet transforms | ||||
Abstract | Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks. | ||||
Address | Colorado Springs | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1063-6919 | ISBN | 978-1-4577-0394-2 | Medium | |
Area | Expedition | Conference | CVPR | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ MVO2011 | Serial | 1757 | ||
Permanent link to this record |