Javier Vazquez, Maria Vanrell, & Robert Benavente. (2010). Color names as a constraint for Computer Vision problems. In Proceedings of The CREATE 2010 Conference (pp. 324–328).
Abstract: Computer vision problems are usually ill-posed, so constraining the gamut of possible solutions is a necessary step. Many constraints for different problems have been developed over the years. In this paper, we present a different way of constraining some of these problems: the use of color names. In particular, we focus on segmentation, representation, and constancy.
|
|
Fahad Shahbaz Khan, Joost Van de Weijer, & Maria Vanrell. (2010). Who Painted this Painting? In Proceedings of The CREATE 2010 Conference (pp. 329–333).
|
|
Robert Benavente, M.C. Olive, Maria Vanrell, & Ramon Baldrich. (1999). Colour Perception: A Simple Method for Colour Naming.
|
|
Eduard Vazquez, Ramon Baldrich, Javier Vazquez, & Maria Vanrell. (2007). Topological histogram reduction towards colour segmentation. In 3rd Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2007), J. Marti et al. (Eds.) LNCS 4477:55–62.
|
|
F. Lopez, J.M. Valiente, Ramon Baldrich, & Maria Vanrell. (2005). Fast surface grading using color statistics in the CIELab space. In LNCS 1: 666–673.
|
|
Francesc Tous, Maria Vanrell, & Ramon Baldrich. (2005). Relaxed Grey-World: Computational Colour Constancy by Surface Matching. In Pattern Recognition and Image Analysis (IbPRIA 2005), LNCS 3522:192–199.
|
|
Robert Benavente, & Maria Vanrell. (2001). A colour naming experiment.
|
|
Maria Vanrell. (1997). Exploring the space of behaviour of a texture perception algorithm.
|
|
Anna Salvatella, & Maria Vanrell. (2002). Towards a texture representation database.
|
|
Marc Serra, Olivier Penacchio, Robert Benavente, Maria Vanrell, & Dimitris Samaras. (2014). The Photometry of Intrinsic Images. In 27th IEEE Conference on Computer Vision and Pattern Recognition (pp. 1494–1501).
Abstract: Intrinsic characterization of scenes is often the best way to overcome the illumination variability artifacts that complicate most computer vision problems, from 3D reconstruction to object or material recognition. This paper examines the deficiency of existing intrinsic image models to accurately account for the effects of illuminant color and sensor characteristics in the estimation of intrinsic images and presents a generic framework which incorporates insights from color constancy research to the intrinsic image decomposition problem. The proposed mathematical formulation includes information about the color of the illuminant and the effects of the camera sensors, both of which modify the observed color of the reflectance of the objects in the scene during the acquisition process. By modeling these effects, we get a “truly intrinsic” reflectance image, which we call absolute reflectance, which is invariant to changes of illuminant or camera sensors. This model allows us to represent a wide range of intrinsic image decompositions depending on the specific assumptions on the geometric properties of the scene configuration and the spectral properties of the light source and the acquisition system, thus unifying previous models in a single general framework. We demonstrate that even partial information about sensors improves significantly the estimated reflectance images, thus making our method applicable for a wide range of sensors. We validate our general intrinsic image framework experimentally with both synthetic data and natural images.
|
|
Naila Murray, Maria Vanrell, Xavier Otazu, & C. Alejandro Parraga. (2011). Saliency Estimation Using a Non-Parametric Low-Level Vision Model. In IEEE conference on Computer Vision and Pattern Recognition (pp. 433–440).
Abstract: Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.
Keywords: Gaussian mixture model; ad hoc parameter selection; center-surround inhibition windows; center-surround mechanism; color appearance model; convolution; eye-fixation data; human vision; innate spatial pooling mechanism; inverse wavelet transform; low-level visual front-end; nonparametric low-level vision model; saliency estimation; saliency map; scale integration; scale-weighted center-surround response; scale-weighting function; visual task; Gaussian processes; biology; biology computing; colour vision; computer vision; visual perception; wavelet transforms
|
|
Felipe Lumbreras, Ramon Baldrich, Maria Vanrell, Joan Serrat, & Juan J. Villanueva. (1999). Multiresolution colour texture representations for tile classification.
|
|
Xavier Roca, Jordi Vitria, Maria Vanrell, & Juan J. Villanueva. (1999). Visual behaviours for binocular navigation with autonomous systems.
|
|
Eduard Vazquez, & Maria Vanrell. (2008). Eines per al desenvolupament de competencies de enginyeria en un assignatura de Intel·ligencia Artificial [Tools for developing engineering competencies in an Artificial Intelligence course].
|
|