Partha Pratim Roy, Eduard Vazquez, Josep Llados, Ramon Baldrich, & Umapada Pal. (2007). A System to Retrieve Text/Symbols from Color Maps using Connected Component and Skeleton Analysis. In J.M. Ogier, W. L., & J. Llados (Eds.), Seventh IAPR International Workshop on Graphics Recognition (pp. 79–78).
Yasuko Sugito, Trevor Canham, Javier Vazquez, & Marcelo Bertalmio. (2021). A Study of Objective Quality Metrics for HLG-Based HDR/WCG Image Coding. SMPTE Motion Imaging Journal, 53–65.
Abstract: In this work, we study the suitability of high dynamic range, wide color gamut (HDR/WCG) objective quality metrics to assess the perceived deterioration of compressed images encoded using the hybrid log-gamma (HLG) method, which is the standard for HDR television. Several image quality metrics have been developed to deal specifically with HDR content, although in previous work we showed that the best results (i.e., better matches to the opinion of human expert observers) are obtained by an HDR metric that consists simply of applying a given standard dynamic range metric, called visual information fidelity (VIF), directly to HLG-encoded images. However, all these HDR metrics ignore the chroma components in their calculations, that is, they consider only the luminance channel. For this reason, in the current work, we conduct subjective evaluation experiments in a professional setting using compressed HDR/WCG images encoded with HLG and analyze the ability of the best HDR metric to detect perceivable distortions in the chroma components, as well as the suitability of popular color metrics (including ΔEITP, which supports parameters for HLG) to correlate with the opinion scores. Our first contribution is to show that there is a need to consider the chroma components in HDR metrics, as there are color distortions that subjects perceive but that the best HDR metric fails to detect. Our second contribution is the surprising result that VIF, which utilizes only the luminance channel, correlates much better with the subjective evaluation scores than the metrics investigated that do consider the color components.
Maria Vanrell, & Jordi Vitria. (1993). Mathematical Morphology, Granulometries and Texture Perception.
Maria Vanrell, Jordi Vitria, & Xavier Roca. (1993). A General Morphological Framework for Perceptual Texture Discrimination based on Granulometries.
Xavier Otazu, & J. Nuñez. (2001). Algoritmo de Clasificacion no Supervisada Basado en Wavelets [Unsupervised classification algorithm based on wavelets].
O. Fors, Xavier Otazu, & J. Nuñez. (2001). Fusion Mediante Wavelets de Imagenes Spot-pan y del Satelite Tailandes TMSAT [Wavelet-based fusion of SPOT-PAN images and images from the Thai satellite TMSAT].
Robert Benavente, Ernest Valveny, Jaume Garcia, Agata Lapedriza, Miquel Ferrer, & Gemma Sanchez. (2008). Una experiencia de adaptacion al EEES de las asignaturas de programacion en Ingenieria Informatica [An experience of adapting the programming courses in Computer Engineering to the EHEA].
Enric Marti, Jordi Rocarias, & Ricardo Toledo. (2008). Caront: gestió flexible de grups d’alumnes en una asignatura i activitats sobre grups. Nova activitat de control [Caront: flexible management of student groups in a course and group-based activities. A new assessment activity].
Eduard Vazquez, & Maria Vanrell. (2008). Eines per al desenvolupament de competencies de enginyeria en un assignatura de Intel·ligencia Artificial [Tools for developing engineering competences in an Artificial Intelligence course].
Agata Lapedriza, Jaume Garcia, Ernest Valveny, Robert Benavente, Miquel Ferrer, & Gemma Sanchez. (2008). Una experiencia de aprenentatge basada en projectes en el ambit de la informatica [A project-based learning experience in the field of computer science].
X. Binefa, Jordi Vitria, & Maria Vanrell. (1992). Reconstruccion tridimensional de imagenes Microscopicas [Three-dimensional reconstruction of microscopic images].
Xavier Otazu, Maria Vanrell, & C. Alejandro Parraga. (2008). Multiresolution Wavelet Framework Models Brightness Induction Effects. Vision Research, 733–751.
Ivet Rafegas, & Maria Vanrell. (2018). Color encoding in biologically-inspired convolutional neural networks. Vision Research, 151, 7–17.
Abstract: Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks have shown representational properties that rival primate performance in object recognition. In this paper we explore how color is encoded in a trained artificial network. We do so by estimating a color selectivity index for each neuron, which describes the neuron's activity in response to color input stimuli. The index allows us to classify neurons as color selective or not, and as selective to a single color or to a double color. We have determined that all five convolutional layers of the network have a large number of color selective neurons. Color opponency clearly emerges in the first layer, presenting 4 main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but this opponency is reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons, and opponency is reduced almost to one new main axis, Bluish-Orangish, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky), or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representations are successively entangled through all the layers of the studied network, revealing certain parallels with the reported evidence in primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations.
Keywords: Color coding; Computer vision; Deep learning; Convolutional neural networks