|
Javier Vazquez, Maria Vanrell, Ramon Baldrich, & Francesc Tous. (2012). Color Constancy by Category Correlation. TIP - IEEE Transactions on Image Processing, 21(4), 1997–2007.
Abstract: Finding color representations which are stable to illuminant changes is still an open problem in computer vision. Until now, most approaches have been based on physical constraints or statistical assumptions derived from the scene, while very little attention has been paid to the effects that the selected illuminants have on the final color image representation. The novelty of this work is to propose perceptual constraints that are computed on the corrected images. We define the category hypothesis, which weights the set of feasible illuminants according to their ability to map the corrected image onto specific colors. Here we choose these colors as the universal color categories related to basic linguistic terms, which have been psychophysically measured. These color categories encode natural color statistics, and their relevance across different cultures is indicated by the fact that they have received a common color name. From this category hypothesis we propose a fast implementation that allows the sampling of a large set of illuminants. Experiments prove that our method rivals current state-of-the-art performance without the need for training algorithmic parameters. Additionally, the method can be used as a framework to insert top-down information from other sources, thus opening further research directions in solving for color constancy.
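As an illustrative sketch of the category hypothesis, each feasible illuminant can be scored by how much of the diagonally corrected image lands near a set of universal color categories; the prototypes, threshold, and function names below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Hypothetical prototypes for a few universal color categories (linear RGB).
CATEGORY_PROTOTYPES = np.array([
    [0.9, 0.1, 0.1],  # red
    [0.1, 0.8, 0.1],  # green
    [0.9, 0.8, 0.1],  # yellow
    [0.1, 0.2, 0.8],  # blue
])

def category_score(image_rgb, illuminant, tau=0.15):
    """Weight one feasible illuminant: the fraction of diagonally corrected
    pixels lying close to some universal color category."""
    corrected = image_rgb / illuminant            # von Kries diagonal map
    chroma = corrected / (corrected.sum(axis=-1, keepdims=True) + 1e-8)
    protos = CATEGORY_PROTOTYPES / CATEGORY_PROTOTYPES.sum(axis=-1, keepdims=True)
    # Distance of every pixel to its nearest category prototype.
    d = np.linalg.norm(chroma[..., None, :] - protos, axis=-1).min(axis=-1)
    return (d < tau).mean()

def select_illuminant(image_rgb, candidate_illuminants):
    scores = [category_score(image_rgb, L) for L in candidate_illuminants]
    return candidate_illuminants[int(np.argmax(scores))]
```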
|
|
|
Javier Vazquez, J. Kevin O'Regan, Maria Vanrell, & Graham D. Finlayson. (2012). A new spectrally sharpened basis to predict colour naming, unique hues, and hue cancellation. VSS - Journal of Vision, 12(6):7, 1–14.
Abstract: When light is reflected off a surface, there is a linear relation between the three human photoreceptor responses to the incoming light and the three photoreceptor responses to the reflected light. Different colored surfaces have different linear relations. Recently, Philipona and O'Regan (2006) showed that when this relation is singular in a mathematical sense, then the surface is perceived as having a highly nameable color. Furthermore, white light reflected by that surface is perceived as corresponding precisely to one of the four psychophysically measured unique hues. However, Philipona and O'Regan's approach seems unrelated to classical psychophysical models of color constancy. In this paper we make this link. We begin by transforming cone sensors to spectrally sharpened counterparts. In sharp color space, illumination change can be modeled by simple von Kries type scalings of response values within each of the spectrally sharpened response channels. In this space, Philipona and O'Regan's linear relation is captured by a simple Land-type color designator defined by dividing reflected light by incident light. This link between Philipona and O'Regan's theory and Land's notion of color designator gives the model biological plausibility. We then show that Philipona and O'Regan's singular surfaces are surfaces which are very close to activating only one or only two of such newly defined spectrally sharpened sensors, instead of the usual three. Closeness to zero is quantified in a new simplified measure of singularity which is also shown to relate to the chromaticness of colors. As in Philipona and O'Regan's original work, our new theory accounts for a large variety of psychophysical color data.
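The link to Land's color designator admits a very small sketch: in a spectrally sharpened basis, the designator is a per-channel ratio of reflected to incident light, and "singular" surfaces are those whose designator has entries close to zero. The sharpening matrix below is a placeholder, not the one derived in the paper:

```python
import numpy as np

# Placeholder 3x3 spectral sharpening matrix T (the paper derives its own).
T = np.array([[ 1.2, -0.3,  0.1],
              [-0.2,  1.3, -0.1],
              [ 0.0, -0.1,  1.1]])

def color_designator(reflected_lms, incident_lms):
    """Land-type designator: divide reflected light by incident light,
    channel-wise, in the spectrally sharpened sensor basis."""
    r_sharp = T @ reflected_lms
    i_sharp = T @ incident_lms
    return r_sharp / i_sharp   # von Kries-type scaling per sharpened channel

def singularity_count(designator, eps=1e-3):
    """A surface is close to 'singular' when it effectively activates only
    one or two sharpened channels, i.e. some designator entries are near zero."""
    return int(np.sum(np.abs(designator) < eps))
```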
|
|
|
Javier Vazquez, Graham D. Finlayson, & Luis Herranz. (2024). Improving the perception of low-light enhanced images. Optics Express, 32(4), 5174–5190.
Abstract: Improving images captured under low-light conditions has become an important topic in computational color imaging, as it has a wide range of applications. Most current methods are either based on handcrafted features or on end-to-end training of deep neural networks that mostly focus on minimizing a distortion metric (such as PSNR or SSIM) on a set of training images. However, minimizing distortion metrics does not mean that the results are optimal in terms of perception (i.e., perceptual quality). As an example, the perception-distortion trade-off states that, close to the optimum, improving distortion worsens perception. This means that current low-light image enhancement methods that focus on distortion minimization cannot be optimal in the sense of obtaining a good image in terms of perceptual errors. In this paper, we propose a post-processing approach in which, given the original low-light image and the result of a specific method, we obtain a result that resembles the output of the original method as much as possible while at the same time improving the perception of the final image. In more detail, our method follows the hypothesis that, in order to minimally modify the perception of an input image, any modification should be a combination of a local change in the shading across the scene and a global change in illumination color. We demonstrate the ability of our method quantitatively using blind perceptual image metrics such as BRISQUE, NIQE, and UNIQUE, and through user preference tests.
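A minimal sketch of the modification hypothesis above, assuming the two admissible degrees of freedom are a smooth single-channel shading field and one global color cast (the smoothing choice and names are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_shading_and_cast(enhanced_rgb, shading, global_color, sigma=15.0):
    """Re-render `enhanced_rgb` (H, W, 3) through the only two admissible
    changes: a smooth single-channel shading field (H, W) and one global
    3-vector color cast."""
    smooth_shading = gaussian_filter(shading, sigma=sigma)  # enforce local smoothness
    return enhanced_rgb * smooth_shading[..., None] * np.asarray(global_color).reshape(1, 1, 3)
```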
|
|
|
Javier Vazquez, C. Alejandro Parraga, Maria Vanrell, & Ramon Baldrich. (2009). Color Constancy Algorithms: Psychophysical Evaluation on a New Dataset. Journal of Imaging Science and Technology, 53(3), 031105–9.
Abstract: The estimation of the illuminant of a scene from a digital image has been the goal of a large amount of research in computer vision. Color constancy algorithms have dealt with this problem by defining different heuristics to select a unique solution from within the feasible set. The performance of these algorithms shows that there is still a long way to go to solve this problem globally as a preliminary step in computer vision. In general, performance evaluation has been done by computing the angular error between the estimated chromaticity and the chromaticity of a canonical illuminant, which is highly dependent on the image dataset. Recently, some authors have used high-level constraints to estimate illuminants; in this case, selection is based on improving the performance of the subsequent steps of the system. In this paper we propose a new performance measure, the perceptual angular error. It evaluates the performance of a color constancy algorithm according to the perceptual preferences of humans (naturalness) instead of the actual optimal solution, and is independent of the visual task. We show the results of a new psychophysical experiment comparing solutions from three different color constancy algorithms. Our results show that in more than half of the judgments the preferred solution is not the one closest to the optimal solution. Our experiments were performed on a new dataset of images acquired with a calibrated camera with an attached neutral grey sphere, which better copes with the illuminant variations of the scene.
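For reference, the standard angular error the paper contrasts against is the angle between the estimated and canonical illuminant chromaticities; a minimal implementation:

```python
import numpy as np

def angular_error_deg(estimated, canonical):
    """Angle (degrees) between the estimated and reference illuminant vectors."""
    e = np.asarray(estimated, dtype=float)
    c = np.asarray(canonical, dtype=float)
    cos = np.dot(e, c) / (np.linalg.norm(e) * np.linalg.norm(c))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: a perfect estimate gives 0 degrees.
print(angular_error_deg([0.3, 0.3, 0.4], [0.3, 0.3, 0.4]))  # 0.0
```

The perceptual angular error proposed in the paper keeps the same formula but replaces the canonical illuminant with the solution observers judge most natural.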
|
|
|
Javier Vazquez, C. Alejandro Parraga, & Maria Vanrell. (2009). Ordinal pairwise method for natural images comparison. PER - Perception, 38, 180.
Abstract: We developed a new psychophysical method to compare different colour appearance models when applied to natural scenes. The method was as follows: two images (processed by different algorithms) were displayed on a CRT monitor and observers were asked to select the more natural of them. The original images were gathered by means of a calibrated trichromatic digital camera and presented one on top of the other on a calibrated screen. The selection was made by pressing a 6-button IR box, which allowed observers not only to choose the more natural image but also to rate their selection. The rating system allowed observers to register how much more natural their chosen image was (e.g., much more, definitely more, slightly more), which gave us valuable extra information on the selection process. The results were analysed both by considering the selection as a binary choice (using Thurstone's law of comparative judgement) and by using the Bradley-Terry method for ordinal comparison. Our results show a significant difference in the rating scales obtained. Although this method has been used to compare colour constancy algorithms, its uses are much wider, e.g., to compare algorithms for image compression, rendering, recolouring, etc.
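A minimal sketch of the Bradley-Terry analysis mentioned above, fitting one "naturalness" worth per algorithm from the pairwise choices via a standard MM iteration (the win matrix is a toy example):

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """wins[i, j] = number of times item i was preferred over item j.
    Returns a positive 'worth' score per item (larger = judged more natural)."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            num = wins[i].sum()                       # total wins of item i
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(n) if j != i)    # comparisons weighted by worths
            p[i] = num / den
        p /= p.sum()                                  # fix the arbitrary scale
    return p

# Toy example with three algorithms; algorithm 0 is preferred most often.
wins = np.array([[0, 8, 9], [2, 0, 6], [1, 4, 0]])
print(bradley_terry(wins))
```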
|
|
|
Javier Selva, Anders S. Johansen, Sergio Escalera, Kamal Nasrollahi, Thomas B. Moeslund, & Albert Clapes. (2023). Video transformers: A survey. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(11), 12922–12943.
Abstract: Transformer models have shown great success handling long-range interactions, making them a promising tool for modeling video. However, they lack inductive biases and scale quadratically with input length. These limitations are further exacerbated when dealing with the high dimensionality introduced by the temporal dimension. While there are surveys analyzing the advances of Transformers for vision, none focus on an in-depth analysis of video-specific designs. In this survey, we analyze the main contributions and trends of works leveraging Transformers to model video. We first delve into how videos are handled at the input level. Then, we study the architectural changes made to deal with video more efficiently, reduce redundancy, re-introduce useful inductive biases, and capture long-term temporal dynamics. In addition, we provide an overview of different training regimes and explore effective self-supervised learning strategies for video. Finally, we conduct a performance comparison on the most common benchmark for Video Transformers (i.e., action classification), finding them to outperform 3D ConvNets even with less computational complexity.
Keywords: Artificial Intelligence; Computer Vision; Self-Attention; Transformers; Video Representations
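The quadratic scaling noted in the abstract is easy to make concrete: with N = frames × tokens-per-frame input tokens, full self-attention costs on the order of N²·d operations, so doubling the clip length roughly quadruples the cost (the sizes below are illustrative):

```python
def attention_flops(num_frames, tokens_per_frame, dim):
    """Self-attention over a video costs O(N^2 * d) with N = T * P tokens:
    doubling the clip length quadruples the attention cost."""
    n = num_frames * tokens_per_frame
    return 2 * n * n * dim   # QK^T plus the attention-weighted sum of V

print(attention_flops(8, 196, 768))   # 8-frame clip
print(attention_flops(16, 196, 768))  # 2x frames -> ~4x cost
```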
|
|
|
Javier Marin, & Sergio Escalera. (2021). SSSGAN: Satellite Style and Structure Generative Adversarial Networks. Remote Sensing, 13(19), 3984.
Abstract: This work presents the Satellite Style and Structure Generative Adversarial Network (SSSGAN), a generative model of high-resolution satellite imagery to support image segmentation. Based on spatially adaptive denormalization (SPADE) modules, which modulate the activations with respect to the structure of the segmentation map, together with global descriptor vectors that capture the semantic information of Open Street Maps (OSM) classes, the model is able to produce consistent aerial imagery. By decoupling the generation of aerial images into a structure map and a carefully defined style vector, we were able to improve the realism and geodiversity of the synthesis with respect to the state-of-the-art baseline. The proposed model therefore allows us to control the generation not only with respect to the desired structure, but also with respect to a geographic area.
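A minimal numpy sketch of the SPADE-style modulation the model builds on: activations are normalized without learned affine parameters and then modulated by per-pixel gamma and beta maps predicted from the segmentation map (the prediction network is stubbed out as precomputed maps here):

```python
import numpy as np

def spade_modulate(x, gamma_map, beta_map, eps=1e-5):
    """x: activations (C, H, W); gamma_map/beta_map: per-pixel modulation
    maps of the same shape, predicted from the segmentation map."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    x_norm = (x - mu) / np.sqrt(var + eps)          # parameter-free normalization
    return x_norm * (1.0 + gamma_map) + beta_map    # spatially adaptive denormalization
```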
|
|
|
Javier Marin, David Vazquez, Antonio Lopez, Jaume Amores, & Ludmila I. Kuncheva. (2014). Occlusion handling via random subspace classifiers for human detection. TSMCB - IEEE Transactions on Systems, Man, and Cybernetics (Part B), 44(3), 342–354.
Abstract: This paper describes a general method to address partial occlusions for human detection in still images. The Random Subspace Method (RSM) is chosen for building a classifier ensemble robust against partial occlusions. The component classifiers are chosen on the basis of their individual and combined performance. The main contribution of this work lies in our approach’s capability to improve the detection rate when partial occlusions are present without compromising the detection performance on non-occluded data. In contrast to many recent approaches, we propose a method which does not require manually labelling body parts, defining semantic spatial components, or using additional data coming from motion or stereo. Moreover, the method can be easily extended to other object classes. The experiments are performed on three large datasets: the INRIA person dataset, the Daimler Multicue dataset, and a new challenging dataset, called PobleSec, in which a considerable number of targets are partially occluded. The different approaches are evaluated at the classification and detection levels for both partially occluded and non-occluded data. The experimental results show that our detector outperforms state-of-the-art approaches in the presence of partial occlusions, while offering performance and reliability similar to those of the holistic approach on non-occluded data. The datasets used in our experiments have been made publicly available for benchmarking purposes.
Keywords: Pedestrian detection; Occlusion handling
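As an illustrative sketch of the Random Subspace idea used above: each ensemble member sees only a random subset of feature dimensions, so an occlusion corrupting some dimensions leaves other members unaffected. The classifier choice and sizes below are assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_rsm(X, y, n_members=10, subspace_frac=0.5, seed=0):
    """Train an ensemble where each member uses a random feature subspace."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    k = int(subspace_frac * d)
    ensemble = []
    for _ in range(n_members):
        idx = rng.choice(d, size=k, replace=False)   # random feature subspace
        clf = LogisticRegression(max_iter=1000).fit(X[:, idx], y)
        ensemble.append((idx, clf))
    return ensemble

def predict_rsm(ensemble, X):
    # Average member probabilities; members whose subspace avoids the
    # occluded dimensions keep the ensemble reliable.
    probs = np.mean([clf.predict_proba(X[:, idx])[:, 1]
                     for idx, clf in ensemble], axis=0)
    return (probs > 0.5).astype(int)
```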
|
|
|
Jaume Gibert, Ernest Valveny, & Horst Bunke. (2013). Embedding of Graphs with Discrete Attributes Via Label Frequencies. IJPRAI - International Journal of Pattern Recognition and Artificial Intelligence, 27(3), 1360002–1360029.
Abstract: Graph-based representations of patterns are very flexible and powerful, but they are not easily processed due to the lack of learning algorithms in the domain of graphs. Embedding a graph into a vector space solves this problem since graphs are turned into feature vectors and thus all the statistical learning machinery becomes available for graph input patterns. In this work we present a new way of embedding discrete attributed graphs into vector spaces using node and edge label frequencies. The methodology is experimentally tested on graph classification problems, using patterns of different nature, and it is shown to be competitive with state-of-the-art classification algorithms for graphs, while being computationally much more efficient.
Keywords: Discrete attributed graphs; graph embedding; graph classification
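The embedding described above amounts to counting; a sketch mapping a graph with discrete attributes to a vector of node-label and edge-configuration frequencies (the label alphabets are hypothetical):

```python
from collections import Counter

NODE_LABELS = ["C", "N", "O"]          # hypothetical label alphabets
EDGE_LABELS = ["single", "double"]

def embed(node_labels, edges):
    """node_labels: {node: label}; edges: [(u, v, edge_label), ...].
    Returns a fixed-length vector of node-label and edge-configuration counts."""
    nc = Counter(node_labels.values())
    ec = Counter()
    for u, v, el in edges:
        a, b = sorted((node_labels[u], node_labels[v]))
        ec[(a, b, el)] += 1
    edge_keys = [(a, b, e) for a in NODE_LABELS for b in NODE_LABELS
                 if a <= b for e in EDGE_LABELS]
    return [nc[l] for l in NODE_LABELS] + [ec[k] for k in edge_keys]

# A toy molecule-like graph: O=C-N
g_nodes = {0: "O", 1: "C", 2: "N"}
g_edges = [(0, 1, "double"), (1, 2, "single")]
print(embed(g_nodes, g_edges))
```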
|
|
|
Jaume Gibert, Ernest Valveny, & Horst Bunke. (2012). Graph Embedding in Vector Spaces by Node Attribute Statistics. PR - Pattern Recognition, 45(9), 3072–3083.
Abstract: Graph-based representations are of broad use and applicability in pattern recognition. They exhibit, however, a major drawback with regard to the processing tools that are available in their domain. Graph embedding into vector spaces is a growing field in the structural pattern recognition community which aims at providing a feature vector representation for every graph, and thus enables classical statistical learning machinery to be used on graph-based input patterns. In this work, we propose a novel embedding methodology for graphs with continuous node attributes and unattributed edges. The approach presented in this paper is based on statistics of the node labels and of the edges between them, computed with respect to their similarity to a set of representatives. We specifically deal with an important issue of this methodology, namely the selection of a suitable set of representatives. In an experimental evaluation, we empirically show the advantages of this novel approach in the context of different classification problems using several databases of graphs.
Keywords: Structural pattern recognition; Graph embedding; Data clustering; Graph classification
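For continuous node attributes, plain counting is replaced by assignment to a set of representatives; a sketch using k-means centroids as representatives, which is one plausible selection scheme rather than necessarily the paper's:

```python
import numpy as np
from sklearn.cluster import KMeans

def embed_continuous(node_attrs_list, n_representatives=8, seed=0):
    """node_attrs_list: one (n_i, d) array of node attributes per graph.
    Representatives are learned on all nodes; each graph is embedded as
    the histogram of its nodes' nearest representatives."""
    all_nodes = np.vstack(node_attrs_list)
    km = KMeans(n_clusters=n_representatives, random_state=seed, n_init=10).fit(all_nodes)
    embeddings = []
    for attrs in node_attrs_list:
        assign = km.predict(attrs)                               # nearest representative per node
        embeddings.append(np.bincount(assign, minlength=n_representatives))
    return np.array(embeddings)
```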
|
|
|
Jaume Gibert, Ernest Valveny, & Horst Bunke. (2012). Feature Selection on Node Statistics Based Embedding of Graphs. PRL - Pattern Recognition Letters, 33(15), 1980–1990.
Abstract: Representing a graph with a feature vector is a common way of making statistical machine learning algorithms applicable to the domain of graphs. Such a transition from graphs to vectors is known as graph embedding. A key issue in graph embedding is to select a proper set of features in order to make the vectorial representation of graphs as strong and discriminative as possible. In this article, we propose features that are constructed out of frequencies of node label representatives. We first build a large set of features and then select the most discriminative ones according to different ranking criteria and feature transformation algorithms. On different classification tasks, we experimentally show that only a small, significant subset of these features is needed to achieve the same classification rates as competing state-of-the-art methods.
Keywords: Structural pattern recognition; Graph embedding; Feature ranking; PCA; Graph classification
|
|
|
Jaume Garcia, Debora Gil, Sandra Pujades, & Francesc Carreras. (2008). Valoracion de la Funcion del Ventriculo Izquierdo mediante Modelos Regionales Hiperparametricos. Revista Española de Cardiologia, 61(3), 79.
Abstract: Most cardiovascular diseases affect the contractile properties of the helical ventricular band. This is reflected in a deviation from the normal behaviour of ventricular function. Local parameters such as strains, or the deformation experienced by the tissue, are indicators capable of detecting functional anomalies in specific territories. These parameters are often considered separately. In this work we present a computational framework (the Normalized Parametric Domain, NPD) that allows them to be integrated into functional hyperparameters and their normality ranges to be studied. These ranges allow an objective assessment of the regional function of any new patient. To this end, we consider tagged magnetic resonance sequences at the basal, mid, and apical levels. The hyperparameters are obtained from the intramural motion of the LV estimated by the Harmonic Phase Flow method. The NPD is defined from a parameterization of the Left Ventricle (LV) in its radial and circumferential coordinates based on anatomical criteria. Mapping the hyperparameters onto the NPD makes comparison between different patients possible. The normality ranges are defined by statistical analysis of values from healthy volunteers in 45 regions of the NPD over 9 systolic phases. A set of 19 healthy volunteers (14 male; age 30.7±7.5) was used to create the normality patterns, which were validated using 2 healthy controls and 3 patients with reduced global contractility. For the controls, the regional results fell within normality, while for the patients abnormal values were obtained in the described zones, thereby localizing and quantifying the empirical diagnosis.
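A sketch of the normality-range idea described in the abstract: per-region, per-phase statistics from the healthy group define an acceptance band, and a new subject's regional hyperparameters are flagged when they fall outside it (the 45×9 grid follows the abstract; the threshold is illustrative):

```python
import numpy as np

def normality_model(healthy_scores):
    """healthy_scores: (n_volunteers, 45 regions, 9 systolic phases).
    Returns per-region, per-phase mean and std over the healthy group."""
    return healthy_scores.mean(axis=0), healthy_scores.std(axis=0)

def flag_abnormal(patient_scores, mu, sigma, k=2.0):
    """Mark the (region, phase) cells falling outside mu +/- k*sigma."""
    return np.abs(patient_scores - mu) > k * sigma
```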
|
|
|
Jaume Garcia, Debora Gil, Sandra Pujades, & Francesc Carreras. (2008). A Variational Framework for Assessment of the Left Ventricle Motion. Mathematical Modelling of Natural Phenomena, 3(6), 76–100.
Abstract: Impairment of left ventricular contractility due to cardiovascular diseases is reflected in left ventricle (LV) motion patterns. An abnormal change in torsion or long-axis shortening values of the LV can help with the diagnosis and follow-up of LV dysfunction. Tagged Magnetic Resonance (TMR) is a widespread medical imaging modality that allows estimation of the local deformation of myocardial tissue. In this work, we introduce a novel variational framework for extracting left ventricle dynamics from TMR sequences. A bi-dimensional representation space of TMR images given by Gabor filter banks is defined. Tracking of the phases of the Gabor responses is combined within a variational framework that regularizes the deformation field only in areas where the Gabor amplitude drops, while restoring the underlying motion elsewhere. The clinical applicability of the proposed method is illustrated by extracting normality models of ventricular torsion from 19 healthy subjects.
Keywords: Left Ventricle Dynamics; Ventricular Torsion; Tagged Magnetic Resonance; Motion Tracking; Variational Framework; Gabor Transform
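A minimal sketch of the Gabor representation the method tracks: filtering a tagged frame with a complex oriented Gabor kernel yields an amplitude (used to gate the regularization) and a phase (used for tracking). The kernel parameters below are illustrative:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma, size=31):
    """Complex Gabor kernel tuned to the tag stripe frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def gabor_phase(image, freq=0.1, theta=0.0, sigma=5.0):
    resp = fftconvolve(image, gabor_kernel(freq, theta, sigma), mode="same")
    return np.abs(resp), np.angle(resp)   # amplitude gates regularization; phase tracks tags
```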
|
|
|
Jaume Garcia, Debora Gil, Luis Badiella, Aura Hernandez-Sabate, Francesc Carreras, Sandra Pujades, et al. (2010). A Normalized Framework for the Design of Feature Spaces Assessing the Left Ventricular Function. TMI - IEEE Transactions on Medical Imaging, 29(3), 733–745.
Abstract: A thorough description of left ventricle functionality requires combining complementary regional scores. A main limitation is the lack of multiparametric normality models oriented to the assessment of regional wall motion abnormalities (RWMA). This paper covers two main topics involved in RWMA assessment. We propose a general framework allowing the fusion and comparison of different regional scores across subjects. Our framework is used to explore which combination of regional scores (including 2-D motion and strains) is best suited for RWMA detection. Our statistical analysis indicates that, for a proper (within interobserver variability) identification of RWMA, models should consider motion and extreme strains.
|
|
|
Jaume Amores, & Petia Radeva. (2005). Registration and Retrieval of Highly Elastic Bodies using Contextual Information. PRL - Pattern Recognition Letters, 26(11), 1720–1731.
|
|