Ivet Rafegas, Maria Vanrell, Luis A. Alexandre, & G. Arias. (2020). Understanding trained CNNs by indexing neuron selectivity. PRL - Pattern Recognition Letters, 136, 318–325.
Abstract: The impressive performance of Convolutional Neural Networks (CNNs) when solving different vision problems is shadowed by their black-box nature and our consequent lack of understanding of the representations they build and how these representations are organized. To help understand these issues, we propose to describe the activity of individual neurons by their Neuron Feature visualization and to quantify their inherent selectivity with two specific properties. We explore selectivity indexes for an image feature (color) and an image label (class membership). Our contribution is a framework to seek or classify neurons by indexing on these selectivity properties. It helps to find color-selective neurons, such as a red-mushroom neuron in layer Conv4, or class-selective neurons, such as dog-face neurons in layer Conv5 of VGG-M, and establishes a methodology to derive other selectivity properties. Indexing on neuron selectivity can statistically portray how features and classes are represented across layers, at a moment when the size of trained nets is growing and automatic tools to index neurons can be helpful.
|
Bojana Gajic, Eduard Vazquez, & Ramon Baldrich. (2017). Evaluation of Deep Image Descriptors for Texture Retrieval. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017) (pp. 251–257).
Abstract: The increasing complexity learnt in the layers of a Convolutional Neural Network has proven to be of great help for the task of classification. The topic has received great attention in recently published literature.
Nonetheless, just a handful of works study low-level representations, commonly associated with the lower layers. In this paper, we explore recent findings which conclude, counterintuitively, that the last layer of the VGG convolutional network is the best to describe a low-level property such as texture. To shed some light on this issue, we propose a psychophysical experiment to evaluate the adequacy of different layers of the VGG network for texture retrieval. The results obtained suggest that, whereas the last convolutional layer is a good choice for a specific classification task, it might not be the best choice as a texture descriptor, showing very poor performance on texture retrieval. Intermediate layers perform best, combining basic filters, as in the primary visual cortex, with a degree of higher-level information that describes more complex textures.
Keywords: Texture Representation; Texture Retrieval; Convolutional Neural Networks; Psychophysical Evaluation
|
Maria Vanrell, Naila Murray, Robert Benavente, C. Alejandro Parraga, Xavier Otazu, & Ramon Baldrich. (2011). Perception Based Representations for Computational Colour. In Raimondo Schettini, Shoji Tominaga, & Alain Trémeau (Eds.), 3rd International Workshop on Computational Color Imaging (CCIW 2011) (Vol. 6626, pp. 16–30). LNCS. Springer-Verlag.
Abstract: The perceived colour of a stimulus depends on multiple factors stemming either from the context of the stimulus or from idiosyncrasies of the observer. The complexity involved in combining these multiple effects is the main reason for the gap between the classical calibrated colour spaces of colour science and the colour representations used in computer vision, where colour is just one more visual cue immersed in a digital image in which surfaces, shadows and illuminants interact seemingly out of control. With the aim of advancing a few steps towards bridging this gap, we present some results on computational representations of colour for computer vision. They have been developed by introducing perceptual considerations derived from the interaction of the colour of a point with its context. We show some techniques to represent the colour of a point influenced by assimilation and contrast effects due to the image surround, and we show some results on how colour saliency can be derived in real images. We outline a model for the automatic assignment of colour names to image points, directly trained on psychophysical data. Finally, we show how colour segments can be perceptually grouped in the image by imposing shading coherence in the colour space.
Keywords: colour perception, induction, naming, psychophysical data, saliency, segmentation
|
Graham D. Finlayson, Javier Vazquez, & Fufu Fang. (2021). The Discrete Cosine Maximum Ignorance Assumption. In 29th Color and Imaging Conference (pp. 13–18).
Abstract: The performance of colour correction algorithms is dependent on the reflectance sets used. Sometimes, when the testing reflectance set is changed, the ranking of colour correction algorithms also changes. To remove the dependence on the dataset, we can make assumptions about the set of all possible reflectances. In the Maximum Ignorance with Positivity (MIP) assumption, we assume that all reflectances with per-wavelength values between 0 and 1 are equally likely. A weakness of the MIP is that it fails to take into account the correlation of reflectance functions between wavelengths (many of the assumed reflectances are, in reality, not possible).
In this paper, we take the view that the maximum ignorance assumption has merit but, hitherto, has been calculated with respect to the wrong coordinate basis. Here, we propose the Discrete Cosine Maximum Ignorance assumption (DCMI), where all reflectances that have coordinates between max and min bounds in the Discrete Cosine Basis coordinate system are equally likely. In this basis, the correlation between wavelengths is encoded, and this results in the set of all plausible reflectances “looking like” typical reflectances that occur in nature. That said, the DCMI model is also a superset of all measured reflectance sets.
Experiments show that, in colour correction, adopting the DCMI results in colour correction performance similar to that obtained using a particular reflectance set.
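As an aside, the DCMI assumption is straightforward to illustrate: drawing independent uniform coordinates in an orthonormal discrete-cosine basis and projecting back to the wavelength domain yields smooth, correlated reflectance curves. The sketch below is illustrative only (not the authors' code), and the single pair of bounds `lo`/`hi` is a hypothetical simplification, whereas the DCMI uses per-coordinate max/min bounds.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis: row k is the k-th cosine over n wavelengths."""
    j = np.arange(n)
    B = np.cos(np.pi * np.outer(j + 0.5, np.arange(n)) / n).T  # B[k, j]
    B[0] *= np.sqrt(1.0 / n)
    B[1:] *= np.sqrt(2.0 / n)
    return B

def sample_dcmi_reflectances(num, n_wavelengths, lo, hi, rng=None):
    """Draw reflectances whose DCT coordinates are i.i.d. uniform in [lo, hi]."""
    rng = np.random.default_rng(rng)
    coords = rng.uniform(lo, hi, size=(num, n_wavelengths))
    return coords @ dct_basis(n_wavelengths)  # back to the wavelength domain
```

Because the cosine basis concentrates energy in slowly varying functions, the sampled curves are smooth across wavelengths, unlike the per-wavelength-independent samples of the MIP.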
|
Olivier Penacchio, & C. Alejandro Parraga. (2011). What is the best criterion for an efficient design of retinal photoreceptor mosaics? PER - Perception, 40, 197.
Abstract: The proportions of L, M and S photoreceptors in the primate retina are arguably determined by evolutionary pressure and the statistics of the visual environment. Two information-theory-based approaches have recently been proposed to explain the asymmetrical spatial densities of photoreceptors in humans. The first approach (Garrigan et al, 2010 PLoS ONE 6 e1000677) proposes a model for computing the information transmitted by cone arrays that considers the differential blurring produced by the long-wavelength accommodation of the eye’s lens. Their results explain the sparsity of S-cones, but the optimum depends weakly on the L:M cone ratio. In the second approach (Penacchio et al, 2010 Perception 39 ECVP Supplement, 101), we show that human cone arrays make the visual representation scale-invariant, allowing the total entropy of the signal to be preserved while decreasing individual neurons’ entropy in further retinotopic representations. This criterion provides a thorough description of the distribution of L:M cone ratios and does not depend on differential blurring of the signal by the lens. Here, we investigate the similarities and differences of both approaches when applied to the same database. Our results support a two-criteria optimization in the space of cone ratios whose components are arguably important and mostly unrelated.
[This work was partially funded by projects TIN2010-21771-C02-1 and Consolider-Ingenio 2010-CSD2007-00018 from the Spanish MICINN. CAP was funded by grant RYC-2007-00484]
|
Eduard Vazquez, Ramon Baldrich, Joost Van de Weijer, & Maria Vanrell. (2011). Describing Reflectances for Colour Segmentation Robust to Shadows, Highlights and Textures. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(5), 917–930.
Abstract: The segmentation of a single material reflectance is a challenging problem due to the considerable variation in image measurements caused by the geometry of the object, shadows, and specularities. The combination of these effects has been modeled by the dichromatic reflection model. However, the application of the model to real-world images is limited due to unknown acquisition parameters and compression artifacts. In this paper, we present a robust model for the shape of a single material reflectance in histogram space. The method is based on a multilocal creaseness analysis of the histogram, which results in a set of ridges representing the material reflectances. The segmentation method derived from these ridges is robust to shadows, shading, specularities, and texture in real-world images. We further complete the method by incorporating prior knowledge from image statistics, and we incorporate spatial coherence by using multiscale color contrast information. Results show that our method clearly outperforms state-of-the-art segmentation methods on a widely used segmentation benchmark; its main characteristic is excellent performance in the presence of shadows and highlights at low computational cost.
|
C. Alejandro Parraga. (2014). Color Vision, Computational Methods for. In Dieter Jaeger, & Ranu Jung (Eds.), Encyclopedia of Computational Neuroscience (pp. 1–11). Springer-Verlag Berlin Heidelberg.
Abstract: The study of color vision has been aided by a whole battery of computational methods that attempt to describe the mechanisms that lead to our perception of colors in terms of the information-processing properties of the visual system. Their scope is highly interdisciplinary, linking apparently dissimilar disciplines such as mathematics, physics, computer science, neuroscience, cognitive science, and psychology. Since the sensation of color is a feature of our brains, computational approaches usually include biological features of neural systems in their descriptions, from retinal light-receptor interaction to subcortical color opponency, cortical signal decoding, and color categorization. They produce hypotheses that are usually tested by behavioral or psychophysical experiments.
Keywords: Color computational vision; Computational neuroscience of color
|
Jordi Roca, C. Alejandro Parraga, & Maria Vanrell. (2011). Categorical Focal Colours are Structurally Invariant Under Illuminant Changes. In European Conference on Visual Perception (p. 196). Perception, 40.
Abstract: The visual system perceives the colour of surfaces as approximately constant under changes of illumination. In this work, we investigate how stable the perception of categorical “focal” colours and their interrelations are under varying illuminants and simple chromatic backgrounds. It has been proposed that the best examples of colour categories across languages cluster in small regions of the colour space and are restricted to a set of 11 basic terms (Kay and Regier, 2003 Proceedings of the National Academy of Sciences of the USA 100 9085–9089). Following this, we developed a psychophysical paradigm that exploits the ability of subjects to reliably reproduce the most representative examples of each category by adjusting multiple test patches embedded in a coloured Mondrian. The experiment was run on a CRT monitor (inside a dark room) under various simulated illuminants. We modelled the recorded data for each subject and adaptation state as a 3D interconnected structure (graph) in Lab space. The graph nodes were the subject’s focal colours at each adaptation state. The model allowed us to obtain a better distance measure between focal structures under different illuminants. We found that perceptual focal structures tend to be preserved better than the structures of the physical “ideal” colours under illuminant changes.
|
Graham D. Finlayson, Javier Vazquez, Sabine Süsstrunk, & Maria Vanrell. (2012). Spectral sharpening by spherical sampling. JOSA A - Journal of the Optical Society of America A, 29(7), 1199–1210.
Abstract: There are many works in color that assume illumination change can be modeled by multiplying sensor responses by individual scaling factors. The early research in this area is sometimes grouped under the heading “von Kries adaptation”: the scaling factors are applied to the cone responses. In more recent studies, both in psychophysics and in computational analysis, it has been proposed that scaling factors should be applied to linear combinations of the cones that have narrower support: they should be applied to the so-called “sharp sensors.” In this paper, we generalize the computational approach to spectral sharpening in three important ways. First, we introduce spherical sampling as a tool that allows us to enumerate in a principled way all linear combinations of the cones. This allows us to, second, find the optimal sharp sensors that minimize a variety of error measures including CIE Delta E (previous work on spectral sharpening minimized RMS) and color ratio stability. Lastly, we extend the spherical sampling paradigm to the multispectral case. Here the objective is to model the interaction of light and surface in terms of color signal spectra. Spherical sampling is shown to improve on the state of the art.
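A minimal sketch of the diagonal model the abstract builds on: plain von Kries correction scales each channel independently, while the sharpened variant applies the same diagonal scaling in a transformed sensor basis. The 3×3 matrix `T` here is a generic placeholder; the paper's contribution is precisely how to choose it (by spherical sampling, minimizing measures such as CIE Delta E).

```python
import numpy as np

def von_kries_correct(rgb, illum_src, illum_dst):
    """Diagonal (von Kries) correction: scale each channel by the ratio of
    destination to source illuminant responses. rgb has shape (N, 3)."""
    d = np.asarray(illum_dst, float) / np.asarray(illum_src, float)
    return rgb * d

def sharpened_correct(rgb, illum_src, illum_dst, T):
    """Apply the diagonal model in a 'sharpened' basis: map responses by T,
    scale there, then map back by T^-1. T is a 3x3 sharpening matrix."""
    src = T @ np.asarray(illum_src, float)
    dst = T @ np.asarray(illum_dst, float)
    D = np.diag(dst / src)
    M = np.linalg.inv(T) @ D @ T   # full correction in the original basis
    return rgb @ M.T
```

With `T` set to the identity, the sharpened correction reduces exactly to plain von Kries scaling; a good sharpening matrix makes the diagonal approximation hold more accurately across illuminant changes.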
|
Naila Murray, Sandra Skaff, Luca Marchesotti, & Florent Perronnin. (2011). Towards Automatic Concept Transfer. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering (pp. 167–176). ACM Press.
Abstract: This paper introduces a novel approach to automatic concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study.
Keywords: chromatic modeling, color concepts, color transfer, concept transfer
|
Naila Murray, Sandra Skaff, Luca Marchesotti, & Florent Perronnin. (2012). Towards automatic and flexible concept transfer. CG - Computers and Graphics, 36(6), 622–634.
Abstract: This paper introduces a novel approach to automatic, yet flexible, image concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The presented method modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This method is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. Our framework is flexible for two reasons. First, the user may select one of two modalities to map input image chromaticities to target concept chromaticities, depending on the level of photo-realism required. Second, the user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed method uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study.
|
Francesc Tous, Agnes Borras, Robert Benavente, Ramon Baldrich, Maria Vanrell, & Josep Llados. (2002). Textual Descriptors for Browsing People by Visual Appearance. In 5è Congrés Català d'Intel·ligència Artificial (CCIA).
Abstract: This paper presents a first approach to building colour and structural descriptors for information retrieval on a people database. Queries are formulated in terms of appearance, which allows seeking people wearing specific clothes of a given colour name or texture. Descriptors are automatically computed by following three essential steps: a colour-naming labelling from pixel properties; a region segmentation step based on colour properties of pixels combined with edge information; and a high-level step that models the region arrangements in order to build the clothes' structure. Results are tested on a large set of images from real scenes taken at the entrance desk of a building.
Keywords: Image retrieval, textual descriptors, colour naming, colour normalization, graph matching.
|
Francesc Tous, Agnes Borras, Robert Benavente, Ramon Baldrich, Maria Vanrell, & Josep Llados. (2002). Textual Descriptions for Browsing People by Visual Appearance. In Lecture Notes in Artificial Intelligence (Vol. 2504, pp. 419–429). Springer Verlag.
Abstract: This paper presents a first approach to building colour and structural descriptors for information retrieval on a people database. Queries are formulated in terms of appearance, which allows seeking people wearing specific clothes of a given colour name or texture. Descriptors are automatically computed by following three essential steps: a colour-naming labelling from pixel properties; a region segmentation step based on colour properties of pixels combined with edge information; and a high-level step that models the region arrangements in order to build the clothes' structure. Results are tested on a large set of images from real scenes taken at the entrance desk of a building.
|
David Geronimo, Joan Serrat, Antonio Lopez, & Ramon Baldrich. (2013). Traffic sign recognition for computer vision project-based learning. T-EDUC - IEEE Transactions on Education, 56(3), 364–371.
Abstract: This paper presents a graduate course project on computer vision. The aim of the project is to detect and recognize traffic signs in video sequences recorded by an on-board vehicle camera. This is a demanding problem, given that traffic sign recognition is one of the most challenging problems for driving assistance systems. Equally, it is motivating for the students given that it is a real-life problem. Furthermore, it gives them the opportunity to appreciate the difficulty of real-world vision problems and to assess the extent to which this problem can be solved by modern computer vision and pattern classification techniques taught in the classroom. The learning objectives of the course are introduced, as are the constraints imposed on its design, such as the diversity of students' background and the amount of time they and their instructors dedicate to the course. The paper also describes the course contents, schedule, and how the project-based learning approach is applied. The outcomes of the course are discussed, including both the students' marks and their personal feedback.
Keywords: traffic signs
|
Yawei Li, Yulun Zhang, Radu Timofte, Luc Van Gool, Zhijun Tu, Kunpeng Du, et al. (2023). NTIRE 2023 challenge on image denoising: Methods and results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 1904–1920).
Abstract: This paper reviews the NTIRE 2023 challenge on image denoising (σ = 50) with a focus on the proposed solutions and results. The aim is to obtain a network design capable of producing high-quality results with the best performance, measured by PSNR, for image denoising. Independent additive white Gaussian noise (AWGN) is assumed, and the noise level is 50. The challenge had 225 registered participants, and 16 teams made valid submissions. Together, these solutions gauge the state of the art for image denoising.
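The challenge's evaluation setting (AWGN at σ = 50 on the 0–255 scale, ranked by PSNR) can be reproduced in a few lines; this is a generic sketch, not the official scoring code:

```python
import numpy as np

def add_awgn(img, sigma=50.0, rng=None):
    """Add independent white Gaussian noise (challenge setting: sigma = 50 on
    the 0-255 scale). Returns the unclipped noisy image; clip if needed."""
    rng = np.random.default_rng(rng)
    return img + rng.normal(0.0, sigma, size=img.shape)

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB, the challenge's ranking metric."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A denoiser is then scored by `psnr(clean, denoised)` averaged over the test set, with higher values ranking better.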
|
Agnes Borras, Francesc Tous, Josep Llados, & Maria Vanrell. (2003). High-Level Clothes Description Based on Color-Texture and Structural Features. In Lecture Notes in Computer Science (Vol. 2652, pp. 108–116).
Abstract: This work is part of a surveillance system where content-based image retrieval is done in terms of people's appearance. Given an image of a person, our work provides an automatic description of their clothing according to the colour, texture, and structural composition of their garments. We present a two-stage process composed of image segmentation and a region-based interpretation. We segment an image by modelling it as an attributed graph and applying a hybrid method that follows a split-and-merge strategy. We propose the interpretation of five cloth combinations that are modelled in a graph structure in terms of region features. The interpretation is viewed as a graph matching, with an associated cost, between the segmentation and the cloth models. Finally, we have tested the process on a ground truth of one hundred images.
|
Jordi Roca. (2012). Constancy and inconstancy in categorical colour perception (Maria Vanrell, & C. Alejandro Parraga, Eds.). Ph.D. thesis.
Abstract: To recognise objects is perhaps the most important task an autonomous system, either biological or artificial, needs to perform. In the context of human vision, this is partly achieved by recognizing the colour of surfaces despite changes in the wavelength distribution of the illumination, a property called colour constancy. Correct surface colour recognition may be adequately accomplished by colour category matching without the need to match colours precisely; therefore, categorical colour constancy is likely to play an important role in successful object identification. The main aim of this work is to study the relationship between colour constancy and categorical colour perception. Previous studies of colour constancy have shown the influence of factors such as the spatio-chromatic properties of the background, individual observers' performance, semantics, etc. However, there has been very little systematic study of these influences. To this end, we developed a new approach to colour constancy which includes individual observers' categorical perception, the categorical structure of the background, and their interrelations, resulting in a more comprehensive characterization of the phenomenon. In our study, we first developed a new method to analyse the categorical structure of 3D colour space, which allowed us to characterize individual categorical colour perception as well as quantify inter-individual variations in terms of the shape and centroid location of 3D categorical regions. Second, we developed a new colour constancy paradigm, termed chromatic setting, which allows measuring the precise location of nine categorically relevant points in colour space under immersive illumination.
Additionally, we derived from these measurements a new colour constancy index which takes into account the magnitude and orientation of the chromatic shift, memory effects, and the interrelations among colours, as well as a model of colour naming tuned to each observer/adaptation state. Our results lead to the following conclusions: (1) there exist large inter-individual variations in the categorical structure of colour space, and thus colour naming ability varies significantly, but this is not well predicted by low-level chromatic discrimination ability; (2) analysis of the average colour naming space suggested the need for three additional basic colour terms (turquoise, lilac and lime) for optimal colour communication; (3) chromatic setting improved the precision of more complex linear colour constancy models and suggested that mechanisms other than cone gain might be best suited to explain colour constancy; (4) the categorical structure of colour space is broadly stable under illuminant changes for categorically balanced backgrounds; (5) categorical inconstancy exists for categorically unbalanced backgrounds, thus indicating that categorical information perceived in the initial stages of adaptation may constrain further categorical perception.
|
Xavier Otazu. (2012). Perceptual tone-mapping operator based on multiresolution contrast decomposition. In Perception (Vol. 41, p. 86).
Abstract: Tone-mapping operators (TMOs) are used to display high dynamic range (HDR) images on low dynamic range (LDR) displays. Many computational and biologically inspired approaches have been used in the literature, many of them based on multiresolution decompositions. In this work, a simple two-stage model for a TMO is presented. The first stage is a novel multiresolution contrast decomposition, inspired by a pyramidal contrast decomposition (Peli, 1990 Journal of the Optical Society of America 7(10), 2032–2040).
This novel multiresolution decomposition represents the Michelson contrast of the image at different spatial scales. This multiresolution contrast representation, applied on the intensity channel of an opponent colour decomposition, is processed by a non-linear saturating model of V1 neurons (Albrecht et al, 2002 Journal of Neurophysiology 88(2) 888–913). This saturation model depends on the visual frequency, and it has been modified in order to include information from the extended Contrast Sensitivity Function (e-CSF) (Otazu et al, 2010 Journal of Vision 10(12) 5).
A set of HDR images in Radiance RGBE format (from the CIS HDR Photographic Survey and the Greg Ward database) has been used to test the model, obtaining a set of LDR images. The resulting LDR images do not show the usual halo or colour-modification artifacts.
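For readers unfamiliar with Peli-style contrast pyramids, the sketch below computes band-limited local contrast (a bandpass response divided by the lowpass content below that band). It is only loosely related to the multiresolution Michelson contrast decomposition the abstract describes, and the dyadic scale spacing `sigma0 * 2**(k+1)` is an assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def peli_contrast_pyramid(luminance, n_scales=4, sigma0=1.0):
    """Band-limited contrast in the spirit of Peli (1990): at each scale,
    the bandpass response is divided by the lowpass image below that band.
    Returns the list of contrast bands and the lowpass residual."""
    img = np.asarray(luminance, dtype=float)
    contrasts = []
    low_prev = img
    for k in range(n_scales):
        low = gaussian_filter(img, sigma0 * 2 ** (k + 1))  # coarser lowpass
        band = low_prev - low                  # bandpass at scale k
        contrasts.append(band / (low + 1e-8))  # local band-limited contrast
        low_prev = low
    return contrasts, low_prev
```

Because the bands telescope (each bandpass is the difference of successive lowpass images), the original image can be recovered from the bands and the residual, which is what makes such decompositions usable inside a tone-mapping pipeline.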
|