Olivier Penacchio. (2011). Mixed Hodge Structures and Equivariant Sheaves on the Projective Plane. MN - Mathematische Nachrichten, 284(4), 526–542.
Abstract: We describe an equivalence of categories between the category of mixed Hodge structures and a category of equivariant vector bundles on a toric model of the complex projective plane that satisfy a semistability condition. We then apply this correspondence to define an invariant which generalizes the notion of R-split mixed Hodge structure, and give calculations for the first cohomology group of possibly non-smooth or non-complete curves of genus 0 and 1. Finally, we describe some extension groups of mixed Hodge structures in terms of equivariant extensions of coherent sheaves.
Keywords: Mixed Hodge structures, equivariant sheaves, MSC (2010) Primary: 14C30, Secondary: 14F05, 14M25
|
|
Xavier Otazu, M. Ribo, M. Peracaula, J.M. Paredes, & J. Nuñez. (2002). Detection of superimposed periodic signals using wavelets. Monthly Notices of the Royal Astronomical Society, 333(2), 365–372 (IF: 4.671).
|
|
Xavier Otazu, M. Ribo, J.M. Paredes, M. Peracaula, & J. Nuñez. (2004). Multiresolution approach for period determination on unevenly sampled data. Monthly Notices of the Royal Astronomical Society, 351, 251–219 (IF: 5.238).
|
|
Joost Van de Weijer, Fahad Shahbaz Khan, & Marc Masana. (2013). Interactive Visual and Semantic Image Retrieval. In Angel Sappa, & Jordi Vitria (Eds.), Multimodal Interaction in Image and Video Applications (Vol. 48, pp. 31–35). Springer Berlin Heidelberg.
Abstract: One direct consequence of recent advances in digital visual data generation and the direct availability of this information through the World-Wide Web is an urgent demand for efficient image retrieval systems. The objective of image retrieval is to allow users to efficiently browse through this abundance of images. Due to the non-expert nature of the majority of internet users, such systems should be user friendly, and therefore avoid complex user interfaces. In this chapter we investigate how high-level information provided by recently developed object recognition techniques can improve interactive image retrieval. We apply a bag-of-words based image representation method to automatically classify images into a number of categories. These additional labels are then applied to improve the image retrieval system. Next to these high-level semantic labels, we also apply a low-level image description to describe the composition and colour scheme of the scene. Both descriptions are incorporated in a user feedback image retrieval setting. The main objective is to show that automatic labeling of images with semantic labels can improve image retrieval results.
|
|
Abel Gonzalez-Garcia, Robert Benavente, Olivier Penacchio, Javier Vazquez, Maria Vanrell, & C. Alejandro Parraga. (2013). Coloresia: An Interactive Colour Perception Device for the Visually Impaired. In Multimodal Interaction in Image and Video Applications (Vol. 48, pp. 47–66). Springer Berlin Heidelberg.
Abstract: A significant percentage of the human population suffers from impairments in their capacity to distinguish or even see colours. For them, everyday tasks like navigating through a train or metro network map become demanding. We present a novel technique for extracting colour information from everyday natural stimuli and presenting it to visually impaired users as pleasant, non-invasive sound. This technique was implemented inside a Personal Digital Assistant (PDA) portable device. In this implementation, colour information is extracted from the input image and categorised according to how human observers segment the colour space. This information is subsequently converted into sound and sent to the user via speakers or headphones. In the original implementation, it is possible for the user to send feedback to reconfigure the system; however, several features such as these were not implemented because the current technology is limited. We are confident that the full implementation will be possible in the near future as PDA technology improves.
|
|
Miquel Ferrer, Robert Benavente, Ernest Valveny, J. Garcia, Agata Lapedriza, & Gemma Sanchez. (2008). Aprendizaje Cooperativo Aplicado a la Docencia de las Asignaturas de Programación en Ingeniería Informática [Cooperative Learning Applied to the Teaching of Programming Courses in Computer Engineering].
|
|
Daniel Ponsa, Robert Benavente, Felipe Lumbreras, Judit Martinez, & Xavier Roca. (2003). Quality control of safety belts by machine vision inspection for real-time production. Optical Engineering, 42(4), 1114–1120 (IF: 0.877).
|
|
Noha Elfiky, Fahad Shahbaz Khan, Joost Van de Weijer, & Jordi Gonzalez. (2012). Discriminative Compact Pyramids for Object and Scene Recognition. PR - Pattern Recognition, 45(4), 1627–1636.
Abstract: Spatial pyramids have been successfully used to incorporate spatial information into bag-of-words based image representations. However, a major drawback is that they lead to high-dimensional image representations. In this paper, we present a novel framework for obtaining a compact pyramid representation. First, we investigate the usage of the divisive information theoretic feature clustering (DITC) algorithm in creating a compact pyramid representation. In many cases this method allows us to reduce the size of a high-dimensional pyramid representation by up to an order of magnitude with little or no loss in accuracy. Furthermore, comparison to clustering based on the agglomerative information bottleneck (AIB) shows that our method obtains superior results at significantly lower computational cost. Moreover, we investigate the optimal combination of multiple features in the context of our compact pyramid representation. Finally, experiments show that the method can obtain state-of-the-art results on several challenging data sets.
|
|
Susana Alvarez, & Maria Vanrell. (2012). Texton theory revisited: a bag-of-words approach to combine textons. PR - Pattern Recognition, 45(12), 4312–4325.
Abstract: The aim of this paper is to revisit an old theory of texture perception and update its computational implementation by extending it to colour. With this in mind we try to capture the optimality of perceptual systems. This is achieved in the proposed approach by sharing well-known early stages of the visual processes and extracting low-dimensional features that perfectly encode adequate properties for a large variety of textures without needing further learning stages. We propose several descriptors in a bag-of-words framework that are derived from different quantisation models on the feature spaces. Our perceptual features are directly given by the shape and colour attributes of image blobs, which are the textons. In this way we avoid learning visual words and directly build the vocabularies on these low-dimensional texton spaces. The main differences between the proposed descriptors lie in how the co-occurrence of blob attributes is represented in the vocabularies. Our approach overcomes the current state of the art in colour texture description, as demonstrated in several experiments on large texture datasets.
|
|
Xavier Roca, Jordi Vitria, Maria Vanrell, & Juan J. Villanueva. (2000). Visual behaviours for binocular navigation with autonomous systems.
|
|
Francesc Tous, Maria Vanrell, & Ramon Baldrich. (2005). Relaxed Grey-World: Computational Colour Constancy by Surface Matching. In Pattern Recognition and Image Analysis (IbPRIA 2005) (Vol. LNCS 3522, pp. 192–199).
|
|
Fernando Lopez, J.M. Valiente, Ramon Baldrich, & Maria Vanrell. (2005). Fast surface grading using color statistics in the CIELab space. In Pattern Recognition and Image Analysis (IbPRIA 2005) (Vol. LNCS 3523, pp. 666–673).
|
|
Xavier Otazu, & Oriol Pujol. (2006). Wavelet based approach to cluster analysis. Application on low dimensional data sets. PRL - Pattern Recognition Letters, 27(14), 1590–1605.
|
|
Ivet Rafegas, Maria Vanrell, Luis A Alexandre, & G. Arias. (2020). Understanding trained CNNs by indexing neuron selectivity. PRL - Pattern Recognition Letters, 136, 318–325.
Abstract: The impressive performance of Convolutional Neural Networks (CNNs) when solving different vision problems is shadowed by their black-box nature and our consequent lack of understanding of the representations they build and how these representations are organized. To help understand these issues, we propose to describe the activity of individual neurons by their Neuron Feature visualization and quantify their inherent selectivity with two specific properties. We explore selectivity indexes for an image feature (colour) and an image label (class membership). Our contribution is a framework to seek or classify neurons by indexing on these selectivity properties. It helps to find colour-selective neurons, such as a red-mushroom neuron in layer Conv4, or class-selective neurons, such as dog-face neurons in layer Conv5 in VGG-M, and establishes a methodology to derive other selectivity properties. Indexing on neuron selectivity can statistically reveal how features and classes are represented through the layers, at a time when the size of trained nets is growing and automatic tools to index neurons can be helpful.
|
|
Domicele Jonauskaite, Lucia Camenzind, C. Alejandro Parraga, Cecile N Diouf, Mathieu Mercapide Ducommun, Lauriane Müller, et al. (2021). Colour-emotion associations in individuals with red-green colour blindness. PeerJ, 9, e11180.
Abstract: Colours and emotions are associated in languages and traditions. Some of us may convey sadness by saying we are feeling blue or by wearing black clothes at funerals. The first example is a conceptual experience of colour and the second example is an immediate perceptual experience of colour. To investigate whether one or the other type of experience more strongly drives colour-emotion associations, we tested 64 congenitally red-green colour-blind men and 66 non-colour-blind men. All participants associated 12 colours, presented as terms or patches, with 20 emotion concepts, and rated intensities of the associated emotions. We found that colour-blind and non-colour-blind men associated similar emotions with colours, irrespective of whether colours were conveyed via terms (r = .82) or patches (r = .80). The colour-emotion associations and the emotion intensities were not modulated by participants' severity of colour blindness. Hinting at some additional, although minor, role of actual colour perception, the consistencies in associations for colour terms and patches were higher in non-colour-blind than colour-blind men. Together, these results suggest that colour-emotion associations in adults do not require immediate perceptual colour experiences, as conceptual experiences are sufficient.
Keywords: Affect; Chromotherapy; Colour cognition; Colour vision deficiency; Cross-modal correspondences; Daltonism; Deuteranopia; Dichromatic; Emotion; Protanopia.
|
|
Javier Vazquez, C. Alejandro Parraga, & Maria Vanrell. (2009). Ordinal pairwise method for natural images comparison. PER - Perception, 38, 180.
Abstract: We developed a new psychophysical method to compare different colour appearance models when applied to natural scenes. The method was as follows: two images (processed by different algorithms) were displayed on a CRT monitor and observers were asked to select the most natural of them. The original images were gathered by means of a calibrated trichromatic digital camera and presented one on top of the other on a calibrated screen. The selection was made by pressing on a 6-button IR box, which allowed observers not only to consider the most natural but to rate their selection. The rating system allowed observers to register how much more natural their chosen image was (eg, much more, definitely more, slightly more), which gave us valuable extra information on the selection process. The results were analysed considering the selection both as a binary choice (using Thurstone's law of comparative judgement) and using the Bradley-Terry method for ordinal comparison. Our results show a significant difference in the rating scales obtained. Although this method has been used in colour constancy algorithm comparisons, its uses are much wider, eg to compare algorithms for image compression, rendering, recolouring, etc.
|
|
Robert Benavente, C. Alejandro Parraga, & Maria Vanrell. (2009). Colour categories boundaries are better defined in contextual conditions. PER - Perception, 38, 36.
Abstract: In a previous experiment [Parraga et al, 2009 Journal of Imaging Science and Technology 53(3)] the boundaries between basic colour categories were measured by asking subjects to categorize colour samples presented in isolation (ie on a dark background) using a YES/NO paradigm. Results showed that some boundaries (eg green – blue) were very diffuse and the subjects' answers presented bimodal distributions, which were attributed to the emergence of non-basic categories in those regions (eg turquoise). To confirm these results we performed a new experiment focussed on the boundaries where bimodal distributions were more evident. In this new experiment rectangular colour samples were presented surrounded by random colour patches to simulate contextual conditions on a calibrated CRT monitor. The names of two neighbouring colours were shown at the bottom of the screen and subjects selected the boundary between these colours by controlling the chromaticity of the central patch, sliding it across these categories' frontier. Results show that in this new experimental paradigm, the formerly uncertain inter-colour category boundaries are better defined and the dispersions (ie the bimodal distributions) that occurred in the previous experiment disappear. These results may provide further support to Berlin and Kay's basic colour terms theory.
|
|
C. Alejandro Parraga, Javier Vazquez, & Maria Vanrell. (2009). A new cone activation-based natural images dataset. PER - Perception, 36, 180.
Abstract: We generated a new dataset of digital natural images where each colour plane corresponds to the human LMS (long-, medium-, short-wavelength) cone activations. The images were chosen to represent five different visual environments (eg forest, seaside, mountain snow, urban, motorways) and were taken under natural illumination at different times of day. At the bottom-left corner of each picture there was a matte grey ball of approximately constant spectral reflectance (across the camera's response spectrum,) and nearly Lambertian reflective properties, which allows to compute (and remove, if necessary) the illuminant's colour and intensity. The camera (Sigma Foveon SD10) was calibrated by measuring its sensor's spectral responses using a set of 31 spectrally narrowband interference filters. This allowed conversion of the final camera-dependent RGB colour space into the Smith and Pokorny (1975) cone activation space by means of a polynomial transformation, optimised for a set of 1269 Munsell chip reflectances. This new method is an improvement over the usual 3 × 3 matrix transformation which is only accurate for spectrally-narrowband colours. The camera-to-LMS transformation can be recalculated to consider other non-human visual systems. The dataset is available to download from our website.
|
|