|
Arjan Gijsenij, Theo Gevers, & Joost Van de Weijer. (2008). Edge Classification for Color Constancy. In 4th European Conference on Colour in Graphics, Imaging and Vision Proceedings (pp. 231–234).
|
|
|
Javier Vazquez, Maria Vanrell, & Ramon Baldrich. (2008). Towards a Psychophysical Evaluation of Colour Constancy Algorithms. In 4th European Conference on Colour in Graphics, Imaging and Vision Proceedings (pp. 372–377).
|
|
|
C. Alejandro Parraga, Robert Benavente, Maria Vanrell, & Ramon Baldrich. (2008). Modelling Inter-Colour Regions of Colour Naming Space. In 4th European Conference on Colour in Graphics, Imaging and Vision Proceedings (pp. 218–222).
|
|
|
Angel Sappa, Patricia Suarez, Henry Velesaca, & Dario Carpio. (2022). Domain Adaptation in Image Dehazing: Exploring the Usage of Images from Virtual Scenarios. In 16th International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing (pp. 85–92).
Abstract: This work presents a novel domain adaptation strategy for deep learning-based approaches to the image dehazing problem. Firstly, a large set of synthetic images is generated using a realistic 3D graphic simulator; these synthetic images contain different densities of haze and are used to train a model that is later adapted to any real scenario. The adaptation process requires just a few images to fine-tune the model parameters. The proposed strategy overcomes the limitation of training a given model with few images; in other words, it adapts a haze removal model trained on synthetic images to real scenarios. It should be noted that it is quite difficult, if not impossible, to obtain large sets of pairs of real-world images (with and without haze) to train dehazing algorithms in a supervised way. Experimental results are provided showing the validity of the proposed domain adaptation strategy.
Keywords: Domain adaptation; Synthetic hazed dataset; Dehazing
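As a toy illustration of the pretrain-then-adapt idea the abstract describes (not the authors' actual pipeline), the following pure-Python sketch pretrains a one-dimensional linear "dehazing" operator on abundant synthetic pairs and then fine-tunes it on a handful of real pairs. The haze constants, sample counts, and function names are all invented for illustration.

```python
def mse(pairs, a, b):
    """Mean squared error of the linear model y ~ a*x + b over (x, y) pairs."""
    return sum((a * x + b - y) ** 2 for x, y in pairs) / len(pairs)

def fit_linear(pairs, a=1.0, b=0.0, lr=0.1, steps=5000):
    """Fit y ~ a*x + b by full-batch gradient descent."""
    n = len(pairs)
    for _ in range(steps):
        ga = sum((a * x + b - y) * x for x, y in pairs) / n
        gb = sum((a * x + b - y) for x, y in pairs) / n
        a, b = a - lr * ga, b - lr * gb
    return a, b

# "Synthetic" domain: many (hazy, clear) pairs from a toy haze model x = 0.7*t + 0.3
synthetic = [(0.7 * t + 0.3, t) for t in [i / 100 for i in range(100)]]
a_pre, b_pre = fit_linear(synthetic)

# "Real" domain: only five pairs, with slightly different haze statistics
real = [(0.6 * t + 0.35, t) for t in (0.1, 0.3, 0.5, 0.7, 0.9)]

# Adaptation: start from the synthetically pretrained parameters and run
# a few hundred fine-tuning steps on the small real set
a_ft, b_ft = fit_linear(real, a=a_pre, b=b_pre, steps=500)
```

The point of the sketch is the initialization: starting fine-tuning from the synthetically pretrained parameters lets a handful of real pairs correct the domain shift instead of learning the mapping from scratch.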
|
|
|
Sergio Escalera, Alicia Fornes, Oriol Pujol, Josep Llados, & Petia Radeva. (2007). Multi-class Binary Object Categorization using Blurred Shape Models. In Progress in Pattern Recognition, Image Analysis and Applications, 12th Iberoamerican Congress on Pattern Recognition (Vol. 4756, pp. 773–782). LNCS.
|
|
|
Jaume Gibert, Ernest Valveny, & Horst Bunke. (2010). Graph of Words Embedding for Molecular Structure-Activity Relationship Analysis. In 15th Iberoamerican Congress on Pattern Recognition (Vol. 6419, pp. 30–37). LNCS.
Abstract: Structure-Activity relationship analysis aims at discovering chemical activity of molecular compounds based on their structure. In this article we make use of a particular graph representation of molecules and propose a new graph embedding procedure to solve the problem of structure-activity relationship analysis. The embedding is essentially an arrangement of a molecule in the form of a vector by considering frequencies of appearing atoms and frequencies of covalent bonds between them. Results on two benchmark databases show the effectiveness of the proposed technique in terms of recognition accuracy while avoiding high operational costs in the transformation.
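The embedding the abstract describes (frequencies of atom labels plus frequencies of bonds between label pairs) can be sketched in a few lines of pure Python. This is a simplified illustration of the idea, not the authors' exact formulation; the toy molecule and vocabulary are invented.

```python
from collections import Counter
from itertools import combinations_with_replacement

def graph_of_words_embedding(atoms, bonds, vocab):
    """Embed a molecular graph as a fixed-length vector: counts of each
    atom label, followed by counts of bonds between each unordered
    pair of labels."""
    atom_counts = Counter(atoms)
    bond_counts = Counter()
    for i, j in bonds:
        pair = tuple(sorted((atoms[i], atoms[j])))
        bond_counts[pair] += 1
    pairs = list(combinations_with_replacement(sorted(vocab), 2))
    return ([atom_counts[a] for a in vocab] +
            [bond_counts[p] for p in pairs])

# Toy example: an ethanol-like skeleton C-C-O (hydrogens omitted)
atoms = ["C", "C", "O"]
bonds = [(0, 1), (1, 2)]
vec = graph_of_words_embedding(atoms, bonds, vocab=["C", "O"])
# atom counts [C, O], then bond counts [(C,C), (C,O), (O,O)]
# -> [2, 1, 1, 1, 0]
```

Because the vector length depends only on the vocabulary, molecules of any size map to the same space, which is what makes the subsequent statistical analysis cheap.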
|
|
|
Ekaterina Zaytseva, Santiago Segui, & Jordi Vitria. (2012). Sketchable Histograms of Oriented Gradients for Object Detection. In 17th Iberoamerican Congress on Pattern Recognition (Vol. 7441, pp. 374–381). Springer Berlin Heidelberg.
Abstract: In this paper we investigate a new representation approach for visual object recognition. The new representation, called sketchable-HoG, extends the classical histogram of oriented gradients (HoG) feature by adding two different aspects: the stability of the majority orientation and the continuity of gradient orientations. In this way, the sketchable-HoG locally characterizes the complexity of an object model and introduces global structure information while still keeping simplicity, compactness and robustness. We evaluated the proposed image descriptor on the public Caltech 101 dataset. The obtained results outperform the classical HoG descriptor as well as other descriptors reported in the literature.
|
|
|
Juan A. Carvajal Ayala, Dennis Romero, & Angel Sappa. (2016). Fine-tuning based deep convolutional networks for lepidopterous genus recognition. In 21st Iberoamerican Congress on Pattern Recognition (pp. 467–475). LNCS.
Abstract: This paper describes an image classification approach oriented to identifying specimens of lepidopterous insects at Ecuadorian ecological reserves. This work seeks to contribute to biological studies of butterfly genera and also to facilitate the registration of unrecognized specimens. The proposed approach is based on the fine-tuning of three widely used pre-trained Convolutional Neural Networks (CNNs). This strategy is intended to overcome the reduced number of labeled images. Experimental results with a dataset labeled by expert biologists are presented, reaching a recognition accuracy above 92%.
|
|
|
Mohamed Ramzy Ibrahim, Robert Benavente, Daniel Ponsa, & Felipe Lumbreras. (2023). Unveiling the Influence of Image Super-Resolution on Aerial Scene Classification. In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications (Vol. 14469, pp. 214–228). LNCS.
Abstract: Deep learning has made significant advances in recent years, and as a result, it is now in a stage where it can achieve outstanding results in tasks requiring visual understanding of scenes. However, its performance tends to decline when dealing with low-quality images. The advent of super-resolution (SR) techniques has started to have an impact on the field of remote sensing by enabling the restoration of fine details and enhancing image quality, which could help to increase performance in other vision tasks. However, in previous works, contradictory results for scene visual understanding were achieved when SR techniques were applied. In this paper, we present an experimental study on the impact of SR on enhancing aerial scene classification. Through the analysis of different state-of-the-art SR algorithms, including traditional methods and deep learning-based approaches, we unveil the transformative potential of SR in overcoming the limitations of low-resolution (LR) aerial imagery. By enhancing spatial resolution, more fine details are captured, opening the door for an improvement in scene understanding. We also discuss the effect of different image scales on the quality of SR and its effect on aerial scene classification. Our experimental work demonstrates the significant impact of SR on enhancing aerial scene classification compared to LR images, opening new avenues for improved remote sensing applications.
|
|
|
Fernando Vilariño, Panagiota Spyridonos, Jordi Vitria, C. Malagelada, & Petia Radeva. (2006). A Machine Learning framework using SOMs: Applications in the Intestinal Motility Assessment. In J.P. Martinez-Trinidad et al. (Eds.), 11th Iberoamerican Congress on Pattern Recognition (Vol. 4225, pp. 188–197). LNCS. Berlin/Heidelberg: Springer-Verlag.
Abstract: Small Bowel Motility Assessment by means of Wireless Capsule Video Endoscopy constitutes a novel clinical methodology in which a capsule with a micro-camera attached to it is swallowed by the patient, emitting an RF signal which is recorded as a video of its trip throughout the gut. In order to overcome the main drawbacks associated with this technique (mainly the large amount of visualization time required), our efforts have been focused on the development of a machine learning system, built up in sequential stages, which provides the specialists with the useful part of the video, rejecting those parts not valid for analysis. We successfully used Self-Organized Maps in a general semi-supervised framework with the aim of tackling the different learning stages of our system. The analysis of the diverse types of images and the automatic detection of intestinal contractions is performed under the perspective of intestinal motility assessment in a clinical environment.
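The self-organizing-map learning rule underlying such a framework can be sketched as follows: find the best-matching unit (BMU) for a sample, then pull every unit toward the sample with a strength that decays with grid distance to the BMU. This is a generic SOM step in pure Python with an invented 2x2 map and toy data, not the paper's actual configuration.

```python
import math
import random

def som_step(weights, grid, x, lr=0.3, sigma=1.0):
    """One SOM update: find the best-matching unit (BMU) for sample x,
    then move every unit toward x, weighted by a Gaussian of its grid
    distance to the BMU."""
    bmu = min(range(len(weights)),
              key=lambda k: sum((w - xi) ** 2 for w, xi in zip(weights[k], x)))
    bi, bj = grid[bmu]
    for k, (i, j) in enumerate(grid):
        d2 = (i - bi) ** 2 + (j - bj) ** 2
        h = math.exp(-d2 / (2 * sigma ** 2))
        weights[k] = [w + lr * h * (xi - w) for w, xi in zip(weights[k], x)]
    return bmu

random.seed(1)
# 2x2 map of 2-D units, trained on samples from two toy clusters
grid = [(0, 0), (0, 1), (1, 0), (1, 1)]
weights = [[random.random(), random.random()] for _ in grid]
data = [[0.1, 0.1], [0.9, 0.9]] * 50
for x in data:
    som_step(weights, grid, x)
```

Since each update is a convex combination of a unit and the sample, the map stays inside the data range while organizing itself topologically; in practice `lr` and `sigma` are decayed over time.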
|
|
|
Debora Gil, Petia Radeva, Jordi Saludes, & Josefina Mauri. (2000). Automatic Segmentation of Artery Wall in Coronary IVUS Images: a Probabilistic Approach. In Proceedings of CIC'2000. Cambridge, Massachusetts.
Abstract: Intravascular ultrasound images represent a unique tool to analyze the morphology of arteries and vessels (plaques, restenosis, etc). The poor quality of these images makes unsupervised segmentation based on traditional segmentation algorithms (such as edge or ridge/valley detection) fail to achieve the expected results. In this paper we present a probabilistic flexible template to separate different regions in the image. In particular, we use elliptic templates to model and detect the shape of the vessel inner wall in IVUS images. We present the results of successful segmentation obtained from patients undergoing stent treatment. A physician team has validated these results.
|
|
|
Ivet Rafegas, & Maria Vanrell. (2016). Color spaces emerging from deep convolutional networks. In 24th Color and Imaging Conference (pp. 225–230).
Best Interactive Session Award.
Abstract: Defining color spaces that provide a good encoding of spatio-chromatic properties of color surfaces is an open problem in color science [8, 22]. Relatedly, in computer vision the fusion of color with local image features has been studied and evaluated [16]. In human vision research, the cells along the visual pathway that are selective to specific color hues are also a focus of attention [7, 14]. In line with these research aims, in this paper we study how color is encoded in a deep Convolutional Neural Network (CNN) trained on more than one million natural images for object recognition. These convolutional nets achieve impressive performance in computer vision and rival the representations in the human brain. We explore how color is represented in a CNN architecture, which can give some intuition about efficient spatio-chromatic representations. In convolutional layers the activation of a neuron is related to a spatial filter that combines spatio-chromatic representations; we use an inverted version of it to explore its properties. Using a series of unsupervised methods, we classify different types of neurons depending on the color axes they define, and we propose an index of color selectivity of a neuron. We estimate the main color axes that emerge from this trained net and we show that color selectivity of neurons decreases from early to deeper layers.
|
|
|
Hassan Ahmed Sial, S. Sancho, Ramon Baldrich, Robert Benavente, & Maria Vanrell. (2018). Color-based data augmentation for Reflectance Estimation. In 26th Color and Imaging Conference (pp. 284–289).
Abstract: Deep convolutional architectures have proven to be successful frameworks for solving generic computer vision problems. The estimation of intrinsic reflectance from a single image is not yet a solved problem. Encoder-decoder architectures are a natural approach for pixel-wise reflectance estimation, although they usually suffer from the lack of large datasets. Lack of data can be partially addressed with data augmentation; however, the usual techniques focus on geometric changes, which do not help reflectance estimation. In this paper we propose a color-based data augmentation technique that extends the training data by increasing the variability of chromaticity. Rotation on the red-green/blue-yellow plane of an opponent space enables extending the training set in a coherent and sound way that improves the network's generalization capability for reflectance estimation. We perform experiments on the Sintel dataset showing that our color-based augmentation increases performance and overcomes one of the state-of-the-art methods.
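The chromaticity rotation at the heart of this kind of augmentation can be sketched per pixel as follows. This is a minimal pure-Python version using a standard orthonormal opponent transform; the exact opponent space and normalization the authors use may differ.

```python
import math

def rotate_hue_opponent(rgb, theta):
    """Rotate chromaticity on the opponent (red-green, blue-yellow) plane
    while keeping the intensity axis fixed (orthonormal opponent transform)."""
    r, g, b = rgb
    o1 = (r - g) / math.sqrt(2)            # red-green axis
    o2 = (r + g - 2 * b) / math.sqrt(6)    # blue-yellow axis
    o3 = (r + g + b) / math.sqrt(3)        # intensity axis (untouched)
    c, s = math.cos(theta), math.sin(theta)
    o1r, o2r = c * o1 - s * o2, s * o1 + c * o2
    # Invert the orthonormal transform (its inverse is its transpose)
    r2 = o1r / math.sqrt(2) + o2r / math.sqrt(6) + o3 / math.sqrt(3)
    g2 = -o1r / math.sqrt(2) + o2r / math.sqrt(6) + o3 / math.sqrt(3)
    b2 = -2 * o2r / math.sqrt(6) + o3 / math.sqrt(3)
    return (r2, g2, b2)

# One augmented sample per pixel and per sampled angle
augmented = rotate_hue_opponent((0.2, 0.5, 0.7), theta=0.3)
```

Because the transform is orthonormal and only the chromatic plane is rotated, intensity (r+g+b) is preserved and achromatic pixels are left unchanged, which is what makes the augmentation "coherent" for reflectance learning.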
|
|
|
Graham D. Finlayson, Javier Vazquez, & Fufu Fang. (2021). The Discrete Cosine Maximum Ignorance Assumption. In 29th Color and Imaging Conference (pp. 13–18).
Abstract: The performance of colour correction algorithms depends on the reflectance sets used; sometimes, when the testing reflectance set is changed, the ranking of colour correction algorithms also changes. To remove the dependence on the dataset, we can make assumptions about the set of all possible reflectances. Under the Maximum Ignorance with Positivity (MIP) assumption, all reflectances with per-wavelength values between 0 and 1 are equally likely. A weakness of the MIP is that it fails to take into account the correlation of reflectance functions between wavelengths (many of the assumed reflectances are, in reality, not possible). In this paper, we take the view that the maximum ignorance assumption has merit but has hitherto been calculated with respect to the wrong coordinate basis. We propose the Discrete Cosine Maximum Ignorance (DCMI) assumption, where all reflectances whose coordinates in the Discrete Cosine Basis coordinate system lie between max and min bounds are equally likely. The correlation between wavelengths is thus encoded, and this results in the set of all plausible reflectances 'looking like' typical reflectances that occur in nature; that said, the DCMI model is also a superset of all measured reflectance sets. Experiments show that, in colour correction, adopting the DCMI yields similar performance to using a particular reflectance set.
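One way to picture the DCMI set is to sample from it directly: draw each DCT coefficient uniformly within per-coefficient bounds and reconstruct the spectrum in the cosine basis. The sketch below is illustrative only; the basis normalization, the number of wavelengths, and the bounds are assumptions, not the paper's values.

```python
import math
import random

def dct_basis(n, k):
    """k-th orthonormal discrete cosine basis vector of length n (DCT-II style)."""
    norm = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    return [norm * math.cos(math.pi * k * (i + 0.5) / n) for i in range(n)]

def sample_dcmi_reflectance(n_wavelengths, bounds, rng):
    """Draw a reflectance whose DCT coefficients are uniform within
    per-coefficient (lo, hi) bounds; unlisted coefficients are zero."""
    refl = [0.0] * n_wavelengths
    for k, (lo, hi) in enumerate(bounds):
        c = rng.uniform(lo, hi)
        basis = dct_basis(n_wavelengths, k)
        refl = [r + c * b for r, b in zip(refl, basis)]
    return refl

rng = random.Random(42)
# Hypothetical bounds: a positive mean term plus small low-frequency terms,
# mimicking the smoothness of natural reflectances
bounds = [(1.0, 3.0), (-0.4, 0.4), (-0.2, 0.2)]
spectrum = sample_dcmi_reflectance(31, bounds, rng)
```

Restricting the bounds to a few low-frequency coefficients is what builds inter-wavelength correlation into the sampled set, in contrast to the per-wavelength independence of the MIP.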
|
|
|
Trevor Canham, Javier Vazquez, D. Long, Richard F. Murray, & Michael S. Brown. (2021). Noise Prism: A Novel Multispectral Visualization Technique. In 31st Color and Imaging Conference.
Abstract: A novel technique for visualizing multispectral images is proposed. Inspired by how prisms work, our method spreads spectral information over a chromatic noise pattern. This is accomplished by populating the pattern with pixels representing each measurement band at a count proportional to its measured intensity. The method is advantageous because it allows for lightweight encoding and visualization of spectral information while maintaining the color appearance of the stimulus. A four-alternative forced choice (4AFC) experiment was conducted to validate the method's information-carrying capacity in displaying metameric stimuli of varying colors and spectral basis functions. The scores ranged from 100% to 20% (less than chance given the 4AFC task), with many conditions falling somewhere in between at statistically significant intervals. Using this data, color and texture difference metrics can be evaluated and optimized to predict the legibility of the visualization technique.
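The populating step the abstract describes can be sketched as follows: allocate pixels to each band in proportion to its measured intensity, then scatter them spatially as chromatic noise. This is a toy pure-Python version; the band colors, intensities, and patch size are invented for illustration.

```python
import random

def noise_prism_patch(band_intensities, band_colors, n_pixels, rng):
    """Fill a patch with pixels drawn from each measurement band, with
    per-band counts proportional to measured intensity."""
    total = sum(band_intensities)
    counts = [round(n_pixels * v / total) for v in band_intensities]
    # Fix rounding so the counts sum exactly to n_pixels
    counts[-1] += n_pixels - sum(counts)
    pixels = []
    for count, color in zip(counts, band_colors):
        pixels.extend([color] * count)
    rng.shuffle(pixels)   # spatially scatter the bands as chromatic noise
    return pixels

rng = random.Random(0)
intensities = [0.2, 0.5, 0.3]        # hypothetical 3-band measurement
colors = ["blue", "green", "red"]    # display color assigned to each band
patch = noise_prism_patch(intensities, colors, n_pixels=100, rng=rng)
```

Because only the pixel counts carry the spectral information, the patch's average color (and hence its appearance at a distance) is controlled by the band-to-color assignment, matching the abstract's claim of preserving color appearance.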
|
|