Albert Gordo. (2009). A Cyclic Page Layout Descriptor for Document Classification & Retrieval (Vol. 128). Master's thesis, Bellaterra, Barcelona.
|
Joost Van de Weijer, Robert Benavente, Maria Vanrell, Cordelia Schmid, Ramon Baldrich, Jacob Verbeek, et al. (2012). Color Naming. In Theo Gevers, Arjan Gijsenij, Joost Van de Weijer, & Jan-Mark Geusebroek (Eds.), Color in Computer Vision: Fundamentals and Applications (pp. 287–317). John Wiley & Sons, Ltd.
|
Ernest Valveny, Robert Benavente, Agata Lapedriza, Miquel Ferrer, Jaume Garcia, & Gemma Sanchez. (2012). Adaptation of a computer programming course to the EXHE requirements: evaluation five years later (Vol. 37).
|
David Augusto Rojas. (2009). Colouring Local Feature Detection for Matching (Vol. 133). Master's thesis, Bellaterra, Barcelona.
|
Olivier Penacchio. (2009). Relative Density of L, M, S Photoreceptors in the Human Retina (Vol. 135). Master's thesis, Bellaterra, Barcelona.
|
Xavier Boix. (2009). Learning Conditional Random Fields for Stereo (Vol. 136). Master's thesis, Bellaterra, Barcelona.
|
Shida Beigpour. (2009). Physics-based Reflectance Estimation Applied to Recoloring (Vol. 137). Master's thesis, Bellaterra, Barcelona.
|
Jose Carlos Rubio. (2009). Graph matching based on graphical models with application to vehicle tracking and classification at night (Vol. 144). Master's thesis, Bellaterra, Barcelona.
|
Ivet Rafegas. (2013). Exploring Low-Level Vision Models. Case Study: Saliency Prediction (Vol. 175). Master's thesis.
|
C. Alejandro Parraga. (2015). Perceptual Psychophysics. In G. Cristobal, M. Keil, & L. Perrinet (Eds.), Biologically-Inspired Computer Vision: Fundamentals and Applications.
|
Xavier Otazu, Olivier Penacchio, & Xim Cerda-Company. (2015). Brightness and colour induction through contextual influences in V1. In Scottish Vision Group 2015 SVG2015 (Vol. 12, pp. 1208–2012).
|
Olivier Penacchio, Xavier Otazu, A. Wilkins, & J. Harris. (2015). Uncomfortable images prevent lateral interactions in the cortex from providing a sparse code. In European Conference on Visual Perception ECVP2015.
|
Xavier Otazu, Olivier Penacchio, & Xim Cerda-Company. (2015). An excitatory-inhibitory firing rate model accounts for brightness induction, colour induction and visual discomfort. In Barcelona Computational, Cognitive and Systems Neuroscience.
|
Ivet Rafegas, & Maria Vanrell. (2016). Colour Visual Coding in trained Deep Neural Networks. In European Conference on Visual Perception.
|
Jordi Roca, A. Owen, G. Jordan, Y. Ling, C. Alejandro Parraga, & A. Hurlbert. (2011). Inter-individual Variations in Color Naming and the Structure of 3D Color Space. In Journal of Vision (Vol. 12, 166).
Abstract: 36.307
Many everyday behavioural uses of color vision depend on color naming ability, which is neither measured nor predicted by most standardized tests of color vision, for either normal or anomalous color vision. Here we demonstrate a new method to quantify color naming ability by deriving a compact computational description of individual 3D color spaces. Methods: Individual observers underwent standardized color vision diagnostic tests (including anomaloscope testing) and a series of custom-made color naming tasks using 500 distinct color samples, either CRT stimuli (“light”-based) or Munsell chips (“surface”-based), with both forced- and free-choice color naming paradigms. For each subject, we defined his/her color solid as the set of 3D convex hulls computed for each basic color category from the relevant collection of categorised points in perceptually uniform CIELAB space. From the parameters of the convex hulls, we derived several indices to characterise the 3D structure of the color solid and its inter-individual variations. Using a reference group of 25 normal trichromats (NT), we defined the degree of normality for the shape, location and overlap of each color region, and the extent of “light”-“surface” agreement. Results: Certain features of color perception emerge from analysis of the average NT color solid, e.g.: (1) the white category is slightly shifted towards blue; and (2) the variability in category border location across NT subjects is asymmetric across color space, with least variability in the blue/green region. Comparisons between individual and average NT indices reveal specific naming “deficits”, e.g.: (1) Category volumes for white, green, brown and grey are expanded for anomalous trichromats and dichromats; and (2) the focal structure of color space is disrupted more in protanopia than other forms of anomalous color vision. 
The indices both capture the structure of subjective color spaces and allow us to quantify inter-individual differences in color naming ability.
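The per-category convex-hull construction described in the abstract can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: it assumes SciPy's `ConvexHull`, and the two indices shown (hull volume and vertex centroid) are only the simplest of the several the study derives.

```python
import numpy as np
from scipy.spatial import ConvexHull

def category_solid(lab_points):
    """Convex hull of one colour category's samples in CIELAB.

    lab_points: (N, 3) array of (L*, a*, b*) coordinates that an observer
    assigned to the category. Returns the hull volume and the centroid of
    its vertices -- two simple indices for a 3D colour solid.
    """
    hull = ConvexHull(lab_points)
    centroid = lab_points[hull.vertices].mean(axis=0)
    return hull.volume, centroid

# Toy example: eight samples forming a unit cube in CIELAB.
pts = np.array([[L, a, b] for L in (50, 51) for a in (0, 1) for b in (0, 1)],
               dtype=float)
vol, cen = category_solid(pts)
print(vol, cen)  # 1.0 [50.5  0.5  0.5]
```

Comparing such volumes and centroids across observers is the kind of operation that underlies the "expanded category volume" findings reported above.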
|
Javier Vazquez, C. Alejandro Parraga, & Maria Vanrell. (2009). Ordinal pairwise method for natural images comparison. PER - Perception, 38, 180.
Abstract: 38(Suppl.), ECVP Abstract Supplement
We developed a new psychophysical method to compare different colour appearance models applied to natural scenes. The method was as follows: two images (processed by different algorithms) were displayed on a CRT monitor and observers were asked to select the more natural of the two. The original images were gathered with a calibrated trichromatic digital camera and presented one above the other on a calibrated screen. Selections were made by pressing a 6-button IR box, which allowed observers not only to choose the more natural image but also to rate their selection. The rating system let observers register how much more natural their chosen image was (e.g., much more, definitely more, slightly more), giving us valuable extra information about the selection process. The results were analysed both by treating the selection as a binary choice (using Thurstone's law of comparative judgement) and by applying the Bradley-Terry method for ordinal comparison. Our results show a significant difference in the rating scales obtained. Although this method has been used to compare colour constancy algorithms, its uses are much wider, e.g., comparing algorithms for image compression, rendering, recolouring, etc.
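The Bradley-Terry analysis mentioned above can be illustrated with a minimal fit. This is a generic textbook maximum-likelihood iteration on a hypothetical win-count matrix, not the authors' analysis code; the three "algorithms" and their counts are invented for the example.

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i, j] = number of times item i was preferred over item j.
    Uses the classic MLE fixed-point iteration; strengths are
    normalised to sum to 1.
    """
    n = wins.shape[0]
    p = np.ones(n)
    total = wins + wins.T                       # comparisons per pair
    for _ in range(iters):
        for i in range(n):
            denom = sum(total[i, j] / (p[i] + p[j])
                        for j in range(n) if j != i)
            p[i] = wins[i].sum() / denom
        p /= p.sum()
    return p

# Hypothetical data: A beats B 8/10 and C 9/10; B beats C 7/10.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)
strengths = bradley_terry(wins)
print(strengths)  # strengths ordered A > B > C
```

The fitted strengths give the ordinal scale on which the different colour appearance models (or compression/rendering algorithms) can then be compared.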
|
Marcos V Conde, Javier Vazquez, Michael S Brown, & Radu Timofte. (2024). NILUT: Conditional Neural Implicit 3D Lookup Tables for Image Enhancement. In 38th AAAI Conference on Artificial Intelligence.
Abstract: 3D lookup tables (3D LUTs) are a key component for image enhancement. Modern image signal processors (ISPs) have dedicated support for these as part of the camera rendering pipeline. Cameras typically provide multiple options for picture styles, where each style is usually obtained by applying a unique handcrafted 3D LUT. Current approaches for learning and applying 3D LUTs are notably fast, yet not so memory-efficient, as storing multiple 3D LUTs is required. For this reason and other implementation limitations, their use on mobile devices is less popular. In this work, we propose a Neural Implicit LUT (NILUT), an implicitly defined continuous 3D color transformation parameterized by a neural network. We show that NILUTs are capable of accurately emulating real 3D LUTs. Moreover, a NILUT can be extended to incorporate multiple styles into a single network with the ability to blend styles implicitly. Our novel approach is memory-efficient, controllable and can complement previous methods, including learned ISPs.
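For context, the discrete 3D LUT operation that a NILUT emulates is trilinear interpolation over a stored colour grid. The sketch below implements that classic baseline in NumPy; a NILUT would replace the grid lookup with a learned continuous network f(r, g, b). All names here are illustrative, not from the paper's code.

```python
import numpy as np

def apply_lut(rgb, lut):
    """Apply a discrete 3D LUT to RGB values in [0, 1] by trilinear
    interpolation. rgb: (N, 3); lut: (S, S, S, 3) with
    lut[r, g, b] = output colour at that grid node.
    """
    S = lut.shape[0]
    x = rgb * (S - 1)                     # continuous grid coordinates
    i0 = np.clip(np.floor(x).astype(int), 0, S - 2)
    t = x - i0                            # fractional position in the cell
    out = np.zeros_like(rgb)
    for dr in (0, 1):                     # blend the 8 surrounding nodes
        for dg in (0, 1):
            for db in (0, 1):
                w = (np.where(dr, t[:, 0], 1 - t[:, 0]) *
                     np.where(dg, t[:, 1], 1 - t[:, 1]) *
                     np.where(db, t[:, 2], 1 - t[:, 2]))
                out += w[:, None] * lut[i0[:, 0] + dr,
                                        i0[:, 1] + dg,
                                        i0[:, 2] + db]
    return out

# Identity LUT (output = input): interpolation should reproduce the input.
S = 17
axis = np.linspace(0, 1, S)
ident = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
cols = np.random.default_rng(0).random((5, 3))
out = apply_lut(cols, ident)
```

Storing one such (S, S, S, 3) grid per picture style is what makes multi-style LUTs memory-hungry; the paper's point is that a single compact network can absorb several styles and blend between them.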
|
Arjan Gijsenij, Theo Gevers, & Joost Van de Weijer. (2012). Improving Color Constancy by Photometric Edge Weighting. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(5), 918–929.
Abstract: Edge-based color constancy methods use image derivatives to estimate the illuminant. However, real-world images contain different edge types, such as material, shadow and highlight edges, and these may influence illuminant estimation performance in distinctive ways. This paper therefore provides an extensive analysis of the effect of different edge types on the performance of edge-based color constancy methods. First, an edge taxonomy is presented that classifies edge types by their photometric properties (e.g., material, shadow-geometry and highlights). Then, a performance evaluation of edge-based color constancy is provided using these edge types. This evaluation shows that specular and shadow edges are more valuable than material edges for estimating the illuminant. To exploit this, the (iterative) weighted Grey-Edge algorithm is proposed, in which these edge types are emphasized. On images recorded under controlled circumstances, the proposed iterative weighted Grey-Edge algorithm based on highlights reduces the median angular error by approximately 25%. In an uncontrolled environment, improvements in angular error of up to 11% are obtained with respect to regular edge-based color constancy.
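The regular Grey-Edge estimator that the weighted variant builds on can be sketched as a Minkowski pooling of per-channel derivative magnitudes. This is a simplified illustration (no pre-smoothing, uniform edge weights), not the paper's weighted implementation; the weighted variant would scale each derivative by a photometric edge-type weight before pooling.

```python
import numpy as np

def grey_edge(img, p=6):
    """Grey-Edge illuminant estimate: the Minkowski p-norm of image
    derivatives in each channel, normalised to unit length.

    img: (H, W, 3) float array. Returns a unit-norm illuminant colour.
    """
    e = np.zeros(3)
    for c in range(3):
        gy, gx = np.gradient(img[:, :, c])
        mag = np.sqrt(gx ** 2 + gy ** 2)       # edge magnitude
        e[c] = (mag ** p).sum() ** (1.0 / p)   # Minkowski pooling
    return e / np.linalg.norm(e)

# Synthetic check: a reddish cast scales the red channel's edges,
# so the estimate should tilt towards red.
rng = np.random.default_rng(1)
base = rng.random((64, 64))
img = np.stack([2.0 * base, base, base], axis=-1)   # red-cast image
est = grey_edge(img)
print(est)  # red component is the largest
```

Dividing the image channels by such an estimate (a von Kries correction) is the standard way these estimates are used to discount the illuminant.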
|