Jaime Moreno, & Xavier Otazu. (2011). Image coder based on Hilbert scanning of embedded quadTrees. In Data Compression Conference (p. 470).
Abstract: In this work we present an effective and computationally simple algorithm for image compression based on Hilbert Scanning of Embedded quadTrees (Hi-SET). It represents an image as an embedded bitstream along a fractal function. Embedding is an important feature of modern image compression algorithms; as Salomon notes in [1, p. 614], another feature, and perhaps a unique one, is achieving the best quality for the number of bits input by the decoder at any point during decoding. Hi-SET also possesses this feature. Furthermore, the coder is based on a quadtree partition strategy which, applied to image transforms such as the discrete cosine or wavelet transform, yields an energy clustering both in frequency and space. The coding algorithm is composed of three general steps, using just a list of significant pixels.
|
|
Jaime Moreno, & Xavier Otazu. (2011). Image compression algorithm based on Hilbert scanning of embedded quadTrees: an introduction of the Hi-SET coder. In IEEE International Conference on Multimedia and Expo (pp. 1–6).
Abstract: In this work we present an effective and computationally simple algorithm for image compression based on Hilbert Scanning of Embedded quadTrees (Hi-SET). It represents an image as an embedded bitstream along a fractal function. Embedding is an important feature of modern image compression algorithms; as Salomon notes in [1, p. 614], another feature, and perhaps a unique one, is achieving the best quality for the number of bits input by the decoder at any point during decoding. Hi-SET also possesses this feature. Furthermore, the coder is based on a quadtree partition strategy which, applied to image transforms such as the discrete cosine or wavelet transform, yields an energy clustering both in frequency and space. The coding algorithm is composed of three general steps, using just a list of significant pixels. The implementation of the proposed coder is developed for gray-scale and color image compression. Hi-SET compressed images are, on average, 6.20 dB better than those obtained by other compression techniques based on Hilbert scanning. Moreover, Hi-SET improves image quality by 1.39 dB and 1.00 dB in gray-scale and color compression, respectively, when compared with the JPEG2000 coder.
|
|
Susana Alvarez, Xavier Otazu, & Maria Vanrell. (2005). Image Segmentation Based on Inter-Feature Distance Maps. In Frontiers in Artificial Intelligence and Applications, IOS Press, 131: 75–82.
|
|
Eduard Vazquez, Joost Van de Weijer, & Ramon Baldrich. (2008). Image Segmentation in the Presence of Shadows and Highlights. In 10th European Conference on Computer Vision (Vol. 5305, pp. 1–14). LNCS.
|
|
Arjan Gijsenij, Theo Gevers, & Joost Van de Weijer. (2012). Improving Color Constancy by Photometric Edge Weighting. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(5), 918–929.
Abstract: Edge-based color constancy methods make use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as material, shadow and highlight edges. These different edge types may have a distinctive influence on the performance of illuminant estimation. Therefore, in this paper, an extensive analysis is provided of the effect of different edge types on the performance of edge-based color constancy methods. First, an edge-based taxonomy is presented, classifying edge types based on their photometric properties (e.g. material, shadow-geometry and highlights). Then, a performance evaluation of edge-based color constancy is provided using these different edge types. From this performance evaluation it is derived that specular and shadow edge types are more valuable than material edges for the estimation of the illuminant. To this end, the (iterative) weighted Grey-Edge algorithm is proposed, in which these edge types are more emphasized for the estimation of the illuminant. Images that are recorded under controlled circumstances demonstrate that the proposed iterative weighted Grey-Edge algorithm based on highlights reduces the median angular error by approximately 25%. In an uncontrolled environment, improvements in angular error of up to 11% are obtained with respect to regular edge-based color constancy.
|
|
O. Fors, J. Nuñez, Xavier Otazu, A. Prades, & Robert D. Cardinal. (2010). Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques. SENS - Sensors, 10(3), 1743–1752.
Abstract: In this paper we show how image deconvolution techniques can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor or to increasing the effective telescope aperture by more than 30% without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and track dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.
Keywords: image processing; image deconvolution; faint stars; space debris; wavelet transform
|
|
Maria Vanrell, Ramon Baldrich, Anna Salvatella, Robert Benavente, & Francesc Tous. (2004). Induction operators for a computational colour-texture representation. Computer Vision and Image Understanding, 94(1–3):92–114, ISSN: 1077–3142 (IF: 0.651).
|
|
Danna Xue, Javier Vazquez, Luis Herranz, Yang Zhang, & Michael S Brown. (2023). Integrating High-Level Features for Consistent Palette-based Multi-image Recoloring. CGF - Computer Graphics Forum.
Abstract: Achieving visually consistent colors across multiple images is important when images are used in photo albums, websites, and brochures. Unfortunately, only a handful of methods address multi-image color consistency compared to one-to-one color transfer techniques. Furthermore, existing methods do not incorporate high-level features that can assist graphic designers in their work. To address these limitations, we introduce a framework that builds upon a previous palette-based color consistency method and incorporates three high-level features: white balance, saliency, and color naming. We show how these features overcome the limitations of the prior multi-consistency workflow and showcase the user-friendly nature of our framework.
|
|
Jordi Roca, A. Owen, G. Jordan, Y. Ling, C. Alejandro Parraga, & A. Hurlbert. (2011). Inter-individual Variations in Color Naming and the Structure of 3D Color Space. In Journal of Vision (Vol. 12, 166).
Abstract: Many everyday behavioural uses of color vision depend on color naming ability, which is neither measured nor predicted by most standardized tests of color vision, for either normal or anomalous color vision. Here we demonstrate a new method to quantify color naming ability by deriving a compact computational description of individual 3D color spaces. Methods: Individual observers underwent standardized color vision diagnostic tests (including anomaloscope testing) and a series of custom-made color naming tasks using 500 distinct color samples, either CRT stimuli (“light”-based) or Munsell chips (“surface”-based), with both forced- and free-choice color naming paradigms. For each subject, we defined his/her color solid as the set of 3D convex hulls computed for each basic color category from the relevant collection of categorised points in perceptually uniform CIELAB space. From the parameters of the convex hulls, we derived several indices to characterise the 3D structure of the color solid and its inter-individual variations. Using a reference group of 25 normal trichromats (NT), we defined the degree of normality for the shape, location and overlap of each color region, and the extent of “light”-“surface” agreement. Results: Certain features of color perception emerge from analysis of the average NT color solid, e.g.: (1) the white category is slightly shifted towards blue; and (2) the variability in category border location across NT subjects is asymmetric across color space, with least variability in the blue/green region. Comparisons between individual and average NT indices reveal specific naming “deficits”, e.g.: (1) category volumes for white, green, brown and grey are expanded for anomalous trichromats and dichromats; and (2) the focal structure of color space is disrupted more in protanopia than in other forms of anomalous color vision. The indices both capture the structure of subjective color spaces and allow us to quantify inter-individual differences in color naming ability.
|
|
Joost Van de Weijer, Fahad Shahbaz Khan, & Marc Masana. (2013). Interactive Visual and Semantic Image Retrieval. In Angel Sappa, & Jordi Vitria (Eds.), Multimodal Interaction in Image and Video Applications (Vol. 48, pp. 31–35). Springer Berlin Heidelberg.
Abstract: One direct consequence of recent advances in digital visual data generation, and the direct availability of this information through the World-Wide Web, is an urgent demand for efficient image retrieval systems. The objective of image retrieval is to allow users to efficiently browse through this abundance of images. Due to the non-expert nature of the majority of internet users, such systems should be user friendly and therefore avoid complex user interfaces. In this chapter we investigate how high-level information provided by recently developed object recognition techniques can improve interactive image retrieval. We apply a bag-of-words based image representation method to automatically classify images into a number of categories. These additional labels are then applied to improve the image retrieval system. Next to these high-level semantic labels, we also apply a low-level image description to describe the composition and color scheme of the scene. Both descriptions are incorporated in a user-feedback image retrieval setting. The main objective is to show that automatic labeling of images with semantic labels can improve image retrieval results.
|
|
Sagnik Das, Hassan Ahmed Sial, Ke Ma, Ramon Baldrich, Maria Vanrell, & Dimitris Samaras. (2020). Intrinsic Decomposition of Document Images In-the-Wild. In 31st British Machine Vision Conference.
Abstract: Automatic document content processing is affected by artifacts caused by the shape of the paper and by non-uniform, diversely colored lighting conditions. Fully supervised methods on real data are impractical due to the large amount of annotated data needed. Hence, current state-of-the-art deep learning models are trained on fully or partially synthetic images. However, document shadow or shading removal results still suffer because: (a) prior methods rely on the uniformity of local color statistics, which limits their application to real scenarios with complex document shapes and textures; and (b) synthetic or hybrid datasets with non-realistic, simulated lighting conditions are used to train the models. In this paper we tackle these problems with our two main contributions. First, a physically constrained learning-based method that directly estimates document reflectance based on intrinsic image formation, which generalizes to challenging illumination conditions. Second, a new dataset that clearly improves on previous synthetic ones by adding a large range of realistic shading and diverse multi-illuminant conditions, uniquely customized to deal with documents in-the-wild. The proposed architecture works in two steps. First, a white balancing module neutralizes the color of the illumination in the input image. Based on the proposed multi-illuminant dataset, we achieve good white balancing in really difficult conditions. Second, the shading separation module accurately disentangles the shading and paper material in a self-supervised manner, where only the synthetic texture is used as a weak training signal (obviating the need for very costly ground truth with disentangled versions of shading and reflectance). The proposed approach leads to significant generalization of document reflectance estimation in real scenes with challenging illumination. We extensively evaluate on the real benchmark datasets available for intrinsic image decomposition and document shadow removal tasks. Our reflectance estimation scheme, when used as a pre-processing step of an OCR pipeline, shows a 21% improvement in character error rate (CER), thus proving its practical applicability. The data and code will be available at: https://github.com/cvlab-stonybrook/DocIIW.
|
|
Shida Beigpour, Marc Serra, Joost Van de Weijer, Robert Benavente, Maria Vanrell, Olivier Penacchio, et al. (2013). Intrinsic Image Evaluation On Synthetic Complex Scenes. In 20th IEEE International Conference on Image Processing (pp. 285–289).
Abstract: Scene decomposition into its illuminant, shading, and reflectance intrinsic images is an essential step for scene understanding. Collecting intrinsic image ground-truth data is a laborious task. The assumptions on which the ground-truth procedures are based limit their application to simple scenes with a single object taken in the absence of indirect lighting and interreflections. We investigate synthetic data for intrinsic image research, since the extraction of ground truth is straightforward and it allows for scenes in more realistic situations (e.g., multiple illuminants and interreflections). With this dataset we aim to motivate researchers to further explore intrinsic image decomposition in complex scenes.
|
|
Xavier Otazu, M. Gonzalez-Audicana, O. Fors, & J. Nuñez. (2005). Introduction of Sensor Spectral Response Into Image Fusion Methods. Application to Wavelet-Based Methods. IEEE Transactions on Geoscience and Remote Sensing, 43(10): 2376–2385 (IF: 1.627).
|
|
Robert Benavente, C. Alejandro Parraga, & Maria Vanrell. (2010). La influencia del contexto en la definicion de las fronteras entre las categorias cromaticas. In 9th Congreso Nacional del Color (pp. 92–95).
Abstract: In this paper we present the results of a color categorization experiment in which the samples were presented on a multicolored (Mondrian) background to simulate the effects of context. The results are compared with those of a previous experiment that, using a different paradigm, determined the boundaries without taking context into account. The analysis of the results shows that the boundaries obtained in the in-context experiment present less confusion than those obtained in the experiment without context.
Keywords: Color categorization; Color appearance; Influence of context; Mondrian patterns; Parametric models
|
|
Naila Murray, & Eduard Vazquez. (2010). Lacuna Restoration: How to choose a neutral colour? In Proceedings of The CREATE 2010 Conference (pp. 248–252).
Abstract: Painting restoration which involves filling in material loss (called lacunae) is a complex process. Several standard techniques exist to tackle lacuna restoration, and this article focuses on those techniques that employ a “neutral” colour to mask the defect. Restoration experts often disagree on the choice of such a colour and, in fact, the concept of a neutral colour is controversial. We posit that a neutral colour is one that attracts relatively little visual attention for a specific lacuna. We conducted an eye-tracking experiment to compare two common neutral colour selection methods, specifically the most common local colour and the mean local colour. The results obtained demonstrate that the most common local colour triggers less visual attention in general. Notwithstanding, we observed instances in which the most common colour triggers a significant amount of attention when subjects spent time resolving their confusion about whether or not a lacuna was part of the painting.
|
|
Xavier Boix. (2009). Learning Conditional Random Fields for Stereo (Vol. 136). Master's thesis, Bellaterra, Barcelona.
|
|
Naila Murray, Luca Marchesotti, & Florent Perronnin. (2012). Learning to Rank Images using Semantic and Aesthetic Labels. In 23rd British Machine Vision Conference (pp. 110.1–110.10).
Abstract: Most works on image retrieval from text queries have addressed the problem of retrieving semantically relevant images. However, the ability to assess the aesthetic quality of an image is an increasingly important differentiating factor for search engines. In this work, given a semantic query, we are interested in retrieving images which are semantically relevant and score highly in terms of aesthetics/visual quality. We use large-margin classifiers and rankers to learn statistical models capable of ordering images based on the aesthetic and semantic information. In particular, we compare two families of approaches: while the first one attempts to learn a single ranker which takes into account both semantic and aesthetic information, the second one learns separate semantic and aesthetic models. We carry out a quantitative and qualitative evaluation on a recently-published large-scale dataset and we show that the second family of techniques significantly outperforms the first one.
|
|
Hassan Ahmed Sial, Ramon Baldrich, Maria Vanrell, & Dimitris Samaras. (2020). Light Direction and Color Estimation from Single Image with Deep Regression. In London Imaging Conference.
Abstract: We present a method to estimate the direction and color of the scene light source from a single image. Our method is based on two main ideas: (a) we use a new synthetic dataset with strong shadow effects with similar constraints to the SID dataset; (b) we define a deep architecture trained on the mentioned dataset to estimate the direction and color of the scene light source. Apart from showing good performance on synthetic images, we additionally propose a preliminary procedure to obtain light positions of the Multi-Illumination dataset, and, in this way, we also prove that our trained model achieves good performance when it is applied to real scenes.
|
|