Author Fahad Shahbaz Khan; Joost Van de Weijer; Muhammad Anwer Rao; Michael Felsberg; Carlo Gatta
Title Semantic Pyramids for Gender and Action Recognition Type Journal Article
Year 2014 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP  
Volume 23 Issue 8 Pages 3633-3645  
Keywords  
Abstract Person description is a challenging problem in computer vision. We investigated two major aspects of person description: 1) gender and 2) action recognition in still images. Most state-of-the-art approaches for gender and action recognition rely on the description of a single body part, such as the face or the full body. However, relying on a single body part is suboptimal due to significant variations in scale, viewpoint, and pose in real-world images. This paper proposes a semantic pyramid approach for pose normalization. Our approach is fully automatic and based on combining information from full-body, upper-body, and face regions for gender and action recognition in still images. The proposed approach does not require any annotations for the upper body and face of a person. Instead, we rely on pretrained state-of-the-art upper-body and face detectors to automatically extract semantic information about a person. Given multiple bounding boxes from each body part detector, we then propose a simple method to select the best candidate bounding box, which is used for feature extraction. Finally, the extracted features from the full-body, upper-body, and face regions are combined into a single representation for classification. To validate the proposed approach for gender recognition, experiments are performed on three large data sets, namely: 1) human attribute; 2) head-shoulder; and 3) proxemics. For action recognition, we perform experiments on the four data sets most used for benchmarking action recognition in still images: 1) Sports; 2) Willow; 3) PASCAL VOC 2010; and 4) Stanford-40. Our experiments clearly demonstrate that the proposed approach, despite its simplicity, outperforms state-of-the-art methods for gender and action recognition.
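The selection-and-fusion pipeline summarized above can be pictured with a brief Python sketch. The score-times-overlap selection criterion, the function names, and the feature extractor below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def select_best_box(boxes, scores, full_body_box):
    """Keep the part detection that best combines detector confidence
    with overlap of the person's full-body region (assumed criterion)."""
    return max(zip(boxes, scores),
               key=lambda bs: bs[1] * iou(bs[0], full_body_box))[0]

def person_descriptor(extract, image, full_box, upper_box, face_box):
    """Concatenate features from the full-body, upper-body and face
    regions into the single representation used for classification."""
    return np.concatenate([extract(image, full_box),
                           extract(image, upper_box),
                           extract(image, face_box)])
```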
Address
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1057-7149 ISBN Medium  
Area Expedition Conference  
Notes CIC; LAMP; 601.160; 600.074; 600.079; MILAB Approved no
Call Number Admin @ si @ KWR2014 Serial 2507  
 
Author Fahad Shahbaz Khan; Shida Beigpour; Joost Van de Weijer; Michael Felsberg
Title Painting-91: A Large Scale Database for Computational Painting Categorization Type Journal Article
Year 2014 Publication Machine Vision and Applications Abbreviated Journal MVAP  
Volume 25 Issue 6 Pages 1385-1397  
Keywords  
Abstract Computer analysis of visual art, especially paintings, is an interesting cross-disciplinary research domain. Most research on the analysis of paintings involves small to medium-sized datasets, each with its own specific settings. Interestingly, significant progress has been made lately in the field of object and scene recognition. A key factor in this success is the introduction and availability of benchmark datasets for evaluation. Surprisingly, such a benchmark setup is still missing in the area of computational painting categorization. In this work, we propose a novel large-scale dataset of digital paintings. The dataset consists of paintings from 91 different painters. We further show three applications of our dataset, namely artist categorization, style classification and saliency detection. We investigate how local and global features popular in image classification perform for the tasks of artist and style categorization. For both categorization tasks, our experimental results suggest that combining multiple features significantly improves the final performance. We show that state-of-the-art computer vision methods can correctly attribute 50% of unseen paintings in a large dataset to their painter, and can correctly identify the artistic style in over 60% of cases. Additionally, we explore the task of saliency detection on paintings and report experimental findings using state-of-the-art saliency estimation algorithms.
Address
Corporate Author Thesis  
Publisher Springer Berlin Heidelberg Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 0932-8092 ISBN Medium  
Area Expedition Conference  
Notes CIC; LAMP; 600.074; 600.079 Approved no  
Call Number Admin @ si @ KBW2014 Serial 2510  
 
Author C. Alejandro Parraga; Jordi Roca; Dimosthenis Karatzas; Sophie Wuerger
Title Limitations of visual gamma corrections in LCD displays Type Journal Article
Year 2014 Publication Displays Abbreviated Journal Dis  
Volume 35 Issue 5 Pages 227–239  
Keywords Display calibration; Psychophysics; Perceptual; Visual gamma correction; Luminance matching; Observer-based calibration  
Abstract A method for estimating the non-linear gamma transfer function of liquid-crystal displays (LCDs) without the need for a photometric measurement device was described by Xiao et al. (2011) [1]. It relies on observers' judgments of visual luminance, obtained by presenting eight half-tone patterns with luminances from 1/9 to 8/9 of the maximum value of each colour channel. These half-tone patterns were distributed over the screen along both the vertical and horizontal viewing axes. We conducted a series of photometric and psychophysical measurements (consisting of the simultaneous presentation of half-tone patterns in each trial) to evaluate whether the angular dependency of the light generated by three different LCD technologies would bias the results of these gamma transfer function estimations. Our results show that there are significant differences between the gamma transfer functions measured and produced by observers at different viewing angles. We suggest appropriate modifications to the Xiao et al. paradigm to counterbalance these artefacts, which also have the advantage of shortening the time spent collecting the psychophysical measurements.
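Assuming the usual power-law display model L(d) = L_max (d/d_max)^gamma and the eight 1/9 to 8/9 half-tone levels above, the gamma exponent can be recovered from the observer matches with a one-line least-squares fit. The sketch below is our own illustration of that fit, not the procedure published by Xiao et al.

```python
import numpy as np

def estimate_gamma(matched_levels, max_level=255):
    """Fit gamma under the model (d / max_level) ** gamma == i / 9.

    matched_levels[i - 1] is the uniform digital level an observer judged
    equal in brightness to the half-tone pattern of relative luminance
    i/9, for i = 1..8. Least squares in log-log space gives the exponent.
    """
    d = np.asarray(matched_levels, dtype=float)
    x = np.log(d / max_level)          # log of normalized digital levels
    y = np.log(np.arange(1, 9) / 9.0)  # log of target relative luminances
    return float(np.sum(x * y) / np.sum(x * x))
```

As a sanity check, feeding it matches from an ideal gamma-2.2 display, matched_levels = 255 * (i/9) ** (1/2.2), returns 2.2.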
Address
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC; DAG; 600.052; 600.077; 600.074 Approved no  
Call Number Admin @ si @ PRK2014 Serial 2511  
 
Author C. Alejandro Parraga
Title Color Vision, Computational Methods for Type Book Chapter
Year 2014 Publication Encyclopedia of Computational Neuroscience Abbreviated Journal  
Volume Issue Pages 1-11  
Keywords Color computational vision; Computational neuroscience of color  
Abstract The study of color vision has been aided by a whole battery of computational methods that attempt to describe the mechanisms that lead to our perception of colors in terms of the information-processing properties of the visual system. Their scope is highly interdisciplinary, linking apparently dissimilar disciplines such as mathematics, physics, computer science, neuroscience, cognitive science, and psychology. Since the sensation of color is a feature of our brains, computational approaches usually include biological features of neural systems in their descriptions, from retinal light-receptor interaction to subcortical color opponency, cortical signal decoding, and color categorization. They produce hypotheses that are usually tested by behavioral or psychophysical experiments.  
Address
Corporate Author Thesis  
Publisher Springer-Verlag Berlin Heidelberg Place of Publication Editor Dieter Jaeger; Ranu Jung  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN 978-1-4614-7320-6 Medium  
Area Expedition Conference  
Notes CIC; 600.074 Approved no  
Call Number Admin @ si @ Par2014 Serial 2512  
 
Author Xim Cerda-Company; C. Alejandro Parraga; Xavier Otazu
Title Which tone-mapping is the best? A comparative study of tone-mapping perceived quality Type Abstract
Year 2014 Publication Perception Abbreviated Journal  
Volume 43 Issue Pages 106  
Keywords  
Abstract High-dynamic-range (HDR) imaging refers to methods designed to increase the brightness dynamic range available in standard digital imaging techniques. This increase is achieved by taking the same picture under different exposure values and mapping the intensity levels into a single image by way of a tone-mapping operator (TMO). Currently, there is no agreement on how to evaluate the quality of different TMOs. In this work we psychophysically evaluate 15 different TMOs, obtaining rankings based on the perceived properties of the resulting tone-mapped images. We performed two different experiments on a calibrated CRT display using 10 subjects: (1) a study of the internal relationships between grey-levels and (2) a pairwise comparison of the resulting 15 tone-mapped images. In (1) observers internally matched the grey-levels to a reference inside the tone-mapped images and in the real scene. In (2) observers performed a pairwise comparison of the tone-mapped images alongside the real scene. We obtained two rankings of the TMOs according to their performance. In (1) the best algorithm was iCAM by J. Kuang et al (2007) and in (2) the best algorithm was a TMO by Krawczyk et al (2005). Our results also show no correlation between these two rankings.
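A ranking like the one in experiment (2) is typically derived from a pairwise preference matrix. The sketch below uses a simple win-rate score; the abstract does not state which scaling method the authors actually used.

```python
import numpy as np

def rank_from_pairwise(wins):
    """Order tone-mapping operators from pairwise-comparison data.

    wins[i, j] counts trials where TMO i was preferred over TMO j.
    Each TMO is scored by its mean win rate; a psychometric scaling
    (e.g., Thurstone Case V) would replace this scoring step.
    """
    wins = np.asarray(wins, dtype=float)
    trials = wins + wins.T
    rates = np.divide(wins, trials, out=np.zeros_like(wins),
                      where=trials > 0)
    scores = rates.mean(axis=1)
    return np.argsort(-scores)  # TMO indices, best first
```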
 
Address
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference ECVP  
Notes CIC; NEUROBIT; 600.074 Approved no  
Call Number Admin @ si @ CPO2014 Serial 2527  
 
Author C. Alejandro Parraga
Title Perceptual Psychophysics Type Book Chapter
Year 2015 Publication Biologically-Inspired Computer Vision: Fundamentals and Applications Abbreviated Journal  
Volume Issue Pages  
Keywords  
Abstract  
Address
Corporate Author Thesis  
Publisher Place of Publication Editor G.Cristobal; M.Keil; L.Perrinet  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN 978-3-527-41264-8 Medium  
Area Expedition Conference  
Notes CIC; 600.074 Approved no  
Call Number Admin @ si @ Par2015 Serial 2600  
 
Author Aleksandr Setkov; Fabio Martinez Carillo; Michele Gouiffes; Christian Jacquemin; Maria Vanrell; Ramon Baldrich
Title DAcImPro: A Novel Database of Acquired Image Projections and Its Application to Object Recognition Type Conference Article
Year 2015 Publication Advances in Visual Computing. Proceedings of 11th International Symposium, ISVC 2015 Part II Abbreviated Journal  
Volume 9475 Issue Pages 463-473  
Keywords Projector-camera systems; Feature descriptors; Object recognition  
Abstract Projector-camera systems are designed to improve projection quality by comparing original images with their captured projections, which is usually complicated by high photometric and geometric variations. Many research works address this problem using their own test data, which makes it extremely difficult to compare different proposals. This paper has two main contributions. Firstly, we introduce a new database of acquired image projections (DAcImPro) that covers photometric and geometric conditions and provides data for ground-truth computation, and can therefore serve to evaluate different algorithms for projector-camera systems. Secondly, a new object recognition scenario based on acquired projections is presented, which could be of great interest in such domains as home video projection and public presentations. We show that the task is more challenging than the classical recognition problem and thus requires additional pre-processing, such as color compensation or projection area selection.
Address
Corporate Author Thesis  
Publisher Springer International Publishing Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title LNCS  
Series Volume Series Issue Edition  
ISSN 0302-9743 ISBN 978-3-319-27862-9 Medium  
Area Expedition Conference ISVC  
Notes CIC Approved no  
Call Number Admin @ si @ SMG2015 Serial 2736  
 
Author Ivet Rafegas; Javier Vazquez; Robert Benavente; Maria Vanrell; Susana Alvarez
Title Enhancing spatio-chromatic representation with more-than-three color coding for image description Type Journal Article
Year 2017 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A  
Volume 34 Issue 5 Pages 827-837  
Keywords  
Abstract Extraction of spatio-chromatic features from color images is usually performed independently on each color channel. Usual 3D color spaces, such as RGB, present a high inter-channel correlation for natural images. This correlation can be reduced using color-opponent representations, but the spatial structure of regions with small color differences is not fully captured in two generic Red-Green and Blue-Yellow channels. To overcome these problems, we propose a new color coding that is adapted to the specific content of each image. Our proposal is based on two steps: (a) setting the number of channels to the number of distinctive colors we find in each image (avoiding the problem of channel correlation), and (b) building a channel representation that maximizes contrast differences within each color channel (avoiding the problem of low local contrast). We call this approach more-than-three color coding (MTT) to emphasize that the number of channels is adapted to the image content: the higher the color complexity of an image, the more channels are used to represent it. Here we select the distinctive colors as the most predominant in the image, which we call color pivots, and we build the new color coding using these color pivots as a basis. To evaluate the proposed approach, we measure its efficiency in an image categorization task. We show how a generic descriptor improves its performance at the description level when applied to the MTT coding.
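A minimal sketch of the two-step MTT idea follows. K-means centers stand in for the paper's predominant-color pivots, and a Gaussian-like similarity to each pivot stands in for the contrast-maximizing channel construction; both are simplifying assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def mtt_coding(image, n_pivots):
    """Re-encode an (H, W, 3) image into n_pivots adaptive color channels.

    Step (a): pick color pivots from the image itself (k-means stand-in).
    Step (b): one channel per pivot, responding strongly near that color.
    n_pivots would be chosen per image according to its color complexity.
    """
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(float)
    pivots = KMeans(n_clusters=n_pivots, n_init=10).fit(pixels).cluster_centers_
    dists = np.linalg.norm(pixels[:, None, :] - pivots[None, :, :], axis=2)
    channels = np.exp(-dists / (dists.std() + 1e-8))
    return channels.reshape(h, w, n_pivots)
```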
Address
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC; 600.087 Approved no  
Call Number Admin @ si @ RVB2017 Serial 2892  
 
Author Jordi Roca
Title Constancy and inconstancy in categorical colour perception Type Book Whole
Year 2012 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal  
Volume Issue Pages  
Keywords  
Abstract To recognise objects is perhaps the most important task an autonomous system, either biological or artificial, needs to perform. In the context of human vision, this is partly achieved by recognizing the colour of surfaces despite changes in the wavelength distribution of the illumination, a property called colour constancy. Correct surface colour recognition may be adequately accomplished by colour category matching without the need to match colours precisely; therefore categorical colour constancy is likely to play an important role in successful object identification. The main aim of this work is to study the relationship between colour constancy and categorical colour perception. Previous studies of colour constancy have shown the influence of factors such as the spatio-chromatic properties of the background, individual observers' performance, semantics, etc. However, there has been very little systematic study of these influences. To this end, we developed a new approach to colour constancy which includes individual observers' categorical perception, the categorical structure of the background, and their interrelations, resulting in a more comprehensive characterization of the phenomenon. In our study, we first developed a new method to analyse the categorical structure of 3D colour space, which allowed us to characterize individual categorical colour perception as well as quantify inter-individual variations in terms of the shape and centroid location of the 3D categorical regions. Second, we developed a new colour constancy paradigm, termed chromatic setting, which allows measuring the precise location of nine categorically-relevant points in colour space under immersive illumination. Additionally, we derived from these measurements a new colour constancy index which takes into account the magnitude and orientation of the chromatic shift, memory effects and the interrelations among colours, together with a model of colour naming tuned to each observer/adaptation state. Our results lead to the following conclusions: (1) there exist large inter-individual variations in the categorical structure of colour space, and thus colour naming ability varies significantly, but this is not well predicted by low-level chromatic discrimination ability; (2) analysis of the average colour naming space suggested the need for three additional basic colour terms (turquoise, lilac and lime) for optimal colour communication; (3) chromatic setting improved the precision of more complex linear colour constancy models and suggested that mechanisms other than cone gain might be best suited to explain colour constancy; (4) the categorical structure of colour space is broadly stable under illuminant changes for categorically balanced backgrounds; (5) categorical inconstancy exists for categorically unbalanced backgrounds, indicating that categorical information perceived in the initial stages of adaptation may constrain further categorical perception.
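For context, the classical colour constancy index that such measurements extend compares an observer's match against the no-constancy and perfect-constancy predictions; the thesis's own index additionally accounts for the orientation of the chromatic shift, memory effects and inter-colour relations. A sketch of the classical baseline only:

```python
import numpy as np

def constancy_index(observer_match, no_constancy_match, perfect_match):
    """Brunswik-ratio-style constancy index on chromaticity coordinates.

    Returns 1 for perfect constancy (the match lands on the
    perfect-constancy prediction) and 0 for no constancy. This is the
    classical baseline, not the extended index introduced in the thesis.
    """
    m, n, p = (np.asarray(v, dtype=float)
               for v in (observer_match, no_constancy_match, perfect_match))
    return float(1.0 - np.linalg.norm(m - p) / np.linalg.norm(n - p))
```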
Address
Corporate Author Thesis Ph.D. thesis  
Publisher Place of Publication Editor Maria Vanrell;C. Alejandro Parraga  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ Roc2012 Serial 2893  
 
Author Ivet Rafegas; Maria Vanrell
Title Color encoding in biologically-inspired convolutional neural networks Type Journal Article
Year 2018 Publication Vision Research Abbreviated Journal VR  
Volume 151 Issue Pages 7-17  
Keywords Color coding; Computer vision; Deep learning; Convolutional neural networks  
Abstract Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks have shown representational properties that rival primate performance in object recognition. In this paper we explore how color is encoded in a trained artificial network. We do so by estimating a color selectivity index for each neuron, which describes the neuron's activity in response to color input stimuli. The index allows us to classify neurons as color selective or not, and as selective to a single color or to a double color. We have determined that all five convolutional layers of the network have a large number of color selective neurons. Color opponency clearly emerges in the first layer, presenting 4 main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but this is reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons, and opponency is reduced almost to one new main axis, Bluish-Orangish, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representation are successively entangled through all the layers of the studied network, revealing certain parallelisms with the reported evidence in primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations.
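One plausible form of a per-neuron color selectivity index compares the neuron's response to its preferred stimuli against grayscale versions of the same stimuli. The exact formulation is defined in the paper, so treat this Python sketch as illustrative only.

```python
import numpy as np

def color_selectivity_index(activation, top_patches, to_gray):
    """Index in [0, 1]: how much of the neuron's activity needs color.

    activation: function mapping an image patch to the neuron's response.
    top_patches: patches that activate this neuron most strongly.
    to_gray: removes chromatic information while preserving luminance.
    Near 0: responses survive decolorization (shape-driven neuron).
    Near 1: responses collapse without color (color selective neuron).
    """
    a_color = np.array([activation(p) for p in top_patches])
    a_gray = np.array([activation(to_gray(p)) for p in top_patches])
    return float(np.clip(1.0 - a_gray.sum() / a_color.sum(), 0.0, 1.0))
```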
Address
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC; 600.051; 600.087 Approved no  
Call Number Admin @ si @ RaV2018 Serial 3114
 
Author Ivet Rafegas; Maria Vanrell; Luis A Alexandre; G. Arias
Title Understanding trained CNNs by indexing neuron selectivity Type Journal Article
Year 2020 Publication Pattern Recognition Letters Abbreviated Journal PRL  
Volume 136 Issue Pages 318-325  
Keywords  
Abstract The impressive performance of Convolutional Neural Networks (CNNs) when solving different vision problems is shadowed by their black-box nature and our consequent lack of understanding of the representations they build and how these representations are organized. To help understand these issues, we propose to describe the activity of individual neurons by their Neuron Feature visualization and to quantify their inherent selectivity with two specific properties. We explore selectivity indexes for an image feature (color) and for an image label (class membership). Our contribution is a framework to seek or classify neurons by indexing on these selectivity properties. It helps to find color selective neurons, such as a red-mushroom neuron in layer Conv4, or class selective neurons, such as dog-face neurons in layer Conv5 of VGG-M, and it establishes a methodology to derive other selectivity properties. Indexing on neuron selectivity can statistically reveal how features and classes are represented through the layers, at a moment when the size of trained nets is growing and automatic tools to index neurons can be helpful.
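The class selectivity property can be sketched the same way: a neuron counts as class selective when its strongest activations concentrate on one label. The activation-weighted share below is an illustrative proxy, not the paper's exact definition.

```python
from collections import defaultdict

def class_selectivity_index(top_image_labels, activations):
    """Activation-weighted share of the dominant class among the images
    that drive a neuron most (illustrative proxy). Values near 1 flag
    class selective neurons such as the dog-face example in Conv5.
    """
    mass = defaultdict(float)
    for label, act in zip(top_image_labels, activations):
        mass[label] += act
    return max(mass.values()) / sum(mass.values())
```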
Address
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC; 600.087; 600.140; 600.118 Approved no  
Call Number Admin @ si @ RVL2019 Serial 3310  
 
Author Hassan Ahmed Sial; Ramon Baldrich; Maria Vanrell
Title Deep intrinsic decomposition trained on surreal scenes yet with realistic light effects Type Journal Article
Year 2020 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A  
Volume 37 Issue 1 Pages 1-15  
Keywords  
Abstract Estimation of intrinsic images still remains a challenging task due to weaknesses in ground-truth datasets, which are either too small or not realistic enough. On the other hand, end-to-end deep learning architectures are starting to achieve interesting results that we believe could be improved if important physical hints were not ignored. In this work, we present a twofold framework: (a) flexible generation of images that overcomes classical dataset problems, providing larger size jointly with coherent lighting appearance; and (b) a flexible architecture tying physical properties through intrinsic losses. Our proposal is versatile, presents low computation time, and achieves state-of-the-art results.
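The "intrinsic losses" mentioned in (b) can be pictured as physics-tying penalties on the predicted components. Under a Lambertian assumption, reflectance and shading must multiply back to the input image; the specific terms and the weight below are illustrative assumptions, not the paper's loss set.

```python
import torch

def intrinsic_losses(reflectance, shading, image, w_achromatic=0.1):
    """Physics-tying losses for intrinsic decomposition on (N, 3, H, W)
    tensors: the components must reconstruct the image, and shading is
    encouraged to be achromatic (equal across the three channels).
    """
    reconstruction = (reflectance * shading - image).abs().mean()
    achromatic = shading.std(dim=1).mean()  # channel spread per pixel
    return reconstruction + w_achromatic * achromatic
```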
Address
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC; 600.140; 600.12; 600.118 Approved no  
Call Number Admin @ si @ SBV2019 Serial 3311  
 
Author Domicele Jonauskaite; Lucia Camenzind; C. Alejandro Parraga; Cecile N Diouf; Mathieu Mercapide Ducommun; Lauriane Müller; Melanie Norberg; Christine Mohr
Title Colour-emotion associations in individuals with red-green colour blindness Type Journal Article
Year 2021 Publication PeerJ Abbreviated Journal  
Volume 9 Issue Pages e11180  
Keywords Affect; Chromotherapy; Colour cognition; Colour vision deficiency; Cross-modal correspondences; Daltonism; Deuteranopia; Dichromatic; Emotion; Protanopia.  
Abstract Colours and emotions are associated in languages and traditions. Some of us may convey sadness by saying we are "feeling blue" or by wearing black clothes at funerals. The first example is a conceptual experience of colour and the second is an immediate perceptual experience of colour. To investigate whether one or the other type of experience more strongly drives colour-emotion associations, we tested 64 congenitally red-green colour-blind men and 66 non-colour-blind men. All participants associated 12 colours, presented as terms or patches, with 20 emotion concepts, and rated the intensities of the associated emotions. We found that colour-blind and non-colour-blind men associated similar emotions with colours, irrespective of whether colours were conveyed via terms (r = .82) or patches (r = .80). The colour-emotion associations and the emotion intensities were not modulated by participants' severity of colour blindness. Hinting at some additional, although minor, role of actual colour perception, the consistencies in associations for colour terms and patches were higher in non-colour-blind than in colour-blind men. Together, these results suggest that colour-emotion associations in adults do not require immediate perceptual colour experiences, as conceptual experiences are sufficient.
Address
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC; LAMP; 600.120; 600.128 Approved no  
Call Number Admin @ si @ JCP2021 Serial 3564  
 
Author Trevor Canham; Javier Vazquez; Elise Mathieu; Marcelo Bertalmío
Title Matching visual induction effects on screens of different size Type Journal Article
Year 2021 Publication Journal of Vision Abbreviated Journal JOV  
Volume 21 Issue 6(10) Pages 1-22  
Keywords  
Abstract In the film industry, the same movie is expected to be watched on displays of vastly different sizes, from cinema screens to mobile phones. But visual induction, the perceptual phenomenon by which the appearance of a scene region is affected by its surroundings, will be different for the same image shown on two displays of different dimensions. This phenomenon presents a practical challenge for the preservation of the artistic intentions of filmmakers, because it can lead to shifts in image appearance between viewing destinations. In this work, we show that a neural field model based on the efficient representation principle is able to predict induction effects and how, by regularizing its associated energy functional, the model is still able to represent induction but is now invertible. From this finding, we propose a method to preprocess an image in a screen-size-dependent way so that its perception, in terms of visual induction, may remain constant across displays of different size. The potential of the method is demonstrated through psychophysical experiments on synthetic images and qualitative examples on natural images.
Address
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ CVM2021 Serial 3595  
 
Author Yasuko Sugito; Trevor Canham; Javier Vazquez; Marcelo Bertalmio
Title A Study of Objective Quality Metrics for HLG-Based HDR/WCG Image Coding Type Journal Article
Year 2021 Publication SMPTE Motion Imaging Journal Abbreviated Journal SMPTE  
Volume 130 Issue 4 Pages 53 - 65  
Keywords  
Abstract In this work, we study the suitability of high dynamic range, wide color gamut (HDR/WCG) objective quality metrics to assess the perceived deterioration of compressed images encoded using the hybrid log-gamma (HLG) method, which is the standard for HDR television. Several image quality metrics have been developed to deal specifically with HDR content, although in previous work we showed that the best results (i.e., better matches to the opinion of human expert observers) are obtained by an HDR metric that consists simply of applying a given standard dynamic range metric, called visual information fidelity (VIF), directly to HLG-encoded images. However, all these HDR metrics ignore the chroma components in their calculations; that is, they consider only the luminance channel. For this reason, in the current work, we conduct subjective evaluation experiments in a professional setting using compressed HDR/WCG images encoded with HLG, and we analyze the ability of the best HDR metric to detect perceivable distortions in the chroma components, as well as the suitability of popular color metrics (including ΔITPR, which supports parameters for HLG) to correlate with the opinion scores. Our first contribution is to show that there is a need to consider the chroma components in HDR metrics, as there are color distortions that subjects perceive but that the best HDR metric fails to detect. Our second contribution is the surprising result that VIF, which uses only the luminance channel, correlates much better with the subjective evaluation scores than the investigated metrics that do consider the color components.
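For reference, the HLG encoding applied before a standard-dynamic-range metric such as VIF is the BT.2100 hybrid log-gamma OETF. The constants below are the ones given in the standard; the wrapper itself, and the omission of the VIF computation, are ours.

```python
import numpy as np

def hlg_oetf(E):
    """ITU-R BT.2100 HLG OETF: scene-linear E in [0, 1] -> signal E'.

    Square-root segment below 1/12, logarithmic segment above it. An
    HDR metric of the kind described above would run a standard SDR
    metric (e.g., VIF) on images encoded this way.
    """
    a, b, c = 0.17883277, 0.28466892, 0.55991073  # BT.2100 constants
    E = np.clip(np.asarray(E, dtype=float), 0.0, 1.0)
    log_part = a * np.log(np.maximum(12.0 * E - b, 1e-12)) + c
    return np.where(E <= 1.0 / 12.0, np.sqrt(3.0 * E), log_part)
```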
Address
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number SCV2021 Serial 3671  
 
Author Marcos V Conde; Javier Vazquez; Michael S Brown; Radu Timofte
Title NILUT: Conditional Neural Implicit 3D Lookup Tables for Image Enhancement Type Conference Article
Year 2024 Publication 38th AAAI Conference on Artificial Intelligence Abbreviated Journal  
Volume Issue Pages  
Keywords  
Abstract 3D lookup tables (3D LUTs) are a key component for image enhancement. Modern image signal processors (ISPs) have dedicated support for them as part of the camera rendering pipeline. Cameras typically provide multiple options for picture styles, where each style is usually obtained by applying a unique handcrafted 3D LUT. Current approaches for learning and applying 3D LUTs are notably fast, yet not so memory-efficient, as storing multiple 3D LUTs is required. For this reason and other implementation limitations, their use on mobile devices is less popular. In this work, we propose a Neural Implicit LUT (NILUT), an implicitly defined continuous 3D color transformation parameterized by a neural network. We show that NILUTs are capable of accurately emulating real 3D LUTs. Moreover, a NILUT can be extended to incorporate multiple styles into a single network, with the ability to blend styles implicitly. Our novel approach is memory-efficient and controllable, and it can complement previous methods, including learned ISPs.
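The core NILUT idea, a color transform defined implicitly by a network rather than by a sampled 3D grid, fits in a few lines. The sketch below (layer sizes, conditioning scheme, and blending via mixed style weights) reflects our reading of the abstract, not the published architecture.

```python
import torch
import torch.nn as nn

class NILUTSketch(nn.Module):
    """MLP mapping an RGB coordinate plus style weights to output RGB.

    Querying it at every pixel emulates applying a 3D LUT; mixing the
    style weights (e.g., [0.5, 0.5, 0.0]) blends styles implicitly.
    """
    def __init__(self, n_styles=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + n_styles, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, rgb, style_weights):
        # rgb: (N, 3) in [0, 1]; style_weights: (N, n_styles)
        return self.net(torch.cat([rgb, style_weights], dim=1))

# Example query: 8 random colors rendered in style 0.
lut = NILUTSketch()
out = lut(torch.rand(8, 3), torch.eye(3)[torch.zeros(8, dtype=torch.long)])
```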
Address
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference AAAI  
Notes CIC; MACO Approved no  
Call Number Admin @ si @ CVB2024 Serial 3872  
 
Author Danna Xue; Javier Vazquez; Luis Herranz; Yang Zhang; Michael S Brown
Title Integrating High-Level Features for Consistent Palette-based Multi-image Recoloring Type Journal Article
Year 2023 Publication Computer Graphics Forum Abbreviated Journal CGF  
Volume Issue Pages  
Keywords  
Abstract Achieving visually consistent colors across multiple images is important when images are used in photo albums, websites, and brochures. Unfortunately, only a handful of methods address multi-image color consistency compared to one-to-one color transfer techniques. Furthermore, existing methods do not incorporate high-level features that can assist graphic designers in their work. To address these limitations, we introduce a framework that builds upon a previous palette-based color consistency method and incorporates three high-level features: white balance, saliency, and color naming. We show how these features overcome the limitations of the prior multi-consistency workflow and showcase the user-friendly nature of our framework.  
Address
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC; MACO Approved no  
Call Number Admin @ si @ XVH2023 Serial 3883  
 
Author Jaykishan Patel; Alban Flachot; Javier Vazquez; David H. Brainard; Thomas S. A. Wallis; Marcus A. Brubaker; Richard F. Murray
Title A deep convolutional neural network trained to infer surface reflectance is deceived by mid-level lightness illusions Type Journal Article
Year 2023 Publication Journal of Vision Abbreviated Journal JV  
Volume 23 Issue 9 Pages 4817-4817  
Keywords  
Abstract A long-standing view is that lightness illusions are by-products of strategies employed by the visual system to stabilize its perceptual representation of surface reflectance against changes in illumination. Computationally, one such strategy is to infer reflectance from the retinal image, and to base the lightness percept on this inference. CNNs trained to infer reflectance from images have proven successful at solving this problem under limited conditions. To evaluate whether these CNNs provide suitable starting points for computational models of human lightness perception, we tested a state-of-the-art CNN on several lightness illusions, and compared its behaviour to prior measurements of human performance. We trained a CNN (Yu & Smith, 2019) to infer reflectance from luminance images. The network had a 30-layer hourglass architecture with skip connections. We trained the network via supervised learning on 100K images, rendered in Blender, each showing randomly placed geometric objects (surfaces, cubes, tori, etc.), with random Lambertian reflectance patterns (solid, Voronoi, or low-pass noise), under randomized point+ambient lighting. The renderer also provided the ground-truth reflectance images required for training. After training, we applied the network to several visual illusions. These included the argyle, Koffka-Adelson, snake, White’s, checkerboard assimilation, and simultaneous contrast illusions, along with their controls where appropriate. The CNN correctly predicted larger illusions in the argyle, Koffka-Adelson, and snake images than in their controls. It also correctly predicted an assimilation effect in White's illusion. It did not, however, account for the checkerboard assimilation or simultaneous contrast effects. These results are consistent with the view that at least some lightness phenomena are by-products of a rational approach to inferring stable representations of physical properties from intrinsically ambiguous retinal images. Furthermore, they suggest that CNN models may be a promising starting point for new models of human lightness perception.  
Address
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes MACO; CIC Approved no  
Call Number Admin @ si @ PFV2023 Serial 3890  