Author M. Danelljan; Fahad Shahbaz Khan; Michael Felsberg; Joost Van de Weijer
Title Adaptive color attributes for real-time visual tracking Type Conference Article
Year 2014 Publication 27th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 1090 - 1097  
Abstract Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. In contrast, for object recognition and detection, sophisticated color features combined with luminance have been shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient and possess a certain amount of photometric invariance while maintaining high discriminative power. This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provide superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24% in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second.
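The color attributes used here map each RGB value to a probability distribution over the eleven basic color names, and the paper compresses this 11-dimensional representation adaptively. A minimal sketch of the idea, assuming a hypothetical random colorname_lut in place of the learned color-name table and a plain PCA projection in place of the paper's adaptive low-dimensional update:

```python
import numpy as np

# Hypothetical stand-in for the learned 32x32x32 RGB -> 11 color-name table.
rng = np.random.default_rng(0)
colorname_lut = rng.dirichlet(np.ones(11), size=32 * 32 * 32)

def color_attributes(img):
    """Map an HxWx3 uint8 image to HxWx11 color-name probabilities."""
    r, g, b = (img[..., c].astype(int) // 8 for c in range(3))
    return colorname_lut[r * 1024 + g * 32 + b]

def low_dim_projection(features, dim=2):
    """PCA stand-in for the paper's adaptive low-dimensional color attributes."""
    flat = features.reshape(-1, features.shape[-1])
    mean = flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    return (features - mean) @ vt[:dim].T

img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(low_dim_projection(color_attributes(img)).shape)  # (64, 64, 2)
```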
 
Address Columbus, Ohio, USA; June 2014
Conference CVPR
Notes CIC; LAMP; 600.074; 600.079 Approved no  
Call Number Admin @ si @ DKF2014 Serial 2509  
 
Author Fahad Shahbaz Khan; Shida Beigpour; Joost Van de Weijer; Michael Felsberg
Title Painting-91: A Large Scale Database for Computational Painting Categorization Type Journal Article
Year 2014 Publication Machine Vision and Applications Abbreviated Journal MVAP  
Volume 25 Issue 6 Pages 1385-1397  
Abstract Computer analysis of visual art, especially paintings, is an interesting cross-disciplinary research domain. Most research on the analysis of paintings involves small to medium datasets with their own specific settings. Interestingly, significant progress has been made lately in the field of object and scene recognition. A key factor in this success is the introduction and availability of benchmark datasets for evaluation. Surprisingly, such a benchmark setup is still missing in the area of computational painting categorization. In this work, we propose a novel large-scale dataset of digital paintings. The dataset consists of paintings from 91 different painters. We further show three applications of our dataset: artist categorization, style classification and saliency detection. We investigate how local and global features popular in image classification perform for the tasks of artist and style categorization. For both categorization tasks, our experimental results suggest that combining multiple features significantly improves the final performance. We show that state-of-the-art computer vision methods can correctly attribute 50% of unseen paintings to their painter in a large dataset and correctly identify the artistic style in over 60% of the cases. Additionally, we explore the task of saliency detection on paintings and report experimental findings using state-of-the-art saliency estimation algorithms.
Publisher Springer Berlin Heidelberg
ISSN 0932-8092
Notes CIC; LAMP; 600.074; 600.079 Approved no  
Call Number Admin @ si @ KBW2014 Serial 2510  
 
Author C. Alejandro Parraga; Jordi Roca; Dimosthenis Karatzas; Sophie Wuerger
Title Limitations of visual gamma corrections in LCD displays Type Journal Article
Year 2014 Publication Displays Abbreviated Journal Dis  
Volume 35 Issue 5 Pages 227–239  
Keywords Display calibration; Psychophysics; Perceptual; Visual gamma correction; Luminance matching; Observer-based calibration  
Abstract A method for estimating the non-linear gamma transfer function of liquid-crystal displays (LCDs) without the need for a photometric measurement device was described by Xiao et al. (2011) [1]. It relies on observers' judgments of visual luminance, obtained by presenting eight half-tone patterns with luminances from 1/9 to 8/9 of the maximum value of each colour channel. These half-tone patterns were distributed over the screen along both the vertical and horizontal viewing axes. We conducted a series of photometric and psychophysical measurements (consisting of the simultaneous presentation of half-tone patterns in each trial) to evaluate whether the angular dependency of the light generated by three different LCD technologies would bias the results of these gamma transfer function estimations. Our results show that there are significant differences between the gamma transfer functions measured and produced by observers at different viewing angles. We suggest appropriate modifications to the Xiao et al. paradigm to counterbalance these artefacts, which also have the advantage of shortening the time spent collecting the psychophysical measurements.
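The estimation underlying this paradigm reduces to fitting a power law L(v) = v^gamma between normalized digital values and the luminance fractions of the half-tone references that observers match. A minimal sketch of that fitting step, with hypothetical matched values (illustrative numbers, not data from the paper):

```python
import numpy as np

# Luminance fractions of the eight half-tone reference patterns.
targets = np.arange(1, 9) / 9.0

# Hypothetical observer matches: normalized digital values (0..1) judged
# equally bright as each half-tone pattern (roughly consistent with gamma 2.2).
matched_v = np.array([0.37, 0.49, 0.58, 0.66, 0.73, 0.80, 0.87, 0.95])

# Fit L = v^gamma in log-log space: log L = gamma * log v (least squares).
gamma = np.sum(np.log(targets) * np.log(matched_v)) / np.sum(np.log(matched_v) ** 2)
print(f"estimated gamma ~ {gamma:.2f}")
```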
Notes CIC; DAG; 600.052; 600.077; 600.074 Approved no  
Call Number Admin @ si @ PRK2014 Serial 2511  
 
Author Ricard Balague
Title Exploring the combination of color cues for intrinsic image decomposition Type Report
Year 2014 Publication CVC Technical Report Abbreviated Journal  
Volume 178 Issue Pages  
Abstract Intrinsic image decomposition is a challenging problem that consists of separating an image into its physical components: reflectance and shading. This problem can be solved in different ways, but most methods combine information from several visual cues. In this work we describe an extension of the method proposed by Serra et al., which considers two color descriptors and combines them by means of a Markov random field. We analyze in depth the weak points of that method and explore further possibilities for both descriptors. The proposed extension adapts the combination of the considered cues to overcome some of the limitations of the original method. Our approach is tested on the MIT dataset and the Beigpour et al. dataset, which contain images of real objects acquired under controlled conditions and synthetic images, respectively, with their corresponding ground truth.
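As background, the image formation model behind intrinsic decomposition is multiplicative, I = R x S, which becomes additive in the log domain where most methods operate. A minimal sketch of that relation on a synthetic example (not the Serra et al. extension itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ground truth: piecewise-constant reflectance, smooth shading.
reflectance = np.repeat(rng.uniform(0.2, 0.9, size=(8, 1)), 8, axis=1)
shading = np.linspace(0.3, 1.0, 64).reshape(8, 8)
image = reflectance * shading          # I = R * S

# In the log domain the decomposition is additive: log I = log R + log S.
recovered_r = np.exp(np.log(image) - np.log(shading))
assert np.allclose(recovered_r, reflectance)  # exact, given the true shading
```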
Address UAB; September 2014  
Thesis Master's thesis
Notes CIC; 600.074 Approved no  
Call Number Admin @ si @ Bal2014 Serial 2579  
 
Author Aleksandr Setkov; Fabio Martinez Carillo; Michele Gouiffes; Christian Jacquemin; Maria Vanrell; Ramon Baldrich
Title DAcImPro: A Novel Database of Acquired Image Projections and Its Application to Object Recognition Type Conference Article
Year 2015 Publication Advances in Visual Computing. Proceedings of 11th International Symposium, ISVC 2015 Part II Abbreviated Journal  
Volume 9475 Issue Pages 463-473  
Keywords Projector-camera systems; Feature descriptors; Object recognition  
Abstract Projector-camera systems are designed to improve projection quality by comparing original images with their captured projections, which is usually complicated due to high photometric and geometric variations. Many research works address this problem using their own test data, which makes it extremely difficult to compare different proposals. This paper has two main contributions. Firstly, we introduce a new database of acquired image projections (DAcImPro) that covers varied photometric and geometric conditions and provides data for ground-truth computation, and can thus serve to evaluate different algorithms in projector-camera systems. Secondly, a new object recognition scenario from acquired projections is presented, which could be of great interest in domains such as home video projection and public presentations. We show that the task is more challenging than the classical recognition problem and thus requires additional pre-processing, such as color compensation or projection area selection.
Publisher Springer International Publishing
Abbreviated Series Title LNCS
ISSN 0302-9743 ISBN 978-3-319-27862-9
Conference ISVC
Notes CIC Approved no  
Call Number Admin @ si @ SMG2015 Serial 2736  
 
Author Ivet Rafegas; Javier Vazquez; Robert Benavente; Maria Vanrell; Susana Alvarez
Title Enhancing spatio-chromatic representation with more-than-three color coding for image description Type Journal Article
Year 2017 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A  
Volume 34 Issue 5 Pages 827-837  
Abstract Extraction of spatio-chromatic features from color images is usually performed independently on each color channel. Usual 3D color spaces, such as RGB, present a high inter-channel correlation for natural images. This correlation can be reduced using color-opponent representations, but the spatial structure of regions with small color differences is not fully captured in two generic Red-Green and Blue-Yellow channels. To overcome these problems, we propose a new color coding that is adapted to the specific content of each image. Our proposal is based on two steps: (a) setting the number of channels to the number of distinctive colors we find in each image (avoiding the problem of channel correlation), and (b) building a channel representation that maximizes contrast differences within each color channel (avoiding the problem of low local contrast). We call this approach more-than-three color coding (MTT) to emphasize that the number of channels is adapted to the image content. The higher the color complexity of an image, the more channels are used to represent it. Here we select the most predominant colors in the image as distinctive colors, which we call color pivots, and we build the new color coding using these pivots as a basis. To evaluate the proposed approach we measure its efficiency in an image categorization task. We show how a generic descriptor improves its performance at the description level when applied to the MTT coding.
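The pivot selection can be illustrated with a k-means stand-in; the paper's contrast-maximizing channel construction is not reproduced here, and all names and parameters below are illustrative:

```python
import numpy as np

def mtt_channels(img, n_pivots=5, iters=10, seed=0):
    """Sketch of more-than-three (MTT) coding: pick n_pivots predominant
    colors (k-means as a stand-in for the paper's pivot selection) and
    build one channel per pivot encoding similarity to that color."""
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3).astype(float)
    pivots = pixels[rng.choice(len(pixels), n_pivots, replace=False)]
    for _ in range(iters):  # plain k-means on RGB values
        dist = np.linalg.norm(pixels[:, None] - pivots[None], axis=2)
        labels = dist.argmin(axis=1)
        for k in range(n_pivots):
            if np.any(labels == k):
                pivots[k] = pixels[labels == k].mean(axis=0)
    # One channel per pivot: response decays with distance to the pivot color.
    dist = np.linalg.norm(pixels[:, None] - pivots[None], axis=2)
    return np.exp(-dist / dist.mean()).reshape(*img.shape[:2], n_pivots)

img = np.random.default_rng(1).integers(0, 256, (32, 32, 3), dtype=np.uint8)
print(mtt_channels(img).shape)  # (32, 32, 5)
```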
Notes CIC; 600.087 Approved no  
Call Number Admin @ si @ RVB2017 Serial 2892  
 
Author Ivet Rafegas; Maria Vanrell
Title Color spaces emerging from deep convolutional networks Type Conference Article
Year 2016 Publication 24th Color and Imaging Conference Abbreviated Journal  
Volume Issue Pages 225-230  
Abstract Award for the best interactive session. Defining color spaces that provide a good encoding of spatio-chromatic properties of color surfaces is an open problem in color science [8, 22]. Related to this, in computer vision the fusion of color with local image features has been studied and evaluated [16]. In human vision research, the cells along the visual pathway that are selective to specific color hues are also a focus of attention [7, 14]. In line with these research aims, in this paper we study how color is encoded in a deep Convolutional Neural Network (CNN) that has been trained on more than one million natural images for object recognition. These convolutional nets achieve impressive performance in computer vision and rival the representations in the human brain. In this paper we explore how color is represented in a CNN architecture, which can give some intuition about efficient spatio-chromatic representations. In convolutional layers, the activation of a neuron is related to a spatial filter that combines spatio-chromatic representations; we use an inverted version of it to explore its properties. Using a series of unsupervised methods, we classify different types of neurons depending on the color axes they define, and we propose an index of color selectivity of a neuron. We estimate the main color axes that emerge from this trained net and we show that color selectivity of neurons decreases from early to deeper layers.
 
Address San Diego; USA; November 2016  
Conference CIC
Notes CIC Approved no  
Call Number Admin @ si @ RaV2016a Serial 2894  
 
Author Ivet Rafegas; Maria Vanrell
Title Colour Visual Coding in trained Deep Neural Networks Type Abstract
Year 2016 Publication European Conference on Visual Perception Abbreviated Journal  
Address Barcelona; Spain; August 2016  
Conference ECVP
Notes CIC Approved no  
Call Number Admin @ si @ RaV2016b Serial 2895  
 
Author Ivet Rafegas; Maria Vanrell
Title Color representation in CNNs: parallelisms with biological vision Type Conference Article
Year 2017 Publication ICCV Workshop on Mutual Benefits of Cognitive and Computer Vision Abbreviated Journal
Abstract Convolutional Neural Networks (CNNs) trained for object recognition tasks present representational capabilities approaching those of primate visual systems [1]. This provides a computational framework to explore how image features are efficiently represented. Here, we dissect a trained CNN [2] to study how color is represented. We use a classical methodology from physiology: measuring the selectivity index of individual neurons to specific features. We use images from the ImageNet dataset [20] and synthetic versions of them to quantify the color tuning properties of artificial neurons and provide a classification of the network population. We identify three main levels of color representation showing parallelisms with biological visual systems: (a) a decomposition in a circular hue space to represent single color regions with a wider hue sampling beyond the first layer (V2); (b) the emergence of opponent low-dimensional spaces in early stages to represent color edges (V1); and (c) a strong entanglement between color and shape patterns representing object parts (e.g. the wheel of a car), object shapes (e.g. faces) or object-surround configurations (e.g. blue sky surrounding an object) in deeper layers (V4 or IT).
 
Address Venice; Italy; October 2017  
Conference ICCV-MBCC
Notes CIC; 600.087; 600.051 Approved no  
Call Number Admin @ si @ RaV2017 Serial 2984  
 
Author Ivet Rafegas; Maria Vanrell
Title Color encoding in biologically-inspired convolutional neural networks Type Journal Article
Year 2018 Publication Vision Research Abbreviated Journal VR  
Volume 151 Issue Pages 7-17  
Keywords Color coding; Computer vision; Deep learning; Convolutional neural networks  
Abstract Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks have shown representational properties that rival primate performance in object recognition. In this paper we explore how color is encoded in a trained artificial network. This is done by estimating a color selectivity index for each neuron, which allows us to describe the activity of a neuron in response to color input stimuli. The index allows us to classify neurons as color selective or not, and as selective to a single color or to a pair of colors. We have determined that all five convolutional layers of the network have a large number of color-selective neurons. Color opponency clearly emerges in the first layer, presenting four main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but this is reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons, and opponency is reduced almost to one new main axis, the Bluish-Orangish, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representations are successively entangled through all the layers of the studied network, revealing certain parallelisms with the reported evidence on primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations.
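A color selectivity index of this kind can be approximated by comparing a neuron's response to a stimulus against its response to a grayscale version of the same stimulus; the paper computes its index over the images that maximally activate each neuron, so the sketch below is a simplified assumption:

```python
import numpy as np

def color_selectivity_index(activation_fn, images):
    """0 = identical response to color and grayscale input (color agnostic),
    1 = the unit responds only when chromaticity is present."""
    acts_color, acts_gray = [], []
    for img in images:
        gray = img.mean(axis=2, keepdims=True).repeat(3, axis=2)
        acts_color.append(activation_fn(img))
        acts_gray.append(activation_fn(gray))
    a_c, a_g = np.mean(acts_color), np.mean(acts_gray)
    return max(0.0, (a_c - a_g) / (a_c + 1e-8))

# Toy "neuron" preferring reddish inputs, standing in for a CNN unit.
neuron = lambda img: float(np.maximum(img[..., 0] - img[..., 1], 0).mean())
imgs = [np.random.default_rng(i).random((16, 16, 3)) for i in range(8)]
print(round(color_selectivity_index(neuron, imgs), 3))  # close to 1
```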
Notes CIC; 600.051; 600.087 Approved no  
Call Number Admin @ si @RaV2018 Serial 3114  
 
Author Hassan Ahmed Sial; S. Sancho; Ramon Baldrich; Robert Benavente; Maria Vanrell
Title Color-based data augmentation for Reflectance Estimation Type Conference Article
Year 2018 Publication 26th Color Imaging Conference Abbreviated Journal  
Volume Issue Pages 284-289  
Abstract Deep convolutional architectures have proven to be successful frameworks for solving generic computer vision problems. The estimation of intrinsic reflectance from a single image is not yet a solved problem. Encoder-decoder architectures are a natural approach for pixel-wise reflectance estimation, although they usually suffer from the lack of large datasets. The lack of data can be partially alleviated with data augmentation, but the usual techniques focus on geometric changes, which do not help reflectance estimation. In this paper we propose a color-based data augmentation technique that extends the training data by increasing the variability of chromaticity. Rotation on the red-green/blue-yellow plane of an opponent color space enables increasing the training set in a coherent and sound way that improves the generalization capability of the network for reflectance estimation. We perform experiments on the Sintel dataset showing that our color-based augmentation increases performance and overcomes one of the state-of-the-art methods.
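The augmentation amounts to rotating chromaticity about the achromatic axis of an opponent representation. A minimal sketch, assuming a simple linear opponent transform (the paper does not specify this exact matrix):

```python
import numpy as np

# Simple linear opponent transform: intensity, red-green, blue-yellow.
RGB_TO_OPP = np.array([[1/3, 1/3, 1/3],
                       [1/2, -1/2, 0.0],
                       [1/4, 1/4, -1/2]])
OPP_TO_RGB = np.linalg.inv(RGB_TO_OPP)

def rotate_chromaticity(img, angle_deg):
    """Rotate the two chromatic opponent channels, leaving intensity fixed."""
    a = np.deg2rad(angle_deg)
    rot = np.array([[1, 0, 0],
                    [0, np.cos(a), -np.sin(a)],
                    [0, np.sin(a), np.cos(a)]])
    opp = img.reshape(-1, 3) @ RGB_TO_OPP.T
    out = opp @ rot.T @ OPP_TO_RGB.T
    return out.reshape(img.shape).clip(0.0, 1.0)

img = np.random.default_rng(0).random((8, 8, 3))
augmented = [rotate_chromaticity(img, a) for a in (45, 90, 180)]
```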
Address Vancouver; November 2018  
Conference CIC
Notes CIC Approved no  
Call Number Admin @ si @ SSB2018a Serial 3129  
 
Author Bojana Gajic; Ariel Amato; Ramon Baldrich; Carlo Gatta
Title Bag of Negatives for Siamese Architectures Type Conference Article
Year 2019 Publication 30th British Machine Vision Conference Abbreviated Journal  
Abstract Training a Siamese architecture for re-identification with a large number of identities is a challenging task due to the difficulty of finding relevant negative samples efficiently. In this work we present Bag of Negatives (BoN), a method for accelerated and improved training of Siamese networks that scales well to datasets with a very large number of identities. BoN is an efficient and loss-independent method, able to select a bag of high-quality negatives based on a novel online hashing strategy.
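The abstract does not detail the online hashing, so the following is only a plausible sketch: random-projection hashing groups nearby embeddings into buckets, and negatives drawn from the anchor's own bucket are likely hard. All names are illustrative:

```python
import numpy as np

class BagOfNegatives:
    """Sketch of hashing-based negative selection for Siamese training."""
    def __init__(self, dim, n_bits=8, seed=0):
        self.planes = np.random.default_rng(seed).normal(size=(n_bits, dim))
        self.buckets = {}  # hash code -> list of (identity, embedding)

    def _code(self, emb):
        return tuple((self.planes @ emb > 0).astype(int))

    def add(self, identity, emb):
        self.buckets.setdefault(self._code(emb), []).append((identity, emb))

    def sample_negative(self, identity, emb):
        # Prefer a different identity from the same bucket: a hard negative.
        for other_id, other_emb in self.buckets.get(self._code(emb), []):
            if other_id != identity:
                return other_emb
        return None  # a real trainer would fall back to random sampling

bon = BagOfNegatives(dim=128)
rng = np.random.default_rng(1)
for ident in range(100):
    bon.add(ident, rng.normal(size=128))
negative = bon.sample_negative(identity=-1, emb=rng.normal(size=128))
```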
Address Cardiff; United Kingdom; September 2019  
Conference BMVC
Notes CIC; 600.140; 600.118 Approved no  
Call Number Admin @ si @ GAB2019b Serial 3263  
 
Author Ivet Rafegas; Maria Vanrell; Luis A Alexandre; G. Arias
Title Understanding trained CNNs by indexing neuron selectivity Type Journal Article
Year 2020 Publication Pattern Recognition Letters Abbreviated Journal PRL  
Volume 136 Issue Pages 318-325  
Abstract The impressive performance of Convolutional Neural Networks (CNNs) when solving different vision problems is shadowed by their black-box nature and our consequent lack of understanding of the representations they build and of how those representations are organized. To help understand these issues, we propose describing the activity of individual neurons by their Neuron Feature visualization and quantifying their inherent selectivity with two specific properties. We explore selectivity indexes for an image feature (color) and an image label (class membership). Our contribution is a framework to seek out or classify neurons by indexing on these selectivity properties. It helps to find color-selective neurons, such as a red-mushroom neuron in layer Conv4, or class-selective neurons, such as dog-face neurons in layer Conv5 of VGG-M, and establishes a methodology to derive other selectivity properties. Indexing on neuron selectivity can statistically reveal how features and classes are represented through the layers, at a moment when the size of trained nets is growing and automatic tools to index neurons can be helpful.
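A common form of class selectivity index compares the mean activation of the most activating class with the mean over the remaining classes; the sketch below uses that formulation, which is an assumption rather than necessarily the paper's exact definition:

```python
import numpy as np

def class_selectivity_index(activations, labels):
    """1 when a unit fires for a single class only; 0 when its mean
    activation is uniform across classes."""
    classes = np.unique(labels)
    means = np.array([activations[labels == c].mean() for c in classes])
    mu_max = means.max()
    mu_rest = np.delete(means, means.argmax()).mean()
    return (mu_max - mu_rest) / (mu_max + mu_rest + 1e-8)

# Toy data: a unit that mostly responds to class 3.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
activations = rng.random(1000) * 0.1 + (labels == 3) * 1.0
print(round(class_selectivity_index(activations, labels), 2))  # close to 1
```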
Notes CIC; 600.087; 600.140; 600.118 Approved no  
Call Number Admin @ si @ RVL2019 Serial 3310  
 
Author Hassan Ahmed Sial; Ramon Baldrich; Maria Vanrell
Title Deep intrinsic decomposition trained on surreal scenes yet with realistic light effects Type Journal Article
Year 2020 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A  
Volume 37 Issue 1 Pages 1-15  
Abstract Estimation of intrinsic images still remains a challenging task due to weaknesses of ground-truth datasets, which are either too small or not realistic. On the other hand, end-to-end deep learning architectures are starting to achieve interesting results that we believe could be improved if important physical hints were not ignored. In this work, we present a twofold framework: (a) a flexible generation of images overcoming some classical dataset problems, such as larger size jointly with coherent lighting appearance; and (b) a flexible architecture tying physical properties through intrinsic losses. Our proposal is versatile, presents low computation time, and achieves state-of-the-art results.
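An intrinsic loss in this sense penalizes predictions that violate the formation model I = R x S in addition to per-component error. A minimal sketch of such a composite loss, with hypothetical weighting (the paper's exact loss terms are not reproduced):

```python
import numpy as np

def intrinsic_loss(pred_r, pred_s, gt_r, gt_s, image, w_rec=0.5):
    """Component MSE plus a reconstruction term enforcing I = R * S."""
    mse = lambda a, b: float(np.mean((a - b) ** 2))
    loss_components = mse(pred_r, gt_r) + mse(pred_s, gt_s)
    loss_reconstruction = mse(pred_r * pred_s, image)  # physical constraint
    return loss_components + w_rec * loss_reconstruction

rng = np.random.default_rng(0)
gt_r, gt_s = rng.random((8, 8, 3)), rng.random((8, 8, 1))
image = gt_r * gt_s
print(round(intrinsic_loss(gt_r + 0.05, gt_s - 0.05, gt_r, gt_s, image), 4))
```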
Notes CIC; 600.140; 600.12; 600.118 Approved no  
Call Number Admin @ si @ SBV2019 Serial 3311  
 
Author Hassan Ahmed Sial; Ramon Baldrich; Maria Vanrell; Dimitris Samaras
Title Light Direction and Color Estimation from Single Image with Deep Regression Type Conference Article
Year 2020 Publication London Imaging Conference Abbreviated Journal  
Abstract We present a method to estimate the direction and color of the scene light source from a single image. Our method is based on two main ideas: (a) we use a new synthetic dataset with strong shadow effects and constraints similar to those of the SID dataset; (b) we define a deep architecture trained on this dataset to estimate the direction and color of the scene light source. Apart from showing good performance on synthetic images, we additionally propose a preliminary procedure to obtain light positions for the Multi-Illumination dataset and, in this way, show that our trained model also achieves good performance when applied to real scenes.
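A natural training signal for this kind of regression combines an angular error on the unit-norm light direction with an error on the light color; the sketch below is an assumption, not the paper's exact formulation:

```python
import numpy as np

def light_regression_loss(pred_dir, pred_color, gt_dir, gt_color, w_color=1.0):
    """Angular error between unit light-direction vectors plus color MSE."""
    p = pred_dir / np.linalg.norm(pred_dir)
    g = gt_dir / np.linalg.norm(gt_dir)
    angular = np.arccos(np.clip(p @ g, -1.0, 1.0))  # radians
    return angular + w_color * np.mean((pred_color - gt_color) ** 2)

loss = light_regression_loss(np.array([0.1, 0.9, 0.4]), np.array([1.0, 0.9, 0.7]),
                             np.array([0.0, 1.0, 0.5]), np.array([1.0, 0.95, 0.8]))
print(round(float(loss), 4))
```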
Address Virtual; September 2020  
Conference LIM
Notes CIC; 600.118; 600.140 Approved no
Call Number Admin @ si @ SBV2020 Serial 3460  
 
Author Sagnik Das; Hassan Ahmed Sial; Ke Ma; Ramon Baldrich; Maria Vanrell; Dimitris Samaras
Title Intrinsic Decomposition of Document Images In-the-Wild Type Conference Article
Year 2020 Publication 31st British Machine Vision Conference Abbreviated Journal  
Abstract Automatic document content processing is affected by artifacts caused by the shape of the paper and by non-uniform and diversely colored lighting conditions. Fully supervised methods on real data are impossible due to the large amount of data needed. Hence, the current state-of-the-art deep learning models are trained on fully or partially synthetic images. However, document shadow or shading removal results still suffer because: (a) prior methods rely on uniformity of local color statistics, which limits their application to real-world scenarios with complex document shapes and textures, and (b) synthetic or hybrid datasets with non-realistic, simulated lighting conditions are used to train the models. In this paper we tackle these problems with our two main contributions. First, a physically constrained learning-based method that directly estimates document reflectance based on intrinsic image formation, which generalizes to challenging illumination conditions. Second, a new dataset that clearly improves previous synthetic ones, by adding a large range of realistic shading and diverse multi-illuminant conditions, uniquely customized to deal with documents in-the-wild. The proposed architecture works in two steps. First, a white-balancing module neutralizes the color of the illumination on the input image. Based on the proposed multi-illuminant dataset, we achieve good white balancing even in really difficult conditions. Second, the shading-separation module accurately disentangles the shading and paper material in a self-supervised manner, where only the synthetic texture is used as a weak training signal (obviating the need for very costly ground truth with disentangled versions of shading and reflectance). The proposed approach leads to significant generalization of document reflectance estimation in real scenes with challenging illumination. We extensively evaluate on the real benchmark datasets available for intrinsic image decomposition and document shadow removal tasks. Our reflectance estimation scheme, when used as a pre-processing step of an OCR pipeline, shows a 21% improvement in character error rate (CER), thus proving its practical applicability. The data and code will be available at: https://github.com/cvlab-stonybrook/DocIIW.
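The effect of the white-balancing step can be illustrated with a classic gray-world stand-in; the paper's module is learned, so this is illustration only:

```python
import numpy as np

def gray_world_white_balance(img):
    """Scale each channel so the mean color of the image becomes neutral."""
    mean_rgb = img.reshape(-1, 3).mean(axis=0)
    gain = mean_rgb.mean() / (mean_rgb + 1e-8)
    return np.clip(img * gain, 0.0, 1.0)

# Toy document image under a warm illuminant.
page = np.random.default_rng(0).random((32, 32, 3)) * np.array([1.0, 0.8, 0.6])
balanced = gray_world_white_balance(page)
print(balanced.reshape(-1, 3).mean(axis=0))  # roughly equal channel means
```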
 
Address Virtual; September 2020  
Conference BMVC
Notes CIC; 600.087; 600.140; 600.118 Approved no  
Call Number Admin @ si @ DSM2020 Serial 3461  
 
Author Domicele Jonauskaite; Lucia Camenzind; C. Alejandro Parraga; Cecile N Diouf; Mathieu Mercapide Ducommun; Lauriane Müller; Melanie Norberg; Christine Mohr
Title Colour-emotion associations in individuals with red-green colour blindness Type Journal Article
Year 2021 Publication PeerJ Abbreviated Journal  
Volume 9 Issue Pages e11180  
Keywords Affect; Chromotherapy; Colour cognition; Colour vision deficiency; Cross-modal correspondences; Daltonism; Deuteranopia; Dichromatic; Emotion; Protanopia.  
Abstract Colours and emotions are associated in languages and traditions. Some of us may convey sadness by saying we are feeling blue or by wearing black clothes at funerals. The first example is a conceptual experience of colour and the second is an immediate perceptual experience of colour. To investigate whether one or the other type of experience more strongly drives colour-emotion associations, we tested 64 congenitally red-green colour-blind men and 66 non-colour-blind men. All participants associated 12 colours, presented as terms or patches, with 20 emotion concepts, and rated the intensities of the associated emotions. We found that colour-blind and non-colour-blind men associated similar emotions with colours, irrespective of whether colours were conveyed via terms (r = .82) or patches (r = .80). The colour-emotion associations and the emotion intensities were not modulated by participants' severity of colour blindness. Hinting at some additional, although minor, role of actual colour perception, the consistencies in associations for colour terms and patches were higher in non-colour-blind than colour-blind men. Together, these results suggest that colour-emotion associations in adults do not require immediate perceptual colour experiences, as conceptual experiences are sufficient.
Notes CIC; LAMP; 600.120; 600.128 Approved no  
Call Number Admin @ si @ JCP2021 Serial 3564  
 
Author Trevor Canham; Javier Vazquez; Elise Mathieu; Marcelo Bertalmío
Title Matching visual induction effects on screens of different size Type Journal Article
Year 2021 Publication Journal of Vision Abbreviated Journal JOV  
Volume 21 Issue 6(10) Pages 1-22  
Abstract In the film industry, the same movie is expected to be watched on displays of vastly different sizes, from cinema screens to mobile phones. But visual induction, the perceptual phenomenon by which the appearance of a scene region is affected by its surroundings, will differ when the same image is shown on two displays of different dimensions. This phenomenon presents a practical challenge for preserving the artistic intentions of filmmakers, because it can lead to shifts in image appearance between viewing destinations. In this work, we show that a neural field model based on the efficient representation principle is able to predict induction effects and that, by regularizing its associated energy functional, the model is still able to represent induction while becoming invertible. From this finding, we propose a method to preprocess an image in a screen-size-dependent way so that its perception, in terms of visual induction, may remain constant across displays of different size. The potential of the method is demonstrated through psychophysical experiments on synthetic images and qualitative examples on natural images.
Notes CIC Approved no  
Call Number Admin @ si @ CVM2021 Serial 3595  