Author | Ivet Rafegas; Maria Vanrell |
Title | Color spaces emerging from deep convolutional networks | Type | Conference Article | |||
Year | 2016 | Publication | 24th Color and Imaging Conference | Abbreviated Journal | ||
Volume | Issue | Pages | 225-230 | |||
Keywords | ||||||
Abstract | Award for the best interactive session.
Defining color spaces that provide a good encoding of the spatio-chromatic properties of color surfaces is an open problem in color science [8, 22]. Relatedly, in computer vision the fusion of color with local image features has been studied and evaluated [16], and in human vision research the cells along the visual pathway that are selective to specific color hues are also a focus of attention [7, 14]. In line with these research aims, in this paper we study how color is encoded in a deep Convolutional Neural Network (CNN) that has been trained on more than one million natural images for object recognition. Such convolutional nets achieve impressive performance in computer vision and rival the representations in the human brain. We explore how color is represented in a CNN architecture, which can give some intuition about efficient spatio-chromatic representations. In convolutional layers the activation of a neuron is related to a spatial filter that combines spatio-chromatic information, and we use an inverted version of this filter to explore its properties. Using a series of unsupervised methods we classify different types of neurons depending on the color axes they define, and we propose an index of the color selectivity of a neuron. We estimate the main color axes that emerge from this trained net and show that the color selectivity of neurons decreases from early to deeper layers. |
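The color-selectivity index mentioned in the abstract above is not spelled out there. As a minimal sketch of one plausible formulation (an assumption for illustration, not the paper's definition), a filter whose RGB weights are equal across channels is achromatic, so the fraction of weight energy that deviates from the channel mean measures color selectivity:

```python
def color_selectivity(filter_rgb):
    """filter_rgb: list of (r, g, b) weight triples for one conv filter.
    Returns a value in [0, 1]: 0 = purely achromatic, higher = more chromatic.
    Illustrative index, not the paper's exact definition."""
    total = 0.0       # total weight energy
    chromatic = 0.0   # energy of the deviation from the channel mean
    for r, g, b in filter_rgb:
        mean = (r + g + b) / 3.0
        total += r * r + g * g + b * b
        chromatic += (r - mean) ** 2 + (g - mean) ** 2 + (b - mean) ** 2
    return 0.0 if total == 0 else chromatic / total

# A gray (channel-equal) filter scores 0; a red-only filter scores high.
print(color_selectivity([(0.5, 0.5, 0.5)]))  # 0.0
print(color_selectivity([(1.0, 0.0, 0.0)]))
```

Under this toy definition, the paper's finding would correspond to the index dropping as one averages it over neurons layer by layer.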
Address | San Diego; USA; November 2016 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | CIC | |||
Notes | CIC | Approved | no | |||
Call Number | Admin @ si @ RaV2016a | Serial | 2894 | |||
Author | Ivet Rafegas; Maria Vanrell |
Title | Colour Visual Coding in trained Deep Neural Networks | Type | Abstract | |||
Year | 2016 | Publication | European Conference on Visual Perception | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | ||||||
Address | Barcelona; Spain; August 2016 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ECVP | |||
Notes | CIC | Approved | no | |||
Call Number | Admin @ si @ RaV2016b | Serial | 2895 | |||
Author | Ivet Rafegas; Maria Vanrell |
Title | Color representation in CNNs: parallelisms with biological vision | Type | Conference Article | |||
Year | 2017 | Publication | ICCV Workshop on Mutual Benefits of Cognitive and Computer Vision | Abbreviated Journal ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | Convolutional Neural Networks (CNNs) trained for object recognition tasks present representational capabilities approaching those of primate visual systems [1]. This provides a computational framework to explore how image features are efficiently represented. Here, we dissect a trained CNN [2] to study how color is represented. We use a classical methodology from physiology: measuring the selectivity index of individual neurons to specific features. We use ImageNet [20] images and synthetic versions of them to quantify the color tuning properties of artificial neurons and provide a classification of the network population. We identify three main levels of color representation showing some parallelisms with biological visual systems: (a) a decomposition in a circular hue space to represent single-color regions, with a wider hue sampling beyond the first layer (V2); (b) the emergence of opponent low-dimensional spaces in early stages to represent color edges (V1); and (c) a strong entanglement between color and shape patterns representing object parts (e.g. the wheel of a car), object shapes (e.g. faces) or object-surround configurations (e.g. blue sky surrounding an object) in deeper layers (V4 or IT). |
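The abstract above mentions opponent low-dimensional spaces emerging in early layers. For reference, one common opponent colour transform used in computer vision (a standard construction, not necessarily the exact axes the paper recovers) maps RGB onto red-green, yellow-blue and intensity axes:

```python
import math

def rgb_to_opponent(r, g, b):
    """One common orthonormal opponent colour transform: two chromatic
    axes (red-green, yellow-blue) plus one achromatic intensity axis."""
    o1 = (r - g) / math.sqrt(2)            # red-green opponency
    o2 = (r + g - 2 * b) / math.sqrt(6)    # yellow-blue opponency
    o3 = (r + g + b) / math.sqrt(3)        # achromatic intensity
    return o1, o2, o3

# A gray pixel carries no energy on the chromatic-opponent axes:
print(rgb_to_opponent(0.5, 0.5, 0.5))
```

A neuron responding only along `o1`/`o2` while ignoring `o3` would be a color-edge detector in the sense described above.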
Address | Venice; Italy; October 2017 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ICCV-MBCC | |||
Notes | CIC; 600.087; 600.051 | Approved | no | |||
Call Number | Admin @ si @ RaV2017 | Serial | 2984 | |||
Author | Ivet Rafegas |
Title | Color in Visual Recognition: from flat to deep representations and some biological parallelisms | Type | Book Whole | |||
Year | 2017 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | Visual recognition is one of the main problems in computer vision; it attempts to solve image understanding by deciding which objects appear in images. The problem can be approached computationally by using relevant sets of visual features, such as edges, corners, color, or more complex object parts. This thesis contributes to how color features should be represented for recognition tasks.
Image features can be extracted following two different approaches. The first defines handcrafted image descriptors, followed by a learning scheme to classify the content (named flat schemes in Kruger et al. (2013)). In this approach, perceptual considerations are habitually used to define efficient color features. Here we propose a new flat color descriptor based on the extension of color channels to boost the representation of spatio-chromatic contrast, which surpasses state-of-the-art approaches. However, flat schemes lack generality and fall far short of the capabilities of biological systems. The second approach evolves these flat schemes into a hierarchical process, as in the visual cortex, including an automatic process to learn optimal features. Such deep schemes, and more specifically Convolutional Neural Networks (CNNs), have shown impressive performance on various vision problems. However, the internal representations obtained through automatic learning remain poorly understood. In this thesis we propose a new methodology to explore the internal representation of trained CNNs by defining the Neuron Feature as a visualization of the intrinsic features encoded in each individual neuron. Additionally, and inspired by physiological techniques, we propose to compute different neuron selectivity indexes (e.g., color, class, orientation or symmetry, amongst others) to label and classify the full CNN neuron population in order to understand the learned representations. Finally, using the proposed methodology, we present an in-depth study of how color is represented in a specific CNN, trained for object recognition, that competes with primate representational abilities (Cadieu et al. (2014)).
We found several parallelisms with biological visual systems: (a) a significant number of color-selective neurons throughout all the layers; (b) an opponent and low-frequency representation of color-oriented edges, with a higher sampling of frequency selectivity in brightness than in color in the first layer, as in V1; (c) a higher sampling of color hue in the second layer, aligned with observed hue maps in V2; (d) a strong color and shape entanglement in all layers, from basic features in shallower layers (V1 and V2) to object and background shapes in deeper layers (V4 and IT); and (e) a strong correlation between neuron color selectivities and color dataset bias. |
Address | November 2017 | |||||
Corporate Author | Thesis | Ph.D. thesis | ||||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Maria Vanrell | ||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | 978-84-945373-7-0 | Medium | |||
Area | Expedition | Conference | ||||
Notes | CIC | Approved | no | |||
Call Number | Admin @ si @ Raf2017 | Serial | 3100 | |||
Author | Bojana Gajic; Ariel Amato; Ramon Baldrich; Carlo Gatta |
Title | Bag of Negatives for Siamese Architectures | Type | Conference Article | |||
Year | 2019 | Publication | 30th British Machine Vision Conference | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | Training a Siamese architecture for re-identification with a large number of identities is a challenging task due to the difficulty of finding relevant negative samples efficiently. In this work we present Bag of Negatives (BoN), a method for accelerated and improved training of Siamese networks that scales well on datasets with a very large number of identities. BoN is an efficient and loss-independent method, able to select a bag of high quality negatives, based on a novel online hashing strategy. | |||||
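The abstract describes selecting negatives via an online hashing strategy. A toy sketch of the general idea (the hash function and bucket layout here are illustrative assumptions, not the BoN algorithm): hash embeddings into coarse buckets, then draw negatives for an anchor from its own bucket, i.e. similar samples that carry a different identity:

```python
import random

def bucket_of(embedding, thresholds):
    # A trivial binary hash: one bit per dimension against a threshold.
    return tuple(int(x > t) for x, t in zip(embedding, thresholds))

def sample_negatives(anchor_id, anchor_emb, index, thresholds, k=2):
    """index: dict mapping bucket -> list of (identity, embedding).
    Returns up to k identities sharing the anchor's bucket but not its id."""
    bucket = index.get(bucket_of(anchor_emb, thresholds), [])
    candidates = [ident for ident, _ in bucket if ident != anchor_id]
    return random.sample(candidates, min(k, len(candidates)))

thresholds = [0.5, 0.5]
index = {}
for ident, emb in [("a", [0.9, 0.1]), ("b", [0.8, 0.2]), ("c", [0.1, 0.9])]:
    index.setdefault(bucket_of(emb, thresholds), []).append((ident, emb))

# "b" shares a bucket with anchor "a" (a hard negative); "c" does not:
print(sample_negatives("a", [0.9, 0.1], index, thresholds))  # ['b']
```

The appeal of this scheme is that bucket lookup is O(1) per anchor, so negative mining cost does not grow with the number of identities.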
Address | Cardiff; United Kingdom; September 2019 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | BMVC | |||
Notes | CIC; 600.140; 600.118 | Approved | no | |||
Call Number | Admin @ si @ GAB2019b | Serial | 3263 | |||
Author | Hassan Ahmed Sial; Ramon Baldrich; Maria Vanrell; Dimitris Samaras |
Title | Light Direction and Color Estimation from Single Image with Deep Regression | Type | Conference Article | |||
Year | 2020 | Publication | London Imaging Conference | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | We present a method to estimate the direction and color of the scene light source from a single image. Our method is based on two main ideas: (a) we use a new synthetic dataset with strong shadow effects with similar constraints to the SID dataset; (b) we define a deep architecture trained on the mentioned dataset to estimate the direction and color of the scene light source. Apart from showing good performance on synthetic images, we additionally propose a preliminary procedure to obtain light positions of the Multi-Illumination dataset, and, in this way, we also prove that our trained model achieves good performance when it is applied to real scenes. | |||||
Address | Virtual; September 2020 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | LIM | |||
Notes | CIC; 600.118; 600.140 | Approved | no |
Call Number | Admin @ si @ SBV2020 | Serial | 3460 | |||
Author | Sagnik Das; Hassan Ahmed Sial; Ke Ma; Ramon Baldrich; Maria Vanrell; Dimitris Samaras |
Title | Intrinsic Decomposition of Document Images In-the-Wild | Type | Conference Article | |||
Year | 2020 | Publication | 31st British Machine Vision Conference | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | Automatic document content processing is affected by artifacts caused by the shape of the paper and by non-uniform, diversely colored lighting conditions. Fully supervised methods on real data are infeasible due to the large amount of data needed, so the current state-of-the-art deep learning models are trained on fully or partially synthetic images. However, document shadow or shading removal results still suffer because: (a) prior methods rely on the uniformity of local color statistics, which limits their application to real scenarios with complex document shapes and textures; and (b) synthetic or hybrid datasets with non-realistic, simulated lighting conditions are used to train the models. In this paper we tackle these problems with two main contributions. First, a physically constrained learning-based method that directly estimates document reflectance based on intrinsic image formation and generalizes to challenging illumination conditions. Second, a new dataset that clearly improves on previous synthetic ones by adding a large range of realistic shading and diverse multi-illuminant conditions, uniquely customized to deal with documents in the wild. The proposed architecture works in two steps. First, a white-balancing module neutralizes the color of the illumination in the input image; based on the proposed multi-illuminant dataset, we achieve good white balancing even in very difficult conditions. Second, a shading-separation module accurately disentangles the shading and paper material in a self-supervised manner, where only the synthetic texture is used as a weak training signal (obviating the need for very costly ground truth with disentangled versions of shading and reflectance). The proposed approach leads to significant generalization of document reflectance estimation in real scenes with challenging illumination. We evaluate extensively on the real benchmark datasets available for intrinsic image decomposition and document shadow removal. Our reflectance estimation scheme, when used as a pre-processing step of an OCR pipeline, shows a 21% improvement in character error rate (CER), proving its practical applicability. The data and code will be available at: https://github.com/cvlab-stonybrook/DocIIW. |
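The reflectance estimation above rests on the intrinsic image model, image = reflectance × shading, applied per pixel. A toy sketch of the recovery step given a shading estimate (the clamping epsilon is an illustrative assumption to avoid division by zero; the paper's method estimates reflectance with a learned network rather than by direct division):

```python
def recover_reflectance(image, shading, eps=1e-6):
    """image, shading: 2-D lists of per-pixel intensities.
    Inverts image = reflectance * shading pixel by pixel."""
    return [[px / max(s, eps) for px, s in zip(row_i, row_s)]
            for row_i, row_s in zip(image, shading)]

# A uniformly 0.4/0.8 page seen under varying shading:
image   = [[0.2, 0.4], [0.1, 0.8]]
shading = [[0.5, 0.5], [0.25, 1.0]]
print(recover_reflectance(image, shading))  # [[0.4, 0.8], [0.4, 0.8]]
```

Note how the recovered reflectance is shading-free: the darkened corner (0.1 under 0.25 shading) maps back to the same paper value as the well-lit one.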
Address | Virtual; September 2020 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | BMVC | |||
Notes | CIC; 600.087; 600.140; 600.118 | Approved | no | |||
Call Number | Admin @ si @ DSM2020 | Serial | 3461 | |||
Author | Hassan Ahmed Sial |
Title | Estimating Light Effects from a Single Image: Deep Architectures and Ground-Truth Generation | Type | Book Whole | |||
Year | 2021 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | In this thesis we explore how to estimate, from a single image, the effects of light interacting with the objects in a scene. To achieve this goal, we focus on recovering intrinsic components such as reflectance and shading, or light properties such as color and position, using deep architectures. The success of these approaches relies on training on large and diversified image datasets. We therefore present several contributions: (a) a data-augmentation technique; (b) a ground truth for an existing multi-illuminant dataset; (c) a family of synthetic datasets, SID (Surreal Intrinsic Datasets), with diversified backgrounds and coherent light conditions; and (d) a practical pipeline to create hybrid ground truths that overcomes the complexity of acquiring realistic light conditions at scale. In parallel with the creation of the datasets, we trained different flexible encoder-decoder deep architectures incorporating physical constraints from image formation models.
In the last part of the thesis we apply all of this experience to two different problems. First, we create a large hybrid dataset, Doc3DShade, with real shading and synthetic reflectance under complex illumination conditions, which is used to train a two-stage architecture that improves character recognition on unwrapped documents in complex lighting conditions. Second, we tackle the problem of single-image scene relighting by extending both the SID dataset, to present stronger shading and shadow effects, and the deep architectures, to use intrinsic components to estimate newly relit images. |
Address | September 2021 | |||||
Corporate Author | Thesis | Ph.D. thesis | ||||
Publisher | IMPRIMA | Place of Publication | Editor | Maria Vanrell;Ramon Baldrich | ||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | 978-84-122714-8-5 | Medium | |||
Area | Expedition | Conference | ||||
Notes | CIC | Approved | no |
Call Number | Admin @ si @ Sia2021 | Serial | 3607 | |||
Author | Trevor Canham; Javier Vazquez; D Long; Richard F. Murray; Michael S Brown |
Title | Noise Prism: A Novel Multispectral Visualization Technique | Type | Journal Article | |||
Year | 2021 | Publication | 31st Color and Imaging Conference | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | A novel technique for visualizing multispectral images is proposed. Inspired by how prisms work, our method spreads spectral information over a chromatic noise pattern. This is accomplished by populating the pattern with pixels representing each measurement band, at a count proportional to its measured intensity. The method is advantageous because it allows lightweight encoding and visualization of spectral information while maintaining the color appearance of the stimulus. A four-alternative forced-choice (4AFC) experiment was conducted to validate the method's information-carrying capacity in displaying metameric stimuli of varying colors and spectral basis functions. The scores ranged from 100% to 20% (less than chance given the 4AFC task), with many conditions falling somewhere in between at statistically significant intervals. Using these data, color and texture difference metrics can be evaluated and optimized to predict the legibility of the visualization technique. |
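The population step described in the abstract, assigning each measurement band a pixel count proportional to its intensity, can be sketched as follows; the rounding scheme (give the leftover pixels to the dominant band) is an illustrative assumption:

```python
def band_counts(intensities, n_pixels):
    """Allocate n_pixels among measurement bands proportionally to
    their measured intensities; leftover pixels from integer rounding
    go to the strongest band (an illustrative choice)."""
    total = sum(intensities)
    counts = [int(n_pixels * v / total) for v in intensities]
    counts[counts.index(max(counts))] += n_pixels - sum(counts)
    return counts

# Three spectral bands filling a 10x10 noise patch:
print(band_counts([1, 5, 4], 100))  # [10, 50, 40]
```

The actual visualization would then scatter pixels of each band's display colour at these counts across the patch, which is what preserves the overall colour appearance of the stimulus.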
Address | ||||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | CIC | |||
Notes | MACO; CIC | Approved | no | |||
Call Number | Admin @ si @ CVL2021 | Serial | 4000 | |||
Author | Josep M. Gonfaus; Xavier Boix; Joost Van de Weijer; Andrew Bagdanov; Joan Serrat; Jordi Gonzalez |
Title | Harmony Potentials for Joint Classification and Segmentation | Type | Conference Article | |||
Year | 2010 | Publication | 23rd IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | ||
Volume | Issue | Pages | 3280–3287 | |||
Keywords | ||||||
Abstract | Hierarchical conditional random fields have been successfully applied to object segmentation. One reason is their ability to incorporate contextual information at different scales. However, these models do not allow multiple labels to be assigned to a single node. At higher scales in the image, this yields an oversimplified model, since multiple classes can reasonably be expected to appear within one region. This simplified model especially limits the impact that observations at larger scales may have on the CRF model. Neglecting the information at larger scales is undesirable, since class-label estimates based on these scales are more reliable than those at smaller, noisier scales. To address this problem, we propose a new potential, called the harmony potential, which can encode any possible combination of class labels. We propose an effective sampling strategy that renders the underlying optimization problem tractable. Results show that our approach obtains state-of-the-art results on two challenging datasets: Pascal VOC 2009 and MSRC-21. |
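The harmony potential's key property, as described above, is that a regional node may carry any *set* of class labels rather than a single one. A minimal sketch of such a potential (a drastic simplification of the paper's CRF energy, for intuition only):

```python
def harmony_potential(pixel_label, region_label_set, penalty=1.0):
    """Score a pixel-level label against the set of labels the enclosing
    regional node allows: in harmony -> no cost, otherwise a penalty.
    A single-label regional node is the special case |set| == 1."""
    return 0.0 if pixel_label in region_label_set else penalty

# A large region may plausibly contain several classes at once:
region = {"sky", "tree"}
print(harmony_potential("sky", region))   # 0.0 (consistent)
print(harmony_potential("car", region))   # 1.0 (penalized)
```

Because the region node ranges over label subsets, the number of states is exponential in the number of classes, which is why the paper needs a dedicated sampling strategy to keep optimization tractable.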
Address | San Francisco CA, USA | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | 1063-6919 | ISBN | 978-1-4244-6984-0 | Medium | ||
Area | Expedition | Conference | CVPR | |||
Notes | ADAS;CIC;ISE | Approved | no | |||
Call Number | ADAS @ adas @ GBW2010 | Serial | 1296 | |||
Author | Naila Murray; Sandra Skaff; Luca Marchesotti; Florent Perronnin |
Title | Towards Automatic Concept Transfer | Type | Conference Article | |||
Year | 2011 | Publication | Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering | Abbreviated Journal | ||
Volume | Issue | Pages | 167–176 |||
Keywords | chromatic modeling, color concepts, color transfer, concept transfer | |||||
Abstract | This paper introduces a novel approach to automatic concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study. | |||||
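The mapping between the input image's chromatic model and the target concept's uses the Earth-Mover's Distance. As a reference point, the EMD between two normalized 1-D histograms (e.g. over hue bins; a simplification of the paper's multi-dimensional cluster models) reduces to a cumulative-sum formula:

```python
def emd_1d(p, q):
    """Earth-Mover's Distance between two normalized histograms over the
    same ordered bins, in units of bin width. For 1-D distributions the
    EMD equals the sum of absolute cumulative differences."""
    emd, cum = 0.0, 0.0
    for pi, qi in zip(p, q):
        cum += pi - qi   # running surplus of mass to move rightward
        emd += abs(cum)
    return emd

# Shifting all mass by one bin costs exactly one bin-width of work:
print(emd_1d([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 1.0
```

For the full cluster-to-cluster mapping the paper describes, EMD is computed between weighted point sets rather than fixed bins, but the "minimum work to move mass" interpretation is the same.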
Address | ||||||
Corporate Author | Thesis | |||||
Publisher | ACM Press | Place of Publication | Editor | |||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | 978-1-4503-0907-3 | Medium | |||
Area | Expedition | Conference | NPAR | |||
Notes | CIC | Approved | no | |||
Call Number | Admin @ si @ MSM2011 | Serial | 1866 | |||
Author | Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell |
Title | Top-Down Color Attention for Object Recognition | Type | Conference Article | |||
Year | 2009 | Publication | 12th International Conference on Computer Vision | Abbreviated Journal | ||
Volume | Issue | Pages | 979 - 986 | |||
Keywords | ||||||
Abstract | Generally, the bag-of-words image representation follows a bottom-up paradigm: the subsequent stages of the process (feature detection, feature description, vocabulary construction and image representation) are performed independently of the intended object classes to be detected. In such a framework, combining multiple cues such as shape and color often yields below-expected results. This paper presents a novel method for recognizing object categories from multiple cues by separating the shape and color cues. Color is used to guide attention by means of a top-down, category-specific attention map. The color attention map is then deployed to modulate the shape features, taking more features from regions within an image that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three datasets with varied importance of the two cues: Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that on all three datasets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information. |
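The modulation step described above, taking more shape features from regions likely to contain the object, amounts to weighting each local feature's vote by a category-specific colour attention value at its location. A toy sketch (the data and attention values are illustrative, not the paper's learned model):

```python
def attention_histogram(words, attention, n_words):
    """Build a category-specific bag-of-words histogram: each local
    feature's visual word votes with weight equal to the colour
    attention at that feature's location, instead of counting 1."""
    hist = [0.0] * n_words
    for w, a in zip(words, attention):
        hist[w] += a
    return hist

words     = [0, 1, 1, 2]           # visual-word index of each local shape feature
attention = [0.1, 0.5, 0.25, 0.2]  # top-down colour attention for one category
print(attention_histogram(words, attention, 3))  # [0.1, 0.75, 0.2]
```

Repeating this with a different category's attention map yields a different histogram from the same features, which is exactly what makes the representation category-specific.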
Address | Kyoto, Japan | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | 1550-5499 | ISBN | 978-1-4244-4420-5 | Medium | ||
Area | Expedition | Conference | ICCV | |||
Notes | CIC | Approved | no | |||
Call Number | CAT @ cat @ SWV2009 | Serial | 1196 | |||
Author | Adria Ruiz; Joost Van de Weijer; Xavier Binefa |
Title | Regularized Multi-Concept MIL for weakly-supervised facial behavior categorization | Type | Conference Article | |||
Year | 2014 | Publication | 25th British Machine Vision Conference | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | We address the problem of estimating high-level semantic labels for videos of recorded people by analysing their facial expressions. This problem, which we refer to as facial behavior categorization, is a weakly-supervised learning problem where we do not have access to frame-by-frame facial gesture annotations; only weak labels at the video level are available. The goal is therefore to learn a set of discriminative expressions and how they determine the video weak-labels. Facial behavior categorization can be posed as a Multi-Instance Learning (MIL) problem, and we propose a novel MIL method called Regularized Multi-Concept MIL (RMC-MIL) to solve it. In contrast to previous approaches applied in facial behavior analysis, RMC-MIL follows a Multi-Concept assumption which allows different facial expressions (concepts) to contribute differently to the video label. Moreover, to handle the high-dimensional nature of facial descriptors, RMC-MIL uses a discriminative approach to model the concepts and structured-sparsity regularization to discard non-informative features. RMC-MIL is posed as a convex-constrained optimization problem where all the parameters are jointly learned using the Projected Quasi-Newton method. In our experiments, we use two public datasets to show the advantages of the Regularized Multi-Concept approach and its improvement over existing MIL methods. RMC-MIL outperforms state-of-the-art results on the UNBC dataset for pain detection. |
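The Multi-Concept assumption, where several discriminative expressions contribute differently to the video label, can be sketched with a toy max-pooling score per concept (a simplification for intuition; RMC-MIL learns these parameters jointly via constrained convex optimization, which this sketch does not attempt):

```python
def bag_score(frames, concept_weights, concept_votes):
    """frames: per-frame descriptors (a video = a bag of instances).
    Each concept detector contributes the score of its best-matching
    frame (max-pooling, the classic MIL assumption), and the concept
    contributions are combined with signed per-concept votes."""
    score = 0.0
    for w, vote in zip(concept_weights, concept_votes):
        best = max(sum(wi * xi for wi, xi in zip(w, x)) for x in frames)
        score += vote * best
    return score

frames   = [[1.0, 0.0], [0.0, 1.0]]   # two frames of one video
concepts = [[2.0, 0.0], [0.0, 3.0]]   # two linear concept detectors
# The second concept votes *against* the label (negative contribution):
print(bag_score(frames, concepts, [1.0, -0.5]))  # 2.0 - 1.5 = 0.5
```

Allowing negative votes is what distinguishes the multi-concept setting from plain MIL, where a single positive instance decides the bag label.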
Address | Nottingham; UK; September 2014 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | BMVC | |||
Notes | LAMP; CIC; 600.074; 600.079 | Approved | no | |||
Call Number | Admin @ si @ RWB2014 | Serial | 2508 | |||
Author | Joost Van de Weijer; Shida Beigpour |
Title | The Dichromatic Reflection Model: Future Research Directions and Applications | Type | Conference Article | |||
Year | 2011 | Publication | International Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | dblp | |||||
Abstract | The dichromatic reflection model (DRM) predicts that color distributions form a parallelogram in color space, whose shape is defined by the body reflectance and the illuminant color. In this paper we review the assumptions that led to the DRM and briefly recall two of its main application domains: color image segmentation and photometric invariant feature computation. After introducing the model we discuss several limitations of the theory, especially those that arise when working on real-world uncalibrated images. In addition, we summarize recent extensions of the model which allow it to handle more complicated light interactions. Finally, we suggest some future research directions that would further extend its applicability. |
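The parallelogram prediction above follows from writing each pixel as a non-negative mix of body (diffuse) and interface (specular, illuminant-colored) reflection. A toy generator under this model (the specific colours are chosen for illustration):

```python
def drm_pixel(m_body, m_interface, c_body, c_illuminant):
    """Dichromatic reflection model: pixel = m_b * c_b + m_i * c_i.
    Varying the two geometric factors m_b, m_i sweeps out the
    parallelogram spanned by the body and illuminant colours."""
    return tuple(m_body * b + m_interface * i
                 for b, i in zip(c_body, c_illuminant))

c_body = (0.8, 0.2, 0.1)   # reddish matte surface colour
c_ill  = (1.0, 1.0, 1.0)   # white illuminant
print(drm_pixel(0.5, 0.0, c_body, c_ill))  # pure body reflection
print(drm_pixel(0.5, 0.5, c_body, c_ill))  # body + specular highlight
```

Segmentation and photometric-invariant features both exploit this structure: pixels of one surface under one light stay inside a single parallelogram, whatever the shading and highlights.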
Address | Algarve, Portugal | |||||
Corporate Author | Thesis | |||||
Publisher | SciTePress | Place of Publication | Editor | Mestetskiy, Leonid and Braz, José | ||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | 978-989-8425-47-8 | Medium | |||
Area | Expedition | Conference | VISIGRAPP | |||
Notes | CIC | Approved | no | |||
Call Number | Admin @ si @ WeB2011 | Serial | 1778 | |||
Author | Maria Vanrell; Naila Murray; Robert Benavente; C. Alejandro Parraga; Xavier Otazu; Ramon Baldrich |
Title | Perception Based Representations for Computational Colour | Type | Conference Article | |||
Year | 2011 | Publication | 3rd International Workshop on Computational Color Imaging | Abbreviated Journal | ||
Volume | 6626 | Issue | Pages | 16-30 | ||
Keywords | colour perception, induction, naming, psychophysical data, saliency, segmentation | |||||
Abstract | The perceived colour of a stimulus depends on multiple factors stemming either from the context of the stimulus or from idiosyncrasies of the observer. The complexity involved in combining these multiple effects is the main reason for the gap between the classical calibrated colour spaces of colour science and the colour representations used in computer vision, where colour is just one more visual cue immersed in a digital image in which surfaces, shadows and illuminants interact seemingly out of control. With the aim of advancing a few steps towards bridging this gap, we present some results on computational representations of colour for computer vision. They have been developed by introducing perceptual considerations derived from the interaction of the colour of a point with its context. We show some techniques to represent the colour of a point as influenced by assimilation and contrast effects due to the image surround, and we show some results on how colour saliency can be derived in real images. We outline a model for the automatic assignment of colour names to image points, trained directly on psychophysical data. Finally, we show how colour segments can be perceptually grouped in the image by imposing shading coherence in the colour space. |
Address | Milan, Italy | |||||
Corporate Author | Thesis | |||||
Publisher | Springer-Verlag | Place of Publication | Editor | Raimondo Schettini, Shoji Tominaga, Alain Trémeau | ||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | LNCS | |||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | 978-3-642-20403-6 | Medium | |||
Area | Expedition | Conference | CCIW | |||
Notes | CIC | Approved | no | |||
Call Number | Admin @ si @ VMB2011 | Serial | 1733 | |||
Author | Naila Murray; Sandra Skaff; Luca Marchesotti; Florent Perronnin |
Title | Towards automatic and flexible concept transfer | Type | Journal Article | |||
Year | 2012 | Publication | Computers and Graphics | Abbreviated Journal | CG | |
Volume | 36 | Issue | 6 | Pages | 622–634 | |
Keywords | ||||||
Abstract | This paper introduces a novel approach to automatic, yet flexible, image concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The presented method modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This method is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. Our framework is flexible for two reasons. First, the user may select one of two modalities to map input image chromaticities to target concept chromaticities, depending on the level of photo-realism required. Second, the user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed method uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study. | |||||
Address | ||||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | 0097-8493 | ISBN | Medium | |||
Area | Expedition | Conference | ||||
Notes | CIC | Approved | no | |||
Call Number | Admin @ si @ MSM2012 | Serial | 2002 | |||
Permanent link to this record | ||||||
Author | Robert Benavente; C. Alejandro Parraga; Maria Vanrell |
|
||||
Title | La influencia del contexto en la definicion de las fronteras entre las categorias cromaticas | Type | Conference Article | |||
Year | 2010 | Publication | 9th Congreso Nacional del Color | Abbreviated Journal | ||
Volume | Issue | Pages | 92–95 | |||
Keywords | Color categorization; Color appearance; Context influence; Mondrian patterns; Parametric models |||||
Abstract | In this paper we present the results of a color categorization experiment in which the samples were presented on a multicolored (Mondrian) background to simulate the effects of context. The results are compared with those of a previous experiment which, using a different paradigm, determined the boundaries without taking context into account. The analysis of the results shows that the boundaries obtained in the in-context experiment present less confusion than those obtained in the experiment without context. | |||||
Address | Alicante (Spain) | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | 978-84-9717-144-1 | Medium | |||
Area | Expedition | Conference | CNC | |||
Notes | CIC | Approved | no | |||
Call Number | CAT @ cat @ BPV2010 | Serial | 1327 | |||
Permanent link to this record | ||||||
Author | Javier Vazquez; Maria Vanrell; Ramon Baldrich; Francesc Tous |
|
||||
Title | Color Constancy by Category Correlation | Type | Journal Article | |||
Year | 2012 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP | |
Volume | 21 | Issue | 4 | Pages | 1997-2007 | |
Keywords | ||||||
Abstract | Finding color representations which are stable to illuminant changes is still an open problem in computer vision. Until now, most approaches have been based on physical constraints or statistical assumptions derived from the scene, while very little attention has been paid to the effects that the selected illuminants have on the final color image representation. The novelty of this work is to propose perceptual constraints that are computed on the corrected images. We define the category hypothesis, which weights the set of feasible illuminants according to their ability to map the corrected image onto specific colors. Here we choose these colors as the universal color categories related to basic linguistic terms, which have been psychophysically measured. These color categories encode natural color statistics, and their relevance across different cultures is indicated by the fact that they have received a common color name. From this category hypothesis we propose a fast implementation that allows the sampling of a large set of illuminants. Experiments prove that our method rivals current state-of-the-art performance without the need for training algorithmic parameters. Additionally, the method can be used as a framework to insert top-down information from other sources, thus opening further research directions in solving for color constancy. | |||||
Address | ||||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | 1057-7149 | ISBN | Medium | |||
Area | Expedition | Conference | ||||
Notes | CIC | Approved | no | |||
Call Number | Admin @ si @ VVB2012 | Serial | 1999 | |||
Permanent link to this record |