Author: Javier Vazquez; Maria Vanrell; Ramon Baldrich
Title: Towards a Psychophysical Evaluation of Colour Constancy Algorithms
Type: Conference Article
Year: 2008
Publication: 4th European Conference on Colour in Graphics, Imaging and Vision Proceedings
Pages: 372–377
Address: Terrassa (Spain)
Conference: CGIV08
Notes: CAT; CIC
Approved: no
Call Number: CAT @ cat @ VVB2008a
Serial: 968
 

 
Author: C. Alejandro Parraga; Robert Benavente; Maria Vanrell; Ramon Baldrich
Title: Modelling Inter-Colour Regions of Colour Naming Space
Type: Conference Article
Year: 2008
Publication: 4th European Conference on Colour in Graphics, Imaging and Vision Proceedings
Pages: 218–222
Address: Terrassa (Spain)
Conference: CGIV08
Notes: CAT; CIC
Approved: no
Call Number: CAT @ cat @ PBV2008
Serial: 969
 

 
Author: Hassan Ahmed Sial; S. Sancho; Ramon Baldrich; Robert Benavente; Maria Vanrell
Title: Color-based data augmentation for Reflectance Estimation
Type: Conference Article
Year: 2018
Publication: 26th Color Imaging Conference
Pages: 284-289
Abstract: Deep convolutional architectures have proven to be successful frameworks for solving generic computer vision problems, yet the estimation of intrinsic reflectance from a single image is not a solved problem. Encoder-decoder architectures are a natural fit for pixel-wise reflectance estimation, although they usually suffer from the lack of large datasets. This lack of data can be partially addressed with data augmentation; however, the usual techniques focus on geometric changes, which do not help reflectance estimation. In this paper we propose a color-based data augmentation technique that extends the training data by increasing the variability of chromaticity. Rotations in the red-green/blue-yellow plane of an opponent space enlarge the training set in a coherent and sound way that improves the network's generalization capability for reflectance estimation. Experiments on the Sintel dataset show that our color-based augmentation increases performance and outperforms one of the state-of-the-art methods.
Address: Vancouver; November 2018
Conference: CIC
Notes: CIC
Approved: no
Call Number: Admin @ si @ SSB2018a
Serial: 3129
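The augmentation described in the abstract above amounts to rotating the chromatic plane of an opponent colour space. The sketch below illustrates that idea in plain NumPy, assuming a generic textbook RGB-to-opponent matrix; it is an illustration of the technique, not the authors' implementation.

import numpy as np

# Generic RGB -> opponent transform: O1 is an achromatic axis, O2 a red-green
# axis, O3 a blue-yellow axis. This particular matrix is a textbook choice
# assumed here for illustration only.
RGB2OPP = np.array([[1/3,  1/3,  1/3],
                    [1/2, -1/2,  0.0],
                    [1/4,  1/4, -1/2]])
OPP2RGB = np.linalg.inv(RGB2OPP)

def hue_rotate(image, angle_rad):
    # image: float array of shape (H, W, 3) with values in [0, 1].
    opp = image.reshape(-1, 3) @ RGB2OPP.T            # to opponent space
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[1.0, 0.0, 0.0],                  # leave the achromatic axis untouched
                    [0.0,   c,  -s],
                    [0.0,   s,   c]])
    opp = opp @ rot.T                                 # rotate the red-green/blue-yellow plane
    rgb = opp @ OPP2RGB.T                             # back to RGB
    return np.clip(rgb, 0.0, 1.0).reshape(image.shape)

# Usage: each training image yields extra chromatic variants.
# augmented = hue_rotate(img, np.random.uniform(0.0, 2.0 * np.pi))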
 

 
Author: Ivet Rafegas; Maria Vanrell
Title: Color representation in CNNs: parallelisms with biological vision
Type: Conference Article
Year: 2017
Publication: ICCV Workshop on Mutual Benefits of Cognitive and Computer Vision
Abstract: Convolutional Neural Networks (CNNs) trained for object recognition tasks present representational capabilities approaching those of primate visual systems [1]. This provides a computational framework to explore how image features are efficiently represented. Here, we dissect a trained CNN [2] to study how color is represented. We use a classical methodology from physiology: measuring the selectivity index of individual neurons to specific features. We use ImageNet dataset [20] images and synthetic versions of them to quantify the color tuning properties of artificial neurons and to classify the network population. We conclude that there are three main levels of color representation showing parallelisms with biological visual systems: (a) a decomposition in a circular hue space to represent single-color regions, with a wider hue sampling beyond the first layer (V2); (b) the emergence of opponent low-dimensional spaces in early stages to represent color edges (V1); and (c) a strong entanglement between color and shape patterns representing object parts (e.g. the wheel of a car), object shapes (e.g. faces) or object-surround configurations (e.g. blue sky surrounding an object) in deeper layers (V4 or IT).
Address: Venice; Italy; October 2017
Conference: ICCV-MBCC
Notes: CIC; 600.087; 600.051
Approved: no
Call Number: Admin @ si @ RaV2017
Serial: 2984
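The per-neuron selectivity measurement described in the abstract above can be sketched as follows. Comparing a neuron's activations on colour images against grayscale versions of the same images is one plausible way to define a colour selectivity index; it is an assumed formulation for illustration, not necessarily the exact index used in the paper.

import numpy as np

def color_selectivity_index(act_color, act_gray, eps=1e-8):
    # act_color: activations of one neuron on its top-scoring colour images.
    # act_gray:  activations of the same neuron on grayscale versions of them.
    # The index is 1 when the response vanishes without colour and 0 when
    # removing colour leaves the response unchanged.
    a_color = max(float(np.sum(act_color)), eps)
    a_gray = max(float(np.sum(act_gray)), 0.0)
    return float(np.clip(1.0 - a_gray / a_color, 0.0, 1.0))

# Usage: neurons with an index close to 1 are labelled colour-selective,
# neurons close to 0 respond mainly to achromatic structure.
# idx = color_selectivity_index(np.array([2.3, 1.9]), np.array([0.4, 0.2]))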
 

 
Author: Hassan Ahmed Sial; Ramon Baldrich; Maria Vanrell; Dimitris Samaras
Title: Light Direction and Color Estimation from Single Image with Deep Regression
Type: Conference Article
Year: 2020
Publication: London Imaging Conference
Abstract: We present a method to estimate the direction and color of the scene light source from a single image. Our method is based on two main ideas: (a) we use a new synthetic dataset with strong shadow effects and constraints similar to the SID dataset; (b) we define a deep architecture trained on this dataset to estimate the direction and color of the scene light source. Apart from showing good performance on synthetic images, we additionally propose a preliminary procedure to obtain light positions for the Multi-Illumination dataset and, in this way, also show that our trained model achieves good performance when applied to real scenes.
Address: Virtual; September 2020
Conference: LIM
Notes: CIC; 600.118; 600.140
Approved: no
Call Number: Admin @ si @ SBV2020
Serial: 3460
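As a rough illustration of the deep-regression setup described in the abstract above, the sketch below shows a toy convolutional encoder with two regression heads, one for a unit-norm light direction and one for an RGB light colour. All layer sizes and loss choices are assumptions, not the authors' architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LightRegressor(nn.Module):
    # Toy encoder plus two regression heads: a unit-norm light direction
    # and an RGB light colour in [0, 1]. Layer sizes are placeholders.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.direction_head = nn.Linear(128, 3)
        self.color_head = nn.Linear(128, 3)

    def forward(self, x):
        feat = self.encoder(x).flatten(1)
        direction = F.normalize(self.direction_head(feat), dim=1)  # unit 3-vector
        color = torch.sigmoid(self.color_head(feat))               # RGB in [0, 1]
        return direction, color

# Usage: training would regress against ground-truth light parameters,
# e.g. a cosine loss on the direction and an L2 loss on the colour.
# model = LightRegressor()
# d, c = model(torch.rand(4, 3, 128, 128))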
 

 
Author: Sagnik Das; Hassan Ahmed Sial; Ke Ma; Ramon Baldrich; Maria Vanrell; Dimitris Samaras
Title: Intrinsic Decomposition of Document Images In-the-Wild
Type: Conference Article
Year: 2020
Publication: 31st British Machine Vision Conference
Abstract: Automatic document content processing is affected by artifacts caused by the shape of the paper and by non-uniform, diversely colored lighting conditions. Fully supervised methods on real data are infeasible because of the large amount of data needed, so current state-of-the-art deep learning models are trained on fully or partially synthetic images. However, document shadow or shading removal results still suffer because: (a) prior methods rely on the uniformity of local color statistics, which limits their application to real scenarios with complex document shapes and textures; and (b) synthetic or hybrid datasets with non-realistic, simulated lighting conditions are used to train the models. In this paper we tackle these problems with two main contributions. First, a physically constrained learning-based method that directly estimates document reflectance based on intrinsic image formation and generalizes to challenging illumination conditions. Second, a new dataset that clearly improves on previous synthetic ones by adding a large range of realistic shading and diverse multi-illuminant conditions, uniquely customized to deal with documents in-the-wild. The proposed architecture works in two steps. First, a white balancing module neutralizes the color of the illumination in the input image; based on the proposed multi-illuminant dataset, we achieve good white balancing even in very difficult conditions. Second, a shading separation module accurately disentangles the shading and paper material in a self-supervised manner, where only the synthetic texture is used as a weak training signal (obviating the need for very costly ground truth with disentangled versions of shading and reflectance). The proposed approach leads to significant generalization of document reflectance estimation in real scenes with challenging illumination. We evaluate extensively on the real benchmark datasets available for intrinsic image decomposition and document shadow removal. Our reflectance estimation scheme, when used as a pre-processing step of an OCR pipeline, shows a 21% improvement in character error rate (CER), proving its practical applicability. The data and code will be available at: https://github.com/cvlab-stonybrook/DocIIW.
Address: Virtual; September 2020
Conference: BMVC
Notes: CIC; 600.087; 600.140; 600.118
Approved: no
Call Number: Admin @ si @ DSM2020
Serial: 3461
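The two-step pipeline described in the abstract above (white balancing followed by shading separation under the intrinsic model I = R * S) can be sketched as follows. Both sub-networks are toy placeholders; the snippet only illustrates the data flow, not the paper's architecture.

import torch
import torch.nn as nn

class TinyWB(nn.Module):
    # Placeholder white-balance module: predicts one RGB illuminant per image.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 3), nn.Softplus())

    def forward(self, x):
        return self.net(x) + 1e-4      # keep the illuminant strictly positive

class TinyShading(nn.Module):
    # Placeholder shading module: predicts per-pixel, single-channel shading.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Softplus())

    def forward(self, x):
        return self.net(x) + 1e-4      # keep shading strictly positive

def decompose(image, wb_net, shading_net):
    # Step 1: neutralize the illuminant colour; step 2: predict shading and
    # recover reflectance from the intrinsic model I = R * S, i.e. R = I / S.
    illum = wb_net(image)                           # (B, 3) illuminant colour
    white_balanced = image / illum.view(-1, 3, 1, 1)
    shading = shading_net(white_balanced)           # (B, 1, H, W)
    reflectance = white_balanced / shading
    return reflectance, shading, illum

# Usage:
# r, s, l = decompose(torch.rand(2, 3, 128, 128), TinyWB(), TinyShading())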