Author Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell
Title Who Painted this Painting? Type Conference Article
Year 2010 Publication Proceedings of The CREATE 2010 Conference Abbreviated Journal  
Volume Issue Pages 329–333  
Keywords  
Abstract  
Address Gjovik (Norway)  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference CREATE  
Notes CIC Approved no  
Call Number CAT @ cat @ KWV2010 Serial 1329  
 

 
Author Shida Beigpour; Joost Van de Weijer
Title Photo-Realistic Color Alteration for Architecture and Design Type Conference Article
Year 2010 Publication Proceedings of The CREATE 2010 Conference Abbreviated Journal  
Volume Issue Pages 84–88  
Keywords  
Abstract As color is a strong stimulus we receive from the exterior world, choosing the right color can prove crucial in creating the desired architecture and design. We propose a framework to apply realistic color changes to both objects and their illuminating lights in snapshots of architectural designs, in order to visualize and choose the right color before actually applying the change in the real world. The proposed framework is based on the laws of physics in order to achieve realistic and physically plausible results.
Address Gjovik (Norway)  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference CREATE  
Notes CIC Approved no  
Call Number CAT @ cat @ BeW2010 Serial 1330  
 

 
Author Susana Alvarez; Anna Salvatella; Maria Vanrell; Xavier Otazu
Title Perceptual color texture codebooks for retrieving in highly diverse texture datasets Type Conference Article
Year 2010 Publication 20th International Conference on Pattern Recognition Abbreviated Journal  
Volume Issue Pages 866–869  
Keywords  
Abstract Color and texture are visual cues of different natures, and their integration into a useful visual descriptor is not an obvious step. One way to combine both features is to compute texture descriptors independently on each color channel. A second way is to integrate the features at the descriptor level, in which case the problem of normalizing both cues arises. Significant progress in object recognition in recent years has come from the bag-of-words framework, which again deals with the problem of feature combination through the definition of vocabularies of visual words. Inspired by this framework, here we present perceptual textons that allow us to fuse color and texture at the level of p-blobs, which is our feature detection step. Feature representation is based on two uniform spaces representing the attributes of the p-blobs. The low dimensionality of these texton spaces allows us to bypass the usual problems of previous approaches: firstly, there is no need for normalization between cues; and secondly, vocabularies are obtained directly from the perceptual properties of the texton spaces without any learning step. Our proposal improves the current state of the art in color-texture descriptors in an image retrieval experiment over a highly diverse texture dataset from Corel.
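The vocabulary-plus-histogram pipeline that bag-of-words representations build on can be sketched in a few lines. The centroids, descriptors, and dimensions below are hypothetical toy values, not the paper's p-blob features or texton spaces:

```python
import numpy as np

# Toy bag-of-words sketch: quantize local descriptors against a small
# vocabulary and represent the image by its normalized word histogram.
rng = np.random.default_rng(0)

vocabulary = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # 3 "visual words"
descriptors = rng.normal(loc=[1.0, 1.0], scale=0.05, size=(50, 2))

# Assign each descriptor to its nearest word (Euclidean distance).
dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
words = dists.argmin(axis=1)

# The image representation: a normalized histogram over the vocabulary.
histogram = np.bincount(words, minlength=len(vocabulary)) / len(descriptors)
```

Since all toy descriptors cluster near the second word, the histogram concentrates its mass there; in practice the vocabulary would come from clustering training descriptors.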
Address Istanbul (Turkey)  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1051-4651 ISBN 978-1-4244-7542-1 Medium  
Area Expedition Conference ICPR  
Notes CIC Approved no  
Call Number CAT @ cat @ ASV2010b Serial 1426  
 

 
Author Ernest Valveny; Ricardo Toledo; Ramon Baldrich; Enric Marti
Title Combining recognition-based and segmentation-based approaches for graphic symbol recognition using deformable template matching Type Conference Article
Year 2002 Publication Proceedings of the Second IASTED International Conference on Visualization, Imaging and Image Processing, VIIP 2002 Abbreviated Journal  
Volume Issue Pages 502–507  
Keywords  
Abstract  
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes DAG;RV;CAT;IAM;CIC;ADAS Approved no  
Call Number IAM @ iam @ VTB2002 Serial 1660  
 

 
Author Maria Vanrell; Naila Murray; Robert Benavente; C. Alejandro Parraga; Xavier Otazu; Ramon Baldrich
Title Perception Based Representations for Computational Colour Type Conference Article
Year 2011 Publication 3rd International Workshop on Computational Color Imaging Abbreviated Journal  
Volume 6626 Issue Pages 16-30  
Keywords colour perception, induction, naming, psychophysical data, saliency, segmentation  
Abstract The perceived colour of a stimulus depends on multiple factors stemming either from the context of the stimulus or from idiosyncrasies of the observer. The complexity involved in combining these multiple effects is the main reason for the gap between classical calibrated colour spaces from colour science and the colour representations used in computer vision, where colour is just one more visual cue immersed in a digital image in which surfaces, shadows and illuminants interact seemingly out of control. With the aim of advancing a few steps towards bridging this gap, we present some results on computational representations of colour for computer vision. They have been developed by introducing perceptual considerations derived from the interaction of the colour of a point with its context. We show some techniques to represent the colour of a point influenced by assimilation and contrast effects due to the image surround, and we show some results on how colour saliency can be derived in real images. We outline a model for the automatic assignment of colour names to image points trained directly on psychophysical data. We show how colour segments can be perceptually grouped in the image by imposing shading coherence in the colour space.
Address Milan, Italy  
Corporate Author Thesis  
Publisher Springer-Verlag Place of Publication Editor Raimondo Schettini, Shoji Tominaga, Alain Trémeau  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title LNCS  
Series Volume Series Issue Edition  
ISSN ISBN 978-3-642-20403-6 Medium  
Area Expedition Conference CCIW  
Notes CIC Approved no  
Call Number Admin @ si @ VMB2011 Serial 1733  
 

 
Author Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga
Title Saliency Estimation Using a Non-Parametric Low-Level Vision Model Type Conference Article
Year 2011 Publication IEEE conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 433-440  
Keywords Gaussian mixture model;ad hoc parameter selection;center-surround inhibition windows;center-surround mechanism;color appearance model;convolution;eye-fixation data;human vision;innate spatial pooling mechanism;inverse wavelet transform;low-level visual front-end;nonparametric low-level vision model;saliency estimation;saliency map;scale integration;scale-weighted center-surround response;scale-weighting function;visual task;Gaussian processes;biology;biology computing;colour vision;computer vision;visual perception;wavelet transforms  
Abstract Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.  
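The generic three-step recipe the abstract refers to (filtering, a center-surround mechanism, pooling) can be illustrated with a minimal difference-of-blurs sketch. The box filters, their sizes, and the synthetic image below are illustrative stand-ins, not the paper's wavelet and ECSF machinery:

```python
import numpy as np

def blur(img, k):
    """Separable box blur of width k ('same' convolution along each axis)."""
    kernel = np.ones(k) / k
    img = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, img)

# A bright blob on a dark background should produce a strong
# center-surround response at the blob and none in flat regions.
image = np.zeros((32, 32))
image[14:18, 14:18] = 1.0

center = blur(image, 3)     # small support approximates the "center"
surround = blur(image, 11)  # large support approximates the "surround"
saliency = np.abs(center - surround)
```

In the paper this contrast computation is carried out on wavelet planes, weighted by the psychophysically optimized ECSF, and pooled by an inverse wavelet transform rather than taken directly as above.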
Address Colorado Springs  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1063-6919 ISBN 978-1-4577-0394-2 Medium  
Area Expedition Conference CVPR  
Notes CIC Approved no  
Call Number Admin @ si @ MVO2011 Serial 1757  
 

 
Author Joost Van de Weijer; Shida Beigpour
Title The Dichromatic Reflection Model: Future Research Directions and Applications Type Conference Article
Year 2011 Publication International Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications Abbreviated Journal  
Volume Issue Pages  
Keywords dblp  
Abstract The dichromatic reflection model (DRM) predicts that color distributions form a parallelogram in color space, whose shape is defined by the body reflectance and the illuminant color. In this paper we review the assumptions which led to the DRM and briefly recall two of its main application domains: color image segmentation and photometric invariant feature computation. After introducing the model we discuss several limitations of the theory, especially those raised when working on real-world uncalibrated images. In addition, we summarize recent extensions of the model which allow it to handle more complicated light interactions. Finally, we suggest some future research directions which would further extend its applicability.
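The DRM's geometric claim can be checked numerically: pixels generated by the model f = m_b·c_b + m_s·c_s lie in the plane spanned by the body colour c_b and the illuminant colour c_s, so a matrix of such pixels has rank 2. The specific colours and coefficient ranges below are invented for illustration:

```python
import numpy as np

# Dichromatic reflection model sketch: each pixel of a uniformly coloured
# surface is a body term plus a specular term, f = m_b * c_b + m_s * c_s.
rng = np.random.default_rng(2)

c_b = np.array([0.7, 0.3, 0.1])  # body (object) colour, toy value
c_s = np.array([1.0, 1.0, 1.0])  # illuminant colour (white here)

m_b = rng.uniform(0.2, 1.0, size=100)  # shading factors per pixel
m_s = rng.uniform(0.0, 0.3, size=100)  # specular factors per pixel
pixels = m_b[:, None] * c_b + m_s[:, None] * c_s

# All pixels lie in span{c_b, c_s}: the pixel matrix has rank 2.
rank = np.linalg.matrix_rank(pixels)
```

The segmentation and invariant-feature applications the abstract mentions both exploit exactly this planar structure of object colour distributions.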
Address Algarve, Portugal  
Corporate Author Thesis  
Publisher SciTePress Place of Publication Editor Mestetskiy, Leonid and Braz, José  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN 978-989-8425-47-8 Medium  
Area Expedition Conference VISIGRAPP  
Notes CIC Approved no  
Call Number Admin @ si @ WeB2011 Serial 1778  
 

 
Author Shida Beigpour; Joost Van de Weijer
Title Object Recoloring Based on Intrinsic Image Estimation Type Conference Article
Year 2011 Publication 13th IEEE International Conference in Computer Vision Abbreviated Journal  
Volume Issue Pages 327 - 334  
Keywords  
Abstract Object recoloring is one of the most popular photo-editing tasks. The problem of object recoloring is highly under-constrained, and existing recoloring methods limit their application to objects lit by a white illuminant. Application of these methods to real-world scenes lit by colored illuminants, multiple illuminants, or interreflections, results in unrealistic recoloring of objects. In this paper, we focus on the recoloring of single-colored objects presegmented from their background. The single-color constraint allows us to fit a more comprehensive physical model to the object. We demonstrate that this permits us to perform realistic recoloring of objects lit by non-white illuminants, and multiple illuminants. Moreover, the model allows for more realistic handling of illuminant alteration of the scene. Recoloring results captured by uncalibrated cameras demonstrate that the proposed framework obtains realistic recoloring for complex natural images. Furthermore we use the model to transfer color between objects and show that the results are more realistic than existing color transfer methods.  
Address Barcelona  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1550-5499 ISBN 978-1-4577-1101-5 Medium  
Area Expedition Conference ICCV  
Notes CIC Approved no  
Call Number Admin @ si @ BeW2011 Serial 1781  
 

 
Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell; Antonio Lopez
Title Color Attributes for Object Detection Type Conference Article
Year 2012 Publication 25th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 3306-3313  
Keywords pedestrian detection  
Abstract State-of-the-art object detectors typically use shape information as a low-level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yield suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and, when combined with traditional shape features, provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods.
Address Providence; Rhode Island; USA  
Corporate Author Thesis  
Publisher IEEE Xplore Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1063-6919 ISBN 978-1-4673-1226-4 Medium  
Area Expedition Conference CVPR  
Notes ADAS; CIC; Approved no  
Call Number Admin @ si @ KRW2012 Serial 1935  
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell
Title Portmanteau Vocabularies for Multi-Cue Image Representation Type Conference Article
Year 2011 Publication 25th Annual Conference on Neural Information Processing Systems Abbreviated Journal  
Volume Issue Pages  
Keywords  
Abstract We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent, joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau words is compact, has the cue-binding property, and supports individual weighting of cues in the final image representation. State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation.
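The product-vocabulary construction underlying portmanteau words can be sketched as follows. The vocabulary sizes and random word assignments are toy values, and the paper's information-theoretic compression of the product vocabulary is deliberately omitted:

```python
import numpy as np

# Toy product vocabulary: each local feature is quantized independently
# into a shape word and a color word; the compound ("portmanteau") word
# is the pair, indexed jointly.
n_shape, n_color = 4, 3
rng = np.random.default_rng(1)

shape_words = rng.integers(0, n_shape, size=200)  # per-feature shape word
color_words = rng.integers(0, n_color, size=200)  # per-feature color word

compound = shape_words * n_color + color_words  # joint index in [0, 12)

# Image representation: normalized histogram over all compound words.
histogram = np.bincount(compound, minlength=n_shape * n_color)
histogram = histogram / histogram.sum()
```

With real vocabularies the product n_shape × n_color explodes, which is exactly why the paper compresses it into a compact portmanteau vocabulary while preserving cue binding.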
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference NIPS  
Notes CIC Approved no  
Call Number Admin @ si @ KWB2011 Serial 1865  
 

 
Author Naila Murray; Sandra Skaff; Luca Marchesotti; Florent Perronnin
Title Towards Automatic Concept Transfer Type Conference Article
Year 2011 Publication Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering Abbreviated Journal  
Volume Issue Pages 167–176  
Keywords chromatic modeling, color concepts, color transfer, concept transfer  
Abstract This paper introduces a novel approach to automatic concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study.  
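For equal-weight one-dimensional samples, the Earth Mover's Distance the abstract relies on reduces to the mean gap between sorted values. The hue values below are invented toy data; the paper's actual mapping is computed between Gaussian-mixture-like models of chromatic content, which is more involved:

```python
import numpy as np

# 1-D Earth Mover's Distance between two equal-size, equal-weight samples:
# optimal transport pairs sorted values, so the EMD is the mean gap.
image_hues = np.array([0.10, 0.15, 0.20, 0.20])
concept_hues = np.array([0.60, 0.65, 0.70, 0.70])  # image hues shifted by 0.5

emd = np.mean(np.abs(np.sort(image_hues) - np.sort(concept_hues)))
```

Because the second sample is the first shifted by a constant 0.5, the EMD here is exactly that shift; a concept-transfer mapping moves chromatic mass along such minimal-cost pairings.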
Address  
Corporate Author Thesis  
Publisher ACM Press Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN 978-1-4503-0907-3 Medium  
Area Expedition Conference NPAR  
Notes CIC Approved no  
Call Number Admin @ si @ MSM2011 Serial 1866  
 

 
Author Jordi Roca; C. Alejandro Parraga; Maria Vanrell
Title Categorical Focal Colours are Structurally Invariant Under Illuminant Changes Type Conference Article
Year 2011 Publication European Conference on Visual Perception Abbreviated Journal  
Volume Issue Pages 196  
Keywords  
Abstract The visual system perceives the colour of surfaces as approximately constant under changes of illumination. In this work, we investigate how stable the perception of categorical “focal” colours and their interrelations is under varying illuminants and simple chromatic backgrounds. It has been proposed that the best examples of colour categories across languages cluster in small regions of colour space and are restricted to a set of 11 basic terms (Kay and Regier, 2003 Proceedings of the National Academy of Sciences of the USA 100 9085–9089). Following this, we developed a psychophysical paradigm that exploits the ability of subjects to reliably reproduce the most representative examples of each category, adjusting multiple test patches embedded in a coloured Mondrian. The experiment was run on a CRT monitor (inside a dark room) under various simulated illuminants. We modelled the recorded data for each subject and adapted state as a 3D interconnected structure (graph) in Lab space. The graph nodes were the subject's focal colours at each adaptation state. The model allowed us to obtain a better distance measure between focal structures under different illuminants. We found that perceptual focal structures tend to be preserved better than the structures of the physical “ideal” colours under illuminant changes.
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Perception 40 Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference ECVP  
Notes CIC Approved no  
Call Number Admin @ si @ RPV2011 Serial 1867  
 

 
Author Joost Van de Weijer; Fahad Shahbaz Khan
Title Fusing Color and Shape for Bag-of-Words Based Object Recognition Type Conference Article
Year 2013 Publication 4th Computational Color Imaging Workshop Abbreviated Journal  
Volume 7786 Issue Pages 25-34  
Keywords Object Recognition; color features; bag-of-words; image classification  
Abstract In this article we provide an analysis of existing methods for the incorporation of color in bag-of-words based image representations. We propose a list of desired properties against which fusing methods can be compared. We discuss existing methods and indicate shortcomings of the two well-known fusing methods, namely early and late fusion. Several recent works have addressed these shortcomings by exploiting top-down information in the bag-of-words pipeline: color attention, which is motivated from human vision, and portmanteau vocabularies, which are based on information-theoretic compression of product vocabularies. We point out several remaining challenges in cue fusion and provide directions for future research.
Address Chiba; Japan; March 2013  
Corporate Author Thesis  
Publisher Springer Berlin Heidelberg Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 0302-9743 ISBN 978-3-642-36699-4 Medium  
Area Expedition Conference CCIW  
Notes CIC; 600.048 Approved no  
Call Number Admin @ si @ WeK2013 Serial 2283  
 

 
Author Naila Murray; Luca Marchesotti; Florent Perronnin
Title AVA: A Large-Scale Database for Aesthetic Visual Analysis Type Conference Article
Year 2012 Publication 25th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 2408-2415  
Keywords  
Abstract With the ever-expanding volume of visual content available, the ability to organize and navigate such content by aesthetic preference is becoming increasingly important. While still in its nascent stage, research into computational models of aesthetic preference already shows great potential. However, to advance research, realistic, diverse and challenging databases are needed. To this end, we introduce a new large-scale database for conducting Aesthetic Visual Analysis: AVA. It contains over 250,000 images along with a rich variety of meta-data, including a large number of aesthetic scores for each image, semantic labels for over 60 categories, as well as labels related to photographic style. We show the advantages of AVA with respect to existing databases in terms of scale, diversity, and heterogeneity of annotations. We then describe several key insights into aesthetic preference afforded by AVA. Finally, we demonstrate, through three applications, how the large scale of AVA can be leveraged to improve performance on existing preference tasks.  
Address Providence, Rhode Island  
Corporate Author Thesis  
Publisher IEEE Xplore Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1063-6919 ISBN 978-1-4673-1226-4 Medium  
Area Expedition Conference CVPR  
Notes CIC Approved no  
Call Number Admin @ si @ MMP2012a Serial 2025  
 

 
Author Marc Serra; Olivier Penacchio; Robert Benavente; Maria Vanrell
Title Names and Shades of Color for Intrinsic Image Estimation Type Conference Article
Year 2012 Publication 25th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 278-285  
Keywords  
Abstract In recent years, intrinsic image decomposition has gained attention. Most state-of-the-art methods are based on the assumption that reflectance changes come along with strong image edges. Recently, user intervention in the recovery problem has proved to be a remarkable source of improvement. In this paper, we propose a novel approach that aims to overcome the shortcomings of pure edge-based methods by introducing strong surface descriptors, such as the color-name descriptor, which introduces high-level considerations resembling top-down intervention. We also use a second surface descriptor, termed color-shade, which allows us to include physical considerations derived from the image formation model, capturing gradual color surface variations. Both color cues are combined by means of a Markov Random Field. The method is quantitatively tested on the MIT ground truth dataset using different error metrics, achieving state-of-the-art performance.
Address Providence, Rhode Island  
Corporate Author Thesis  
Publisher IEEE Xplore Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN 1063-6919 ISBN 978-1-4673-1226-4 Medium  
Area Expedition Conference CVPR  
Notes CIC Approved no  
Call Number Admin @ si @ SPB2012 Serial 2026  
 

 
Author Naila Murray; Luca Marchesotti; Florent Perronnin
Title Learning to Rank Images using Semantic and Aesthetic Labels Type Conference Article
Year 2012 Publication 23rd British Machine Vision Conference Abbreviated Journal  
Volume Issue Pages 110.1-110.10  
Keywords  
Abstract Most works on image retrieval from text queries have addressed the problem of retrieving semantically relevant images. However, the ability to assess the aesthetic quality of an image is an increasingly important differentiating factor for search engines. In this work, given a semantic query, we are interested in retrieving images which are semantically relevant and score highly in terms of aesthetics/visual quality. We use large-margin classifiers and rankers to learn statistical models capable of ordering images based on the aesthetic and semantic information. In particular, we compare two families of approaches: while the first one attempts to learn a single ranker which takes into account both semantic and aesthetic information, the second one learns separate semantic and aesthetic models. We carry out a quantitative and qualitative evaluation on a recently-published large-scale dataset and we show that the second family of techniques significantly outperforms the first one.  
Address Guildford, Surrey  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN 1-901725-46-4 Medium  
Area Expedition Conference BMVC  
Notes CIC Approved no  
Call Number Admin @ si @ MMP2012b Serial 2027  
 

 
Author Rahat Khan; Joost Van de Weijer; Dimosthenis Karatzas; Damien Muselet
Title Towards multispectral data acquisition with hand-held devices Type Conference Article
Year 2013 Publication 20th IEEE International Conference on Image Processing Abbreviated Journal  
Volume Issue Pages 2053 - 2057  
Keywords Multispectral; mobile devices; color measurements  
Abstract We propose a method to acquire multispectral data with hand-held devices with front-mounted RGB cameras. We propose to use the display of the device as an illuminant while the camera captures images illuminated by the red, green and blue primaries of the display. Three illuminants and three response functions of the camera lead to nine response values which are used for reflectance estimation. Results are promising and show that the accuracy of the spectral reconstruction improves in the range of 30-40% over spectral reconstruction based on a single illuminant. Furthermore, we propose to compute a sensor-illuminant-aware linear basis by discarding the part of the reflectances that falls in the sensor-illuminant null-space. We show experimentally that optimizing reflectance estimation on these new basis functions decreases the RMSE significantly over basis functions that are independent of the sensor-illuminant pair. We conclude that multispectral data acquisition is potentially possible with consumer hand-held devices such as tablets, mobiles, and laptops, opening up applications which are currently considered to be unrealistic.
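The nine-measurement linear model behind this kind of acquisition can be sketched with synthetic spectra: each of the 3 camera channels observed under each of the 3 display primaries contributes one linear measurement of the reflectance, which is then recovered in a low-dimensional basis by least squares. All sensitivities, primaries, and the 5-D basis below are random stand-ins, not the paper's calibrated data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_wl = 31                               # wavelength samples (toy resolution)

sensors = rng.uniform(size=(3, n_wl))   # camera channel sensitivities (toy)
primaries = rng.uniform(size=(3, n_wl)) # display R, G, B spectra (toy)

# Stack the 9 measurement rows: sensor i weighted by display primary j.
M = np.vstack([sensors * primaries[j] for j in range(3)])  # shape (9, n_wl)

basis = rng.uniform(size=(n_wl, 5))     # 5-D linear reflectance basis (toy)
true_coeffs = rng.uniform(size=5)
reflectance = basis @ true_coeffs       # ground-truth surface reflectance

responses = M @ reflectance             # the 9 observed camera responses
# Least-squares estimate of the basis coefficients from the 9 responses.
est_coeffs, *_ = np.linalg.lstsq(M @ basis, responses, rcond=None)
recovered = basis @ est_coeffs
```

In this noiseless toy setup the 9 equations over-determine the 5 coefficients, so the reflectance is recovered exactly; the paper's sensor-illuminant-aware basis additionally discards reflectance components that the 9 measurements cannot see at all.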
 
Address Melbourne; Australia; September 2013  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference ICIP  
Notes CIC; DAG; 600.048 Approved no  
Call Number Admin @ si @ KWK2013b Serial 2265  
 

 
Author Shida Beigpour; Marc Serra; Joost Van de Weijer; Robert Benavente; Maria Vanrell; Olivier Penacchio; Dimitris Samaras
Title Intrinsic Image Evaluation On Synthetic Complex Scenes Type Conference Article
Year 2013 Publication 20th IEEE International Conference on Image Processing Abbreviated Journal  
Volume Issue Pages 285 - 289  
Keywords  
Abstract Scene decomposition into its illuminant, shading, and reflectance intrinsic images is an essential step for scene understanding. Collecting intrinsic image ground-truth data is a laborious task. The assumptions on which the ground-truth procedures are based limit their application to simple scenes with a single object taken in the absence of indirect lighting and interreflections. We investigate synthetic data for intrinsic image research, since the extraction of ground truth is straightforward and it allows for scenes in more realistic situations (e.g., multiple illuminants and interreflections). With this dataset we aim to motivate researchers to further explore intrinsic image decomposition in complex scenes.
 
Address Melbourne; Australia; September 2013  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title  
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference ICIP  
Notes CIC; 600.048; 600.052; 600.051 Approved no  
Call Number Admin @ si @ BSW2013 Serial 2264  