Author Naila Murray
Title Predicting Saliency and Aesthetics in Images: A Bottom-up Perspective Type Book Whole
Year 2012 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal  
Volume Issue Pages  
Keywords  
Abstract In Part 1 of the thesis, we hypothesize that salient and non-salient image regions can be estimated to be the regions which are enhanced or assimilated in standard low-level color image representations. We prove this hypothesis by adapting a low-level model of color perception into a saliency estimation model. This model shares the three main steps found in many successful models for predicting attention in a scene: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. For such models, integrating spatial information and justifying the choice of various parameter values remain open problems. Our saliency model inherits a principled selection of parameters as well as an innate spatial pooling mechanism from the perception model on which it is based. This pooling mechanism has been fitted using psychophysical data acquired in color-luminance setting experiments. The proposed model outperforms the state-of-the-art at the task of predicting eye-fixations from two datasets. After demonstrating the effectiveness of our basic saliency model, we introduce an improved image representation, based on geometrical grouplets, that enhances complex low-level visual features such as corners and terminations, and suppresses relatively simpler features such as edges. With this improved image representation, the performance of our saliency model in predicting eye-fixations increases for both datasets.
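The three-step pipeline this abstract describes (filtering, center-surround, pooling) can be sketched minimally as follows. This is an illustrative toy, not the thesis model: the Difference-of-Gaussians surround and the kernel sizes are placeholder choices, whereas the thesis inherits its parameters and pooling from a psychophysically fitted color perception model.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(image, sigma):
    """Separable Gaussian blur via two 1-D convolutions."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def saliency_map(image, center_sigma=1.0, surround_sigma=4.0):
    """Center-surround contrast (Difference of Gaussians), rectified
    and rescaled to [0, 1] -- a toy stand-in for the model's pooling."""
    response = np.abs(blur(image, center_sigma) - blur(image, surround_sigma))
    return (response - response.min()) / (response.max() - response.min() + 1e-12)

# A bright square on a dark background: its region should dominate the map.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
sal = saliency_map(img)
```

The square's region receives much higher saliency than the uniform background, which is the qualitative behavior all three-step models of this family share.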

In Part 2 of the thesis, we investigate the problem of aesthetic visual analysis. While a great deal of research has been conducted on hand-crafting image descriptors for aesthetics, little attention so far has been dedicated to the collection, annotation and distribution of ground truth data. Because image aesthetics is complex and subjective, existing datasets, which have few images and few annotations, have significant limitations. To address these limitations, we have introduced a new large-scale database for conducting Aesthetic Visual Analysis, which we call AVA. AVA contains more than 250,000 images, along with a rich variety of annotations. We investigate how the wealth of data in AVA can be used to tackle the challenge of understanding and assessing visual aesthetics by looking into several problems relevant for aesthetic analysis. We demonstrate that by leveraging the data in AVA, and using generic low-level features such as SIFT and color histograms, we can exceed state-of-the-art performance in aesthetic quality prediction tasks.

Finally, we entertain the hypothesis that low-level visual information in our saliency model can also be used to predict visual aesthetics by capturing local image characteristics such as feature contrast, grouping and isolation, characteristics thought to be related to universal aesthetic laws. We use the weighted center-surround responses that form the basis of our saliency model to create a feature vector that describes aesthetics. We also introduce a novel color space for fine-grained color representation. We then demonstrate that the resultant features achieve state-of-the-art performance on aesthetic quality classification.

As such, a promising contribution of this thesis is to show that several vision experiences – low-level color perception, visual saliency and visual aesthetics estimation – may be successfully modeled using a unified framework. This suggests a similar architecture in area V1 for both color perception and saliency and adds evidence to the hypothesis that visual aesthetics appreciation is driven in part by low-level cues.
 
Address  
Corporate Author Thesis Ph.D. thesis  
Publisher Ediciones Graficas Rey Place of Publication Editor Xavier Otazu; Maria Vanrell
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ Mur2012 Serial 2212  
Permanent link to this record
 

 
Author David Augusto Rojas
Title Colouring Local Feature Detection for Matching Type Report
Year 2009 Publication CVC Technical Report Abbreviated Journal  
Volume 133 Issue Pages  
Keywords  
Abstract  
Address  
Corporate Author Computer Vision Center Thesis Master's thesis  
Publisher Place of Publication Bellaterra, Barcelona Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ Roj2009 Serial 2392  
Permanent link to this record
 

 
Author Olivier Penacchio
Title Relative Density of L, M, S photoreceptors in the Human Retina Type Report
Year 2009 Publication CVC Technical Report Abbreviated Journal  
Volume 135 Issue Pages  
Keywords  
Abstract  
Address  
Corporate Author Computer Vision Center Thesis Master's thesis  
Publisher Place of Publication Bellaterra, Barcelona Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ Pen2009 Serial 2394  
Permanent link to this record
 

 
Author Xavier Boix
Title Learning Conditional Random Fields for Stereo Type Report
Year 2009 Publication CVC Technical Report Abbreviated Journal  
Volume 136 Issue Pages  
Keywords  
Abstract  
Address  
Corporate Author Computer Vision Center Thesis Master's thesis  
Publisher Place of Publication Bellaterra, Barcelona Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ Boi2009 Serial 2395  
Permanent link to this record
 

 
Author Shida Beigpour
Title Physics-based Reflectance Estimation Applied to Recoloring Type Report
Year 2009 Publication CVC Technical Report Abbreviated Journal  
Volume 137 Issue Pages  
Keywords  
Abstract  
Address  
Corporate Author Computer Vision Center Thesis Master's thesis  
Publisher Place of Publication Bellaterra, Barcelona Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ Bei2009 Serial 2396  
Permanent link to this record
 

 
Author Jose Carlos Rubio
Title Graph matching based on graphical models with application to vehicle tracking and classification at night Type Report
Year 2009 Publication CVC Technical Report Abbreviated Journal  
Volume 144 Issue Pages  
Keywords  
Abstract  
Address  
Corporate Author Computer Vision Center Thesis Master's thesis  
Publisher Place of Publication Bellaterra, Barcelona Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ Rub2009 Serial 2398  
Permanent link to this record
 

 
Author Ivet Rafegas
Title Exploring Low-Level Vision Models. Case Study: Saliency Prediction Type Report
Year 2013 Publication CVC Technical Report Abbreviated Journal  
Volume 175 Issue Pages  
Keywords  
Abstract  
Address  
Corporate Author Thesis Master's thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC Approved no  
Call Number Admin @ si @ Raf2013 Serial 2409  
Permanent link to this record
 

 
Author A. Ruiz; Joost Van de Weijer; Xavier Binefa
Title Regularized Multi-Concept MIL for weakly-supervised facial behavior categorization Type Conference Article
Year 2014 Publication 25th British Machine Vision Conference Abbreviated Journal  
Volume Issue Pages  
Keywords  
Abstract We address the problem of estimating high-level semantic labels for videos of recorded people by analysing their facial expressions. This problem, which we refer to as facial behavior categorization, is a weakly-supervised learning problem where we do not have access to frame-by-frame facial gesture annotations; only weak labels at the video level are available. The goal is therefore to learn a set of discriminative expressions and how they determine the video weak-labels. Facial behavior categorization can be posed as a Multi-Instance-Learning (MIL) problem, and we propose a novel MIL method called Regularized Multi-Concept MIL (RMC-MIL) to solve it. In contrast to previous approaches applied in facial behavior analysis, RMC-MIL follows a Multi-Concept assumption which allows different facial expressions (concepts) to contribute differently to the video label. Moreover, to handle the high-dimensional nature of facial descriptors, RMC-MIL uses a discriminative approach to model the concepts and structured sparsity regularization to discard non-informative features. RMC-MIL is posed as a convex-constrained optimization problem where all the parameters are jointly learned using the Projected Quasi-Newton method. In our experiments, we use two public datasets to show the advantages of the Regularized Multi-Concept approach and its improvement over existing MIL methods. RMC-MIL outperforms state-of-the-art results on the UNBC dataset for pain detection.
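The Multi-Concept MIL assumption described in the abstract can be sketched in a few lines: each concept scores every instance (frame descriptor), the per-concept evidence is max-pooled over instances, and the concepts contribute with different weights to the bag (video) label. The weights below are illustrative placeholders, not learned via the paper's convex RMC-MIL formulation.

```python
import numpy as np

def bag_score(instances, concept_weights, concept_contribs):
    """Score a bag (video) under a Multi-Concept MIL assumption:
    each concept scores every instance, the per-concept evidence is
    max-pooled over instances, and the concepts contribute with
    possibly different weights to the bag-level label -- the key
    departure from single-concept MIL."""
    responses = instances @ concept_weights.T   # (n_instances, n_concepts)
    evidence = responses.max(axis=0)            # max-pool per concept
    return float(evidence @ concept_contribs)   # weighted combination

rng = np.random.default_rng(0)
frames = rng.normal(size=(30, 8))   # 30 frame descriptors, 8-D (toy)
W = rng.normal(size=(3, 8))         # 3 facial-expression "concepts"
c = np.array([0.6, 0.3, 0.1])       # per-concept contributions (hypothetical)
score = bag_score(frames, W, c)
```

Because evidence is max-pooled, adding one strongly expressive frame to the video can only raise the bag score, which is the weak-label intuition of MIL.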
Address Nottingham; UK; September 2014  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference BMVC  
Notes LAMP; CIC; 600.074; 600.079 Approved no  
Call Number Admin @ si @ RWB2014 Serial 2508  
Permanent link to this record
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Michael Felsberg
Title Scale Coding Bag-of-Words for Action Recognition Type Conference Article
Year 2014 Publication 22nd International Conference on Pattern Recognition Abbreviated Journal  
Volume Issue Pages 1514-1519  
Keywords  
Abstract Recognizing human actions in still images is a challenging problem in computer vision due to the significant amount of scale, illumination and pose variation. Given the bounding box of a person at both training and test time, the task is to classify the action associated with each bounding box in an image.
Most state-of-the-art methods use the bag-of-words paradigm for action recognition. The bag-of-words framework employing a dense multi-scale grid sampling strategy is the de facto standard for feature detection. This results in a scale-invariant image representation where all the features at multiple scales are binned in a single histogram. We argue that such a scale-invariant strategy is sub-optimal since it ignores the multi-scale information available with each bounding box of a person.
This paper investigates alternative approaches to scale coding for action recognition in still images. We encode multi-scale information explicitly in three different histograms for small, medium and large scale visual words. Our first approach exploits multi-scale information with respect to the image size. In our second approach, we encode multi-scale information relative to the size of the bounding box of a person instance. In each approach, the multi-scale histograms are then concatenated into a single representation for action classification. We validate our approaches on the Willow dataset, which contains seven action categories: interacting with computer, photography, playing music, riding bike, riding horse, running and walking. Our results clearly suggest that the proposed scale coding approaches outperform the conventional scale-invariant technique. Moreover, we show that our approach obtains promising results compared to more complex state-of-the-art methods.
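The scale coding idea above can be sketched directly: instead of pooling visual words from all sampling scales into one histogram, build a separate histogram per scale band and concatenate. The scale bin edges below are illustrative, not those used in the paper.

```python
import numpy as np

def scale_coded_bow(words, scales, vocab_size, bins=(0.0, 2.0, 6.0, np.inf)):
    """Scale-coded bag-of-words: one histogram each for small, medium
    and large detection scales, concatenated -- keeping the multi-scale
    information that a single scale-invariant histogram discards."""
    histograms = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (scales >= lo) & (scales < hi)
        hist = np.bincount(words[mask], minlength=vocab_size).astype(float)
        histograms.append(hist)
    feature = np.concatenate(histograms)
    return feature / max(feature.sum(), 1.0)   # L1-normalize

words = np.array([0, 1, 1, 2, 0, 3])                 # quantized visual words
scales = np.array([1.0, 1.5, 3.0, 4.0, 8.0, 9.0])    # detection scales
feat = scale_coded_bow(words, scales, vocab_size=4)  # 3 x 4 = 12-D feature
```

With a 4-word vocabulary and three scale bands the representation is 12-D, versus 4-D for the conventional scale-invariant histogram.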
 
Address Stockholm; August 2014  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference ICPR  
Notes CIC; LAMP; 601.240; 600.074; 600.079 Approved no  
Call Number Admin @ si @ KWB2014 Serial 2450  
Permanent link to this record
 

 
Author Shida Beigpour; Christian Riess; Joost Van de Weijer; Elli Angelopoulou
Title Multi-Illuminant Estimation with Conditional Random Fields Type Journal Article
Year 2014 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP  
Volume 23 Issue 1 Pages 83-95  
Keywords color constancy; CRF; multi-illuminant  
Abstract Most existing color constancy algorithms assume uniform illumination. However, in real-world scenes, this is not often the case. Thus, we propose a novel framework for estimating the colors of multiple illuminants and their spatial distribution in the scene. We formulate this problem as an energy minimization task within a conditional random field over a set of local illuminant estimates. In order to quantitatively evaluate the proposed method, we created a novel data set of two-dominant-illuminant images comprised of laboratory, indoor, and outdoor scenes. Unlike prior work, our database includes accurate pixel-wise ground truth illuminant information. The performance of our method is evaluated on multiple data sets. Experimental results show that our framework clearly outperforms single illuminant estimators as well as a recently proposed multi-illuminant estimation approach.  
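The "set of local illuminant estimates" the CRF operates over can be illustrated with per-cell gray-world estimates; the CRF energy minimization itself is omitted here, and the grid size and gray-world assumption are placeholder choices rather than the paper's method.

```python
import numpy as np

def local_illuminant_estimates(image, grid=4):
    """Gray-world illuminant estimate on each cell of a grid -- the kind
    of local unary estimates a conditional random field can smooth into
    a spatial illuminant map for multi-illuminant scenes."""
    h, w, _ = image.shape
    estimates = np.zeros((grid, grid, 3))
    for i in range(grid):
        for j in range(grid):
            patch = image[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            mean = patch.reshape(-1, 3).mean(axis=0)
            estimates[i, j] = mean / np.linalg.norm(mean)  # chromaticity only
    return estimates

# Synthetic two-dominant-illuminant scene: reddish light on the left
# half, bluish light on the right half.
img = np.ones((64, 64, 3)) * 0.5
img[:, :32] *= np.array([1.0, 0.8, 0.6])
img[:, 32:] *= np.array([0.6, 0.8, 1.0])
est = local_illuminant_estimates(img)
```

On this synthetic scene the left-side cells recover a red-dominant chromaticity and the right-side cells a blue-dominant one, which is exactly the spatial structure a single-illuminant estimator cannot represent.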
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN 1057-7149 ISBN Medium  
Area Expedition Conference  
Notes CIC; LAMP; 600.074; 600.079 Approved no  
Call Number Admin @ si @ BRW2014 Serial 2451  
Permanent link to this record
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Muhammad Anwer Rao; Michael Felsberg; Carlo Gatta
Title Semantic Pyramids for Gender and Action Recognition Type Journal Article
Year 2014 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP  
Volume 23 Issue 8 Pages 3633-3645  
Keywords  
Abstract Person description is a challenging problem in computer vision. We investigated two major aspects of person description: 1) gender and 2) action recognition in still images. Most state-of-the-art approaches for gender and action recognition rely on the description of a single body part, such as face or full-body. However, relying on a single body part is suboptimal due to significant variations in scale, viewpoint, and pose in real-world images. This paper proposes a semantic pyramid approach for pose normalization. Our approach is fully automatic and based on combining information from full-body, upper-body, and face regions for gender and action recognition in still images. The proposed approach does not require any annotations for upper-body and face of a person. Instead, we rely on pretrained state-of-the-art upper-body and face detectors to automatically extract semantic information of a person. Given multiple bounding boxes from each body part detector, we then propose a simple method to select the best candidate bounding box, which is used for feature extraction. Finally, the extracted features from the full-body, upper-body, and face regions are combined into a single representation for classification. To validate the proposed approach for gender recognition, experiments are performed on three large data sets namely: 1) human attribute; 2) head-shoulder; and 3) proxemics. For action recognition, we perform experiments on four data sets most used for benchmarking action recognition in still images: 1) Sports; 2) Willow; 3) PASCAL VOC 2010; and 4) Stanford-40. Our experiments clearly demonstrate that the proposed approach, despite its simplicity, outperforms state-of-the-art methods for gender and action recognition.  
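The combination strategy in the abstract (pick the best candidate box per part detector, describe each region, concatenate) can be sketched as below. The gray-level histogram descriptor and the score-based selection are deliberate simplifications of the paper's features and candidate selection.

```python
import numpy as np

def select_best_box(boxes_with_scores):
    """Pick the highest-scoring candidate among a detector's boxes --
    a simple stand-in for the paper's candidate selection step."""
    return max(boxes_with_scores, key=lambda bs: bs[1])[0]

def region_histogram(image, box, bins=8):
    """Gray-level histogram of a region as a toy region descriptor."""
    x0, y0, x1, y1 = box
    patch = image[y0:y1, x0:x1]
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def semantic_pyramid(image, full_box, upper_candidates, face_candidates):
    """Concatenate descriptors of the full-body box and the best
    upper-body / face candidates into one pose-normalized feature."""
    upper = select_best_box(upper_candidates)
    face = select_best_box(face_candidates)
    return np.concatenate([region_histogram(image, b)
                           for b in (full_box, upper, face)])

img = np.random.default_rng(1).random((100, 60))
feat = semantic_pyramid(
    img,
    full_box=(0, 0, 60, 100),
    upper_candidates=[((0, 0, 60, 50), 0.9), ((5, 5, 55, 45), 0.4)],
    face_candidates=[((20, 0, 40, 20), 0.8)],
)
```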
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN 1057-7149 ISBN Medium  
Area Expedition Conference  
Notes CIC; LAMP; 601.160; 600.074; 600.079;MILAB Approved no  
Call Number Admin @ si @ KWR2014 Serial 2507  
Permanent link to this record
 

 
Author Marc Serra; Olivier Penacchio; Robert Benavente; Maria Vanrell; Dimitris Samaras
Title The Photometry of Intrinsic Images Type Conference Article
Year 2014 Publication 27th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 1494-1501  
Keywords  
Abstract Intrinsic characterization of scenes is often the best way to overcome the illumination variability artifacts that complicate most computer vision problems, from 3D reconstruction to object or material recognition. This paper examines the deficiency of existing intrinsic image models to accurately account for the effects of illuminant color and sensor characteristics in the estimation of intrinsic images and presents a generic framework which incorporates insights from color constancy research to the intrinsic image decomposition problem. The proposed mathematical formulation includes information about the color of the illuminant and the effects of the camera sensors, both of which modify the observed color of the reflectance of the objects in the scene during the acquisition process. By modeling these effects, we get a “truly intrinsic” reflectance image, which we call absolute reflectance, which is invariant to changes of illuminant or camera sensors. This model allows us to represent a wide range of intrinsic image decompositions depending on the specific assumptions on the geometric properties of the scene configuration and the spectral properties of the light source and the acquisition system, thus unifying previous models in a single general framework. We demonstrate that even partial information about sensors improves significantly the estimated reflectance images, thus making our method applicable for a wide range of sensors. We validate our general intrinsic image framework experimentally with both synthetic data and natural images.  
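The photometric intuition behind the abstract can be shown in its simplest narrow-band, single-illuminant form: the observed image is the per-pixel product of shading, illuminant color and reflectance. The paper's framework additionally models sensor characteristics; here the illuminant and shading are assumed known, so they can be divided back out to recover an illuminant-invariant ("absolute") reflectance.

```python
import numpy as np

# Image formation: observed = shading * illuminant * reflectance,
# with achromatic shading and a single global illuminant color.
rng = np.random.default_rng(2)
reflectance = rng.random((16, 16, 3))          # true surface albedo
shading = 0.2 + 0.8 * rng.random((16, 16, 1))  # achromatic shading field
illuminant = np.array([1.0, 0.9, 0.7])         # slightly yellowish light

observed = shading * illuminant * reflectance  # acquisition model
recovered = observed / (shading * illuminant)  # invert the known factors
```

In the synthetic case the recovered reflectance equals the true albedo; the hard part the paper addresses is that real decompositions must estimate these factors, and ignoring illuminant color or sensors biases the result.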
Address Columbus; Ohio; USA; June 2014  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference CVPR  
Notes CIC; 600.052; 600.051; 600.074 Approved no  
Call Number Admin @ si @ SPB2014 Serial 2506  
Permanent link to this record
 

 
Author M. Danelljan; Fahad Shahbaz Khan; Michael Felsberg; Joost Van de Weijer
Title Adaptive color attributes for real-time visual tracking Type Conference Article
Year 2014 Publication 27th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
Volume Issue Pages 1090 - 1097  
Keywords  
Abstract Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. Contrary to visual tracking, for object recognition and detection, sophisticated color features combined with luminance have been shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient and possess a certain amount of photometric invariance while maintaining high discriminative power. This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provide superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24% in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second.
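The "adaptive low-dimensional variant" amounts to projecting the high-dimensional color-attribute descriptors onto a few dimensions. The sketch below uses plain PCA on hypothetical 11-D color-name vectors; the paper instead adapts the projection online as part of the tracker, so this only illustrates the dimensionality-reduction flavor.

```python
import numpy as np

def reduce_color_features(features, dims=2):
    """Project per-pixel color descriptors onto their top principal
    components -- a plain-PCA stand-in for the adaptive low-dimensional
    color attributes learned online by the tracker."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:dims].T

rng = np.random.default_rng(3)
pixel_colors = rng.random((500, 11))   # e.g. 11-D color-name vectors (toy)
low_dim = reduce_color_features(pixel_colors, dims=2)
```

Working in 2 dimensions instead of 11 is what makes the per-frame detection cheap enough for the reported real-time frame rates.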
 
Address Columbus; Ohio; USA; June 2014
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference CVPR  
Notes CIC; LAMP; 600.074; 600.079 Approved no  
Call Number Admin @ si @ DKF2014 Serial 2509  
Permanent link to this record
 

 
Author Fahad Shahbaz Khan; Shida Beigpour; Joost Van de Weijer; Michael Felsberg
Title Painting-91: A Large Scale Database for Computational Painting Categorization Type Journal Article
Year 2014 Publication Machine Vision and Applications Abbreviated Journal MVAP  
Volume 25 Issue 6 Pages 1385-1397  
Keywords  
Abstract Computer analysis of visual art, especially paintings, is an interesting cross-disciplinary research domain. Most research on the analysis of paintings involves medium- to small-scale datasets with their own specific settings. Interestingly, significant progress has been made lately in the field of object and scene recognition. A key factor in this success is the introduction and availability of benchmark datasets for evaluation. Surprisingly, such a benchmark setup is still missing in the area of computational painting categorization. In this work, we propose a novel large-scale dataset of digital paintings. The dataset consists of paintings from 91 different painters. We further show three applications of our dataset, namely artist categorization, style classification and saliency detection. We investigate how local and global features popular in image classification perform for the tasks of artist and style categorization. For both categorization tasks, our experimental results suggest that combining multiple features significantly improves the final performance. We show that state-of-the-art computer vision methods can correctly attribute 50% of unseen paintings to their painters in a large dataset and correctly identify the artistic style in over 60% of the cases. Additionally, we explore the task of saliency detection on paintings and report experimental findings using state-of-the-art saliency estimation algorithms.
Address  
Corporate Author Thesis  
Publisher Springer Berlin Heidelberg Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN 0932-8092 ISBN Medium  
Area Expedition Conference  
Notes CIC; LAMP; 600.074; 600.079 Approved no  
Call Number Admin @ si @ KBW2014 Serial 2510  
Permanent link to this record
 

 
Author C. Alejandro Parraga; Jordi Roca; Dimosthenis Karatzas; Sophie Wuerger
Title Limitations of visual gamma corrections in LCD displays Type Journal Article
Year 2014 Publication Displays Abbreviated Journal Dis  
Volume 35 Issue 5 Pages 227–239  
Keywords Display calibration; Psychophysics; Perceptual; Visual gamma correction; Luminance matching; Observer-based calibration  
Abstract A method for estimating the non-linear gamma transfer function of liquid-crystal displays (LCDs) without the need for a photometric measurement device was described by Xiao et al. (2011) [1]. It relies on observers' judgments of visual luminance, obtained by presenting eight half-tone patterns with luminances from 1/9 to 8/9 of the maximum value of each colour channel. These half-tone patterns were distributed over the screen along both the vertical and horizontal viewing axes. We conducted a series of photometric and psychophysical measurements (consisting of the simultaneous presentation of half-tone patterns in each trial) to evaluate whether the angular dependency of the light generated by three different LCD technologies would bias the results of these gamma transfer function estimations. Our results show that there are significant differences between the gamma transfer functions measured and produced by observers at different viewing angles. We suggest appropriate modifications to the Xiao et al. paradigm to counterbalance these artefacts, which also have the advantage of shortening the time spent collecting the psychophysical measurements.
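The idealized model behind the Xiao et al. procedure is compact: if an observer matches a uniform grey level g to a half-tone pattern whose mean luminance fraction is f, then (g/g_max)**gamma = f, and gamma follows from a least-squares fit in log-log space. The sketch below simulates perfect matches on a display with gamma 2.2; the viewing-angle biases the paper reports are not simulated.

```python
import numpy as np

def estimate_gamma(matched_levels, fractions, level_max=255):
    """Estimate display gamma from observer matches: each half-tone of
    luminance fraction f (1/9 ... 8/9) is matched by a uniform grey
    level g with (g/level_max)**gamma = f, so gamma is the slope of
    log(f) against log(g/level_max), fitted through the origin."""
    x = np.log(np.asarray(matched_levels) / level_max)
    y = np.log(np.asarray(fractions))
    return float((x @ y) / (x @ x))   # least-squares slope through origin

# Simulate ideal matches on a display whose true gamma is 2.2:
fractions = np.arange(1, 9) / 9.0
true_gamma = 2.2
levels = 255 * fractions ** (1 / true_gamma)
gamma = estimate_gamma(levels, fractions)
```

With ideal matches the fit recovers gamma exactly; the paper's point is that real matches vary with viewing angle, so the recovered gamma depends on where on the LCD the patterns are shown.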
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC; DAG; 600.052; 600.077; 600.074 Approved no  
Call Number Admin @ si @ PRK2014 Serial 2511  
Permanent link to this record
 

 
Author C. Alejandro Parraga
Title Color Vision, Computational Methods for Type Book Chapter
Year 2014 Publication Encyclopedia of Computational Neuroscience Abbreviated Journal  
Volume Issue Pages 1-11  
Keywords Color computational vision; Computational neuroscience of color  
Abstract The study of color vision has been aided by a whole battery of computational methods that attempt to describe the mechanisms that lead to our perception of colors in terms of the information-processing properties of the visual system. Their scope is highly interdisciplinary, linking apparently dissimilar disciplines such as mathematics, physics, computer science, neuroscience, cognitive science, and psychology. Since the sensation of color is a feature of our brains, computational approaches usually include biological features of neural systems in their descriptions, from retinal light-receptor interaction to subcortical color opponency, cortical signal decoding, and color categorization. They produce hypotheses that are usually tested by behavioral or psychophysical experiments.  
Address  
Corporate Author Thesis  
Publisher Springer-Verlag Berlin Heidelberg Place of Publication Editor Dieter Jaeger; Ranu Jung  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN 978-1-4614-7320-6 Medium  
Area Expedition Conference  
Notes CIC; 600.074 Approved no  
Call Number Admin @ si @ Par2014 Serial 2512  
Permanent link to this record
 

 
Author Xim Cerda-Company; C. Alejandro Parraga; Xavier Otazu
Title Which tone-mapping is the best? A comparative study of tone-mapping perceived quality Type Abstract
Year 2014 Publication Perception Abbreviated Journal  
Volume 43 Issue Pages 106  
Keywords  
Abstract High-dynamic-range (HDR) imaging refers to methods designed to increase the brightness dynamic range available in standard digital imaging techniques. This increase is achieved by taking the same picture under different exposure values and mapping the intensity levels into a single image by way of a tone-mapping operator (TMO). Currently, there is no agreement on how to evaluate the quality of different TMOs. In this work we psychophysically evaluate 15 different TMOs, obtaining rankings based on the perceived properties of the resulting tone-mapped images. We performed two different experiments on a calibrated CRT display using 10 subjects: (1) a study of the internal relationships between grey-levels and (2) a pairwise comparison of the resulting 15 tone-mapped images. In (1), observers internally matched the grey-levels to a reference inside the tone-mapped images and in the real scene. In (2), observers performed a pairwise comparison of the tone-mapped images alongside the real scene. We obtained two rankings of the TMOs according to their performance. In (1) the best algorithm was iCAM by J. Kuang et al. (2007) and in (2) the best algorithm was the TMO by Krawczyk et al. (2005). Our results also show no correlation between these two rankings.
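What a TMO does can be shown with a minimal global operator in the style of Reinhard et al. (2002): scale the HDR luminance by a key value over the log-average, then compress with L / (1 + L). This is far simpler than the 15 operators compared in the study and is included only to make the compression concrete.

```python
import numpy as np

def global_tmo(luminance, key=0.18):
    """Minimal Reinhard-style global tone mapping: normalize by the
    scene's log-average luminance, then compress into [0, 1)."""
    log_average = np.exp(np.log(luminance + 1e-8).mean())
    scaled = key * luminance / log_average
    return scaled / (1.0 + scaled)

# An HDR signal spanning four orders of magnitude in luminance:
hdr = np.concatenate([np.full(100, 0.01), np.full(100, 100.0)])
ldr = global_tmo(hdr)
```

The output fits in [0, 1) and its contrast ratio is far smaller than the input's, which is exactly the dynamic-range compression whose perceived quality the experiments rank.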
 
Address  
Corporate Author Thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference ECVP  
Notes CIC; NEUROBIT; 600.074 Approved no  
Call Number Admin @ si @ CPO2014 Serial 2527  
Permanent link to this record
 

 
Author Ricard Balague
Title Exploring the combination of color cues for intrinsic image decomposition Type Report
Year 2014 Publication CVC Technical Report Abbreviated Journal  
Volume 178 Issue Pages  
Keywords  
Abstract Intrinsic image decomposition is a challenging problem that consists in separating an image into its physical characteristics: reflectance and shading. This problem can be solved in different ways, but most methods have combined information from several visual cues. In this work we describe an extension of an existing method proposed by Serra et al. which considers two color descriptors and combines them by means of a Markov Random Field. We analyze in depth the weak points of the method and we explore more possibilities to use in both descriptors. The proposed extension depends on the combination of the cues considered to overcome some of the limitations of the original method. Our approach is tested on the MIT dataset and Beigpour et al. dataset, which contain images of real objects acquired under controlled conditions and synthetic images respectively, with their corresponding ground truth.  
Address UAB; September 2014  
Corporate Author Thesis Master's thesis  
Publisher Place of Publication Editor  
Language Summary Language Original Title  
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition  
ISSN ISBN Medium  
Area Expedition Conference  
Notes CIC; 600.074 Approved no  
Call Number Admin @ si @ Bal2014 Serial 2579  
Permanent link to this record