Author Bogdan Raducanu; Fadi Dornaika
  Title Appearance-based Face Recognition Using A Supervised Manifold Learning Framework Type Conference Article
  Year 2012 Publication IEEE Workshop on the Applications of Computer Vision Abbreviated Journal
  Volume Issue Pages 465-470  
  Keywords  
  Abstract Many natural image sets, depicting objects whose appearance changes due to motion, pose, or light variations, can be considered samples of a low-dimensional nonlinear manifold embedded in the high-dimensional observation space (the space of all possible images). The main contribution of our work is a Supervised Laplacian Eigenmaps (S-LE) algorithm, which exploits class label information when mapping the original data into the embedded space. The proposed approach benefits from two important properties: i) it is discriminative, and ii) it adaptively selects the neighbors of a sample without using any predefined neighborhood size. Experiments were conducted on four face databases, and the results demonstrate that the proposed algorithm significantly outperforms many linear and nonlinear embedding techniques. Although we have focused on the face recognition problem, the proposed approach could also be extended to other categories of objects characterized by large variance in their appearance. (A minimal code sketch follows this record.)
  Address Breckenridge; CO; USA  
  Corporate Author Thesis  
  Publisher IEEE Xplore Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1550-5790 ISBN 978-1-4673-0233-3 Medium  
  Area Expedition Conference WACV  
  Notes OR;MV Approved no  
  Call Number Admin @ si @ RaD2012d Serial 1890  
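
The S-LE record above describes a label-aware variant of Laplacian Eigenmaps. The following Python sketch illustrates the general idea under simplifying assumptions: a fixed k-nearest-neighbor graph (the paper's adaptive neighbor selection is not reproduced) and a simple reweighting of same-class edges. It is not the authors' implementation.

import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def supervised_laplacian_eigenmaps(X, y, n_components=2, k=10, t=1.0, boost=2.0):
    """Embed X (n_samples x n_features) with a class-aware affinity graph."""
    D2 = cdist(X, X, "sqeuclidean")
    W = np.zeros_like(D2)
    for i in range(X.shape[0]):
        nn = np.argsort(D2[i])[1:k + 1]              # k nearest neighbors of sample i
        W[i, nn] = np.exp(-D2[i, nn] / t)            # heat-kernel edge weights
    W = np.maximum(W, W.T)                           # symmetrize the graph
    same_class = (y[:, None] == y[None, :])
    W = np.where(same_class, boost * W, W / boost)   # emphasize same-class edges
    D = np.diag(W.sum(axis=1))
    L = D - W                                        # unnormalized graph Laplacian
    # smallest nontrivial solutions of the generalized problem L v = lambda D v
    vals, vecs = eigh(L, D)
    return vecs[:, 1:n_components + 1]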
 

 
Author German Ros; Angel Sappa; Daniel Ponsa; Antonio Lopez
  Title Visual SLAM for Driverless Cars: A Brief Survey Type Conference Article
  Year 2012 Publication IEEE Workshop on Navigation, Perception, Accurate Positioning and Mapping for Intelligent Vehicles Abbreviated Journal
  Volume Issue Pages  
  Keywords SLAM  
  Abstract  
  Address Alcalá de Henares  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IVW  
  Notes ADAS Approved no  
  Call Number Admin @ si @ RSP2012; ADAS @ adas Serial 2019  
 

 
Author Jose Carlos Rubio; Joan Serrat; Antonio Lopez; Daniel Ponsa
  Title Multiple target tracking for intelligent headlights control Type Journal Article
  Year 2012 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS
  Volume 13 Issue 2 Pages 594-605  
  Keywords Intelligent Headlights  
  Abstract Intelligent vehicle lighting systems aim at automatically regulating the headlights' beam to illuminate as much of the road ahead as possible while avoiding dazzling other drivers. A key component of such a system is computer vision software that is able to distinguish blobs due to vehicles' headlights and rear lights from those due to road lamps and reflective elements such as poles and traffic signs. In previous work, we devised a set of specialized supervised classifiers to make such decisions based on blob features related to their intensity and shape. Despite the overall good performance, there remain challenging cases that have yet to be solved: notably, faint and tiny blobs corresponding to quite distant vehicles. In fact, for such distant blobs, classification decisions can only be taken after observing them for a few frames. Hence, incorporating tracking could improve the overall lighting system performance by enforcing the temporal consistency of the classifier decisions. Accordingly, this paper focuses on the problem of constructing blob tracks, which is essentially a multiple-target tracking (MTT) problem, but under two special conditions: we have to deal with frequent occlusions, as well as blob splits and merges. We approach it in a novel way by formulating the problem as maximum a posteriori inference on a Markov random field. The qualitative (in video form) and quantitative evaluation of our new MTT method shows good tracking results. In addition, the classification performance on the problematic blobs improves thanks to the proposed MTT algorithm.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1524-9050 ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number Admin @ si @ RLP2012; ADAS @ adas @ rsl2012g Serial 1877  
 

 
Author Santiago Segui; Michal Drozdzal; Fernando Vilariño; Carolina Malagelada; Fernando Azpiroz; Petia Radeva
  Title Categorization and Segmentation of Intestinal Content Frames for Wireless Capsule Endoscopy Type Journal Article
  Year 2012 Publication IEEE Transactions on Information Technology in Biomedicine Abbreviated Journal TITB
  Volume 16 Issue 6 Pages 1341-1352  
  Keywords  
  Abstract Wireless capsule endoscopy (WCE) is a device that allows direct visualization of the gastrointestinal tract with minimal discomfort for the patient, but at the price of a large amount of screening time. In order to reduce this time, several works have proposed to automatically remove all the frames showing intestinal content. These methods label frames as {intestinal content – clear} without discriminating between types of content (with different physiological meaning) or the portion of the image covered. In addition, since the presence of intestinal content has been identified as an indicator of intestinal motility, its accurate quantification has potential clinical relevance. In this paper, we present a method for the robust detection and segmentation of intestinal content in WCE images, together with its further discrimination between turbid liquid and bubbles. Our proposal is based on a twofold system. First, frames presenting intestinal content are detected by a support vector machine classifier using color and textural information. Second, intestinal content frames are segmented into {turbid, bubbles, and clear} regions. We show a detailed validation using a large dataset. Our system outperforms previous methods and, for the first time, discriminates between turbid liquid and bubbles. (A minimal code sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1089-7771 ISBN Medium  
  Area 800 Expedition Conference  
  Notes MILAB; MV; OR;SIAI Approved no  
  Call Number Admin @ si @ SDV2012 Serial 2124  
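
As a rough illustration of the first stage described above (frame-level detection of intestinal content with a support vector machine over color and textural information), the sketch below uses an HSV histogram plus local binary patterns as stand-in features. These descriptors and parameters are assumptions; the paper's exact features and its segmentation stage are not reproduced.

import numpy as np
from skimage.color import rgb2gray, rgb2hsv
from skimage.feature import local_binary_pattern
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def frame_descriptor(rgb):
    """Concatenate a coarse HSV color histogram with an LBP texture histogram."""
    hsv = rgb2hsv(rgb)
    color_hist, _ = np.histogramdd(hsv.reshape(-1, 3), bins=(8, 4, 4),
                                   range=((0, 1), (0, 1), (0, 1)))
    lbp = local_binary_pattern(rgb2gray(rgb), P=8, R=1, method="uniform")
    tex_hist, _ = np.histogram(lbp, bins=10, range=(0, 10))
    feat = np.concatenate([color_hist.ravel(), tex_hist]).astype(float)
    return feat / (feat.sum() + 1e-8)             # normalize the joint descriptor

def train_content_detector(frames, labels):
    """frames: list of RGB arrays; labels: 1 = intestinal content, 0 = clear."""
    X = np.stack([frame_descriptor(f) for f in frames])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    return clf.fit(X, labels)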
 

 
Author Antonio Hernandez; Carlo Gatta; Sergio Escalera; Laura Igual; Victoria Martin-Yuste; Manel Sabate; Petia Radeva
  Title Accurate coronary centerline extraction, caliber estimation and catheter detection in angiographies Type Journal Article
  Year 2012 Publication IEEE Transactions on Information Technology in Biomedicine Abbreviated Journal TITB
  Volume 16 Issue 6 Pages 1332-1340  
  Keywords  
  Abstract Segmentation of coronary arteries in X-ray angiography is a fundamental tool to evaluate arterial diseases and choose proper coronary treatment. The accurate segmentation of coronary arteries has become an important topic for the registration of different modalities, which gives physicians rapid access to complementary medical imaging information from Computed Tomography (CT) scans or Magnetic Resonance Imaging (MRI). In this paper, we propose an accurate, fully automatic algorithm based on Graph-cuts for vessel centerline extraction, caliber estimation, and catheter detection. Vesselness, geodesic paths, and a new multi-scale edgeness map are combined to customize the Graph-cuts approach to the segmentation of tubular structures, by means of a global optimization of the Graph-cuts energy function. Moreover, a novel supervised learning methodology that integrates local and contextual information is proposed for automatic catheter detection. We evaluate the method's performance on three datasets coming from different imaging systems. The method performs as well as the expert observer with respect to centerline detection and caliber estimation. Moreover, the method discriminates between arteries and the catheter with an accuracy of 96.5%, sensitivity of 72%, and precision of 97.4%. (A minimal code sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1089-7771 ISBN Medium  
  Area Expedition Conference  
  Notes MILAB;HuPBA Approved no  
  Call Number Admin @ si @ HGE2012 Serial 2141  
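
Two of the ingredients named in the abstract, a multi-scale vesselness map and geodesic paths, can be approximated with off-the-shelf functions. The sketch below combines Frangi vesselness with a minimum-cost path between two seed points as a rough centerline proxy; it assumes dark (contrast-filled) vessels and does not implement the paper's Graph-cuts energy, caliber estimation, or catheter detection.

import numpy as np
from skimage.filters import frangi
from skimage.graph import route_through_array

def rough_centerline(angiogram, start, end, sigmas=(1, 2, 3, 4)):
    """angiogram: 2-D float image; start/end: (row, col) seed points on the vessel."""
    # multi-scale vesselness; black_ridges=True assumes dark vessels on a bright background
    vesselness = frangi(angiogram, sigmas=sigmas, black_ridges=True)
    # travel cost is low where vesselness is high, so the path follows the vessel
    cost = 1.0 / (vesselness + 1e-6)
    path, total_cost = route_through_array(cost, start, end,
                                           fully_connected=True, geometric=True)
    return np.array(path), vesselness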
 

 
Author Mohammad Rouhani; Angel Sappa
  Title Implicit Polynomial Representation through a Fast Fitting Error Estimation Type Journal Article
  Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
  Volume 21 Issue 4 Pages 2089-2098  
  Keywords  
  Abstract This paper presents a simple distance estimation for implicit polynomial fitting. It is computed as the height of a simplex built between the point and the surface (i.e., a triangle in 2-D or a tetrahedron in 3-D), which is used as a coarse but reliable estimation of the orthogonal distance. The proposed distance can be described as a function of the coefficients of the implicit polynomial. Moreover, it is differentiable and has a smooth behavior. Hence, it can be used in any gradient-based optimization. In this paper, its use in a Levenberg-Marquardt framework is shown, which is particularly suited to nonlinear least-squares problems. The proposed estimation is a generalization of the gradient-based distance estimation, which is widely used in the literature. Experimental results on both 2-D and 3-D data sets are provided. Comparisons with state-of-the-art techniques are presented, showing the advantages of the proposed approach. (A minimal code sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1057-7149 ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number Admin @ si @ RoS2012b; ADAS @ adas @ Serial 1937  
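
For context, the sketch below evaluates the classical gradient-weighted (first-order) estimate |f(p)| / ||∇f(p)|| that the paper generalizes, for a 2-D implicit polynomial given by its coefficients; the simplex-height estimation proposed in the paper is not reproduced here.

import numpy as np

def poly_eval(coeffs, x, y):
    """coeffs[(i, j)] = c_ij for the monomial x**i * y**j."""
    return sum(c * x**i * y**j for (i, j), c in coeffs.items())

def poly_grad(coeffs, x, y):
    fx = sum(c * i * x**(i - 1) * y**j for (i, j), c in coeffs.items() if i > 0)
    fy = sum(c * j * x**i * y**(j - 1) for (i, j), c in coeffs.items() if j > 0)
    return np.array([fx, fy])

def gradient_weighted_distance(coeffs, point):
    """First-order approximation of the orthogonal distance from `point` to f = 0."""
    x, y = point
    g = poly_grad(coeffs, x, y)
    return abs(poly_eval(coeffs, x, y)) / (np.linalg.norm(g) + 1e-12)

# unit circle x^2 + y^2 - 1 = 0; the point (2, 0) lies at true distance 1
circle = {(2, 0): 1.0, (0, 2): 1.0, (0, 0): -1.0}
print(gradient_weighted_distance(circle, (2.0, 0.0)))  # 0.75, a first-order estimate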
 

 
Author J. Stöttinger; A. Hanbury; N. Sebe; Theo Gevers
  Title Sparse Color Interest Points for Image Retrieval and Object Categorization Type Journal Article
  Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
  Volume 21 Issue 5 Pages 2681-2692  
  Keywords  
  Abstract Interest point detection is an important research area in the field of image processing and computer vision. In particular, image retrieval and object categorization heavily rely on interest point detection, from which local image descriptors are computed for image matching. In general, interest points are based on luminance, and color has been largely ignored. However, the use of color increases the distinctiveness of interest points. The use of color may therefore provide selective search, reducing the total number of interest points used for image matching. This paper proposes color interest points for sparse image representation. To reduce the sensitivity to varying imaging conditions, light-invariant interest points are introduced. Color statistics based on occurrence probability lead to color-boosted points, which are obtained through saliency-based feature selection. Furthermore, a principal component analysis based scale selection method is proposed, which gives a robust scale estimation per interest point. Large-scale experiments show that the proposed color interest point detector has higher repeatability than a luminance-based one. Furthermore, in the context of image retrieval, a reduced and predictable number of color features shows an increase in performance compared to state-of-the-art interest points. Finally, in the context of object recognition, for the Pascal VOC 2007 challenge, our method gives comparable performance to state-of-the-art methods while using only a small fraction of the features, reducing the computing time considerably.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1057-7149 ISBN Medium  
  Area Expedition Conference  
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ SHS2012 Serial 1847  
 

 
Author R. Valenti; Theo Gevers
  Title Combining Head Pose and Eye Location Information for Gaze Estimation Type Journal Article
  Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
  Volume 21 Issue 2 Pages 802-815  
  Keywords  
  Abstract Head pose and eye location for gaze estimation have been studied separately in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1057-7149 ISBN Medium  
  Area Expedition Conference  
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ VaG 2012b Serial 1851  
 

 
Author Arjan Gijsenij; R. Lu; Theo Gevers; De Xu
  Title Color Constancy for Multiple Light Sources Type Journal Article
  Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
  Volume 21 Issue 2 Pages 697-707  
  Keywords  
  Abstract Color constancy algorithms are generally based on the simplifying assumption that the spectral distribution of a light source is uniform across scenes. However, in reality, this assumption is often violated due to the presence of multiple light sources. In this paper, we address more realistic scenarios where the uniform light-source assumption is too restrictive. First, a methodology is proposed to extend existing algorithms by applying color constancy locally to image patches, rather than globally to the entire image. After local (patch-based) illuminant estimation, these estimates are combined into more robust estimations, and a local correction is applied based on a modified diagonal model. Quantitative and qualitative experiments on spectral and real images show that the proposed methodology reduces the influence of two light sources simultaneously present in one scene. If the chromatic difference between these two illuminants is more than 1°, the proposed framework outperforms algorithms based on the uniform light-source assumption (with an error reduction of up to approximately 30%). Otherwise, when the chromatic difference is less than 1° and the scene can be considered to contain one (approximately) uniform light source, the performance of the proposed framework is similar to that of global color constancy methods. (A minimal code sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1057-7149 ISBN Medium  
  Area Expedition Conference  
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ GLG2012a Serial 1852  
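
A minimal sketch of the local strategy described above: estimate an illuminant per image patch (here with a plain grey-world assumption) and apply a per-patch diagonal (von Kries) correction. The patch size and the grey-world estimator are illustrative choices, not the paper's configuration, and the combination step into more robust estimates is omitted.

import numpy as np

def local_grey_world_correction(image, patch=32):
    """image: H x W x 3 float array in [0, 1]; returns a locally corrected copy."""
    out = image.copy()
    h, w, _ = image.shape
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            block = image[r:r + patch, c:c + patch]
            # grey-world estimate of the local illuminant: mean RGB of the patch
            illum = block.reshape(-1, 3).mean(axis=0) + 1e-6
            illum = illum / np.linalg.norm(illum) * np.sqrt(3)   # preserve brightness
            # diagonal (von Kries) correction of the patch
            out[r:r + patch, c:c + patch] = np.clip(block / illum, 0.0, 1.0)
    return out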
 

 
Author Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers
  Title A Statistical Method for 2D Facial Landmarking Type Journal Article
  Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
  Volume 21 Issue 2 Pages 844-858  
  Keywords  
  Abstract Many facial-analysis approaches rely on robust and accurate automatic facial landmarking to function correctly. In this paper, we describe a statistical method for automatic facial-landmark localization. Our landmarking relies on a parsimonious mixture model of Gabor wavelet features, computed in coarse-to-fine fashion and complemented with a shape prior. We assess the accuracy and the robustness of the proposed approach in extensive cross-database conditions conducted on four face data sets (Face Recognition Grand Challenge, Cohn-Kanade, Bosphorus, and BioID). Our method has 99.33% accuracy on the Bosphorus database and 97.62% accuracy on the BioID database on average, which improves the state of the art. We show that the method is not significantly affected by low-resolution images, small rotations, facial expressions, and natural occlusions such as beard and mustache. We further test the goodness of the landmarks in a facial expression recognition application and report landmarking-induced improvement over baseline on two separate databases for video-based expression recognition (Cohn-Kanade and BU-4DFE). (A minimal code sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1057-7149 ISBN Medium  
  Area Expedition Conference  
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ DSG 2012 Serial 1853  
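
The sketch below illustrates only the appearance side of the method: Gabor wavelet responses around a candidate location modeled with a small Gaussian mixture, whose likelihood can score landmark candidates. The filter bank, window size, and mixture size are assumptions; the coarse-to-fine search and the shape prior of the paper are not reproduced.

import numpy as np
from scipy.signal import convolve2d
from skimage.filters import gabor_kernel
from sklearn.mixture import GaussianMixture

def gabor_patch_features(gray, row, col, half=12,
                         freqs=(0.2, 0.35), thetas=(0.0, np.pi / 4, np.pi / 2)):
    """Mean Gabor response magnitudes in a window centered at (row, col); the window
    is assumed to lie inside the image."""
    patch = gray[row - half:row + half, col - half:col + half]
    feats = []
    for f in freqs:
        for t in thetas:
            kernel = np.real(gabor_kernel(frequency=f, theta=t))
            resp = convolve2d(patch, kernel, mode="same")
            feats.append(np.abs(resp).mean())
    return np.array(feats)

def fit_landmark_model(positive_windows):
    """positive_windows: list of (gray_image, row, col) centered on one landmark type."""
    X = np.stack([gabor_patch_features(g, r, c) for g, r, c in positive_windows])
    gmm = GaussianMixture(n_components=3, covariance_type="diag").fit(X)
    return gmm   # gmm.score_samples(...) gives a likelihood per candidate location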
 

 
Author Javier Vazquez; Maria Vanrell; Ramon Baldrich; Francesc Tous
  Title Color Constancy by Category Correlation Type Journal Article
  Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
  Volume 21 Issue 4 Pages 1997-2007  
  Keywords  
  Abstract Finding color representations which are stable to illuminant changes is still an open problem in computer vision. Until now, most approaches have been based on physical constraints or statistical assumptions derived from the scene, while very little attention has been paid to the effects that the selected illuminants have on the final color image representation. The novelty of this work is to propose perceptual constraints that are computed on the corrected images. We define the category hypothesis, which weights the set of feasible illuminants according to their ability to map the corrected image onto specific colors. Here we choose these colors as the universal color categories related to basic linguistic terms, which have been psychophysically measured. These color categories encode natural color statistics, and their relevance across different cultures is indicated by the fact that they have received a common color name. From this category hypothesis we propose a fast implementation that allows the sampling of a large set of illuminants. Experiments prove that our method rivals current state-of-the-art performance without the need for training algorithmic parameters. Additionally, the method can be used as a framework to insert top-down information from other sources, thus opening further research directions in solving for color constancy.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1057-7149 ISBN Medium  
  Area Expedition Conference  
  Notes CIC Approved no  
  Call Number Admin @ si @ VVB2012 Serial 1999  
 

 
Author Marina Alberti; Simone Balocco; Carlo Gatta; Francesco Ciompi; Oriol Pujol; Joana Silva; Xavier Carrillo; Petia Radeva
  Title Automatic Bifurcation Detection in Coronary IVUS Sequences Type Journal Article
  Year 2012 Publication IEEE Transactions on Biomedical Engineering Abbreviated Journal TBME
  Volume 59 Issue 4 Pages 1022-2031  
  Keywords  
  Abstract In this paper, we present a fully automatic method which identifies every bifurcation in an intravascular ultrasound (IVUS) sequence, the corresponding frames, the angular orientation with respect to the IVUS acquisition, and the extension. This goal is reached using a two-level classification scheme: first, a classifier is applied to a set of textural features extracted from each image of a sequence. A comparison among three state-of-the-art discriminative classifiers (AdaBoost, random forest, and support vector machine) is performed to identify the most suitable method for the branching detection task. Second, the results are improved by exploiting contextual information using a multiscale stacked sequential learning scheme. The results are then successively refined using a priori information about branching dimensions and geometry. The proposed approach provides a robust tool for the quick review of pullback sequences, facilitating the evaluation of the lesion at bifurcation sites. The proposed method reaches an F-measure score of 86.35%, while the F-measure scores for inter- and intraobserver variability are 71.63% and 76.18%, respectively. The obtained results are positive, especially considering that the branching detection task is very challenging due to the high variability in bifurcation dimensions and appearance. (A minimal code sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0018-9294 ISBN Medium  
  Area Expedition Conference  
  Notes MILAB;HuPBA Approved no  
  Call Number Admin @ si @ ABG2012 Serial 1996  
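
A minimal sketch of the two-level scheme mentioned in the abstract: a base per-frame classifier whose out-of-fold scores, smoothed over neighboring frames at several temporal scales, are appended to the features of a second-stage classifier (a multiscale stacked sequential learning step). The classifiers, window sizes, and the use of random forests for both levels are illustrative assumptions.

import numpy as np
from scipy.ndimage import uniform_filter1d
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def contextual_features(scores, scales=(3, 9, 27)):
    """Average per-frame scores over windows of several temporal scales."""
    return np.stack([uniform_filter1d(scores, size=s, mode="nearest")
                     for s in scales], axis=1)

def fit_stacked_sequential(X, y):
    """X: frames x features, ordered in pullback time; y: 1 = bifurcation frame."""
    base = RandomForestClassifier(n_estimators=200, random_state=0)
    # out-of-fold scores avoid leaking the base model's training labels
    scores = cross_val_predict(base, X, y, cv=5, method="predict_proba")[:, 1]
    X_ext = np.hstack([X, contextual_features(scores)])
    top = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_ext, y)
    base.fit(X, y)
    return base, top   # at test time: base scores -> contextual features -> top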
 

 
Author Yunchao Gong; Svetlana Lazebnik; Albert Gordo; Florent Perronnin
  Title Iterative quantization: A procrustean approach to learning binary codes for Large-Scale Image Retrieval Type Journal Article
  Year 2012 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
  Volume 35 Issue 12 Pages 2916-2929  
  Keywords  
  Abstract This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multi-class spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or “classemes” on the ImageNet dataset. (A minimal code sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN 978-1-4577-0394-2 Medium  
  Area Expedition Conference  
  Notes DAG Approved no  
  Call Number Admin @ si @ GLG 2012b Serial 2008  
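
The ITQ procedure summarized above lends itself to a compact sketch: PCA-project the zero-centered data, then alternate between binarizing and updating the rotation via an orthogonal Procrustes step. The bit length and iteration count below are illustrative; this is a sketch of the algorithm as described in the abstract, not the authors' reference code.

import numpy as np

def itq_binary_codes(X, n_bits=32, n_iters=50, seed=0):
    """X: n_samples x n_features. Returns binary codes and the projection matrix."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                           # zero-center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False) # PCA directions
    V = Xc @ Vt[:n_bits].T                            # project to n_bits dimensions
    R, _ = np.linalg.qr(rng.standard_normal((n_bits, n_bits)))  # random orthogonal init
    for _ in range(n_iters):
        B = np.sign(V @ R)                            # fix R, update the binary codes
        B[B == 0] = 1
        U, _, Wt = np.linalg.svd(B.T @ V)             # fix B, orthogonal Procrustes for R
        R = (U @ Wt).T
    codes = (V @ R) > 0
    return codes, Vt[:n_bits].T @ R                   # project-then-rotate matrix

# usage: codes, P = itq_binary_codes(X); new_codes = ((X_new - X.mean(axis=0)) @ P) > 0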
 

 
Author R. Valenti; Theo Gevers
  Title Accurate Eye Center Location through Invariant Isocentric Patterns Type Journal Article
  Year 2012 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
  Volume 34 Issue 9 Pages 1785-1798  
  Keywords  
  Abstract Locating the center of the eyes allows for valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and impossible to use on standard (i.e., visible wavelength), low-resolution images of eyes. Systems based solely on appearance are proposed in the literature, but their accuracy does not allow us to accurately locate and distinguish eye center movements in these low-resolution settings. Our aim is to bridge this gap by locating the center of the eye within the area of the pupil on low-resolution images taken from a webcam or a similar device. The proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve in-plane rotational invariance, and to keep computational costs low. To further gain scale invariance, the approach is applied to a scale-space pyramid. In this paper, we extensively test our approach for its robustness to changes in illumination, head pose, scale, occlusion, and eye rotation. We demonstrate that our system can achieve a significant improvement in accuracy over state-of-the-art techniques for eye center location in standard low-resolution imagery. (A minimal code sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium  
  Area Expedition Conference  
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ VaG 2012a Serial 1849  
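
The sketch below illustrates the isophote center-voting idea underlying the method: each pixel votes for the center of its osculating circle, with votes weighted by curvedness. This is a rough reading of the approach; the paper's selection of dark (pupil) centers, scale-space pyramid, and refinement steps are omitted, and sigma is an illustrative choice.

import numpy as np
from scipy.ndimage import gaussian_filter

def isophote_center_map(gray, sigma=2.0):
    """gray: 2-D float image of an eye region; returns an accumulator of center votes."""
    Lx  = gaussian_filter(gray, sigma, order=(0, 1))
    Ly  = gaussian_filter(gray, sigma, order=(1, 0))
    Lxx = gaussian_filter(gray, sigma, order=(0, 2))
    Lyy = gaussian_filter(gray, sigma, order=(2, 0))
    Lxy = gaussian_filter(gray, sigma, order=(1, 1))
    denom = Ly**2 * Lxx - 2.0 * Lx * Lxy * Ly + Lx**2 * Lyy + 1e-9
    Dx = -Lx * (Lx**2 + Ly**2) / denom       # displacement toward the isophote center
    Dy = -Ly * (Lx**2 + Ly**2) / denom
    curvedness = np.sqrt(Lxx**2 + 2.0 * Lxy**2 + Lyy**2)
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy = np.clip(np.rint(ys + Dy), 0, h - 1).astype(int)
    cx = np.clip(np.rint(xs + Dx), 0, w - 1).astype(int)
    acc = np.zeros_like(gray)
    np.add.at(acc, (cy, cx), curvedness)     # accumulate curvedness-weighted votes
    return gaussian_filter(acc, sigma)       # argmax of the map ~ eye center estimate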
 

 
Author Arjan Gijsenij; Theo Gevers; Joost Van de Weijer
  Title Improving Color Constancy by Photometric Edge Weighting Type Journal Article
  Year 2012 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
  Volume 34 Issue 5 Pages 918-929  
  Keywords  
  Abstract Edge-based color constancy methods make use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as material, shadow, and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation. Therefore, in this paper, an extensive analysis is provided of the effect of different edge types on the performance of edge-based color constancy methods. First, an edge-based taxonomy is presented, classifying edge types based on their photometric properties (e.g., material, shadow-geometry, and highlights). Then, a performance evaluation of edge-based color constancy is provided using these different edge types. From this performance evaluation it is derived that specular and shadow edge types are more valuable than material edges for the estimation of the illuminant. To this end, the (iterative) weighted Grey-Edge algorithm is proposed, in which these edge types are more emphasized for the estimation of the illuminant. Images recorded under controlled circumstances demonstrate that the proposed iterative weighted Grey-Edge algorithm based on highlights reduces the median angular error by approximately 25%. In an uncontrolled environment, improvements in angular error of up to 11% are obtained with respect to regular edge-based color constancy. (A minimal code sketch follows this record.)
  Address Los Alamitos; CA; USA;  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium  
  Area Expedition Conference  
  Notes CIC;ISE Approved no  
  Call Number Admin @ si @ GGW2012 Serial 1850  
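
A minimal sketch of a weighted Grey-Edge illuminant estimate: per channel, take a Minkowski p-norm of Gaussian-smoothed derivative magnitudes, optionally multiplied by a per-pixel weight map that emphasizes particular edge types (for example, a specularity mask supplied by the caller). The weight map and the parameters p and sigma are assumptions; the paper's photometric edge classification is not reproduced.

import numpy as np
from scipy.ndimage import gaussian_filter

def weighted_grey_edge(image, p=6, sigma=2.0, weights=None):
    """image: H x W x 3 float array; weights: optional H x W map; returns a unit RGB."""
    if weights is None:
        weights = np.ones(image.shape[:2])
    e = np.zeros(3)
    for c in range(3):
        dx = gaussian_filter(image[..., c], sigma, order=(0, 1))
        dy = gaussian_filter(image[..., c], sigma, order=(1, 0))
        grad_mag = np.sqrt(dx**2 + dy**2)
        e[c] = np.sum(weights * grad_mag**p) ** (1.0 / p)   # weighted Minkowski norm
    return e / (np.linalg.norm(e) + 1e-12)

# usage: est = weighted_grey_edge(img); corrected = np.clip(img / (est * np.sqrt(3)), 0, 1)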