Author Koen E.A. van de Sande; Theo Gevers; C.G.M. Snoek
  Title Evaluating Color Descriptors for Object and Scene Recognition Type Journal Article
  Year 2010 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
  Volume 32 Issue 9 Pages 1582 - 1596  
  Keywords  
  Abstract Impact factor: 5.308
Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge.
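For readers who want to experiment with the descriptors discussed in this abstract, the opponent color transform underlying OpponentSIFT is simple to compute. The following NumPy sketch shows only that transform (the full descriptor software is linked in the abstract); the function name and the test image are ours, not the paper's.

```python
import numpy as np

def opponent_channels(rgb):
    """Convert an RGB image (H x W x 3, float) to the opponent color space:
    O1 = (R - G) / sqrt(2), O2 = (R + G - 2B) / sqrt(6), O3 = (R + G + B) / sqrt(3).
    SIFT computed on these channels yields the OpponentSIFT descriptor family."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2.0)            # chromatic channel
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)  # chromatic channel
    o3 = (r + g + b) / np.sqrt(3.0)        # intensity channel
    return np.stack([o1, o2, o3], axis=-1)

img = np.random.rand(64, 64, 3)            # stand-in for a real image
print(opponent_channels(img).shape)        # (64, 64, 3)
```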
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium
  Area Expedition Conference  
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ SGS2010 Serial 1846  
 

 
Author R. Valenti; Theo Gevers
  Title Accurate Eye Center Location through Invariant Isocentric Patterns Type Journal Article
  Year 2012 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
  Volume 34 Issue 9 Pages 1785-1798  
  Keywords  
  Abstract Impact factor 2010: 5.308
Impact factor 2011/12?: 5.96
Locating the center of the eyes allows for valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and impossible to use on standard (i.e., visible wavelength), low-resolution images of eyes. Systems based solely on appearance are proposed in the literature, but their accuracy does not allow us to accurately locate and distinguish eye center movements in these low-resolution settings. Our aim is to bridge this gap by locating the center of the eye within the area of the pupil on low-resolution images taken from a webcam or a similar device. The proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve in-plane rotational invariance, and to keep computational costs low. To further gain scale invariance, the approach is applied to a scale space pyramid. In this paper, we extensively test our approach for its robustness to changes in illumination, head pose, scale, occlusion, and eye rotation. We demonstrate that our system can achieve a significant improvement in accuracy over state-of-the-art techniques for eye center location in standard low-resolution imagery.
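A rough sketch of the voting mechanism described above: each pixel is displaced toward the center of its isophote (along the gradient, scaled by the isophote radius) and casts a vote weighted by curvedness; the accumulator maximum is taken as the eye center. This is a simplified illustration assuming the standard isophote-curvature formulas; the paper's scale-space pyramid, sign selection for dark centers, and post-processing are omitted, and the smoothing scale is arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def isophote_center(gray, sigma=2.0):
    """Vote-based center estimate from isophote displacement vectors (sketch)."""
    Lx  = gaussian_filter(gray, sigma, order=(0, 1))
    Ly  = gaussian_filter(gray, sigma, order=(1, 0))
    Lxx = gaussian_filter(gray, sigma, order=(0, 2))
    Lyy = gaussian_filter(gray, sigma, order=(2, 0))
    Lxy = gaussian_filter(gray, sigma, order=(1, 1))

    grad2 = Lx**2 + Ly**2
    denom = Ly**2 * Lxx - 2 * Lx * Ly * Lxy + Lx**2 * Lyy
    denom[np.abs(denom) < 1e-8] = 1e-8              # avoid division by zero

    # Displacement of each pixel toward the center of its isophote.
    dx = -Lx * grad2 / denom
    dy = -Ly * grad2 / denom
    curvedness = np.sqrt(Lxx**2 + 2 * Lxy**2 + Lyy**2)  # vote weight

    h, w = gray.shape
    acc = np.zeros_like(gray)
    ys, xs = np.mgrid[0:h, 0:w]
    cx = np.clip(np.round(xs + dx).astype(int), 0, w - 1)
    cy = np.clip(np.round(ys + dy).astype(int), 0, h - 1)
    np.add.at(acc, (cy, cx), curvedness)             # accumulate weighted votes

    return np.unravel_index(np.argmax(gaussian_filter(acc, sigma)), acc.shape)

eye_patch = np.random.rand(60, 80)   # stand-in for a cropped eye region
print(isophote_center(eye_patch))    # (row, col) of the strongest isocenter
```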
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium
  Area Expedition Conference  
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ VaG 2012a Serial 1849  
 

 
Author Arjan Gijsenij; Theo Gevers; Joost Van de Weijer
  Title Improving Color Constancy by Photometric Edge Weighting Type Journal Article
  Year 2012 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
  Volume 34 Issue 5 Pages 918-929  
  Keywords  
  Abstract Edge-based color constancy methods make use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as material, shadow and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation. Therefore, in this paper, an extensive analysis is provided of different edge types on the performance of edge-based color constancy methods. First, an edge-based taxonomy is presented classifying edge types based on their photometric properties (e.g. material, shadow-geometry and highlights). Then, a performance evaluation of edge-based color constancy is provided using these different edge types. From this performance evaluation it is derived that specular and shadow edge types are more valuable than material edges for the estimation of the illuminant. To this end, the (iterative) weighted Grey-Edge algorithm is proposed in which these edge types are more emphasized for the estimation of the illuminant. Images that are recorded under controlled circumstances demonstrate that the proposed iterative weighted Grey-Edge algorithm based on highlights reduces the median angular error by approximately 25%. In an uncontrolled environment, improvements in angular error up to 11% are obtained with respect to regular edge-based color constancy.
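The Grey-Edge family estimates the illuminant from a Minkowski norm of image derivatives, and the weighted variant studied here emphasizes selected edge types before pooling. Below is a minimal NumPy sketch of that weighted estimate in which the photometric weight map is simply taken as an input, since computing specular/shadow weights is the substance of the paper and is not reproduced; the function name and parameter values are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def weighted_grey_edge(img, weights=None, sigma=1.0, p=7, kappa=2.0):
    """Illuminant estimate from weighted, Gaussian-smoothed image derivatives.
    img: H x W x 3 float RGB; weights: H x W map emphasizing e.g. specular edges."""
    if weights is None:
        weights = np.ones(img.shape[:2])
    e = np.zeros(3)
    for c in range(3):
        dx = gaussian_filter(img[..., c], sigma, order=(0, 1))
        dy = gaussian_filter(img[..., c], sigma, order=(1, 0))
        mag = np.sqrt(dx**2 + dy**2) * weights**kappa         # emphasize weighted edges
        e[c] = np.power(np.mean(np.power(mag, p)), 1.0 / p)   # Minkowski p-norm
    return e / np.linalg.norm(e)   # unit-norm illuminant color

img = np.random.rand(120, 160, 3)
print(weighted_grey_edge(img))     # with uniform weights this reduces to plain Grey-Edge
```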
  Address Los Alamitos, CA, USA
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium
  Area Expedition Conference  
  Notes CIC;ISE Approved no  
  Call Number Admin @ si @ GGW2012 Serial 1850  
 

 
Author David Vazquez; Javier Marin; Antonio Lopez; Daniel Ponsa; David Geronimo
  Title Virtual and Real World Adaptation for Pedestrian Detection Type Journal Article
  Year 2014 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 36 Issue 4 Pages 797-809  
  Keywords Domain Adaptation; Pedestrian Detection  
  Abstract Pedestrian detection is of paramount interest for many applications. Most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a human-intensive and subjective task worth minimizing. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? Conducted experiments show that virtual-world based training can provide excellent testing accuracy in the real world, but it can also suffer the dataset shift problem as real-world based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy as when training with many human-provided pedestrian annotations and testing with real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector.
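The core of the adaptation setting described above is a classifier trained on many virtual-world samples plus a few annotated real-world samples. The snippet below is only a generic illustration of that source-plus-target mixing with a linear SVM on precomputed features; it is not the V-AYLA framework itself, and all arrays are random placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Placeholder features: many labeled virtual-world samples, fewer real-world ones.
X_virtual, y_virtual = rng.normal(size=(5000, 256)), rng.integers(0, 2, 5000)
X_real,    y_real    = rng.normal(size=(200, 256)),  rng.integers(0, 2, 200)

# Use only a small annotated portion of the real (target) domain for training.
X_adapt, y_adapt = X_real[:50], y_real[:50]
X_test,  y_test  = X_real[50:], y_real[50:]

# Domain-mixed training set: all virtual samples plus the few real samples.
clf = LinearSVC(C=0.01, max_iter=5000)
clf.fit(np.vstack([X_virtual, X_adapt]), np.concatenate([y_virtual, y_adapt]))
print(clf.score(X_test, y_test))      # accuracy in the real-world (target) domain
```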
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium
  Area Expedition Conference  
  Notes ADAS; 600.057; 600.054; 600.076 Approved no  
  Call Number ADAS @ adas @ VML2014 Serial 2275  
 

 
Author Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga
  Title Low-level SpatioChromatic Grouping for Saliency Estimation Type Journal Article
  Year 2013 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 35 Issue 11 Pages 2810-2816  
  Keywords  
  Abstract We propose a saliency model termed SIM (saliency by induction mechanisms), which is based on a low-level spatiochromatic model that has successfully predicted chromatic induction phenomena. In so doing, we hypothesize that the low-level visual mechanisms that enhance or suppress image detail are also responsible for making some image regions more salient. Moreover, SIM adds geometrical grouplets to enhance complex low-level features such as corners, and suppress relatively simpler features such as edges. Since our model has been fitted on psychophysical chromatic induction data, it is largely nonparametric. SIM outperforms state-of-the-art methods in predicting eye fixations on two datasets and using two metrics.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium
  Area Expedition Conference  
  Notes CIC; 600.051; 600.052; 605.203 Approved no  
  Call Number Admin @ si @ MVO2013 Serial 2289  
 

 
Author Yunchao Gong; Svetlana Lazebnik; Albert Gordo; Florent Perronnin
  Title Iterative Quantization: A Procrustean Approach to Learning Binary Codes for Large-Scale Image Retrieval Type Journal Article
  Year 2012 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 35 Issue 12 Pages 2916-2929  
  Keywords  
  Abstract This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multi-class spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or “classemes” on the ImageNet dataset.  
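The alternating minimization described in this abstract is compact enough to sketch directly: after PCA, alternate between assigning binary codes and solving an orthogonal Procrustes problem for the rotation. The following sketch follows that description under our own variable names and default settings; it is illustrative, not the authors' released code.

```python
import numpy as np

def itq(X, n_bits=32, n_iter=50, seed=0):
    """Iterative quantization: learn a rotation R so that sign(X_pca @ R)
    gives binary codes with low quantization error."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)                       # zero-center the data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = X @ Vt[:n_bits].T                        # PCA projection to n_bits dims
    R, _ = np.linalg.qr(rng.normal(size=(n_bits, n_bits)))  # random orthogonal init
    for _ in range(n_iter):
        B = np.sign(V @ R)                       # fix R, update binary codes
        U, _, Wt = np.linalg.svd(V.T @ B)        # fix B, orthogonal Procrustes for R
        R = U @ Wt
    return (V @ R > 0), Vt[:n_bits], R           # codes + projection for new data

codes, P, R = itq(np.random.randn(1000, 128), n_bits=32)
print(codes.shape, codes.dtype)                  # (1000, 32) bool
```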
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN 978-1-4577-0394-2 Medium
  Area Expedition Conference  
  Notes DAG Approved no  
  Call Number Admin @ si @ GLG 2012b Serial 2008  
 

 
Author Carlo Gatta; Francesco Ciompi
  Title Stacked Sequential Scale-Space Taylor Context Type Journal Article
  Year 2014 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 36 Issue 8 Pages 1694-1700  
  Keywords  
  Abstract We analyze sequential image labeling methods that sample the posterior label field in order to gather contextual information. We propose an effective method that extracts local Taylor coefficients from the posterior at different scales. Results show that our proposal outperforms state-of-the-art methods on MSRC-21, CAMVID, eTRIMS8 and KAIST2 data sets.  
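A hedged sketch of the context extraction suggested above: given per-class posterior probability maps from a first-stage classifier, sample local Taylor terms (the smoothed value plus first- and second-order Gaussian derivatives) at several scales and stack them as extra features for a second-stage classifier. The scales and derivative orders below are arbitrary illustration choices, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def taylor_context(posteriors, sigmas=(1.0, 2.0, 4.0)):
    """posteriors: H x W x C per-class probability maps.
    Returns H x W x (C * len(sigmas) * 6) context features: for each class and
    scale, the smoothed value and its 1st/2nd order Gaussian derivatives."""
    orders = [(0, 0), (0, 1), (1, 0), (0, 2), (2, 0), (1, 1)]  # Taylor terms up to order 2
    feats = []
    for c in range(posteriors.shape[-1]):
        for s in sigmas:
            for o in orders:
                feats.append(gaussian_filter(posteriors[..., c], s, order=o))
    return np.stack(feats, axis=-1)

post = np.random.rand(48, 64, 3)          # stand-in posterior maps for 3 classes
print(taylor_context(post).shape)         # (48, 64, 54)
```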
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium
  Area Expedition Conference  
  Notes LAMP; MILAB; 601.160; 600.079 Approved no  
  Call Number Admin @ si @ GaC2014 Serial 2466  
 

 
Author Jon Almazan; Albert Gordo; Alicia Fornes; Ernest Valveny
  Title Word Spotting and Recognition with Embedded Attributes Type Journal Article
  Year 2014 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 36 Issue 12 Pages 2552 - 2566  
  Keywords  
  Abstract This article addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks.  
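Once word images and text strings are embedded in the same subspace, both spotting and recognition reduce to nearest-neighbor search, which is the property the abstract emphasizes. The sketch below shows only that final retrieval step on placeholder embeddings; learning the attribute-based embeddings and the common subspace is the substance of the paper and is not reproduced.

```python
import numpy as np

def spot(query_vec, image_vecs):
    """Rank word images by cosine similarity to an embedded query string."""
    q = query_vec / np.linalg.norm(query_vec)
    X = image_vecs / np.linalg.norm(image_vecs, axis=1, keepdims=True)
    return np.argsort(-(X @ q))             # best matches first

# Placeholder common-subspace embeddings (dimensionality chosen arbitrarily).
images  = np.random.randn(10000, 96)        # embedded word images in the dataset
query   = np.random.randn(96)               # embedded query string
print(spot(query, images)[:10])             # indices of the ten best-matching word images
```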
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium
  Area Expedition Conference  
  Notes DAG; 600.056; 600.045; 600.061; 602.006; 600.077 Approved no  
  Call Number Admin @ si @ AGF2014a Serial 2483  
 

 
Author Lorenzo Seidenari; Giuseppe Serra; Andrew Bagdanov; Alberto del Bimbo
  Title Local pyramidal descriptors for image recognition Type Journal Article
  Year 2014 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 36 Issue 5 Pages 1033 - 1040  
  Keywords Object categorization; local features; kernel methods  
  Abstract In this paper we present a novel method to improve the flexibility of descriptor matching for image recognition by using local multiresolution pyramids in feature space. We propose that image patches be represented at multiple levels of descriptor detail and that these levels be defined in terms of local spatial pooling resolution. Preserving multiple levels of detail in local descriptors is a way of hedging one’s bets on which levels will be most relevant for matching during learning and recognition. We introduce the Pyramid SIFT (P-SIFT) descriptor and show that its use in four state-of-the-art image recognition pipelines improves accuracy and yields state-of-the-art results. Our technique is applicable independently of spatial pyramid matching and we show that spatial pyramids can be combined with local pyramids to obtain further improvement. We achieve state-of-the-art results on Caltech-101 (80.1%) and Caltech-256 (52.6%) when compared to other approaches based on SIFT features over intensity images. Our technique is efficient and is extremely easy to integrate into image recognition pipelines.
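The central idea above, pooling the same local measurements at several spatial resolutions and keeping every level, can be illustrated with a generic pooling sketch. This is not the released P-SIFT code; the per-pixel orientation histograms and grid sizes are assumed inputs chosen for illustration.

```python
import numpy as np

def pyramid_pool(hist, levels=(1, 2, 4)):
    """hist: S x S x B per-pixel orientation histograms for one patch.
    Pools them over 1x1, 2x2 and 4x4 grids and concatenates all levels,
    so coarse and fine spatial detail are both preserved in the descriptor."""
    S = hist.shape[0]
    desc = []
    for g in levels:
        step = S // g
        for i in range(g):
            for j in range(g):
                cell = hist[i * step:(i + 1) * step, j * step:(j + 1) * step]
                desc.append(cell.sum(axis=(0, 1)))
    d = np.concatenate(desc)
    return d / (np.linalg.norm(d) + 1e-12)

patch_hist = np.random.rand(16, 16, 8)       # stand-in gradient histograms
print(pyramid_pool(patch_hist).shape)        # (1 + 4 + 16) * 8 = 168 dims
```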
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium
  Area Expedition Conference  
  Notes LAMP; 600.079 Approved no  
  Call Number Admin @ si @ SSB2014 Serial 2524  
 

 
Author G. Lisanti; I. Masi; Andrew Bagdanov; Alberto del Bimbo
  Title Person Re-identification by Iterative Re-weighted Sparse Ranking Type Journal Article
  Year 2015 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 37 Issue 8 Pages 1629 - 1642  
  Keywords  
  Abstract In this paper we introduce a method for person re-identification based on discriminative, sparse basis expansions of targets in terms of a labeled gallery of known individuals. We propose an iterative extension to sparse discriminative classifiers capable of ranking many candidate targets. The approach makes use of soft- and hard-re-weighting to redistribute energy among the most relevant contributing elements and to ensure that the best candidates are ranked at each iteration. Our approach also leverages a novel visual descriptor which we show to be discriminative while remaining robust to pose and illumination variations. An extensive comparative evaluation is given demonstrating that our approach achieves state-of-the-art performance on single- and multi-shot person re-identification scenarios on the VIPeR, i-LIDS, ETHZ, and CAVIAR4REID datasets. The combination of our descriptor and iterative sparse basis expansion improves state-of-the-art rank-1 performance by six percentage points on VIPeR and by 20 on CAVIAR4REID compared to other methods with a single gallery image per person. With multiple gallery and probe images per person our approach improves the state-of-the-art at rank-1 by 17 percentage points on i-LIDS and by 72 on CAVIAR4REID. The approach is also quite efficient, capable of single-shot person re-identification over galleries containing hundreds of individuals at about 30 re-identifications per second.
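A rough sketch of the basic ranking mechanism: code each probe descriptor as a sparse combination of gallery descriptors and rank gallery entries by their contribution to the reconstruction. The iterative soft/hard re-weighting and the visual descriptor of the paper are omitted; the Lasso penalty and the data below are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_rank(probe, gallery):
    """probe: d-dim descriptor; gallery: d x N matrix, one column per gallery image.
    Returns gallery indices ranked by sparse-coding contribution."""
    coder = Lasso(alpha=0.01, positive=True, max_iter=10000)
    coder.fit(gallery, probe)                # probe ~ gallery @ coefficients (sparse)
    return np.argsort(-np.abs(coder.coef_))  # largest contributors first

d, N = 400, 300
gallery = np.random.rand(d, N)
probe = gallery[:, 42] + 0.05 * np.random.rand(d)    # noisy view of gallery item 42
print(sparse_rank(probe, gallery)[:5])               # item 42 should rank near the top
```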
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium
  Area Expedition Conference  
  Notes LAMP; 601.240; 600.079 Approved no  
  Call Number Admin @ si @ LMB2015 Serial 2557  
 

 
Author Miguel Angel Bautista; Oriol Pujol; Fernando De la Torre; Sergio Escalera
  Title Error-Correcting Factorization Type Journal Article
  Year 2018 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 40 Issue Pages 2388-2401  
  Keywords  
  Abstract Error Correcting Output Codes (ECOC) is a successful technique in multi-class classification, which is a core problem in Pattern Recognition and Machine Learning. A major advantage of ECOC over other methods is that the multi-class problem is decoupled into a set of binary problems that are solved independently. However, the literature defines a general error-correcting capability for ECOCs without analyzing how it distributes among classes, hindering a deeper analysis of pair-wise error-correction. To address these limitations, this paper proposes an Error-Correcting Factorization (ECF) method. Our contribution is fourfold: (I) We propose a novel representation of the error-correction capability, called the design matrix, that enables us to build an ECOC on the basis of allocating correction to pairs of classes. (II) We derive the optimal code length of an ECOC using rank properties of the design matrix. (III) ECF is formulated as a discrete optimization problem, and a relaxed solution is found using an efficient constrained block coordinate descent approach. (IV) Enabled by the flexibility introduced with the design matrix, we propose to allocate the error-correction to classes that are prone to confusion. Experimental results on several databases show that when allocating the error-correction to confusable classes, ECF outperforms state-of-the-art approaches.
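The idea of error-correction "distributing among classes" can be made concrete with standard ECOC bookkeeping: the correction capability between two classes is governed by the Hamming distance between their code rows. The snippet below computes that pairwise matrix for a toy coding matrix; it illustrates the quantity ECF redistributes, not the factorization algorithm itself.

```python
import numpy as np

def pairwise_correction(M):
    """M: K x L ECOC coding matrix with entries in {-1, +1} (one row per class).
    Returns a K x K matrix whose (i, j) entry is the number of binary-classifier
    errors that can be corrected when separating class i from class j,
    i.e. floor((hamming_distance - 1) / 2)."""
    ham = (M[:, None, :] != M[None, :, :]).sum(axis=2)
    return np.maximum((ham - 1) // 2, 0)

# Toy coding matrix for 4 classes and 6 binary problems.
M = np.array([[ 1,  1,  1, -1, -1, -1],
              [ 1, -1, -1,  1,  1, -1],
              [-1,  1, -1,  1, -1,  1],
              [-1, -1,  1, -1,  1,  1]])
print(pairwise_correction(M))
```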
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium
  Area Expedition Conference  
  Notes HuPBA; no mention Approved no
  Call Number Admin @ si @ BPT2018 Serial 3015  
 

 
Author Enric Marti; Jordi Regincos; Jaime Lopez-Krahe; Juan J. Villanueva
  Title Hand line drawing interpretation as three-dimensional objects Type Journal Article
  Year 1993 Publication Signal Processing – Intelligent systems for signal and image understanding Abbreviated Journal  
  Volume 32 Issue 1-2 Pages 91-110  
  Keywords Line drawing interpretation; line labelling; scene analysis; man-machine interaction; CAD input; line extraction  
  Abstract In this paper we present a technique to interpret hand line drawings as objects in a three-dimensional space. The object domain considered is based on planar surfaces with straight edges, concretely, on an extension of the Origami world to hidden lines. The line drawing represents the object under orthographic projection and it is sensed using a scanner. Our method is structured in two modules: feature extraction and feature interpretation. In the first one, image processing techniques are applied under certain tolerance margins to detect lines and junctions on the hand line drawing. The feature interpretation module is founded on line labelling techniques using a labelled junction dictionary. A labelling algorithm is proposed here. It uses relaxation techniques to reduce the number of labels incompatible with the junction dictionary so that the convergence of solutions can be accelerated. We formulate some labelling hypotheses tending to eliminate elements in two sets of labelled interpretations: those which are compatible with the dictionary but do not correspond to three-dimensional objects, and those which represent objects not very likely to be specified by means of a line drawing. New entities arise on the line drawing as a result of the extension of the Origami world. These are defined to enunciate the assumptions of our method as well as to clarify the algorithms proposed. This technique is framed in a project aimed at implementing a system to create 3D objects and improve man-machine interaction in CAD systems.
  Address  
  Corporate Author Thesis  
  Publisher Elsevier North-Holland, Inc. Place of Publication Amsterdam, The Netherlands Editor
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0165-1684 ISBN Medium
  Area Expedition Conference  
  Notes IAM;ISE; Approved no  
  Call Number IAM @ iam @ MRL1993 Serial 1611  
 

 
Author Miquel Ferrer; Ernest Valveny; F. Serratosa
  Title Median graph: A new exact algorithm using a distance based on the maximum common subgraph Type Journal Article
  Year 2009 Publication Pattern Recognition Letters Abbreviated Journal PRL  
  Volume 30 Issue 5 Pages 579–588  
  Keywords  
  Abstract Median graphs have been presented as a useful tool for capturing the essential information of a set of graphs. Nevertheless, computation of optimal solutions is a very hard problem. In this work we present a new and more efficient optimal algorithm for the median graph computation. With the use of a particular cost function that permits the definition of the graph edit distance in terms of the maximum common subgraph, and a prediction function in the backtracking algorithm, we reduce the size of the search space, avoiding the evaluation of a great amount of states and still obtaining the exact median. We present a set of experiments comparing our new algorithm against the previous existing exact algorithm using synthetic data. In addition, we present the first application of the exact median graph computation to real data and we compare the results against an approximate algorithm based on genetic search. These experimental results show that our algorithm outperforms the previous existing exact algorithm and in addition show the potential applicability of the exact solutions to real problems.  
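The distance referred to above can be written as d(g1, g2) = |g1| + |g2| − 2·|mcs(g1, g2)|, where |mcs| is the size of the maximum common subgraph. The sketch below uses that distance to pick the set median (the input graph minimizing the summed distance), the usual starting point before searching for the generalized median; mcs_size is a deliberately unimplemented placeholder, since exact MCS computation is itself NP-hard, and graphs are assumed to expose len() as their number of nodes.

```python
def mcs_size(g1, g2):
    """Placeholder: size (number of nodes) of the maximum common subgraph of g1 and g2.
    Computing this exactly is NP-hard; any exact or approximate MCS routine fits here."""
    raise NotImplementedError

def mcs_distance(g1, g2):
    # Graph edit distance under the particular cost function discussed above:
    # d(g1, g2) = |g1| + |g2| - 2 * |mcs(g1, g2)|.
    return len(g1) + len(g2) - 2 * mcs_size(g1, g2)

def set_median(graphs):
    """Return the input graph minimizing the sum of distances to all the others
    (the set median; the generalized median is searched beyond the input set)."""
    return min(graphs, key=lambda g: sum(mcs_distance(g, h) for h in graphs))
```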
  Address  
  Corporate Author Thesis  
  Publisher Elsevier Science Inc. Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0167-8655 ISBN Medium
  Area Expedition Conference  
  Notes DAG Approved no  
  Call Number DAG @ dag @ FVS2009a Serial 1114  
 

 
Author Fadi Dornaika; Angel Sappa
  Title Instantaneous 3D motion from image derivatives using the Least Trimmed Square Regression Type Journal Article
  Year 2009 Publication Pattern Recognition Letters Abbreviated Journal PRL  
  Volume 30 Issue 5 Pages 535–543  
  Keywords  
  Abstract This paper presents a new technique for instantaneous 3D motion estimation. The main contributions are as follows. First, we show that the 3D camera or scene velocity can be retrieved from image derivatives only, assuming that the scene contains a dominant plane. Second, we propose a new robust algorithm that simultaneously provides the Least Trimmed Square solution and the percentage of inliers (the non-contaminated data). Experiments on both synthetic and real image sequences demonstrate the effectiveness of the developed method. These experiments show that the new robust approach can outperform classical robust schemes.
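Least Trimmed Squares keeps only the h smallest squared residuals, which is what provides the robustness mentioned above. Below is a small random-restart sketch with concentration steps for a generic linear model in NumPy; it is not the paper's 3D-motion formulation, and the trimming fraction and restart counts are arbitrary.

```python
import numpy as np

def lts_fit(A, b, h_frac=0.6, n_starts=50, n_csteps=10, seed=0):
    """Least Trimmed Squares: minimize the sum of the h smallest squared residuals
    of the linear system A x ~ b, making the fit robust to outliers."""
    rng = np.random.default_rng(seed)
    n, p = A.shape
    h = int(h_frac * n)
    best_x, best_cost = None, np.inf
    for _ in range(n_starts):
        idx = rng.choice(n, size=p, replace=False)       # random elemental start
        x = np.linalg.lstsq(A[idx], b[idx], rcond=None)[0]
        for _ in range(n_csteps):                        # concentration ("C") steps
            r2 = (A @ x - b) ** 2
            keep = np.argsort(r2)[:h]                    # h smallest residuals (inliers)
            x = np.linalg.lstsq(A[keep], b[keep], rcond=None)[0]
        cost = np.sort((A @ x - b) ** 2)[:h].sum()
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x

# Synthetic test: 30% gross outliers should not spoil the estimate.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true + 0.01 * rng.normal(size=200)
b[:60] += rng.normal(scale=20.0, size=60)                # contaminate 30% of the data
print(lts_fit(A, b))                                     # close to x_true
```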
  Address  
  Corporate Author Thesis  
  Publisher Elsevier Science Inc. Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0167-8655 ISBN Medium
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ DoS2009a Serial 1115  
 

 
Author Debora Gil; Petia Radeva
  Title Inhibition of false landmarks Type Journal Article
  Year 2006 Publication Pattern Recognition Letters Abbreviated Journal PRL  
  Volume 27 Issue 9 Pages 1022-1030  
  Keywords  
  Abstract Corners and junctions are landmarks characterized by the lack of differentiability in the unit tangent to the image level curve. Detectors based on differential operators are not, by their very definition, the best posed, as they require a higher degree of differentiability to yield a reliable response. We argue that a corner detector should be based on the degree of continuity of the tangent vector to the image level sets, should work on the image domain, and should make no assumptions about either the local image structure or the particular geometry of the corner/junction. An operator measuring the degree of differentiability of the projection matrix on the image gradient fulfills the above requirements. Because using smoothing kernels leads to corner misplacement, we suggest an alternative fake-response remover based on the receptive field inhibition of spurious details. The combination of orientation discontinuity detection and noise inhibition produces our inhibition orientation energy (IOE) landmark locator.
  Address  
  Corporate Author Thesis  
  Publisher Elsevier Science Inc. Place of Publication New York, NY, USA Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0167-8655 ISBN Medium
  Area Expedition Conference  
  Notes IAM;MILAB Approved no  
  Call Number IAM @ iam @ GiR2006 Serial 1529  