|
Marçal Rusiñol, R. Roset, Josep Llados and C. Montaner. 2011. Automatic Index Generation of Digitized Map Series by Coordinate Extraction and Interpretation. In Proceedings of the Sixth International Workshop on Digital Technologies in Cartographic Heritage.
|
|
|
Jon Almazan, Alicia Fornes and Ernest Valveny. 2012. A non-rigid appearance model for shape description and recognition. PR, 45(9), 3105–3113.
Abstract: In this paper we describe a framework to learn a model of shape variability in a set of patterns. The framework is based on the Active Appearance Model (AAM) and allows combining shape deformations with appearance variability. We have used two modifications of the Blurred Shape Model (BSM) descriptor as basic shape and appearance features to learn the model. These modifications overcome the rigidity of the original BSM, adapting it to the deformations of the shape to be represented. We have applied this framework to the representation and classification of handwritten digits and symbols. We show that the proposed methodology outperforms the original BSM approach.
Keywords: Shape recognition; Deformable models; Shape modeling; Hand-drawn recognition
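The descriptor the framework builds on is the Blurred Shape Model. Below is a minimal sketch of the original, rigid BSM that the paper makes deformable, assuming a binary shape image and an n×n grid; the function name and the exact vote weighting are illustrative assumptions, not the authors' code:

```python
import numpy as np

def blurred_shape_model(shape_img, grid_size=8):
    """Sketch of a rigid Blurred Shape Model (BSM) descriptor.

    Each foreground pixel votes for the centroid of its own grid cell and of
    the neighbouring cells, with a weight that decreases with the distance to
    each centroid. The normalised vote histogram is the descriptor.
    """
    h, w = shape_img.shape
    cell_h, cell_w = h / grid_size, w / grid_size
    # Centroid coordinates of every grid cell.
    ys = (np.arange(grid_size) + 0.5) * cell_h
    xs = (np.arange(grid_size) + 0.5) * cell_w
    hist = np.zeros((grid_size, grid_size))

    for y, x in zip(*np.nonzero(shape_img)):
        ci, cj = int(y // cell_h), int(x // cell_w)
        # Vote for the containing cell and its 8-neighbourhood.
        for i in range(max(ci - 1, 0), min(ci + 2, grid_size)):
            for j in range(max(cj - 1, 0), min(cj + 2, grid_size)):
                d = np.hypot(y - ys[i], x - xs[j])
                hist[i, j] += 1.0 / (1.0 + d)

    hist /= hist.sum() + 1e-12
    return hist.ravel()
```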
|
|
|
Jon Almazan, David Fernandez, Alicia Fornes, Josep Llados and Ernest Valveny. 2012. A Coarse-to-Fine Approach for Handwritten Word Spotting in Large Scale Historical Documents Collection. 13th International Conference on Frontiers in Handwriting Recognition, 453–458.
Abstract: In this paper we propose an approach for word spotting in handwritten document images. We state the problem from a focused retrieval perspective, i.e. locating instances of a query word in a large scale dataset of digitized manuscripts. We combine two approaches: one based on word segmentation and one that is segmentation-free. The first approach uses a hashing strategy to coarsely prune word images that are unlikely to be instances of the query word. This process is fast but has low precision due to the errors introduced in the segmentation step. The regions containing candidate words are sent to the second process, based on a state-of-the-art technique from the visual object detection field. This discriminative model represents the appearance of the query word and computes a similarity score. In this way we propose a coarse-to-fine approach that achieves a compromise between efficiency and accuracy. The model is validated on a collection of old handwritten manuscripts. We observe a substantial improvement in precision over the previously proposed method, at a low additional computational cost.
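The two-stage structure can be pictured with a generic coarse-to-fine retrieval sketch: a cheap hash comparison prunes segmented word candidates, and a more expensive discriminative scorer re-ranks the survivors. The function names and the Hamming-distance pruning below are illustrative assumptions, not the exact method of the paper:

```python
import numpy as np

def coarse_to_fine_spotting(query_feat, candidate_feats, coarse_codes,
                            query_code, fine_score, keep=100):
    """Generic coarse-to-fine retrieval sketch (not the authors' exact method).

    Stage 1: a cheap binary-code comparison prunes candidate word images.
    Stage 2: an expensive discriminative scorer re-ranks the survivors.
    """
    # Coarse stage: Hamming distance between binary hash codes.
    hamming = (coarse_codes != query_code).sum(axis=1)
    shortlist = np.argsort(hamming)[:keep]

    # Fine stage: re-score only the shortlisted candidates.
    scores = [(idx, fine_score(query_feat, candidate_feats[idx]))
              for idx in shortlist]
    return sorted(scores, key=lambda s: -s[1])
```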
|
|
|
Jon Almazan, Albert Gordo, Alicia Fornes and Ernest Valveny. 2012. Efficient Exemplar Word Spotting. 23rd British Machine Vision Conference, 67.1–67.11.
Abstract: In this paper we propose an unsupervised segmentation-free method for word spotting in document images. Documents are represented with a grid of HOG descriptors, and a sliding window approach is used to locate the document regions that are most similar to the query. We use the exemplar SVM framework to produce a better representation of the query in an unsupervised way. Finally, the document descriptors are precomputed and compressed with Product Quantization. This offers two advantages: first, a large number of documents can be kept in RAM at the same time; second, the sliding window becomes significantly faster, since distances between quantized HOG descriptors can be precomputed. Our results significantly outperform other segmentation-free methods in the literature, both in accuracy and in speed and memory usage.
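The speed-up the abstract refers to comes from Product Quantization: descriptors are split into sub-vectors, each quantized against its own codebook, so query-to-document distances reduce to table lookups. A self-contained sketch of this generic PQ step, assuming numpy and scikit-learn; the parameters and the equal-split assumption are illustrative, not the paper's configuration:

```python
import numpy as np
from sklearn.cluster import KMeans

def pq_train(X, n_subvectors=8, n_centroids=256):
    """Learn one k-means codebook per sub-vector block.
    Assumes the descriptor length is divisible by n_subvectors."""
    blocks = np.split(X, n_subvectors, axis=1)
    return [KMeans(n_clusters=n_centroids, n_init=4).fit(b) for b in blocks]

def pq_encode(X, codebooks):
    """Replace each sub-vector by the index of its nearest centroid."""
    blocks = np.split(X, len(codebooks), axis=1)
    return np.stack([cb.predict(b) for cb, b in zip(codebooks, blocks)], axis=1)

def pq_search(query, codes, codebooks):
    """Asymmetric distance: precompute a query-to-centroid table per block,
    then score every database item with table lookups only."""
    q_blocks = np.split(query, len(codebooks))
    tables = [((cb.cluster_centers_ - q) ** 2).sum(axis=1)
              for cb, q in zip(codebooks, q_blocks)]
    dists = sum(tables[m][codes[:, m]] for m in range(len(codebooks)))
    return np.argsort(dists)
```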
|
|
|
Jaume Gibert, Ernest Valveny and Horst Bunke. 2012. Graph Embedding in Vector Spaces by Node Attribute Statistics. PR, 45(9), 3072–3083.
Abstract: Graph-based representations are of broad use and applicability in pattern recognition. They exhibit, however, a major drawback with regards to the processing tools that are available in their domain. Graph embedding into vector spaces is a growing field among the structural pattern recognition community which aims at providing a feature vector representation for every graph, and thus enables classical statistical learning machinery to be used on graph-based input patterns. In this work, we propose a novel embedding methodology for graphs with continuous node attributes and unattributed edges. The approach presented in this paper is based on statistics of the node labels and the edges between them, based on their similarity to a set of representatives. We specifically deal with an important issue of this methodology, namely, the selection of a suitable set of representatives. In an experimental evaluation, we empirically show the advantages of this novel approach in the context of different classification problems using several databases of graphs.
Keywords: Structural pattern recognition; Graph embedding; Data clustering; Graph classification
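The embedding can be pictured as follows: each node's continuous attribute is assigned to its closest representative (e.g. a k-means centre learned on training node attributes), and the graph is summarised by how often each representative occurs and how often each pair of representatives is joined by an edge. This is a simplified sketch of that general idea, not the exact formulation of the paper:

```python
import numpy as np

def node_statistics_embedding(node_attrs, edges, representatives):
    """Embed a graph with continuous node attributes and unattributed edges
    into a fixed-length vector (sketch of the general idea only).

    node_attrs:       (n_nodes, d) array of node attribute vectors
    edges:            iterable of (i, j) node-index pairs
    representatives:  (k, d) array of attribute prototypes, e.g. obtained by
                      running k-means over all training node attributes
    """
    k = len(representatives)
    # Hard-assign every node to its closest representative.
    d2 = ((node_attrs[:, None, :] - representatives[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)

    # Node statistics: how many nodes fall on each representative.
    node_hist = np.bincount(assign, minlength=k).astype(float)

    # Edge statistics: how many edges join each pair of representatives.
    edge_hist = np.zeros((k, k))
    for i, j in edges:
        a, b = sorted((assign[i], assign[j]))
        edge_hist[a, b] += 1

    return np.concatenate([node_hist, edge_hist[np.triu_indices(k)]])
```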
|
|
|
Jaume Gibert, Ernest Valveny and Horst Bunke. 2012. Feature Selection on Node Statistics Based Embedding of Graphs. PRL, 33(15), 1980–1990.
Abstract: Representing a graph with a feature vector is a common way of making statistical machine learning algorithms applicable to the domain of graphs. Such a transition from graphs to vectors is known as graph embedding. A key issue in graph embedding is to select a proper set of features in order to make the vectorial representation of graphs as strong and discriminative as possible. In this article, we propose features that are constructed out of frequencies of node label representatives. We first build a large set of features and then select the most discriminative ones according to different ranking criteria and feature transformation algorithms. On different classification tasks, we experimentally show that only a small significant subset of these features is needed to achieve the same classification rates as competing state-of-the-art methods.
Keywords: Structural pattern recognition; Graph embedding; Feature ranking; PCA; Graph classification
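As a hedged illustration of the selection step: rank the embedding dimensions with a univariate criterion and keep only the top-scoring ones before classification. The mutual-information criterion, the value of k and the linear SVM below are illustrative choices, not necessarily those evaluated in the article:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# X: graph embeddings (n_graphs, n_features), y: class labels.
# Random placeholder data stands in for real embeddings here.
rng = np.random.default_rng(0)
X = rng.random((200, 300))
y = rng.integers(0, 4, size=200)

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=50),  # keep the 50 highest-ranked features
    LinearSVC(),                             # classify in the reduced space
)
model.fit(X, y)
print(model.score(X, y))
```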
|
|
|
Sophie Wuerger, Kaida Xiao, Dimitris Mylonas, Q. Huang, Dimosthenis Karatzas and Galina Paramei. 2012. Blue-green color categorization in Mandarin-English speakers. JOSA A, 29(2), A102–A107.
Abstract: Observers are faster to detect a target among a set of distracters if the targets and distracters come from different color categories. This cross-boundary advantage seems to be limited to the right visual field, which is consistent with the dominance of the left hemisphere for language processing [Gilbert et al., Proc. Natl. Acad. Sci. USA 103, 489 (2006)]. Here we study whether a similar visual field advantage is found in the color identification task in speakers of Mandarin, a language that uses a logographic system. Forty late Mandarin-English bilinguals performed a blue-green color categorization task, in a blocked design, in their first language (L1: Mandarin) or second language (L2: English). Eleven color singletons ranging from blue to green were presented for 160 ms, randomly in the left visual field (LVF) or right visual field (RVF). Color boundary and reaction times (RTs) at the color boundary were estimated in L1 and L2, for both visual fields. We found that the color boundary did not differ between the languages; RTs at the color boundary, however, were on average more than 100 ms shorter in the English compared to the Mandarin sessions, but only when the stimuli were presented in the RVF. The finding may be explained by the script nature of the two languages: Mandarin logographic characters are analyzed visuospatially in the right hemisphere, which conceivably facilitates identification of color presented to the LVF.
|
|
|
Yunchao Gong, Svetlana Lazebnik, Albert Gordo and Florent Perronnin. 2012. Iterative Quantization: A Procrustean Approach to Learning Binary Codes for Large-Scale Image Retrieval. TPAMI, 35(12), 2916–2929.
Abstract: This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multi-class spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or “classemes” on the ImageNet dataset.
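The alternating minimization described in the abstract is compact enough to sketch directly: fixing the rotation gives the binary codes by thresholding, and fixing the codes gives the rotation as the solution of an orthogonal Procrustes problem. A minimal sketch, assuming the data has already been zero-centred and PCA-projected; it follows the abstract's description, not the authors' released code:

```python
import numpy as np

def itq(V, n_iter=50, seed=0):
    """Iterative Quantization sketch.

    V: (n, c) zero-centred, PCA-projected data. Returns binary codes B in
    {-1, +1} and an orthogonal rotation R chosen by alternating minimisation
    of ||B - V R||_F, as described in the abstract.
    """
    rng = np.random.default_rng(seed)
    # Random orthogonal initialisation of the rotation.
    R, _ = np.linalg.qr(rng.standard_normal((V.shape[1], V.shape[1])))
    for _ in range(n_iter):
        B = np.sign(V @ R)                  # fix R, update the binary codes
        U, _, Wt = np.linalg.svd(V.T @ B)   # fix B, solve the orthogonal
        R = U @ Wt                          # Procrustes problem for R
    return np.sign(V @ R), R
```

In a retrieval setting the resulting codes would then be compared with the Hamming distance.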
|
|
|
Dimosthenis Karatzas and Ch. Lioutas. 1998. Software Package Development for Electron Diffraction Image Analysis. Proceedings of the XIV Solid State Physics National Conference.
|
|
|
Albert Gordo, Florent Perronnin and Ernest Valveny. 2012. Document classification using multiple views. 10th IAPR International Workshop on Document Analysis Systems, IEEE Computer Society, Washington, 33–37.
Abstract: The combination of multiple features or views when representing documents or other kinds of objects usually leads to improved results in classification (and retrieval) tasks. Most systems assume that those views will be available both at training and test time. However, some views may be too 'expensive' to be available at test time. In this paper, we consider the use of Canonical Correlation Analysis to leverage 'expensive' views that are available only at training time. Experimental results show that this information may significantly improve the results in a classification task.
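The idea can be sketched with scikit-learn's CCA: the projection is learned from both views at training time, but at test time only the cheap view is projected into the common subspace and classified there. The feature dimensions, classifier and data below are purely illustrative placeholders, not the paper's setup:

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import LinearSVC

# View A is cheap and available at train and test time;
# view B is 'expensive' and only available at training time.
rng = np.random.default_rng(0)
Xa_train = rng.random((300, 64))   # cheap view (placeholder features)
Xb_train = rng.random((300, 32))   # expensive view (placeholder features)
y_train = rng.integers(0, 3, 300)
Xa_test = rng.random((50, 64))

cca = CCA(n_components=16).fit(Xa_train, Xb_train)   # needs both views
clf = LinearSVC().fit(cca.transform(Xa_train), y_train)

# At test time only the cheap view is projected into the common subspace.
pred = clf.predict(cca.transform(Xa_test))
```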
|
|