|
Emanuel Indermühle, Volkmar Frinken, & Horst Bunke. (2012). Mode Detection in Online Handwritten Documents using BLSTM Neural Networks. In 13th International Conference on Frontiers in Handwriting Recognition (pp. 302–307).
Abstract: Mode detection in online handwritten documents refers to the process of distinguishing different types of content, such as text, formulas, diagrams, or tables, from one another. In this paper a new approach to mode detection is proposed that uses bidirectional long short-term memory (BLSTM) neural networks. The BLSTM neural network is a novel type of recurrent neural network that has been successfully applied in speech and handwriting recognition. In this paper we show that it has the potential to significantly outperform traditional methods for mode detection, which are usually based on stroke classification. As a further advantage over previous approaches, the proposed system is trainable and does not rely on user-defined heuristics. Moreover, it can be easily adapted to new or additional types of modes simply by providing the system with new training data.
|
|
|
Jon Almazan, David Fernandez, Alicia Fornes, Josep Llados, & Ernest Valveny. (2012). A Coarse-to-Fine Approach for Handwritten Word Spotting in Large Scale Historical Documents Collection. In 13th International Conference on Frontiers in Handwriting Recognition (pp. 453–458).
Abstract: In this paper we propose an approach for word spotting in handwritten document images. We state the problem from a focused retrieval perspective, i.e. locating instances of a query word in a large-scale dataset of digitized manuscripts. We combine two approaches, one based on word segmentation and another that is segmentation-free. The first approach uses a hashing strategy to coarsely prune word images that are unlikely to be instances of the query word. This process is fast but has low precision due to the errors introduced in the segmentation step. The regions containing candidate words are passed to the second process, which is based on a state-of-the-art technique from the visual object detection field. This discriminative model represents the appearance of the query word and computes a similarity score. In this way we propose a coarse-to-fine approach that achieves a compromise between efficiency and accuracy. The model is validated on a collection of old handwritten manuscripts. We observe a substantial improvement in precision over the previously proposed method, with a low increase in computational cost.
|
|
|
Laura Igual, Joan Carles Soliva, Antonio Hernandez, Sergio Escalera, Oscar Vilarroya, & Petia Radeva. (2012). Supervised Brain Segmentation and Classification in Diagnostic of Attention-Deficit/Hyperactivity Disorder. In High Performance Computing and Simulation, International Conference on (pp. 182–187). IEEE Xplore.
Abstract: This paper presents an automatic method for external and internal segmentation of the caudate nucleus in Magnetic Resonance Images (MRI) based on statistical and structural machine learning approaches. This method is applied to Attention-Deficit/Hyperactivity Disorder (ADHD) diagnosis. The external segmentation method adapts the Graph Cut energy-minimization model to make it suitable for segmenting small, low-contrast structures such as the caudate nucleus. In particular, new energy-function data and boundary potentials are defined, and a supervised energy term based on contextual brain structures is added. Furthermore, the internal segmentation method learns a classifier based on shape features of the Region of Interest (ROI) in MRI slices. The results show accurate external and internal caudate segmentation on a real dataset, and performance of the ADHD diagnostic test similar to that of manual annotation.
|
|
|
Petia Radeva, Michal Drozdzal, Santiago Segui, Laura Igual, Carolina Malagelada, Fernando Azpiroz, et al. (2012). Active labeling: Application to wireless endoscopy analysis. In High Performance Computing and Simulation, International Conference on (pp. 174–181).
Abstract: Today, robust learners trained in a real supervised machine learning application require a rich collection of positive and negative examples. Although in many applications it is not difficult to obtain huge amounts of data, labeling those data can be a very expensive process, especially when dealing with data of high variability and complexity. A good example of such cases is data from medical imaging applications, where annotating anomalies like tumors, polyps, atherosclerotic plaque, or informative frames in wireless endoscopy requires highly trained experts. Building a representative set of training data from medical videos (e.g. Wireless Capsule Endoscopy) means that thousands of frames must be labeled by an expert. It is quite common that data in new videos differ from, and thus are not represented by, the training set. In this paper, we review the main approaches to active learning and illustrate how active learning can help to reduce expert effort in constructing training sets. We show that by applying active learning criteria, the number of human interventions can be significantly reduced. The proposed system allows the annotation of informative/non-informative frames of Wireless Capsule Endoscopy videos, each containing more than 30,000 frames, with fewer than 100 expert "clicks".
|
|
|
Albert Gordo, Florent Perronnin, & Ernest Valveny. (2012). Document classification using multiple views. In 10th IAPR International Workshop on Document Analysis Systems (pp. 33–37). IEEE Computer Society Washington.
Abstract: The combination of multiple features or views when representing documents or other kinds of objects usually leads to improved results in classification (and retrieval) tasks. Most systems assume that those views will be available both at training and test time. However, some views may be too 'expensive' to be available at test time. In this paper, we consider the use of Canonical Correlation Analysis to leverage 'expensive' views that are available only at training time. Experimental results show that this information may significantly improve the results in a classification task.
|
|
|
Alicia Fornes, Josep Llados, Gemma Sanchez, & Horst Bunke. (2012). Writer Identification in Old Handwritten Music Scores. In Constantin Papaodysseus (Ed.), Pattern Recognition and Signal Processing in Archaeometry: Mathematical and Computational Solutions for Archaeology (pp. 27–63). IGI-Global.
Abstract: The aim of writer identification is to determine the writer of a piece of handwriting from a set of writers. In this paper we present a system for writer identification in old handwritten music scores. Even though a significant number of compositions contain handwritten text in the music scores, the aim of our work is to use only music notation to determine the author. The steps of the proposed system are the following. First of all, the music sheet is preprocessed and normalized to obtain a single binarized music line, without the staff lines. Afterwards, 100 features are extracted for every music line, which are subsequently used in a k-NN classifier that compares every feature vector with prototypes stored in a database. By applying feature selection and extraction methods on the original feature set, the performance is increased. The proposed method has been tested on a database of old music scores from the 17th to 19th centuries, achieving a recognition rate of about 95%.
|
|
|
Albert Gordo, Jose Antonio Rodriguez, Florent Perronnin, & Ernest Valveny. (2012). Leveraging category-level labels for instance-level image retrieval. In 25th IEEE Conference on Computer Vision and Pattern Recognition (pp. 3045–3052). IEEE Xplore.
Abstract: In this article, we focus on the problem of large-scale instance-level image retrieval. For efficiency reasons, it is common to represent an image by a fixed-length descriptor which is subsequently encoded into a small number of bits. We note that most encoding techniques include an unsupervised dimensionality reduction step. Our goal in this work is to learn a better subspace in a supervised manner. We especially raise the following question: "can category-level labels be used to learn such a subspace?" To answer this question, we experiment with four learning techniques: the first one is based on a metric learning framework, the second one on attribute representations, the third one on Canonical Correlation Analysis (CCA) and the fourth one on Joint Subspace and Classifier Learning (JSCL). While the first three approaches have been applied in the past to the image retrieval problem, we believe we are the first to show the usefulness of JSCL in this context. In our experiments, we use ImageNet as a source of category-level labels and report retrieval results on two standard datasets: INRIA Holidays and the University of Kentucky benchmark. Our experimental study shows that metric learning and attributes do not lead to any significant improvement in retrieval accuracy, as opposed to CCA and JSCL. As an example, we report on Holidays an increase in accuracy from 39.3% to 48.6% with 32-dimensional representations. Overall, JSCL is shown to yield the best results.
|
|
|
Murad Al Haj, Jordi Gonzalez, & Larry S. Davis. (2012). On Partial Least Squares in Head Pose Estimation: How to simultaneously deal with misalignment. In 25th IEEE Conference on Computer Vision and Pattern Recognition (pp. 2602–2609). IEEE Xplore.
Abstract: Head pose estimation is a critical problem in many computer vision applications. These include human-computer interaction, video surveillance, and face and expression recognition. In most prior work on head pose estimation, the positions of the faces on which the pose is to be estimated are specified manually. Therefore, the results are reported without studying the effect of misalignment. We propose a method based on partial least squares (PLS) regression to estimate pose and solve the alignment problem simultaneously. The contributions of this paper are two-fold: 1) we show that the kernel version of PLS (kPLS) achieves better than state-of-the-art results on the estimation problem, and 2) we develop a technique to reduce misalignment based on the learned PLS factors.
|
|
|
Jose Carlos Rubio, Joan Serrat, & Antonio Lopez. (2012). Unsupervised co-segmentation through region matching. In 25th IEEE Conference on Computer Vision and Pattern Recognition (pp. 749–756). IEEE Xplore.
Abstract: Co-segmentation is defined as jointly partitioning multiple images depicting the same or similar object into foreground and background. Our method consists of a multiple-scale multiple-image generative model, which jointly estimates the foreground and background appearance distributions from several images in an unsupervised manner. In contrast to other co-segmentation methods, our approach does not require the images to have similar foregrounds and different backgrounds to function properly. Region matching is applied to exploit inter-image information by establishing correspondences between the common objects that appear in the scene. Moreover, computing many-to-many associations of regions allows further applications, like recognition of object parts across images. We report results on iCoseg, a challenging dataset that presents extreme variability in camera viewpoint, illumination, and object deformations and poses. We also show that our method is robust against large intra-class variability in the MSRC database.
|
|
|
Antonio Hernandez, Nadezhda Zlateva, Alexander Marinov, Miguel Reyes, Petia Radeva, Dimo Dimov, et al. (2012). Graph Cuts Optimization for Multi-Limb Human Segmentation in Depth Maps. In 25th IEEE Conference on Computer Vision and Pattern Recognition (pp. 726–732). IEEE Xplore.
Abstract: We present a generic framework for object segmentation using depth maps, based on Random Forest and Graph-cuts theory, and apply it to the segmentation of human limbs in depth maps. First, from a set of random depth features, a Random Forest is used to infer a set of label probabilities for each data sample. This vector of probabilities is used as the unary term in the α-β swap Graph-cuts algorithm. Moreover, depths of spatio-temporal neighboring data points are used as boundary potentials. Results on a new multi-label human depth dataset show high performance of the novel methodology in terms of segmentation overlap compared to classical approaches.
|
|
|
Mohammad Ali Bagheri, Qigang Gao, & Sergio Escalera. (2012). Error Correcting Output Codes for multiclass classification: Application to two image vision problems. In 16th symposium on Artificial Intelligence & Signal Processing (pp. 508–513). IEEE Xplore.
Abstract: Error-correcting output codes (ECOC) represent a powerful framework for dealing with multiclass classification problems based on combining binary classifiers. The key factor affecting the performance of ECOC methods is the independence of the binary classifiers, without which the ECOC method would be ineffective. Despite its effectiveness on classification problems with a relatively large number of classes, it has been applied to few real-world problems. In this paper, we investigate the behavior of the ECOC approach on two image vision problems: logo recognition and shape classification, using Decision Tree and AdaBoost as the base learners. The results show that the ECOC method can be used to improve classification performance in comparison with classical multiclass approaches.
|
|
|
Ernest Valveny, Robert Benavente, Agata Lapedriza, Miquel Ferrer, Jaume Garcia, & Gemma Sanchez. (2012). Adaptation of a computer programming course to the EXHE requirements: evaluation five years later (Vol. 37).
|
|
|
Jaume Gibert, Ernest Valveny, & Horst Bunke. (2012). Feature Selection on Node Statistics Based Embedding of Graphs. PRL - Pattern Recognition Letters, 33(15), 1980–1990.
Abstract: Representing a graph with a feature vector is a common way of making statistical machine learning algorithms applicable to the domain of graphs. Such a transition from graphs to vectors is known as graph embedding. A key issue in graph embedding is to select a proper set of features in order to make the vectorial representation of graphs as strong and discriminative as possible. In this article, we propose features that are constructed out of frequencies of node label representatives. We first build a large set of features and then select the most discriminative ones according to different ranking criteria and feature transformation algorithms. On different classification tasks, we experimentally show that only a small significant subset of these features is needed to achieve the same classification rates as competing state-of-the-art methods.
Keywords: Structural pattern recognition; Graph embedding; Feature ranking; PCA; Graph classification
|
|
|
Mario Hernandez, Joao Sanchez, & Jordi Vitria. (2012). Selected papers from Iberian Conference on Pattern Recognition and Image Analysis (Vol. 45).
|
|
|
Jaume Gibert, Ernest Valveny, & Horst Bunke. (2012). Graph Embedding in Vector Spaces by Node Attribute Statistics. PR - Pattern Recognition, 45(9), 3072–3083.
Abstract: Graph-based representations are of broad use and applicability in pattern recognition. They exhibit, however, a major drawback with regard to the processing tools that are available in their domain. Graph embedding into vector spaces is a growing field within the structural pattern recognition community which aims at providing a feature vector representation for every graph, and thus enables classical statistical learning machinery to be used on graph-based input patterns. In this work, we propose a novel embedding methodology for graphs with continuous node attributes and unattributed edges. The approach presented in this paper is based on statistics of the node labels and the edges between them, computed with respect to their similarity to a set of representatives. We specifically deal with an important issue of this methodology, namely the selection of a suitable set of representatives. In an experimental evaluation, we empirically show the advantages of this novel approach in the context of different classification problems using several databases of graphs.
Keywords: Structural pattern recognition; Graph embedding; Data clustering; Graph classification
|
|