|
Lluis Pere de las Heras, Joan Mas, Gemma Sanchez and Ernest Valveny. 2011. Wall Patch-Based Segmentation in Architectural Floorplans. 11th International Conference on Document Analysis and Recognition, 1270–1274.
Abstract: Segmentation of architectural floor plans is a challenging task, mainly because of the large variability in the notation between different plans. In general, traditional techniques, usually based on analyzing and grouping structural primitives obtained by vectorization, are only able to handle a reduced range of similar notations. In this paper we propose an alternative patch-based segmentation approach working at pixel level, without the need for vectorization. The image is divided into a set of patches and a set of features is extracted for every patch. Then, each patch is assigned to a visual word of a previously learned vocabulary and given a probability of belonging to each class of objects. Finally, a post-processing step assigns the final label to every pixel. This approach has been applied to the detection of walls on two datasets of architectural floor plans with different notations, achieving high accuracy rates.
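A very high-level sketch of such a patch-based labelling pipeline is given below. The grey-level histogram feature, the nearest-word assignment and the fixed patch grid are illustrative stand-ins for the descriptors and vocabulary of the paper, and the vocabulary and per-word class probabilities are assumed to have been learned beforehand.

```python
# Hypothetical sketch of a patch-based labelling pipeline; features and
# vocabulary are placeholders, not the descriptors used in the paper.
import numpy as np

def patch_features(image, patch_size=16, bins=8):
    """Split the image into square patches and describe each with a histogram."""
    h, w = image.shape
    feats, coords = [], []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 255), density=True)
            feats.append(hist)
            coords.append((y, x))
    return np.array(feats), coords

def label_patches(feats, vocabulary, word_class_probs):
    """Assign each patch to its nearest visual word, then to the most likely class."""
    # vocabulary: (n_words, n_features); word_class_probs: (n_words, n_classes)
    dists = np.linalg.norm(feats[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    return word_class_probs[words].argmax(axis=1)  # class label per patch

def patches_to_pixels(labels, coords, shape, patch_size=16):
    """Propagate patch labels back to the pixel level (simple post-process)."""
    out = np.zeros(shape, dtype=int)
    for lab, (y, x) in zip(labels, coords):
        out[y:y + patch_size, x:x + patch_size] = lab
    return out
```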
|
|
|
Marçal Rusiñol, David Aldavert, Ricardo Toledo and Josep Llados. 2011. Browsing Heterogeneous Document Collections by a Segmentation-Free Word Spotting Method. 11th International Conference on Document Analysis and Recognition, 63–67.
Abstract: In this paper, we present a segmentation-free word spotting method that is able to deal with heterogeneous document image collections. We propose a patch-based framework where patches are represented by a bag-of-visual-words model powered by SIFT descriptors. A later refinement of the feature vectors is performed by applying the latent semantic indexing technique. The proposed method performs well on both handwritten and typewritten historical document images. We have also tested our method on documents written in non-Latin scripts.
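A minimal sketch of the retrieval side of such a framework follows, assuming SIFT descriptors and a visual vocabulary have already been computed; the function names and the SVD-based LSI projection are illustrative, not the authors' implementation.

```python
# Bag-of-visual-words description of patches with an LSI refinement and
# cosine-similarity ranking; all parameters are illustrative.
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantize local descriptors against the vocabulary and build a histogram."""
    d = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / max(hist.sum(), 1.0)

def lsi_project(histograms, k=64):
    """Latent semantic indexing: project BoW histograms onto the top-k concepts."""
    U, S, Vt = np.linalg.svd(histograms, full_matrices=False)
    basis = Vt[:k].T                      # (n_words, k) concept basis
    return histograms @ basis, basis

def rank_patches(query_vec, patch_vecs):
    """Rank document patches by cosine similarity to the query word."""
    q = query_vec / (np.linalg.norm(query_vec) + 1e-12)
    P = patch_vecs / (np.linalg.norm(patch_vecs, axis=1, keepdims=True) + 1e-12)
    return np.argsort(-(P @ q))
```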
|
|
|
Volkmar Frinken, Andreas Fischer, Horst Bunke and Alicia Fornes. 2011. Co-training for Handwritten Word Recognition. 11th International Conference on Document Analysis and Recognition, 314–318.
Abstract: To cope with the tremendous variations of writing styles encountered between different individuals, unconstrained automatic handwriting recognition systems need to be trained on large sets of labeled data. Traditionally, the training data has to be labeled manually, which is a laborious and costly process. Semi-supervised learning techniques offer methods to utilize unlabeled data, which can be obtained cheaply in large amounts, in order to reduce the need for labeled data. In this paper, we propose the use of Co-Training for improving the recognition accuracy of two weakly trained handwriting recognition systems. The first one is based on Recurrent Neural Networks while the second one is based on Hidden Markov Models. On the IAM off-line handwriting database we demonstrate that a significant increase in recognition accuracy can be achieved with Co-Training for single word recognition.
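The general co-training loop can be illustrated as follows; the `fit`/`predict_with_confidence` interface and the confidence threshold are assumptions made for the sketch, with the two models standing in for the RNN- and HMM-based recognizers of the paper.

```python
# Generic co-training loop: two recognizers repeatedly teach each other
# on confidently labeled samples drawn from the unlabeled pool.
def co_training(model_a, model_b, labeled, unlabeled, rounds=5, threshold=0.9):
    """Each round, every model labels unlabeled words it is confident about,
    and the other model is retrained on the enlarged labeled set."""
    pool = list(unlabeled)
    train_a, train_b = list(labeled), list(labeled)
    for _ in range(rounds):
        model_a.fit(train_a)
        model_b.fit(train_b)
        still_unlabeled = []
        for sample in pool:
            label_a, conf_a = model_a.predict_with_confidence(sample)
            label_b, conf_b = model_b.predict_with_confidence(sample)
            if conf_a >= threshold:          # A teaches B
                train_b.append((sample, label_a))
            elif conf_b >= threshold:        # B teaches A
                train_a.append((sample, label_b))
            else:
                still_unlabeled.append(sample)
        pool = still_unlabeled
    return model_a, model_b
```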
|
|
|
Muhammad Muzzamil Luqman, Jean-Yves Ramel, Josep Llados and Thierry Brouard. 2011. Subgraph Spotting Through Explicit Graph Embedding: An Application to Content Spotting in Graphic Document Images. 11th International Conference on Document Analysis and Recognition, 870–874.
Abstract: We present a method for spotting a subgraph in a graph repository. Subgraph spotting is a very interesting research problem for various application domains where the use of a relational data structure is mandatory. Our proposed method accomplishes subgraph spotting through graph embedding. We achieve automatic indexing of a graph repository during the off-line learning phase, where we (i) break the graphs into 2-node subgraphs (a.k.a. cliques of order 2), which are primitive building blocks of a graph, (ii) embed the 2-node subgraphs into feature vectors by employing our recently proposed explicit graph embedding technique, (iii) cluster the feature vectors into classes by employing a classic agglomerative clustering technique, (iv) build an index for the graph repository and (v) learn a Bayesian network classifier. The subgraph spotting is achieved during the on-line querying phase, where we (i) break the query graph into 2-node subgraphs, (ii) embed them into feature vectors, (iii) employ the Bayesian network classifier for classifying the query 2-node subgraphs and (iv) retrieve the respective graphs by looking up the index of the graph repository. The graphs containing all query 2-node subgraphs form the set of result graphs for the query. Finally, we employ the adjacency matrix of each result graph, along with a score function, for spotting the query graph in it. The proposed subgraph spotting method is equally applicable to a wide range of domains, offering ease of query by example (QBE) and granularity of focused retrieval. Experimental results are presented for graphs generated from two repositories of electronic and architectural document images.
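The indexing and lookup logic can be outlined roughly as below. The explicit graph embedding, the agglomerative clustering and the Bayesian network classifier of the paper are condensed into a user-supplied `quantize` function that maps a 2-node subgraph to a cluster id, so the sketch only captures the inverted-index idea.

```python
# Illustrative skeleton: graphs are broken into 2-node subgraphs, each
# subgraph is mapped to a cluster id, and an inverted index is built.
from collections import defaultdict

def two_node_subgraphs(graph):
    """graph: {'labels': {node: label}, 'edges': [(u, v), ...]}."""
    labels, edges = graph["labels"], graph["edges"]
    return [(labels[u], labels[v]) for u, v in edges]

def build_index(graphs, quantize):
    """Map each cluster id to the set of graph ids containing it."""
    index = defaultdict(set)
    for gid, g in enumerate(graphs):
        for sub in two_node_subgraphs(g):
            index[quantize(sub)].add(gid)
    return index

def query(index, query_graph, quantize):
    """Return the graphs that contain every 2-node subgraph of the query."""
    result = None
    for sub in two_node_subgraphs(query_graph):
        hits = index.get(quantize(sub), set())
        result = hits if result is None else result & hits
    return result or set()
```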
|
|
|
Anjan Dutta, Josep Llados and Umapada Pal. 2011. Symbol Spotting in Line Drawings Through Graph Paths Hashing. 11th International Conference on Document Analysis and Recognition, 982–986.
Abstract: In this paper we propose a symbol spotting technique through hashing the shape descriptors of graph paths (Hamiltonian paths). Complex graphical structures in line drawings can be efficiently represented by graphs, which ease the accurate localization of the model symbol. Graph paths are the factorized substructures of graphs which enable robust recognition even in the presence of noise and distortion. In our framework, the entire database of graphical documents is indexed in hash tables by locality sensitive hashing (LSH) of the shape descriptors of the paths. The hashing data structure aims to execute an approximate k-NN search in sub-linear time. The spotting method is formulated as a spatial voting scheme over the list of locations of the paths obtained during the hash table lookup process. We perform detailed experiments with various datasets of line drawings and the results demonstrate the effectiveness and efficiency of the technique.
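An illustrative sketch of the hashing and voting idea, using random-hyperplane LSH as a generic stand-in for the descriptor hashing of the paper; the path descriptors, table parameters and grid size are placeholders.

```python
# Path descriptors are hashed with random hyperplanes (one common flavour
# of LSH) and candidate locations vote into a coarse spatial accumulator.
import numpy as np
from collections import defaultdict

def lsh_keys(descriptors, planes):
    """Binary hash key per descriptor: the sign pattern w.r.t. random planes."""
    bits = (descriptors @ planes.T) > 0
    return [tuple(row) for row in bits]

def build_table(path_descriptors, path_locations, planes):
    table = defaultdict(list)
    for key, loc in zip(lsh_keys(path_descriptors, planes), path_locations):
        table[key].append(loc)          # loc: (document id, x, y)
    return table

def spot(query_descriptors, table, planes, cell=50):
    """Accumulate votes in a coarse spatial grid per document."""
    votes = defaultdict(int)
    for key in lsh_keys(query_descriptors, planes):
        for doc, x, y in table.get(key, []):
            votes[(doc, x // cell, y // cell)] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])
```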
|
|
|
Dimosthenis Karatzas, Sergi Robles, Joan Mas, Farshad Nourbakhsh and Partha Pratim Roy. 2011. ICDAR 2011 Robust Reading Competition – Challenge 1: Reading Text in Born-Digital Images (Web and Email). 11th International Conference on Document Analysis and Recognition, 1485–1490.
Abstract: This paper presents the results of the first Challenge of ICDAR 2011 Robust Reading Competition. Challenge 1 is focused on the extraction of text from born-digital images, specifically from images found in Web pages and emails. The challenge was organized in terms of three tasks that look at different stages of the process: text localization, text segmentation and word recognition. In this paper we present the results of the challenge for all three tasks, and make an open call for continuous participation outside the context of ICDAR 2011.
|
|
|
Alicia Fornes, Anjan Dutta, Albert Gordo and Josep Llados. 2011. The ICDAR 2011 Music Scores Competition: Staff Removal and Writer Identification. 11th International Conference on Document Analysis and Recognition, 1511–1515.
Abstract: In recent years, there has been a growing interest in the analysis of handwritten music scores. In this sense, our goal has been to foster this interest through the proposal of two different competitions: Staff Removal and Writer Identification. Both competitions are based on the CVC-MUSCIMA database, a ground-truthed database of handwritten music score images. This paper describes the competition details, including the dataset and ground-truth, the evaluation metrics, and a short description of the participants, their methods, and the obtained results.
|
|
|
Sounak Dey, Anjan Dutta, Suman Ghosh, Ernest Valveny, Josep Llados and Umapada Pal. 2018. Learning Cross-Modal Deep Embeddings for Multi-Object Image Retrieval using Text and Sketch. 24th International Conference on Pattern Recognition, 916–921.
Abstract: In this work we introduce a cross-modal image retrieval system that allows both text and sketch as input modalities for the query. A cross-modal deep network architecture is formulated to jointly model the sketch and text input modalities as well as the image output modality, learning a common embedding between text and images and between sketches and images. In addition, an attention model is used to selectively focus on the different objects of the image, allowing for retrieval with multiple objects in the query. Experiments show that the proposed method performs best in both single- and multiple-object image retrieval on standard datasets.
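The common-embedding retrieval step might look roughly like the sketch below, where the text/sketch encoders and the attention module are assumed to have already produced the embedding vectors; the margin-based ranking loss is a generic choice, not necessarily the one used in the paper.

```python
# Query (text or sketch) and image embeddings are compared in a shared
# space; a margin ranking loss pushes matching pairs together.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def ranking_loss(query_emb, pos_image_emb, neg_image_emb, margin=0.2):
    """Push the matching image closer to the query than a non-matching one."""
    return max(0.0, margin - cosine(query_emb, pos_image_emb)
                          + cosine(query_emb, neg_image_emb))

def retrieve(query_emb, image_embs):
    """Rank gallery images by similarity to a text or sketch query."""
    sims = [cosine(query_emb, img) for img in image_embs]
    return np.argsort(sims)[::-1]
```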
|
|
|
Pau Riba, Andreas Fischer, Josep Llados and Alicia Fornes. 2018. Learning Graph Distances with Message Passing Neural Networks. 24th International Conference on Pattern Recognition, 2239–2244.
Abstract: Graph representations have been widely used in pattern recognition thanks to their powerful representation formalism and rich theoretical background. A number of error-tolerant graph matching algorithms such as graph edit distance have been proposed for computing a distance between two labelled graphs. However, they typically suffer from a high computational complexity, which makes it difficult to apply these matching algorithms in a real scenario. In this paper, we propose an efficient graph distance based on the emerging field of geometric deep learning. Our method employs a message passing neural network to capture the graph structure and learns a metric with a siamese network approach. The performance of the proposed graph distance is validated in two application cases, graph classification and graph retrieval of handwritten words, and shows a promising performance when compared with (approximate) graph edit distance benchmarks.
Keywords: ★Best Paper Award★
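A compact, hedged sketch of the overall idea: a few rounds of neighbourhood message passing refine node features, a permutation-invariant readout produces a graph vector, and the distance between two graphs is the distance between their readouts (learned with a siamese objective in the paper). The weight matrices below are placeholders rather than trained parameters.

```python
# Simple message-passing graph embedding with a siamese-style distance.
import numpy as np

def message_passing(node_feats, adjacency, W_msg, W_upd, steps=3):
    h = node_feats
    for _ in range(steps):
        messages = adjacency @ (h @ W_msg)   # aggregate neighbour messages
        h = np.tanh(h @ W_upd + messages)    # update node states
    return h

def readout(h):
    return h.sum(axis=0)                     # permutation-invariant graph vector

def graph_distance(g1, g2, W_msg, W_upd):
    """Euclidean distance between the two graph embeddings."""
    e1 = readout(message_passing(g1["x"], g1["adj"], W_msg, W_upd))
    e2 = readout(message_passing(g2["x"], g2["adj"], W_msg, W_upd))
    return float(np.linalg.norm(e1 - e2))
```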
|
|
|
Pau Riba, Josep Llados, Alicia Fornes and Anjan Dutta. 2015. Large-scale Graph Indexing using Binary Embeddings of Node Contexts. In C.-L. Liu, B. Luo, W. G. Kropatsch and J. Cheng, eds. 10th IAPR-TC15 Workshop on Graph-based Representations in Pattern Recognition. Springer International Publishing, 208–217. (LNCS).
Abstract: Graph-based representations are experiencing growing usage in visual recognition and retrieval due to their representational power compared with classical appearance-based representations in terms of feature vectors. Retrieving a query graph from a large dataset of graphs has the drawback of the high computational complexity required to compare the query and the target graphs. The most important property for large-scale retrieval is that the search time complexity be sub-linear in the number of database examples. In this paper we propose a fast indexing formalism for graph retrieval. A binary embedding is defined as hashing keys for graph nodes. Given a database of labeled graphs, graph nodes are complemented with vectors of attributes representing their local context. Specifically, each attribute counts the length of a walk of order k originating in a vertex with label l. Each attribute vector is converted to a binary code by applying a binary-valued hash function. Therefore, graph retrieval is formulated in terms of finding target graphs in the database whose nodes have a small Hamming distance from the query nodes, easily computed with bitwise logical operators. As an application example, we validate the performance of the proposed methods in a handwritten word spotting scenario in images of historical documents.
Keywords: Graph matching; Graph indexing; Application in document analysis; Word spotting; Binary embedding
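A rough sketch of the retrieval idea under simplifying assumptions: each node is described by counts of the labels seen in its k-hop neighbourhood (a stand-in for the walk-based attributes of the paper), the vector is binarised by thresholding, and nodes are compared with the Hamming distance.

```python
# Node-context binary codes compared with the Hamming distance; the exact
# attributes and hash function of the paper differ, this is illustrative.
import numpy as np

def node_contexts(adjacency, labels, n_labels, k=2):
    """Count, for each node, the labels among nodes reachable in <= k hops."""
    n = len(labels)
    reach = np.eye(n, dtype=int)
    for _ in range(k):
        reach = ((reach + reach @ adjacency) > 0).astype(int)
    contexts = np.zeros((n, n_labels))
    for v in range(n):
        for u in np.flatnonzero(reach[v]):
            contexts[v, labels[u]] += 1
    return contexts

def binarise(contexts, thresholds):
    """Hash each context vector to a binary code by thresholding."""
    return (contexts > thresholds).astype(np.uint8)

def hamming(code_a, code_b):
    """Bitwise comparison of two binary codes."""
    return int(np.count_nonzero(code_a ^ code_b))
```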
|
|