Lluis Pere de las Heras, Joan Mas, Gemma Sanchez and Ernest Valveny. 2011. Wall Patch-Based Segmentation in Architectural Floorplans. 11th International Conference on Document Analysis and Recognition, 1270–1274.
Abstract: Segmentation of architectural floor plans is a challenging task, mainly because of the large variability in notation between different plans. Traditional techniques, usually based on analyzing and grouping structural primitives obtained by vectorization, are generally only able to handle a reduced range of similar notations. In this paper we propose an alternative patch-based segmentation approach that works at the pixel level, without the need for vectorization. The image is divided into a set of patches and a set of features is extracted for every patch. Each patch is then assigned to a visual word of a previously learned vocabulary and given a probability of belonging to each class of objects. Finally, a post-processing step assigns the final label for every pixel. This approach has been applied to the detection of walls on two datasets of architectural floor plans with different notations, achieving high accuracy rates.
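The visual-word assignment step described in this abstract can be illustrated with a generic vector-quantization sketch (the function name, plain-list features and toy vocabulary below are hypothetical stand-ins; the paper's actual patch features and vocabulary learning are not reproduced):

```python
def assign_visual_words(patch_features, vocabulary):
    """Assign each patch feature vector to its nearest visual word.

    Generic bag-of-visual-words quantization: both arguments are
    sequences of equal-length numeric vectors.
    """
    # Squared Euclidean distance between two equal-length vectors.
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    labels = []
    for feat in patch_features:
        # Index of the nearest visual word for this patch.
        nearest = min(range(len(vocabulary)),
                      key=lambda w: sqdist(feat, vocabulary[w]))
        labels.append(nearest)
    return labels
```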
Dimosthenis Karatzas, Sergi Robles, Joan Mas, Farshad Nourbakhsh and Partha Pratim Roy. 2011. ICDAR 2011 Robust Reading Competition – Challenge 1: Reading Text in Born-Digital Images (Web and Email). 11th International Conference on Document Analysis and Recognition, 1485–1490.
Abstract: This paper presents the results of the first Challenge of ICDAR 2011 Robust Reading Competition. Challenge 1 is focused on the extraction of text from born-digital images, specifically from images found in Web pages and emails. The challenge was organized in terms of three tasks that look at different stages of the process: text localization, text segmentation and word recognition. In this paper we present the results of the challenge for all three tasks, and make an open call for continuous participation outside the context of ICDAR 2011.
Anjan Dutta, Josep Llados, Horst Bunke and Umapada Pal. 2013. Near Convex Region Adjacency Graph and Approximate Neighborhood String Matching for Symbol Spotting in Graphical Documents. 12th International Conference on Document Analysis and Recognition, 1078–1082.
Abstract: This paper deals with a subgraph matching problem in Region Adjacency Graphs (RAGs) applied to symbol spotting in graphical documents. A RAG is an important, efficient and natural way of representing graphical information, but it is limited to cases where the information is well defined by perfectly delineated regions. What if the information we are interested in is not confined within well-defined regions? This paper addresses this particular problem and solves it by defining near-convex groupings of oriented line segments, which result in near-convex regions. Pure convexity imposes hard constraints and cannot handle all cases efficiently. Hence, we have defined a new type of region convexity, which allows convex regions to have concavity to some extent. We call these regions Near Convex Regions (NCRs). The NCRs are then used to create the Near Convex Region Adjacency Graph (NCRAG), and with this representation we formulate symbol spotting in graphical documents as a subgraph matching problem. For subgraph matching we use the Approximate Edit Distance Algorithm (AEDA) on the neighborhood string, which starts after finding a key node in the input (target) graph and iteratively identifies similar nodes of the query graph in the neighborhood of the key node. The experiments are performed on artificial, real and distorted datasets.
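The string matching underlying AEDA can be illustrated with the classic edit (Levenshtein) distance; this is a minimal sketch only — the approximate variant, the neighborhood-string encoding and the graph construction from the paper are not reproduced:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance between two label strings.

    dp[i][j] holds the minimum cost of transforming a[:i] into b[:j]
    using unit-cost insertions, deletions and substitutions.
    """
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j          # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # match/substitution
    return dp[m][n]
```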
D. Perez, L. Tarazon, N. Serrano, F.M. Castro, Oriol Ramos Terrades and A. Juan. 2009. The GERMANA Database. 10th International Conference on Document Analysis and Recognition, 301–305.
Abstract: A new handwritten text database, GERMANA, is presented to facilitate empirical comparison of different approaches to text line extraction and off-line handwriting recognition. GERMANA is the result of digitising and annotating a 764-page Spanish manuscript from 1891, in which most pages only contain nearly calligraphed text written on ruled sheets of well-separated lines. To our knowledge, it is the first publicly available database for handwriting research, mostly written in Spanish and comparable in size to standard databases. Due to its sequential book structure, it is also well-suited for realistic assessment of interactive handwriting recognition systems. To provide baseline results for reference in future studies, empirical results are also reported, using standard techniques and tools for preprocessing, feature extraction, HMM-based image modelling, and language modelling.
M. Visani, V.C. Kieu, Alicia Fornes and N. Journet. 2013. The ICDAR 2013 Music Scores Competition: Staff Removal. 12th International Conference on Document Analysis and Recognition, 1439–1443.
Abstract: The first competition on music scores, organized at ICDAR 2011, aroused the interest of researchers, who participated in both the staff removal and writer identification tasks. In this second edition, we focus on the staff removal task and simulate a real-case scenario: old music scores. For this purpose, we have generated a new set of images using two kinds of degradations: local noise and 3D distortions. This paper describes the dataset, the distortion methods, the evaluation metrics, the participants' methods and the obtained results.
Marçal Rusiñol, T. Benkhelfallah and V. Poulain d'Andecy. 2013. Field Extraction from Administrative Documents by Incremental Structural Templates. 12th International Conference on Document Analysis and Recognition, 1100–1104.
Abstract: In this paper we present an incremental framework for extracting field information from administrative document images in the context of a digital mailroom scenario. Given a single training sample in which the user has marked the fields to be extracted from a particular document class, a document model representing structural relationships among words is built. This model is incrementally refined as the system processes more documents from the same class. A reformulation of the tf-idf weighting scheme makes it possible to adjust the importance weights of the structural relationships among words. In the experimental section we report results obtained on a large dataset of real invoices.
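As a rough illustration of the weighting scheme the abstract mentions, here is the standard tf-idf computation over token lists (a hypothetical stand-in: the paper reformulates tf-idf over structural relationships among words, not plain term counts):

```python
import math
from collections import Counter

def tfidf(docs):
    """Standard tf-idf weights per (document, term).

    docs is a list of token lists. Returns one dict per document
    mapping each term to tf(term, doc) * log(N / df(term)).
    """
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights
```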
Albert Gordo, Marçal Rusiñol, Dimosthenis Karatzas and Andrew Bagdanov. 2013. Document Classification and Page Stream Segmentation for Digital Mailroom Applications. 12th International Conference on Document Analysis and Recognition, 621–625.
Abstract: In this paper we present a method for the segmentation of continuous page streams into multipage documents and the simultaneous classification of the resulting documents. We first present an approach to combine the multiple pages of a document into a single feature vector that represents the whole document. Despite its simplicity and low computational cost, the proposed representation yields results comparable to more complex methods in multipage document classification tasks. We then exploit this representation in the context of page stream segmentation. The most plausible segmentation of a page stream into a sequence of multipage documents is obtained by optimizing a statistical model that represents the probability of each segmented multipage document belonging to a particular class. Experimental results are reported on a large sample of real administrative multipage documents.
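The segmentation step can be sketched as a dynamic program over cut points, assuming a hypothetical span_score(i, j) giving the log-probability that pages i..j-1 form one document of its best-matching class (the paper derives such scores from a statistical document-class model, not reproduced here):

```python
def segment_stream(n_pages, span_score):
    """Optimal segmentation of a page stream into multipage documents.

    best[k] is the best total score for the first k pages; back[k]
    records where the last document in that solution starts.
    Returns the segment boundaries as (start, end) page-index pairs.
    """
    best = [float("-inf")] * (n_pages + 1)
    best[0] = 0.0
    back = [0] * (n_pages + 1)
    for k in range(1, n_pages + 1):
        for i in range(k):
            s = best[i] + span_score(i, k)
            if s > best[k]:
                best[k], back[k] = s, i
    # Walk back pointers to recover the segment boundaries.
    cuts, k = [], n_pages
    while k > 0:
        cuts.append((back[k], k))
        k = back[k]
    return cuts[::-1]
```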
L. Rothacker, Marçal Rusiñol and G.A. Fink. 2013. Bag-of-Features HMMs for segmentation-free word spotting in handwritten documents. 12th International Conference on Document Analysis and Recognition, 1305–1309.
Abstract: Recent HMM-based approaches to handwritten word spotting require large amounts of learning samples and mostly rely on a prior segmentation of the document. We propose to use Bag-of-Features HMMs, estimated from a single sample, in a patch-based segmentation-free framework. Bag-of-Features HMMs use statistics of local image feature representatives. They can therefore be considered a variant of discrete HMMs that models the observation of a number of features at a point in time. The discrete nature enables us to estimate a query model from only a single example of the query provided by the user. This makes our method very flexible with respect to the availability of training data. Furthermore, we are able to outperform state-of-the-art results on the George Washington dataset.
Thanh Ha Do, Salvatore Tabbone and Oriol Ramos Terrades. 2013. New Approach for Symbol Recognition Combining Shape Context of Interest Points with Sparse Representation. 12th International Conference on Document Analysis and Recognition, 265–269.
Abstract: In this paper, we propose a new approach to symbol description. Our method combines a shape context of interest points descriptor with sparse representation. More specifically, we first learn a dictionary describing shape context of interest points descriptors. Then, drawing on information retrieval techniques, we build a vector model for each symbol from its sparse representation in a visual vocabulary whose visual words are the columns of the learned dictionary. The retrieval task is performed by ranking symbols by the similarity between their vector models. Evaluation of our method on benchmark datasets demonstrates the validity of our approach and shows that it outperforms related state-of-the-art methods.
R. Bertrand, P. Gomez-Krämer, Oriol Ramos Terrades, P. Franco and Jean-Marc Ogier. 2013. A System Based On Intrinsic Features for Fraudulent Document Detection. 12th International Conference on Document Analysis and Recognition, 106–110.
Abstract: Paper documents still account for a large share of the information exchanged nowadays and may contain critical data. Even though official documents are secured with techniques such as printed patterns or artwork, ordinary paper documents suffer from a lack of security.
However, the high availability of cheap scanning and printing hardware allows non-experts to easily create fake documents. As the use of a watermarking system added during the document production step is hardly possible, solutions have to be proposed to distinguish a genuine document from a forged one.
In this paper, we present an automatic forgery detection method based on a document's intrinsic features at the character level. It relies on the one hand on outlier character detection in a discriminant feature space and on the other hand on the detection of strictly similar characters. To this end, a feature set is computed for all characters, which are then compared using a distance between characters of the same class.
Keywords: paper document; document analysis; fraudulent document; forgery; fake
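The outlier-detection half of the method can be illustrated with a simple per-dimension z-score test (a minimal sketch with hypothetical features and threshold; the paper works in a discriminant feature space at the character level, which is not reproduced here):

```python
import statistics

def flag_outliers(features, z_thresh=3.0):
    """Flag feature vectors that deviate strongly from their class.

    features is a list of equal-length numeric tuples belonging to one
    character class. A vector is flagged when any of its dimensions
    lies more than z_thresh population standard deviations from the
    class mean. Returns the indices of flagged vectors.
    """
    dim = len(features[0])
    means = [statistics.mean(f[d] for f in features) for d in range(dim)]
    # Guard against zero spread in a dimension.
    stdevs = [statistics.pstdev(f[d] for f in features) or 1.0
              for d in range(dim)]
    flagged = []
    for idx, f in enumerate(features):
        z = max(abs(f[d] - means[d]) / stdevs[d] for d in range(dim))
        if z > z_thresh:
            flagged.append(idx)
    return flagged
```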