|
Gemma Sánchez, Ernest Valveny, Josep Lladós, Joan Mas and N. Lozano. 2004. A platform to extract knowledge from graphic documents: application to an architectural sketch understanding scenario.
|
|
|
Gemma Sánchez and 6 others. 2003. A system for virtual prototyping of architectural projects. Proceedings of the Fifth IAPR International Workshop on Pattern Recognition, 65–74.
|
|
|
Gemma Sánchez, Alicia Fornés, Joan Mas and Josep Lladós. 2007. Computer Vision Tools for Visually Impaired Children Learning.
|
|
|
G. Thorvaldsen and 6 others. 2015. A Tale of Two Transcriptions.
Abstract: This article explains how two projects implement semi-automated transcription routines: for census sheets in Norway and marriage protocols from Barcelona. The Spanish system was created to transcribe the marriage license books from 1451 to 1905 for the Barcelona area, one of the world's longest series of preserved vital records. Thus, in the project "Five Centuries of Marriages" (5CofM) at the Autonomous University of Barcelona's Center for Demographic Studies, the Barcelona Historical Marriage Database has been built. More than 600,000 records were transcribed by 150 transcribers working online. The Norwegian material is cross-sectional, as it is the 1891 census, recorded on one sheet per person. This format, and the underlining of keywords for several variables, made it more feasible to semi-automate data entry than when many persons are listed on the same page. While Optical Character Recognition (OCR) for printed text is scientifically mature, computer vision research is now focused on more difficult problems such as handwriting recognition. In the marriage project, document analysis methods have been proposed to automatically recognize the marriage licenses. Fully automatic recognition is still a challenge, but some promising results have been obtained. In Spain, Norway and elsewhere the source material is available as scanned images on the Internet, opening up the possibility of further international cooperation on automating the transcription of historic source materials. As in projects to digitize printed materials, the optimal solution for handwritten sources is likely to be a combination of manual transcription and machine-assisted recognition.
Keywords: Nominative Sources; Census; Vital Records; Computer Vision; Optical Character Recognition; Word Spotting
|
|
|
Francisco Cruz and Oriol Ramos Terrades. 2013. Handwritten Line Detection via an EM Algorithm. 12th International Conference on Document Analysis and Recognition, 718–722.
Abstract: In this paper we present a handwritten line segmentation method devised to work on documents composed of several paragraphs with multiple line orientations. The method is based on a variation of the EM algorithm for the estimation of a set of regression lines over the connected components that compose the image. We evaluated our method on the ICDAR2009 handwriting segmentation contest dataset, with promising results that outperform most of the participating methods. In addition, we demonstrate the applicability of the presented method by performing line segmentation on the George Washington database, obtaining encouraging results.
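The core idea of this abstract, EM alternating between soft-assigning components to candidate text lines and refitting each line by weighted least squares, can be sketched as follows. This is an illustrative toy version, not the authors' implementation: the Gaussian vertical-residual model, the fixed noise scale, and the quantile-based initialization are all assumptions made for the sketch.

```python
import numpy as np

def em_regression_lines(points, n_lines, n_iter=50, sigma=1.0):
    """Fit n_lines regression lines y = a*x + b to 2-D points with EM.

    E-step: soft-assign each point to a line from the Gaussian
    likelihood of its vertical residual. M-step: refit each line
    by weighted least squares. Returns an (n_lines, 2) array of (a, b).
    """
    x, y = points[:, 0], points[:, 1]
    # Init: zero slopes, intercepts spread over the y-quantiles
    # (assumption: roughly horizontal text lines).
    params = np.zeros((n_lines, 2))
    params[:, 1] = np.quantile(y, (np.arange(n_lines) + 0.5) / n_lines)
    A = np.stack([x, np.ones_like(x)], axis=1)  # design matrix
    for _ in range(n_iter):
        # E-step: responsibilities from residual log-likelihoods.
        resid = y[None, :] - (params[:, :1] * x[None, :] + params[:, 1:])
        logp = -0.5 * (resid / sigma) ** 2
        logp -= logp.max(axis=0, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=0, keepdims=True)
        # M-step: weighted least squares refit of each line.
        for k in range(n_lines):
            w = resp[k]
            params[k] = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
    return params
```

In a real line-detection setting the points would be connected-component centroids; here any 2-D point cloud works.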
|
|
|
Francisco Cruz and Oriol Ramos Terrades. 2012. Document segmentation using relative location features. 21st International Conference on Pattern Recognition, 1562–1565.
Abstract: In this paper we evaluate the use of Relative Location Features (RLF) on a historical document segmentation task, and compare the quality of the results obtained on structured and unstructured documents with and without RLF. We show that using these features improves the final segmentation on documents with a strong structure, while their application to unstructured documents does not show significant improvement. Although this paper is not focused on segmenting unstructured documents, results obtained on a benchmark dataset equal or even surpass previous results of similar works.
|
|
|
Francisco Cruz and Oriol Ramos Terrades. 2014. EM-Based Layout Analysis Method for Structured Documents. 22nd International Conference on Pattern Recognition, 315–320.
Abstract: In this paper we present a method to perform layout analysis in structured documents. We propose an EM-based algorithm to fit a set of Gaussian mixtures to the different regions according to the logical distribution along the page. After convergence, we estimate the final shape of the regions from the parameters computed for each component of the mixture. We evaluated our method on the task of record detection in a collection of historical structured documents and compared it with previous works on this task.
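The generic machinery the abstract relies on, EM for a Gaussian mixture whose fitted components yield region shapes, can be illustrated with a small sketch. This is not the paper's page model: the diagonal covariances, the component count, and reading a region box off as mean ± 2·std are assumptions of the sketch.

```python
import numpy as np

def gmm_regions(pts, k, n_iter=60):
    """EM for a k-component diagonal-covariance Gaussian mixture in 2-D.

    Returns per-component means and standard deviations, from which a
    region box can be read as mean +/- 2 * std.
    """
    n = len(pts)
    # Init: means on evenly spaced points, unit variances, uniform weights.
    mu = pts[np.linspace(0, n - 1, k).astype(int)].astype(float)
    var = np.ones((k, 2))
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: log responsibilities under each diagonal Gaussian.
        d = pts[None, :, :] - mu[:, None, :]              # (k, n, 2)
        logp = (-0.5 * (d ** 2 / var[:, None, :]).sum(-1)
                - 0.5 * np.log(var).sum(-1)[:, None]
                + np.log(w)[:, None])
        logp -= logp.max(0, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(0, keepdims=True)                      # (k, n)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(1) + 1e-9
        w = nk / n
        mu = (r[:, :, None] * pts[None]).sum(1) / nk[:, None]
        d = pts[None] - mu[:, None]
        var = (r[:, :, None] * d ** 2).sum(1) / nk[:, None] + 1e-6
    return mu, np.sqrt(var)
```

For page layout, `pts` would be pixel or component coordinates, and each mixture component would correspond to one logical region.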
|
|
|
Francisco Cruz and Oriol Ramos Terrades. 2018. A probabilistic framework for handwritten text line segmentation.
Abstract: We successfully combine the Expectation-Maximization algorithm and variational approaches for parameter learning and computing inference on Markov random fields. This is a general method that can be applied to many computer vision tasks. In this paper, we apply it to handwritten text line segmentation. We conduct several experiments that demonstrate that our method deals with common issues of this task, such as complex document layouts or non-Latin scripts. The obtained results show that our method achieves state-of-the-art performance on different benchmark datasets without any particular fine-tuning step.
Keywords: Document Analysis; Text Line Segmentation; EM algorithm; Probabilistic Graphical Models; Parameter Learning
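The variational ingredient named in this abstract can be illustrated with mean-field inference on a toy chain-structured Markov random field over binary labels. This is a generic sketch of mean-field updates, not the paper's model; the Ising-style agreement potential, the coupling strength, and the sequential update order are assumptions of the sketch.

```python
import numpy as np

def mean_field_chain(unary, coupling=1.0, n_iter=30):
    """Mean-field (variational) inference for binary labels on a chain MRF.

    unary: (N, 2) log-potentials for labels {0, 1} at each node.
    Neighbouring nodes prefer equal labels with strength `coupling`.
    Returns q, an (N, 2) array of approximate marginals.
    """
    n = unary.shape[0]
    q = np.full((n, 2), 0.5)
    # Pairwise log-potential: +coupling when labels agree, 0 otherwise.
    pair = coupling * np.eye(2)
    for _ in range(n_iter):
        for i in range(n):
            # Expected pairwise contribution from the chain neighbours.
            msg = np.zeros(2)
            if i > 0:
                msg += q[i - 1] @ pair
            if i < n - 1:
                msg += q[i + 1] @ pair
            logits = unary[i] + msg
            logits -= logits.max()
            e = np.exp(logits)
            q[i] = e / e.sum()
    return q
```

With strong coupling, a node whose unary evidence weakly disagrees with its neighbours is smoothed towards the surrounding label, which is the qualitative behaviour such models exploit when grouping pixels or components into text lines.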
|
|
|
Francisco Cruz. 2016. Probabilistic Graphical Models for Document Analysis. Ph.D. thesis, Ediciones Gráficas Rey.
Abstract: Recent advances in digitization techniques have fostered interest in creating digital copies of collections of documents. Digitized documents permit easy maintenance, lossless storage, and efficient ways to transmit them and to perform information retrieval processes. This situation has opened a new market niche for systems able to automatically extract and analyze the information contained in these collections, especially in the context of business activity.
Due to the great variety of types of documents, this is not a trivial task. For instance, the automatic extraction of numerical data from invoices differs substantially from a task of text recognition in historical documents. However, in order to extract the information of interest, it is always necessary to identify the area of the document where it is located. In the area of Document Analysis we refer to this process as layout analysis, which aims at identifying and categorizing the different entities that compose the document, such as text regions, pictures, text lines, or tables, among others. To perform this task it is usually necessary to incorporate prior knowledge about the task into the analysis process, which can be modeled by defining a set of contextual relations between the different entities of the document. The use of context has proven useful for reinforcing the recognition process and improving the results on many computer vision tasks. It raises two fundamental questions: what kind of contextual information is appropriate for a given task, and how to incorporate this information into the models.
In this thesis we study several ways to incorporate contextual information into the task of document layout analysis, and into the particular case of handwritten text line segmentation. We focus on the study of Probabilistic Graphical Models and other mechanisms for this purpose, and propose several solutions to these problems. First, we present a method for layout analysis based on Conditional Random Fields. With this model we encode local contextual relations between variables, such as pairwise constraints. In addition, we encode a set of structural relations between different classes of regions at the feature level. Second, we present a method based on 2D Probabilistic Context-Free Grammars to encode structural and hierarchical relations, and perform a comparative study between Probabilistic Graphical Models and this syntactic approach. Third, we propose a method for structured documents based on Bayesian Networks to represent the document structure, together with an algorithm based on Expectation-Maximization to find the best configuration of the page. We perform a thorough evaluation of the proposed methods on two particular collections of documents: a historical collection composed of ancient structured documents, and a collection of contemporary documents. In addition, we present a general method for the task of handwritten text line segmentation. We define a probabilistic framework in which we combine the EM algorithm with variational approaches for computing inference and parameter learning on a Markov Random Field. We evaluate our method on several collections of documents, including a general dataset of annotated administrative documents. Results demonstrate the applicability of our method to real problems, and the contribution of contextual information to this kind of problem.
|
|