Antonio Clavelli and Dimosthenis Karatzas. 2009. Text Segmentation in Colour Posters from the Spanish Civil War Era. 10th International Conference on Document Analysis and Recognition, 181–185.
Abstract: The extraction of textual content from colour documents of a graphical nature is a complicated task. The text can be rendered in any colour, size and orientation, while the presence of complex background graphics with repetitive patterns can make its localization and segmentation extremely difficult. Here, we propose a new method for extracting textual content from such colour images that makes no assumption about the size, orientation or colour of the characters, and that is tolerant to characters that do not follow a straight baseline. We evaluate this method on a collection of documents of historical significance: the posters from the Spanish Civil War.
H. Chouaib, Salvatore Tabbone, Oriol Ramos Terrades, F. Cloppet, N. Vincent and Thierry Paquet. 2008. Sélection de Caractéristiques à partir d'un algorithme génétique et d'une combinaison de classifieurs Adaboost [Feature selection using a genetic algorithm and a combination of AdaBoost classifiers]. Colloque International Francophone sur l'Écrit et le Document, 181–186.
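Illustrative sketch (not part of the paper): the entry above carries no abstract, but its title names a concrete technique, genetic-algorithm feature selection with AdaBoost classifiers providing the fitness signal. The following minimal Python sketch shows that general idea under assumed placeholders (the digits dataset, a tiny population, toy GA parameters); it is not the authors' actual method.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)   # placeholder dataset, not the paper's
n_features = X.shape[1]

def fitness(mask):
    """Score a binary feature mask by AdaBoost cross-validation accuracy."""
    if not mask.any():
        return 0.0
    clf = AdaBoostClassifier(n_estimators=30, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Tiny generational GA over binary feature masks.
pop = rng.random((10, n_features)) < 0.5
for generation in range(5):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:4]]   # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(4, size=2)]
        cut = rng.integers(1, n_features)         # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_features) < 0.01      # bit-flip mutation
        children.append(child ^ flip)
    pop = np.array(children)

best = max(pop, key=fitness)
print(f"selected {best.sum()} of {n_features} features")
```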
Marçal Rusiñol, J. Chazalon and Jean-Marc Ogier. 2014. Combining Focus Measure Operators to Predict OCR Accuracy in Mobile-Captured Document Images. 11th IAPR International Workshop on Document Analysis Systems, 181–185.
Abstract: Mobile document image acquisition is a new trend raising serious issues in business document processing workflows. Such digitization is unreliable and introduces many distortions, which must be detected as early as possible, on the mobile device, to avoid paying data transmission fees and losing information when a document with temporary availability cannot be re-captured later. In this context, out-of-focus blur is a major issue: users have no direct control over it, and it seriously degrades OCR recognition. In this paper, we concentrate on the estimation of focus quality, to ensure sufficient legibility of a document image for OCR processing. We propose two contributions to improve OCR accuracy prediction for mobile-captured document images. First, we present 24 focus measures, never tested on document images, which are fast to compute and require no training. Second, we show that a combination of those measures achieves state-of-the-art performance in terms of correlation with OCR accuracy. The resulting approach is fast, robust and easy to implement on a mobile device. Experiments are performed on a public dataset, and precise details about the image processing are given.
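Illustrative sketch (not part of the paper): the abstract describes combining cheap, training-free focus measures and mapping the combination to OCR accuracy. The sketch below assumes OpenCV and two classic operators (variance of Laplacian and Tenengrad); the paper's 24 operators and its learned combination are not reproduced here.

```python
import cv2
import numpy as np

def variance_of_laplacian(gray):
    """Classic sharpness score: variance of the Laplacian response."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def tenengrad(gray):
    """Tenengrad focus measure: mean squared Sobel gradient magnitude."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx ** 2 + gy ** 2))

def focus_features(image_path):
    """Stack several focus measures into a feature vector; a regressor
    trained on (features, OCR accuracy) pairs would then combine them."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    return np.array([variance_of_laplacian(gray), tenengrad(gray)])

# Example: features = focus_features("page.jpg"); a simple linear model
# (e.g. sklearn.linear_model.Ridge) could map them to predicted OCR accuracy.
```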
Thanh Ha Do, Oriol Ramos Terrades and Salvatore Tabbone. 2019. DSD: document sparse-based denoising algorithm. Pattern Analysis and Applications, 22(1), 177–186.
Abstract: In this paper, we present a sparse-based denoising algorithm for scanned documents. This method can be applied to any kind of scanned document with satisfactory results. Unlike other approaches, the proposed approach encodes noisy documents through sparse representation and visual dictionary learning techniques, without any prior noise model. Moreover, we propose a precision parameter estimator. Experiments on several datasets demonstrate the robustness of the proposed approach compared to state-of-the-art document denoising methods.
Keywords: Document denoising; Sparse representations; Sparse dictionary learning; Document degradation models
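Illustrative sketch (not the DSD algorithm itself): sparse-representation denoising can be demonstrated by learning a patch dictionary and reconstructing each patch from a few atoms. The sketch assumes scikit-learn's dictionary learning; the paper's precision parameter estimator is omitted.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def sparse_denoise(image, patch_size=(8, 8), n_atoms=64, n_nonzero=4):
    """Denoise a grayscale image (2-D float array in [0, 1]) by sparse
    coding of its patches over a dictionary learned from the image itself."""
    patches = extract_patches_2d(image, patch_size)
    X = patches.reshape(len(patches), -1)
    mean = X.mean(axis=1, keepdims=True)
    X = X - mean                                   # code zero-mean patches
    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",                 # orthogonal matching pursuit
        transform_n_nonzero_coefs=n_nonzero,
        random_state=0,
    ).fit(X)
    code = dico.transform(X)                       # sparse coefficients
    denoised = code @ dico.components_ + mean      # reconstruct patches
    return reconstruct_from_patches_2d(denoised.reshape(patches.shape),
                                       image.shape)

# Example with synthetic salt-and-pepper noise:
clean = np.ones((64, 64))
noisy = clean.copy()
noisy[np.random.default_rng(0).random(clean.shape) < 0.05] = 0.0
restored = sparse_denoise(noisy)
```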
Joan Mas, J.A. Jorge, Gemma Sanchez and Josep Llados. 2008. Representing and Parsing Sketched Symbols using Adjacency Grammars and a Grid-Directed Parser. In W. Liu, J. Lladós and J.M. Ogier, eds. Graphics Recognition: Recent Advances and New Opportunities. LNCS, 176–187.
Arnau Baro, Pau Riba and Alicia Fornes. 2022. Musigraph: Optical Music Recognition Through Object Detection and Graph Neural Network. Frontiers in Handwriting Recognition: International Conference on Frontiers in Handwriting Recognition (ICFHR 2022). LNCS, 171–184.
Abstract: During the last decades, the performance of optical music recognition has been steadily improving. However, despite the 2-dimensional nature of music notation (e.g. notes have rhythm and pitch), most works treat musical scores as a one-dimensional sequence of symbols, which makes their recognition still a challenge. In this work we therefore explore the use of graph neural networks for musical score recognition: first, because graphs are suited to n-dimensional representations, and second, because the combination of graphs with deep learning has shown great performance in similar applications. Our methodology is as follows. First, we detect each isolated/atomic symbol (those that cannot be decomposed into further graphical primitives) and the primitives that form a musical symbol. Then, we build a graph taking the notehead as root node and, as leaves, those primitives or symbols that modify the note's rhythm (stem, beam, flag) or pitch (flat, sharp, natural). Finally, the graph is translated into a human-readable character sequence for final transcription and evaluation. Our method has been tested on more than five thousand measures, showing promising results.
Keywords: Object detection; Optical music recognition; Graph neural network
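Illustrative sketch (not part of the paper): the graph construction described in the abstract, a notehead root with rhythm/pitch modifiers as leaves, can be mocked up as follows. The detections are hypothetical placeholders, networkx stands in for graph handling, and the GNN stage is not reproduced.

```python
import networkx as nx

# Hypothetical object-detection output for one note: (symbol class, box).
detections = [
    ("notehead", (10, 40, 18, 48)),
    ("stem",     (17, 10, 19, 44)),
    ("beam",     (17, 8, 60, 14)),
    ("sharp",    (2, 38, 8, 50)),
]

RHYTHM = {"stem", "beam", "flag"}
PITCH = {"flat", "sharp", "natural"}

def build_note_graph(detections):
    """Root the graph at the notehead; attach modifiers as leaves."""
    g = nx.DiGraph()
    root = next(i for i, (c, _) in enumerate(detections) if c == "notehead")
    for i, (cls, box) in enumerate(detections):
        g.add_node(i, cls=cls, box=box)
        if i != root:
            role = ("rhythm" if cls in RHYTHM
                    else "pitch" if cls in PITCH else "other")
            g.add_edge(root, i, role=role)
    return g, root

def to_sequence(g, root):
    """Flatten the graph into a human-readable token sequence."""
    tokens = [g.nodes[root]["cls"]]
    tokens += sorted(g.nodes[i]["cls"] for i in g.successors(root))
    return " ".join(tokens)

g, root = build_note_graph(detections)
print(to_sequence(g, root))   # notehead beam sharp stem
```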
Muhammad Muzzamil Luqman, Thierry Brouard, Jean-Yves Ramel and Josep Llados. 2010. Vers une approche floue d'encapsulation de graphes: application à la reconnaissance de symboles [Towards a fuzzy graph embedding approach: application to symbol recognition]. Colloque International Francophone sur l'Écrit et le Document, 169–184.
Abstract: We present a new methodology for symbol recognition that employs a structural approach for representing visual associations in symbols and a statistical classifier for recognition. A graphic symbol is vectorized, its topological and geometrical details are encoded by an attributed relational graph, and a signature is computed for it. Data-adapted fuzzy intervals are introduced to address the sensitivity of structural representations to noise. The joint probability distribution of the signatures is encoded by a Bayesian network, which serves as a mechanism for pruning irrelevant features and choosing a subset of interesting features from the structural signatures of the underlying symbol set, and is deployed in a supervised learning scenario for recognizing query symbols. Experimental results on pre-segmented 2D linear architectural and electronic symbols from the GREC databases are presented.
Keywords: Fuzzy interval; Graph embedding; Bayesian network; Symbol recognition
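Illustrative sketch (not part of the paper): the fuzzy-interval signature idea can be shown on a single made-up edge attribute (relative length). The intervals below are invented for illustration, whereas the paper learns them from data, and the Bayesian-network classifier is omitted.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / (b - a), 0.0, 1.0)
    fall = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rise, fall)

# Hypothetical fuzzy intervals for "relative edge length" (short/medium/long).
INTERVALS = [(-0.1, 0.0, 0.2, 0.4), (0.2, 0.4, 0.6, 0.8), (0.6, 0.8, 1.0, 1.1)]

def fuzzy_signature(edge_lengths):
    """Histogram-like signature: each edge votes fuzzily into every interval,
    so a value near an interval boundary contributes to both neighbours
    instead of flipping bins under small amounts of noise."""
    lengths = np.asarray(edge_lengths, dtype=float)
    lengths = lengths / lengths.max()              # normalize to [0, 1]
    sig = np.array([trapezoid(lengths, *iv).sum() for iv in INTERVALS])
    return sig / sig.sum()

# Edge lengths of a vectorized symbol's attributed relational graph (toy values).
print(fuzzy_signature([5.0, 5.2, 12.0, 19.5, 20.0]))
```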
Sebastien Mace, Herve Locteau, Ernest Valveny and Salvatore Tabbone. 2010. A system to detect rooms in architectural floor plan images. 9th IAPR International Workshop on Document Analysis Systems, 167–174.
Abstract: In this article, a system to detect rooms in architectural floor plan images is described. We first present a primitive extraction algorithm for line detection. It is based on an original coupling of the classical Hough transform with image vectorization in order to perform robust and efficient line detection. We show how lines that satisfy certain graphical arrangements are combined into walls. We also present the way we detect door hypotheses through the extraction of arcs. Walls and door hypotheses are then used by our room segmentation strategy, which recursively decomposes the image until nearly convex regions are obtained. The notion of convexity is difficult to quantify, and the selection of separation lines between regions can also be rough; we therefore take advantage of knowledge associated with architectural floor plans in order to obtain mostly rectangular rooms. Qualitative and quantitative evaluations performed on a corpus of real documents show promising results.
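Illustrative sketch (not part of the paper): the first stage, Hough-based line detection, can be approximated with OpenCV's probabilistic Hough transform; the vectorization coupling, wall grouping and recursive room decomposition described in the abstract are only hinted at in comments. File names and thresholds are placeholders.

```python
import cv2
import numpy as np

def detect_wall_candidates(plan_path, min_len=40):
    """Detect long line segments in a floor-plan image with the probabilistic
    Hough transform; long axis-aligned segments are wall candidates."""
    img = cv2.imread(plan_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(plan_path)
    edges = cv2.Canny(img, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=60, minLineLength=min_len,
                               maxLineGap=5)
    return [] if segments is None else [tuple(s[0]) for s in segments]

def is_axis_aligned(seg, tol_deg=5.0):
    """Floor-plan walls are mostly horizontal or vertical; keep those."""
    x1, y1, x2, y2 = seg
    angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 90.0
    return angle < tol_deg or angle > 90.0 - tol_deg

# walls = [s for s in detect_wall_candidates("plan.png") if is_axis_aligned(s)]
# A room-segmentation step would then recursively split the region bounded
# by these walls until the resulting parts are nearly convex.
```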
Josep Llados, Horst Bunke and Enric Marti. 1997. Using Cyclic String Matching to Find Rotational and Reflectional Symmetries in Shapes. In Intelligent Robots: Sensing, Modeling and Planning. World Scientific Press, 164–179.
Note: Dagstuhl Workshop.
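Illustrative sketch (not part of the paper): no abstract is recorded, but the title points to a classic technique, encoding a shape boundary as a cyclic string and detecting symmetry by cyclic matching. The sketch below follows that reading with exact matching only; handling noisy shapes would require approximate cyclic string matching.

```python
def rotational_symmetry_order(boundary):
    """Order of rotational symmetry of a closed polygonal boundary encoded
    as a cyclic string of primitives, e.g. (segment length, turn angle) pairs.
    A cyclic shift by n/k positions that maps the string onto itself
    reveals k-fold symmetry; the smallest such shift wins."""
    n = len(boundary)
    doubled = boundary * 2               # doubling trick for cyclic shifts
    for shift in range(1, n):
        if n % shift == 0 and doubled[shift:shift + n] == boundary:
            return n // shift
    return 1

def has_reflectional_symmetry(boundary):
    """A reflected shape traverses its boundary in reverse order, so
    reflectional symmetry shows up as the reversed cyclic string
    matching some rotation of the original."""
    n = len(boundary)
    reversed_doubled = boundary[::-1] * 2
    return any(reversed_doubled[s:s + n] == boundary for s in range(n))

# A square: four identical (length, turn) primitives -> 4-fold symmetry.
square = [(1, 90)] * 4
print(rotational_symmetry_order(square))   # 4
print(has_reflectional_symmetry(square))   # True
```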
Arka Ujal Dey, Suman Ghosh, Ernest Valveny and Gaurav Harit. 2021. Beyond Visual Semantics: Exploring the Role of Scene Text in Image Understanding. Pattern Recognition Letters, 149, 164–171.
Abstract: Images with visual and scene text content are ubiquitous in everyday life. However, current image interpretation systems are mostly limited to using only visual features, neglecting to leverage the scene text content. In this paper, we propose to jointly use the scene text and visual channels for robust semantic interpretation of images. We not only extract and encode visual and scene text cues, but also model their interplay to generate a contextual joint embedding with richer semantics. The resulting contextual embedding is applied to retrieval and classification tasks on multimedia images with scene text content, to demonstrate its effectiveness. In the retrieval framework, we augment our learned text-visual semantic representation with scene text cues to mitigate vocabulary misses that may have occurred during the semantic embedding. To deal with irrelevant or erroneously recognized scene text, we also apply query-based attention to our text channel. We show how the multi-channel approach, involving visual semantics and scene text, improves upon the state of the art.
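Illustrative sketch (not part of the paper): query-based attention over a scene-text channel, fused with a visual embedding, can be shown with a toy computation. All vectors and dimensions below are made up; no claim is made about the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse(visual, text_tokens, query):
    """Attend over scene-text token embeddings with a query vector, then
    concatenate the attended text summary with the visual embedding."""
    scores = text_tokens @ query                  # one relevance score per token
    weights = softmax(scores)                     # down-weights irrelevant OCR tokens
    text_summary = weights @ text_tokens          # attention-pooled text embedding
    return np.concatenate([visual, text_summary])

rng = np.random.default_rng(0)
visual = rng.normal(size=16)                      # e.g. a pooled CNN feature
text_tokens = rng.normal(size=(5, 16))            # e.g. word embeddings of OCR output
query = rng.normal(size=16)                       # e.g. the retrieval query embedding
joint = fuse(visual, text_tokens, query)
print(joint.shape)                                # (32,)
```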