Thanh Ha Do, Salvatore Tabbone, & Oriol Ramos Terrades. (2013). Document noise removal using sparse representations over learned dictionary. In Symposium on Document Engineering (pp. 161–168).
Note: Best paper award.
Abstract: In this paper, we propose an algorithm for denoising document images using sparse representations. From a training set, this algorithm is able to learn the main document characteristics as well as the kind of noise present in the documents. In this perspective, we propose to model the noise energy based on the normalized cross-correlation between pairs of noisy and non-noisy documents. Experimental results on several datasets demonstrate the robustness of our method compared with the state of the art.
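The abstract does not spell out the noise-energy model, but the normalized cross-correlation it builds on is standard. Below is a minimal, hypothetical Python sketch of a global NCC between a noisy page and its clean counterpart; the function name and the 1 - NCC noise proxy are assumptions, not the authors' formulation.

```python
import numpy as np

def normalized_cross_correlation(noisy: np.ndarray, clean: np.ndarray) -> float:
    """Global NCC between a noisy document image and its clean reference.

    Both images are assumed to be grayscale arrays of identical shape.
    """
    x = noisy.astype(np.float64) - noisy.mean()
    y = clean.astype(np.float64) - clean.mean()
    denom = np.sqrt((x ** 2).sum() * (y ** 2).sum())
    return float((x * y).sum() / denom) if denom > 0 else 0.0

# A low NCC between a noisy/clean pair indicates strong noise, so
# 1 - ncc can serve as a crude noise-energy proxy during training.
```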
|
Simone Balocco, Carlo Gatta, Xavier Carrillo, F. Mauri, & Petia Radeva. (2011). Plaque Type, Plaque Burden and Wall Shear Stress Relation in Coronary Arteries Assessed by X-ray Angiography and Intravascular Ultrasound: A Qualitative Study. In 14th International Symposium on Applied Sciences in Biomedical and Communication Technologies.
Abstract: In this paper, we present a complete framework that automatically provides fluid-dynamic and plaque analysis from IVUS and angiographic sequences. This framework is used to analyze, in three coronary arteries, the relation between wall shear stress and the type and amount of plaque. Preliminary qualitative results show an inverse relation between wall shear stress and plaque burden, which is confirmed by the fact that plaque growth is higher on the wall having concave curvature. Regarding plaque type, it was observed that regions having low shear stress are predominantly fibro-lipidic, while heavy calcifications are in general located in areas of the vessel having high WSS.
|
David Vazquez, Antonio Lopez, Daniel Ponsa, & Javier Marin. (2011). Virtual Worlds and Active Learning for Human Detection. In 13th International Conference on Multimodal Interaction (pp. 393–400). New York, NY, USA: ACM.
Abstract: Image-based human detection is of paramount interest due to its potential applications in fields such as advanced driving assistance, surveillance and media analysis. However, even detecting non-occluded standing humans remains a subject of intensive research. The most promising human detectors rely on classifiers developed in the discriminative paradigm, i.e., trained with labelled samples. However, labelling is an intensive manual step, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, some authors have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of rendered images, i.e., using realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, they are not always as good as the classical approach of training and testing with data coming from the same camera, or similar ones. Accordingly, in this paper we address the challenge of using a virtual world for gathering (while playing a videogame) a large amount of automatically labelled samples (virtual humans and background) and then training a classifier that performs as well, on real-world images, as one trained equivalently from manually labelled real-world samples. To do so, we cast the problem as one of domain adaptation, assuming that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we propose a non-standard active learning technique. Ultimately, our human model is learnt from the combination of virtual- and real-world labelled samples (Fig. 1), which has not been done before. We present quantitative results showing that this approach is valid.
Keywords: Pedestrian Detection; Human detection; Virtual; Domain Adaptation; Active Learning
|
Alicia Fornes, Volkmar Frinken, Andreas Fischer, Jon Almazan, G. Jackson, & Horst Bunke. (2011). A Keyword Spotting Approach Using Blurred Shape Model-Based Descriptors. In Proceedings of the 2011 Workshop on Historical Document Imaging and Processing (pp. 83–90). ACM.
Abstract: The automatic processing of handwritten historical documents is considered a hard problem in pattern recognition. In addition to the challenges posed by modern handwritten data, a lack of training data as well as effects caused by the degradation of documents can be observed. In this scenario, keyword spotting emerges as a viable solution to make documents amenable to searching and browsing. For this task we propose the adaptation of shape descriptors used in symbol recognition. By treating each word image as a shape, it can be represented using the Blurred Shape Model and the Deformable Blurred Shape Model. Experiments on the George Washington database demonstrate that this approach is able to outperform the commonly used Dynamic Time Warping approach.
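As a rough illustration of the Blurred Shape Model idea, the sketch below spreads each foreground pixel's vote over the nearest grid-cell centroids with inverse-distance weights. The grid size, the two-nearest-rows/columns voting scheme and the weighting function are simplifying assumptions, not the published descriptor.

```python
import numpy as np

def blurred_shape_model(binary: np.ndarray, grid: int = 8) -> np.ndarray:
    """Simplified Blurred Shape Model descriptor for a binary word image.

    Every foreground pixel votes for nearby grid-cell centroids with a
    weight inversely related to its distance to each centroid, which blurs
    the spatial histogram and adds tolerance to small deformations.
    """
    h, w = binary.shape
    # Grid-cell centroid coordinates along each axis.
    cy = (np.arange(grid) + 0.5) * h / grid
    cx = (np.arange(grid) + 0.5) * w / grid
    ys, xs = np.nonzero(binary)
    desc = np.zeros((grid, grid))
    for y, x in zip(ys, xs):
        # The two nearest centroid rows and columns receive votes.
        for i in np.argsort(np.abs(cy - y))[:2]:
            for j in np.argsort(np.abs(cx - x))[:2]:
                d = np.hypot(cy[i] - y, cx[j] - x)
                desc[i, j] += 1.0 / (1.0 + d)
    total = desc.sum()
    return (desc / total).ravel() if total > 0 else desc.ravel()
```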
|
Andreas Fischer, Volkmar Frinken, Alicia Fornes, & Horst Bunke. (2011). Transcription Alignment of Latin Manuscripts Using Hidden Markov Models. In Proceedings of the 2011 Workshop on Historical Document Imaging and Processing (pp. 29–36). ACM.
Abstract: Transcriptions of historical documents are a valuable source for extracting labeled handwriting images that can be used for training recognition systems. In this paper, we introduce the Saint Gall database that includes images as well as the transcription of a Latin manuscript from the 9th century written in Carolingian script. Although the available transcription is of high quality for a human reader, the spelling of the words is not accurate when compared with the handwriting image. Hence, the transcription poses several challenges for alignment regarding, e.g., line breaks, abbreviations, and capitalization. We propose an alignment system based on character Hidden Markov Models that can cope with these challenges and efficiently aligns complete document pages. On the Saint Gall database, we demonstrate that a considerable alignment accuracy can be achieved, even with weakly trained character models.
|
Oriol Ramos Terrades, Alejandro Hector Toselli, Nicolas Serrano, Veronica Romero, Enrique Vidal, & Alfons Juan. (2010). Interactive layout analysis and transcription systems for historic handwritten documents. In 10th ACM Symposium on Document Engineering (pp. 219–222).
Abstract: The amount of digitized legacy documents has risen dramatically over the last years, mainly due to the increasing number of on-line digital libraries publishing this kind of document, waiting to be classified and finally transcribed into a textual electronic format (such as ASCII or PDF). Nevertheless, most of the available fully-automatic applications addressing this task are far from perfect, and heavy, inefficient human intervention is often required to check and correct their results. In contrast, multimodal interactive-predictive approaches may allow users to participate in the process, helping the system to improve the overall performance. With this in mind, two sets of recent advances are introduced in this work: a novel interactive method for text block detection and two multimodal interactive handwritten text transcription systems which use active learning and interactive-predictive technologies in the recognition process.
Keywords: Handwriting recognition; Interactive predictive processing; Partial supervision; Interactive layout analysis
|
Albert Gordo, Jaume Gibert, Ernest Valveny, & Marçal Rusiñol. (2010). A Kernel-based Approach to Document Retrieval. In 9th IAPR International Workshop on Document Analysis Systems (pp. 377–384).
Abstract: In this paper we tackle the problem of document image retrieval by combining a similarity measure between documents and the probability that a given document belongs to a certain class. The membership probability for a specific class is computed using Support Vector Machines in conjunction with a similarity-based kernel applied to structural document representations. In the presented experiments, we use different document representations, both visual and structural, and apply them to a database of historical documents. We show how our method based on similarity kernels outperforms the usual distance-based retrieval.
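One plausible reading of this combination, sketched with scikit-learn's precomputed-kernel SVM: class membership probabilities are estimated from the similarity kernel and fused multiplicatively with the query-document similarities. The function, its arguments and the multiplicative fusion rule are assumptions, not the paper's exact scheme.

```python
import numpy as np
from sklearn.svm import SVC

def retrieve(train_kernel, labels, query_sims, query_class):
    """Rank database documents by similarity weighted by class membership.

    train_kernel : (n, n) precomputed similarity kernel between documents.
    labels       : (n,) class label of each database document.
    query_sims   : (n,) similarities between the query and each document.
    """
    svm = SVC(kernel="precomputed", probability=True).fit(train_kernel, labels)
    # P(class | document) for every database document.
    probs = svm.predict_proba(train_kernel)
    class_idx = list(svm.classes_).index(query_class)
    scores = query_sims * probs[:, class_idx]
    return np.argsort(-scores)  # indices of best matches first
```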
|
Farshad Nourbakhsh, Dimosthenis Karatzas, & Ernest Valveny. (2010). A polar-based logo representation based on topological and colour features. In 9th IAPR International Workshop on Document Analysis Systems (pp. 341–348).
Abstract: In this paper, we propose a novel rotation- and scale-invariant method for colour logo retrieval and classification, which involves performing a simple colour segmentation and subsequently describing each of the resultant colour components by a set of topological and colour features. A polar representation is used to represent the logo, and the subsequent logo matching is based on Cyclic Dynamic Time Warping (CDTW). We also show how combining information about the global distribution of the logo components and their local neighbourhood, using the Delaunay triangulation, allows us to improve the results. All experiments are performed on a dataset of 2500 instances of 100 colour logo images at different rotations and scales.
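Cyclic Dynamic Time Warping can be understood, naively, as the minimum DTW distance over all cyclic rotations of one sequence, which makes the matching of polar descriptors rotation invariant. The sketch below is a deliberately simple version; efficient CDTW algorithms and the actual feature sequences used in the paper differ.

```python
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic time warping distance between two feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def cyclic_dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Naive CDTW: minimum DTW over all cyclic rotations of one sequence."""
    return min(dtw(np.roll(a, k, axis=0), b) for k in range(len(a)))
```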
|
Albert Gordo, Alicia Fornes, Ernest Valveny, & Josep Llados. (2010). A Bag of Notes Approach to Writer Identification in Old Handwritten Music Scores. In 9th IAPR International Workshop on Document Analysis Systems (pp. 247–254).
Abstract: Determining the authorship of a document, namely writer identification, can be an important source of information for document categorization. In contrast to text documents, identifying the writer of graphical documents is still a challenge. In this paper we present a robust approach for writer identification in a particular kind of graphical document: old music scores. This approach adapts the bag-of-visual-terms method to cope with graphic documents. The identification is performed using only the graphical music notation. For this purpose, we generate a graphic vocabulary without recognizing any music symbols, consequently avoiding the difficulties of recognizing hand-drawn symbols in old and degraded documents. The proposed method has been tested on a database of old music scores from the 17th to 19th centuries, achieving very high identification rates.
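A generic bag-of-visual-terms pipeline of the kind the abstract describes, sketched with scikit-learn's KMeans: cluster notation descriptors into a vocabulary, then encode each score as a normalized word histogram. The vocabulary size and function names are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def bag_of_notes(descriptor_sets, vocab_size=100):
    """Build a visual vocabulary from per-score notation descriptors and
    encode each score as a normalized histogram of visual-word counts."""
    all_desc = np.vstack(descriptor_sets)
    vocab = KMeans(n_clusters=vocab_size, n_init=10).fit(all_desc)
    histograms = []
    for desc in descriptor_sets:
        words = vocab.predict(desc)
        hist = np.bincount(words, minlength=vocab_size).astype(float)
        histograms.append(hist / hist.sum())
    return vocab, np.array(histograms)

# Writer identification then reduces to nearest-neighbour search among
# the histograms of scores whose writers are known.
```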
|
Partha Pratim Roy, Umapada Pal, & Josep Llados. (2010). Query Driven Word Retrieval in Graphical Documents. In 9th IAPR International Workshop on Document Analysis Systems (pp. 191–198).
Abstract: In this paper, we present an approach to the retrieval of words from graphical document images. In graphical documents, word indexing is a challenging task due to the presence of multi-oriented characters in non-structured layouts. The proposed approach uses recognition results of individual components to form character pairs with the neighboring components. An indexing scheme is designed to store the spatial description of components and to access them efficiently. Given a query text word (in ASCII/Unicode format), the character pairs present in it are searched for in the document. Next, the retrieved character pairs are linked sequentially to form character strings. Dynamic programming is applied to find different instances of the query word, using a string edit distance as the matching objective function. Recognition of multi-scale and multi-oriented character components is done using a Support Vector Machine classifier. To handle multi-oriented character strings, the features used in the SVM are invariant to character orientation. Experimental results show that the method efficiently locates query words in multi-oriented text in graphical documents.
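The matching objective is a string edit distance computed by dynamic programming; a standard Levenshtein sketch follows (the paper's exact edit costs may differ).

```python
def edit_distance(query: str, candidate: str) -> int:
    """Levenshtein distance between the query word and a character
    string recovered from the document."""
    n, m = len(query), len(candidate)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i
    for j in range(m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if query[i - 1] == candidate[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,        # deletion
                          D[i][j - 1] + 1,        # insertion
                          D[i - 1][j - 1] + sub)  # substitution
    return D[n][m]
```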
|
Sebastien Mace, Herve Locteau, Ernest Valveny, & Salvatore Tabbone. (2010). A system to detect rooms in architectural floor plan images. In 9th IAPR International Workshop on Document Analysis Systems (pp. 167–174).
Abstract: In this article, a system to detect rooms in architectural floor plan images is described. We first present a primitive extraction algorithm for line detection. It is based on an original coupling of the classical Hough transform with image vectorization in order to perform robust and efficient line detection. We show how lines that satisfy certain graphical arrangements are combined into walls. We also present the way we detect door hypotheses through the extraction of arcs. Walls and door hypotheses are then used by our room segmentation strategy, which consists of recursively decomposing the image until nearly convex regions are obtained. The notion of convexity is difficult to quantify, and the selection of separation lines between regions can also be rough. We take advantage of knowledge associated with architectural floor plans in order to obtain mostly rectangular rooms. Qualitative and quantitative evaluations performed on a corpus of real documents show promising results.
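The paper couples the classical Hough transform with vectorization; as a hypothetical stand-in for that coupling, a minimal line-segment detector using OpenCV's probabilistic Hough transform is sketched below. All thresholds and parameter values are assumptions.

```python
import cv2
import numpy as np

def detect_wall_candidates(plan_image: np.ndarray):
    """Detect straight line segments in a grayscale (uint8) floor-plan
    image; these segments are candidates for later wall grouping."""
    edges = cv2.Canny(plan_image, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=30, maxLineGap=5)
    # Each segment is returned as (x1, y1, x2, y2).
    return [] if lines is None else [tuple(l[0]) for l in lines]
```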
|
Antonio Clavelli, Dimosthenis Karatzas, & Josep Llados. (2010). A framework for the assessment of text extraction algorithms on complex colour images. In 9th IAPR International Workshop on Document Analysis Systems (pp. 19–26).
Abstract: The availability of open, ground-truthed datasets and clear performance metrics is a crucial factor in the development of an application domain. The domain of colour text image analysis (real scenes, Web and spam images, scanned colour documents) has traditionally suffered from a lack of a comprehensive performance evaluation framework. Such a framework is extremely difficult to specify, and corresponding pixel-level accurate information tedious to define. In this paper we discuss the challenges and technical issues associated with developing such a framework. Then, we describe a complete framework for the evaluation of text extraction methods at multiple levels, provide a detailed ground-truth specification and present a case study on how this framework can be used in a real-life situation.
|
S. Chanda, Umapada Pal, & Oriol Ramos Terrades. (2009). Word-Wise Thai and Roman Script Identification. ACM Transactions on Asian Language Information Processing (TALIP), 1–21.
Abstract: In some Thai documents, a single text line of a printed page may contain words in both Thai and Roman scripts. For Optical Character Recognition (OCR) of such a page, it is better first to identify the Thai and Roman script portions and then to apply the individual OCR systems of the respective scripts to those portions. In this article, an SVM-based method is proposed for the word-wise identification of printed Roman and Thai scripts within a single line of a document page. The document is first segmented into lines, and the lines are then segmented into character groups (words). In the proposed scheme, we identify the script of a character group by combining different character features obtained from structural shape, profile behavior, component overlapping information, topological properties, and the water reservoir concept. In an experiment on 10,000 words, the proposed scheme obtained a script identification accuracy of 99.62%.
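A hypothetical sketch of the word-wise SVM stage, assuming the structural/profile features have already been extracted into one vector per word; the RBF kernel and its parameters are assumptions, not necessarily the paper's configuration.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_script_classifier(word_features, script_labels):
    """Train a binary SVM separating Thai and Roman word images from
    precomputed structural/profile feature vectors."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(word_features, script_labels)
    return clf

# clf.predict(new_word_features) then routes each word to the
# Thai or Roman OCR engine.
```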
|
Oriol Pujol, & Petia Radeva. (2004). Texture Segmentation by Statistical Deformable Models. International Journal of Image and Graphics (IJIG), 433–452.
Abstract: Deformable models have gained much popularity due to their ability to incorporate high-level knowledge of the application domain into low-level image processing. Still, most proposed active contour models do not sufficiently exploit application-specific information and are too general, leading to suboptimal results in segmentation, tracking or 3D reconstruction processes. In this paper we propose a new deformable model defined in a statistical framework to segment objects in natural scenes. We perform supervised learning of the local appearance of textured objects and construct a feature space using a set of co-occurrence matrix measures. Linear Discriminant Analysis allows us to obtain an optimal reduced feature space where a mixture model is applied to construct a likelihood map. Instead of using a heuristic potential field, our active model is deformed on a regularized version of the likelihood map in order to segment objects characterized by the same texture pattern. Tests on synthetic images, natural scenes and medical images show the advantages of our statistical deformable model.
Keywords: Texture segmentation; parametric active contours; statistical snakes
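A sketch of the feature-space construction under stated assumptions: co-occurrence measures from scikit-image and scikit-learn's LDA stand in for the paper's exact feature set and implementation, and the chosen GLCM properties, distances and angles are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def texture_features(patch: np.ndarray) -> np.ndarray:
    """Co-occurrence-matrix measures for one uint8 grayscale patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def reduced_space(patches, labels):
    """Project co-occurrence features to the LDA-optimal subspace, where
    a mixture model can then estimate the likelihood map."""
    X = np.array([texture_features(p) for p in patches])
    lda = LinearDiscriminantAnalysis().fit(X, labels)
    return lda, lda.transform(X)
```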
|
Jaume Gibert, Ernest Valveny, & Horst Bunke. (2013). Embedding of Graphs with Discrete Attributes Via Label Frequencies. International Journal of Pattern Recognition and Artificial Intelligence (IJPRAI), 27(3), 1360002–1360029.
Abstract: Graph-based representations of patterns are very flexible and powerful, but they are not easily processed due to the lack of learning algorithms in the domain of graphs. Embedding a graph into a vector space solves this problem, since graphs are turned into feature vectors and thus all the statistical learning machinery becomes available for graph input patterns. In this work we present a new way of embedding discrete attributed graphs into vector spaces using node and edge label frequencies. The methodology is experimentally tested on graph classification problems, using patterns of different nature, and is shown to be competitive with state-of-the-art classification algorithms for graphs while being computationally much more efficient.
Keywords: Discrete attributed graphs; graph embedding; graph classification
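In its simplest form, the embedding counts how often each node and edge label occurs; a minimal sketch with networkx follows. The attribute key "label" is an assumption, and the published method may use richer label statistics than this plain variant.

```python
import networkx as nx
import numpy as np

def label_frequency_embedding(g: nx.Graph, node_labels, edge_labels):
    """Embed a discretely attributed graph as the concatenated frequency
    vector of its node labels and edge labels."""
    node_vec = np.array([sum(1 for _, d in g.nodes(data=True)
                             if d.get("label") == l) for l in node_labels])
    edge_vec = np.array([sum(1 for _, _, d in g.edges(data=True)
                             if d.get("label") == l) for l in edge_labels])
    return np.concatenate([node_vec, edge_vec]).astype(float)

# The resulting fixed-length vectors can be fed to any standard
# classifier (SVM, k-NN, ...), which is the point of the embedding.
```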
|