|
Albert Gordo, & Ernest Valveny. (2009). A rotation invariant page layout descriptor for document classification and retrieval. In 10th International Conference on Document Analysis and Recognition (pp. 481–485).
Abstract: Document classification usually requires structural features such as the physical layout to obtain good accuracy rates on complex documents. This paper introduces a descriptor of the layout and a distance measure based on cyclic dynamic time warping which can be computed in O(n²). The descriptor is translation invariant and can be easily modified to be scale and rotation invariant. Experiments with this descriptor and its rotation-invariant modification are performed on the Girona archives database and compared against another common layout distance, the minimum weight edge cover (MWEC). The experiments show that these methods outperform the MWEC both in accuracy and speed, particularly on rotated documents.
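The distance at the core of this descriptor can be illustrated with a short sketch: the standard O(nm) dynamic time warping recurrence, plus a brute-force cyclic wrapper that tries every rotation of one sequence. Note this wrapper is O(n³) and purely illustrative; the paper's contribution is computing the cyclic distance directly in O(n²).

```python
def dtw(a, b):
    """Standard dynamic time warping distance between two numeric
    sequences, via the O(n*m) dynamic-programming recurrence."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def cyclic_dtw(a, b):
    """Naive cyclic DTW: best DTW over all rotations of `a` (lists).
    Brute force, O(n^3); shown only to define the cyclic distance."""
    return min(dtw(a[k:] + a[:k], b) for k in range(len(a)))
```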
|
|
|
Marçal Rusiñol, & Josep Llados. (2009). Logo Spotting by a Bag-of-words Approach for Document Categorization. In 10th International Conference on Document Analysis and Recognition (pp. 111–115).
Abstract: In this paper we present a method for document categorization which processes incoming document images such as invoices or receipts. The categorization of these document images is done in terms of the presence of a certain graphical logo detected without segmentation. The graphical logos are described by a set of local features and the categorization of the documents is performed by the use of a bag-of-words model. Spatial coherence rules are added to reinforce the correct category hypothesis, aiming also to spot the logo inside the document image. Experiments which demonstrate the effectiveness of this system on a large set of real data are presented.
|
|
|
Ricard Coll, Alicia Fornes, & Josep Llados. (2009). Graphological Analysis of Handwritten Text Documents for Human Resources Recruitment. In 10th International Conference on Document Analysis and Recognition (pp. 1081–1085).
Abstract: The use of graphology in recruitment processes has become a popular tool in many human resources companies. This paper presents a model that links features from handwritten images to a number of personality characteristics used to measure applicant aptitudes for the job in a particular hiring scenario. In particular, we propose a model for measuring the active personality and leadership of the writer. Graphological features that define such a profile are measured in terms of document and script attributes like layout configuration, letter size, shape, slant and skew angle of lines, etc. After the extraction, data is classified using a neural network. An experimental framework with real samples has been constructed to illustrate the performance of the approach.
|
|
|
Alicia Fornes, Josep Llados, Gemma Sanchez, & Horst Bunke. (2009). On the use of textural features for writer identification in old handwritten music scores. In 10th International Conference on Document Analysis and Recognition (pp. 996–1000).
Abstract: Writer identification consists of determining the writer of a piece of handwriting from a set of writers. In this paper we present a system for writer identification in old handwritten music scores which uses only music notation to determine the author. The steps of the proposed system are the following. First of all, the music sheet is preprocessed to obtain a music score without the staff lines. Afterwards, four different methods for generating texture images from music symbols are applied. Every approach uses a different spatial variation when combining the music symbols to generate the textures. Finally, Gabor filters and grey-level co-occurrence matrices are used to obtain the features. The classification is performed using a k-NN classifier based on Euclidean distance. The proposed method has been tested on a database of old music scores from the 17th to 19th centuries, achieving encouraging identification rates.
|
|
|
Partha Pratim Roy, Umapada Pal, & Josep Llados. (2009). Seal detection and recognition: An approach for document indexing. In 10th International Conference on Document Analysis and Recognition (pp. 101–105).
Abstract: Reliable indexing of documents containing seal instances can be achieved by recognizing seal information. This paper presents a novel approach for detecting and classifying such multi-oriented seals in these documents. First, Hough Transform based methods are applied to extract the seal regions in documents. Next, isolated text characters within these regions are detected. Rotation and size invariant features and a support vector machine based classifier are used to recognize these detected text characters. Next, for each pair of characters, we encode their relative spatial organization using their distance and angular position with respect to the centre of the seal, and enter this code into a hash table. Given an input seal, we recognize the individual text characters and compute the code for each character pair based on their relative spatial organization. The codes obtained from the input seal retrieve model hypotheses from the hash table, and the seal model receiving the maximum number of hypotheses is selected as the recognition result. The methodology is tested for seal indexing in a rotation and size invariant setting, with encouraging results.
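The hash-table voting scheme described above resembles geometric hashing; a hypothetical sketch follows. The quantization granularity, the code layout, and the use of an absolute angle (the paper's encoding is rotation invariant) are all assumptions for illustration, not the authors' exact method.

```python
import math
from collections import defaultdict

def pair_code(c1, c2, center, dist_bins=4, ang_bins=8, max_dist=100.0):
    """Quantize the spatial organization of a character pair: each
    character is described by its quantized distance and angular
    position with respect to the seal centre. Bin counts are assumed."""
    def polar(pt):
        dx, dy = pt[0] - center[0], pt[1] - center[1]
        r = math.hypot(dx, dy)
        a = math.atan2(dy, dx) % (2 * math.pi)
        return (min(int(r / max_dist * dist_bins), dist_bins - 1),
                min(int(a / (2 * math.pi) * ang_bins), ang_bins - 1))
    return (c1[0], c2[0], polar(c1[1]), polar(c2[1]))

def build_hash_table(models):
    """Index every character pair of every model seal.
    `models` maps a seal name to (centre, [(label, (x, y)), ...])."""
    table = defaultdict(set)
    for name, (center, chars) in models.items():
        for i, a in enumerate(chars):
            for b in chars[i + 1:]:
                table[pair_code(a, b, center)].add(name)
    return table

def recognize(table, center, chars):
    """Vote for the model whose pair codes match the input most often."""
    votes = defaultdict(int)
    for i, a in enumerate(chars):
        for b in chars[i + 1:]:
            for name in table.get(pair_code(a, b, center), ()):
                votes[name] += 1
    return max(votes, key=votes.get) if votes else None
```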
|
|
|
Partha Pratim Roy, Umapada Pal, Josep Llados, & Mathieu Nicolas Delalandre. (2009). Multi-Oriented and Multi-Sized Touching Character Segmentation using Dynamic Programming. In 10th International Conference on Document Analysis and Recognition (pp. 11–15).
Abstract: In this paper, we present a scheme for segmenting English multi-oriented touching strings into individual characters. When two or more characters touch, they generate a large cavity region in the background portion. Using convex hull information, we use this background information to find initial points for segmenting a touching string into possible primitive segments (a primitive segment consists of a single character or a part of a character). Next, these primitive segments are merged to obtain the optimum segmentation by applying dynamic programming with the total likelihood of characters as the objective function. An SVM classifier is used to find the likelihood of a character. To handle multi-oriented touching strings, the features used in the SVM are invariant to character orientation: a circular ring and convex hull ring based approach is used, along with angular information of the contour pixels of the character, to make the features rotation invariant. The experiments yielded encouraging results.
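The merging of primitive segments can be sketched as a classical segmentation DP. The scorer below stands in for the paper's SVM character likelihood and is purely illustrative.

```python
def best_segmentation(primitives, log_lik, max_merge=3):
    """DP over primitive segments: choose where to merge consecutive
    primitives so that the summed character log-likelihood is maximal.
    `log_lik(segment)` scores a candidate character formed by merging a
    run of primitives (an SVM posterior in the paper; any scorer here).
    Returns (best total score, list of (start, end) character spans)."""
    n = len(primitives)
    best = [float("-inf")] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for i in range(1, n + 1):
        # Try every admissible last character ending at primitive i.
        for j in range(max(0, i - max_merge), i):
            score = best[j] + log_lik(primitives[j:i])
            if score > best[i]:
                best[i], back[i] = score, j
    # Recover the chosen character boundaries by backtracking.
    cuts, i = [], n
    while i > 0:
        cuts.append((back[i], i))
        i = back[i]
    return best[n], cuts[::-1]
```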
|
|
|
D. Perez, L. Tarazon, N. Serrano, F.M. Castro, Oriol Ramos Terrades, & A. Juan. (2009). The GERMANA Database. In 10th International Conference on Document Analysis and Recognition (pp. 301–305).
Abstract: A new handwritten text database, GERMANA, is presented to facilitate empirical comparison of different approaches to text line extraction and off-line handwriting recognition. GERMANA is the result of digitising and annotating a 764-page Spanish manuscript from 1891, in which most pages only contain nearly calligraphed text written on ruled sheets of well-separated lines. To our knowledge, it is the first publicly available database for handwriting research, mostly written in Spanish and comparable in size to standard databases. Due to its sequential book structure, it is also well-suited for realistic assessment of interactive handwriting recognition systems. To provide baseline results for reference in future studies, empirical results are also reported, using standard techniques and tools for preprocessing, feature extraction, HMM-based image modelling, and language modelling.
|
|
|
Gioacchino Vino, & Angel Sappa. (2013). Revisiting Harris Corner Detector Algorithm: a Gradual Thresholding Approach. In 10th International Conference on Image Analysis and Recognition (Vol. 7950, pp. 354–363). LNCS. Springer Berlin Heidelberg.
Abstract: This paper presents an adaptive thresholding approach intended to increase the number of detected corners while reducing the number of those corresponding to noisy data. The proposed approach builds on the classical Harris corner detector and overcomes the difficulty of finding a general threshold that works well for all images in a given data set by proposing a novel adaptive thresholding scheme. Initially, two thresholds are used to discern between strong corners and flat regions. Then, a region-based criterion is used to discriminate between weak corners and noisy points in the midway interval. Experimental results show that the proposed approach has a better capability to reject false corners and, at the same time, to detect weak ones. Comparisons with the state of the art are provided, showing the validity of the proposed approach.
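A minimal pure-Python sketch of the first stage, assuming the classical Harris response and two fixed thresholds; the paper's region-based criterion for resolving the midway interval is not reproduced here.

```python
def harris_response(img, k=0.04):
    """Classical Harris corner response R = det(M) - k*trace(M)^2, with
    M the structure tensor summed over a 3x3 window. Pure-Python sketch
    operating on a 2D list of grey values; borders are left at zero."""
    h, w = len(img), len(img[0])
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            Ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            Iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    R = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = b = c = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                    a += gx * gx; b += gx * gy; c += gy * gy
            R[y][x] = (a * c - b * b) - k * (a + c) ** 2
    return R

def classify(R, t_low, t_high):
    """Two-threshold rule of the paper's first stage: responses above
    t_high are strong corners, below t_low flat regions; the midway
    interval is left for the region-based criterion (omitted here)."""
    out = {"strong": [], "midway": [], "flat": []}
    for y, row in enumerate(R):
        for x, r in enumerate(row):
            key = "strong" if r > t_high else "flat" if r < t_low else "midway"
            out[key].append((y, x))
    return out
```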
|
|
|
Yaxing Wang, Joost Van de Weijer, Lu Yu, & Shangling Jui. (2022). Distilling GANs with Style-Mixed Triplets for X2I Translation with Limited Data. In 10th International Conference on Learning Representations.
Abstract: Conditional image synthesis is an integral part of many X2I translation systems, including image-to-image, text-to-image and audio-to-image translation systems. Training these large systems generally requires huge amounts of training data.
Therefore, we investigate knowledge distillation to transfer knowledge from a high-quality unconditioned generative model (e.g., StyleGAN) to the conditioned synthetic image generation modules in a variety of systems. To initialize the conditional and reference branches (from an unconditional GAN), we exploit the style mixing characteristics of high-quality GANs to generate an infinite supply of style-mixed triplets for the knowledge distillation. Extensive experimental results on a number of image generation tasks (i.e., image-to-image, semantic segmentation-to-image, text-to-image and audio-to-image) demonstrate qualitatively and quantitatively that our method successfully transfers knowledge to the synthetic image generation modules, resulting in more realistic images than previous methods, as confirmed by a significant drop in the FID.
|
|
|
Jaume Gibert, Ernest Valveny, Oriol Ramos Terrades, & Horst Bunke. (2011). Multiple Classifiers for Graph of Words Embedding. In Carlo Sansone, Josef Kittler, & Fabio Roli (Eds.), 10th International Workshop on Multiple Classifier Systems (Vol. 6713, pp. 36–45). LNCS.
Abstract: In recent years, there has been increasing interest in applying the multiple classifier framework to the domain of structural pattern recognition. Constructing base classifiers when the input patterns are graph-based representations is not an easy problem. In this work, we make use of the graph embedding methodology to construct different feature vector representations for graphs. The graph of words embedding assigns a feature vector to every graph by counting unary and binary relations between node representatives and combining these pieces of information into a single vector. Selecting different node representatives leads to different vectorial representations and therefore to different base classifiers that can be combined. We experimentally show how this methodology significantly improves the classification of graphs with respect to single base classifiers.
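A toy sketch of the embedding step, assuming numeric node labels and a nearest-representative assignment; the paper derives representatives from node-label clustering, so the assignment rule here is a stand-in.

```python
from collections import Counter

def graph_of_words_embedding(nodes, edges, representatives):
    """Embed a labeled graph as a vector: unary counts of how many
    nodes fall to each representative, plus binary counts of edges
    between each (unordered) pair of representatives. Changing the
    representative set changes the vector, hence the base classifier."""
    def rep(label):
        # Assumed rule: nearest representative by absolute difference
        # of numeric labels (illustrative stand-in for clustering).
        return min(range(len(representatives)),
                   key=lambda i: abs(representatives[i] - label))
    assign = {v: rep(lbl) for v, lbl in nodes.items()}
    unary = Counter(assign.values())
    binary = Counter(tuple(sorted((assign[u], assign[v]))) for u, v in edges)
    k = len(representatives)
    vec = [unary[i] for i in range(k)]
    vec += [binary[(i, j)] for i in range(k) for j in range(i, k)]
    return vec
```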
|
|
|
Miguel Angel Bautista, Oriol Pujol, Xavier Baro, & Sergio Escalera. (2011). Introducing the Separability Matrix for Error Correcting Output Codes Coding. In Carlo Sansone, Josef Kittler, & Fabio Roli (Eds.), 10th International Workshop on Multiple Classifier Systems (Vol. 6713, pp. 227–236). LNCS. Springer-Verlag Berlin Heidelberg.
Abstract: Error Correcting Output Codes (ECOC) have been demonstrated to be a powerful tool for treating multi-class problems. Nevertheless, predefined ECOC designs may not benefit from error-correcting principles for particular multi-class data. In this paper, we introduce the Separability matrix as a tool to study and enhance designs for ECOC coding. In addition, a novel problem-dependent coding design based on the Separability matrix is tested over a wide set of challenging multi-class problems, obtaining very satisfactory results.
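The Separability matrix and standard ECOC decoding can be sketched in a few lines as pairwise Hamming distances between class codewords; this is an illustration of the concept, not the authors' code.

```python
def hamming(u, v):
    """Number of positions where two codewords disagree."""
    return sum(a != b for a, b in zip(u, v))

def separability_matrix(M):
    """Pairwise Hamming distances between ECOC class codewords.
    S[i][j] bounds the correction capability between classes i and j:
    up to floor((S[i][j]-1)/2) binary-classifier errors are tolerated."""
    n = len(M)
    return [[hamming(M[i], M[j]) for j in range(n)] for i in range(n)]

def ecoc_decode(M, predictions):
    """Assign the class whose codeword is closest (in Hamming
    distance) to the vector of binary-classifier outputs."""
    return min(range(len(M)), key=lambda i: hamming(M[i], predictions))
```

For a one-vs-all design every pair of codewords differs in exactly two positions, so the Separability matrix immediately exposes its weak error correction.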
|
|
|
Eloi Puertas, Sergio Escalera, & Oriol Pujol. (2011). Multi-Class Multi-Scale Stacked Sequential Learning. In Carlo Sansone, Josef Kittler, & Fabio Roli (Eds.), 10th International Workshop on Multiple Classifier Systems (Vol. 6713, pp. 197–206). Springer.
|
|
|
Pierluigi Casale, Oriol Pujol, & Petia Radeva. (2011). Approximate Convex Hulls Family for One-Class Classification. In Carlo Sansone, Josef Kittler, & Fabio Roli (Eds.), 10th International Workshop on Multiple Classifier Systems (Vol. 6713, pp. 106–115). LNCS. Springer Berlin Heidelberg.
Abstract: In this work, a new method for one-class classification based on the convex hull geometric structure is proposed. The new method creates a family of convex hulls able to fit the geometrical shape of the training points. The increased computational cost due to the creation of the convex hull in multiple dimensions is circumvented using random projections, which provide an approximation of the original structure with multiple bi-dimensional views. In the projection planes, a mechanism for rejecting noisy points has also been elaborated and evaluated. Results show that the approach performs considerably well with respect to the state of the art in one-class classification.
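A sketch of the overall idea, assuming Gaussian random projections and Andrew's monotone chain hull in each 2D view; the paper's noisy-point rejection mechanism is omitted.

```python
import random

def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain convex hull; vertices returned CCW."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def in_hull(hull, p):
    """Point-in-convex-polygon: p must lie left of every CCW edge."""
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], p) >= 0 for i in range(n))

def train(points_nd, n_proj=20, seed=0):
    """One-class model: a family of 2D convex hulls, one per random
    Gaussian projection of the n-dimensional training points."""
    rng = random.Random(seed)
    d = len(points_nd[0])
    model = []
    for _ in range(n_proj):
        w1 = [rng.gauss(0, 1) for _ in range(d)]
        w2 = [rng.gauss(0, 1) for _ in range(d)]
        proj = lambda x, w1=w1, w2=w2: (sum(a * b for a, b in zip(x, w1)),
                                        sum(a * b for a, b in zip(x, w2)))
        model.append((proj, convex_hull([proj(x) for x in points_nd])))
    return model

def is_inlier(model, x):
    """Accept x only if every projected view places it inside the hull."""
    return all(in_hull(hull, proj(x)) for proj, hull in model)
```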
|
|
|
N. Serrano, L. Tarazon, D. Perez, Oriol Ramos Terrades, & A. Juan. (2010). The GIDOC Prototype. In 10th International Workshop on Pattern Recognition in Information Systems (pp. 82–89).
Abstract: Transcription of handwritten text in (old) documents is an important, time-consuming task for digital libraries. It might be carried out by first processing all document images off-line, and then manually supervising system transcriptions to edit incorrect parts. However, current techniques for automatic page layout analysis, text line detection and handwriting recognition are still far from perfect, and thus post-editing system output is not clearly better than simply ignoring it.
A more effective approach to transcribing old text documents is to follow an interactive-predictive paradigm in which the system is guided by the user and the user is assisted by the system to complete the transcription task as efficiently as possible. Following this approach, a system prototype called GIDOC (Gimp-based Interactive transcription of old text DOCuments) has been developed to provide user-friendly, integrated support for interactive-predictive layout analysis, line detection and handwriting transcription.
GIDOC is designed to work with (large) collections of homogeneous documents, that is, documents of similar structure and writing styles. They are annotated sequentially, by (partially) supervising hypotheses drawn from statistical models that are constantly updated with an increasing number of available annotated documents. This is done at different annotation levels. For instance, at the level of page layout analysis, GIDOC uses a novel text block detection method in which conventional, memoryless techniques are improved with a “history” model of text block positions. Similarly, at the level of text line image transcription, GIDOC includes a handwriting recognizer which is steadily improved with a growing number of (partially) supervised transcriptions.
|
|