Peng Wang, Véronique Eglin, Christophe Garcia, Christine Largeron, Josep Lladós and Alicia Fornés. 2014. A Novel Learning-free Word Spotting Approach Based on Graph Representation. 11th IAPR International Workshop on Document Analysis Systems, 207–211.
Abstract: Effective information retrieval on handwritten document images has always been a challenging task. In this paper, we propose a novel handwritten word spotting approach based on graph representation. The presented model captures both topological and morphological signatures of the handwriting. Skeleton-based graphs with Shape Context-labelled vertices are built for the connected components, and each word image is represented as a sequence of such graphs. To be robust to handwriting variations, an exhaustive merging process based on the DTW alignment result is introduced into the similarity measure between word images. To keep the computational complexity manageable, an approximate graph edit distance based on bipartite matching is employed for graph matching. Experiments on the George Washington dataset and on the marriage records from the Barcelona Cathedral dataset demonstrate that the proposed approach outperforms state-of-the-art structural methods.
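
For readers unfamiliar with the bipartite approximation of graph edit distance that the abstract mentions, below is a minimal sketch in the style of Riesen and Bunke's assignment-based method. The cost function, the insertion/deletion cost and the toy labels are illustrative assumptions, and the sketch ignores edge costs, which the full method folds into the node substitution costs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def approx_ged(g1_labels, g2_labels, node_sub_cost, ins_del_cost=1.0):
    """Approximate graph edit distance via a bipartite node assignment.

    g1_labels / g2_labels: node label vectors (e.g. Shape Context
    histograms); node_sub_cost(a, b) returns a substitution cost.
    """
    n, m = len(g1_labels), len(g2_labels)
    LARGE = 1e9  # large finite cost to forbid impossible pairings
    cost = np.full((n + m, n + m), LARGE)

    # Substitutions: node i of g1 mapped onto node j of g2.
    for i in range(n):
        for j in range(m):
            cost[i, j] = node_sub_cost(g1_labels[i], g2_labels[j])
    # Deletions (g1 node -> epsilon) and insertions (epsilon -> g2 node).
    for i in range(n):
        cost[i, m + i] = ins_del_cost
    for j in range(m):
        cost[n + j, j] = ins_del_cost
    # epsilon -> epsilon mappings are free.
    cost[n:, m:] = 0.0

    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

# Toy usage with 2-bin histogram labels and an L1 substitution cost.
d = approx_ged([[1, 0], [0, 1]], [[1, 0]],
               lambda a, b: float(np.abs(np.subtract(a, b)).sum()))
```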

Marçal Rusiñol, Joseph Chazalon and Jean-Marc Ogier. 2014. Combining Focus Measure Operators to Predict OCR Accuracy in Mobile-Captured Document Images. 11th IAPR International Workshop on Document Analysis Systems, 181–185.
Abstract: Mobile document image acquisition is a new trend raising serious issues in business document processing workflows. Such a digitization procedure is unreliable and introduces many distortions which must be detected as soon as possible, on the mobile device, to avoid paying data transmission fees and losing information when a document with only temporary availability cannot be re-captured later. In this context, out-of-focus blur is a major issue: users have no direct control over it, and it seriously degrades OCR recognition. In this paper, we concentrate on estimating focus quality, to ensure that a document image is legible enough for OCR processing. We propose two contributions to improve OCR accuracy prediction for mobile-captured document images. First, we present 24 focus measures, never before tested on document images, which are fast to compute and require no training. Second, we show that a combination of these measures achieves state-of-the-art performance in terms of correlation with OCR accuracy. The resulting approach is fast, robust, and easy to implement on a mobile device. Experiments are performed on a public dataset, and precise details about the image processing are given.
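
As a concrete illustration of a focus measure (the paper's 24 operators are not listed here), below is a sketch of the classic variance-of-Laplacian sharpness score using OpenCV; the gating threshold is an arbitrary assumption.

```python
import cv2

def laplacian_focus_measure(path):
    """Variance of the Laplacian: higher values mean a sharper image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Hypothetical on-device gating before OCR (threshold is illustrative).
if laplacian_focus_measure("capture.png") < 100.0:
    print("Capture looks too blurry for OCR; ask the user to re-shoot.")
```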

Marçal Rusiñol, Joseph Chazalon and Jean-Marc Ogier. 2016. Filtrage de descripteurs locaux pour l'amélioration de la détection de documents [Filtering local descriptors to improve document detection]. Colloque International Francophone sur l'Écrit et le Document.
Abstract: In this paper we propose an effective method for reducing the number of local descriptors to be indexed in a document matching framework. In an off-line training stage, the matching between the model document and incoming images is computed, retaining the local descriptors from the model that consistently produce good matches. We have evaluated this approach using the ICDAR2015 SmartDOC dataset, which contains nearly 25,000 images of documents captured with a mobile device. We have tested the performance of this filtering step using ORB and SIFT local detectors and descriptors. The results show an important gain both in the quality of the final matching and in time and space requirements.
Keywords: Local descriptors; mobile capture; document matching; keypoint selection
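
A rough sketch of this kind of keypoint selection using OpenCV's ORB follows: model keypoints vote each time they match an incoming training capture, and only the most consistent ones are retained. The matcher choice and the keep ratio are assumptions for illustration, not the paper's exact procedure.

```python
import cv2
import numpy as np

def select_stable_keypoints(model_img, training_imgs, keep_ratio=0.5):
    """Keep the model keypoints that match most consistently across
    training captures of the same document."""
    orb = cv2.ORB_create()
    kp_model, des_model = orb.detectAndCompute(model_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    votes = np.zeros(len(kp_model))
    for img in training_imgs:
        _, des = orb.detectAndCompute(img, None)
        if des is None:
            continue
        for m in matcher.match(des_model, des):
            votes[m.queryIdx] += 1  # this model keypoint found a match

    # Retain only the keypoints that matched most often.
    order = np.argsort(-votes)[: int(keep_ratio * len(kp_model))]
    return [kp_model[i] for i in order], des_model[order]
```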

Štěpán Šimsa and 10 others. 2023. Overview of DocILE 2023: Document Information Localization and Extraction. International Conference of the Cross-Language Evaluation Forum for European Languages, 276–293 (LNCS).
Abstract: This paper provides an overview of the DocILE 2023 competition, its tasks, the participant submissions, the competition results and possible future research directions. This first edition of the competition focused on two Information Extraction tasks, Key Information Localization and Extraction (KILE) and Line Item Recognition (LIR). Both tasks require the detection of pre-defined categories of information in business documents; the second additionally requires correctly grouping the information into tuples that capture the structure laid out in the document. The competition used the recently published DocILE dataset and benchmark, which stays open to new submissions. The diversity of the participant solutions indicates the potential of the dataset: the submissions included pure Computer Vision, pure Natural Language Processing, and multi-modal solutions, and made use of all parts of the dataset, including the annotated, synthetic and unlabeled subsets.
Keywords: Information Extraction; Computer Vision; Natural Language Processing; Optical Character Recognition; Document Understanding
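
To make the LIR task concrete, here is a toy sketch that groups detected fields into line items by the vertical overlap of their bounding boxes; the Field layout and the overlap rule are illustrative assumptions, not the benchmark's actual evaluation logic.

```python
from dataclasses import dataclass

@dataclass
class Field:
    category: str   # e.g. "item_quantity", "item_amount"
    box: tuple      # (x0, y0, x1, y1) in page coordinates
    text: str

def group_into_line_items(fields, min_overlap=0.5):
    """Group fields whose boxes overlap vertically into one tuple (line item)."""
    items = []
    for f in sorted(fields, key=lambda f: f.box[1]):
        for item in items:
            # Vertical overlap with the item's first field, relative to
            # the smaller of the two box heights.
            y0 = max(f.box[1], item[0].box[1])
            y1 = min(f.box[3], item[0].box[3])
            h = min(f.box[3] - f.box[1], item[0].box[3] - item[0].box[1])
            if h > 0 and (y1 - y0) / h >= min_overlap:
                item.append(f)
                break
        else:
            items.append([f])  # start a new line item
    return items
```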

Fernando Vilariño. 2019. Public Libraries: Exploring How Technology Transforms the Cultural Experience of People. Workshop on Social Impact of AI, Open Living Lab Days Conference.

Emanuele Vivoli, Ali Furkan Biten, Andres Mafla, Dimosthenis Karatzas and Lluis Gomez. 2022. MUST-VQA: MUltilingual Scene-text VQA. Proceedings of the European Conference on Computer Vision Workshops, 345–358 (LNCS).
Abstract: In this paper, we present a framework for Multilingual Scene Text Visual Question Answering that deals with new languages in a zero-shot fashion. Specifically, we consider the task of Scene Text Visual Question Answering (STVQA), in which the question can be asked in different languages that are not necessarily aligned with the language of the scene text. As a natural step towards a more generalized version of STVQA, we first introduce MUST-VQA. Accordingly, we discuss two evaluation scenarios in the constrained setting, namely IID and zero-shot, and we demonstrate that the models can perform on a par in the zero-shot setting. We further provide extensive experimentation and show the effectiveness of adapting multilingual language models to STVQA tasks.
Keywords: Visual question answering; Scene text; Translation robustness; Multilingual models; Zero-shot transfer; Power of language models
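
A small sketch of the zero-shot ingredient described above: encoding questions in different languages with an off-the-shelf multilingual transformer, so a downstream STVQA head can stay language-agnostic. The checkpoint and the mean-pooling choice are assumptions, not the paper's architecture.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
enc = AutoModel.from_pretrained("bert-base-multilingual-cased")

def embed_question(question: str) -> torch.Tensor:
    """Mean-pooled contextual embedding of a question in any of the
    ~100 languages the checkpoint covers."""
    inputs = tok(question, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

# Questions in English and Catalan land in the same embedding space, so
# a VQA head trained on English questions can be applied zero-shot.
q_en = embed_question("What does the sign say?")
q_ca = embed_question("Què diu el cartell?")
```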

Sergi Garcia Bordils and 7 others. 2022. Out-of-Vocabulary Challenge Report. Proceedings of the European Conference on Computer Vision Workshops, 359–375 (LNCS).
Abstract: This paper presents the final results of the Out-Of-Vocabulary 2022 (OOV) challenge. The OOV contest introduces an important aspect that is not commonly studied by Optical Character Recognition (OCR) models, namely the recognition of scene text instances unseen at training time. The competition compiles a collection of public scene text datasets comprising 326,385 images with 4,864,405 scene text instances, thus covering a wide range of data distributions. A new and independent validation and test set is formed with scene text instances that are out of vocabulary at training time. The competition was structured in two tasks, end-to-end and cropped scene text recognition respectively. A thorough analysis of the results from baselines and the different participants is presented. Interestingly, current state-of-the-art models show a significant performance gap in the newly studied setting. We conclude that the OOV dataset proposed in this challenge will be an essential area to explore in order to develop scene text models that make more robust and generalized predictions.
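
A minimal sketch of how such an out-of-vocabulary split can be carved out of existing annotations: keep only the test instances whose ground-truth transcription never appears in the training vocabulary. The instance format below is a hypothetical assumption, not the challenge's actual data layout.

```python
def oov_split(train_words, test_instances):
    """Return the test instances whose ground-truth text was never
    seen in training (case-insensitive)."""
    vocab = {w.lower() for w in train_words}
    return [inst for inst in test_instances
            if inst["text"].lower() not in vocab]

# Toy usage with a hypothetical instance format.
train = ["OPEN", "SALE", "EXIT"]
test = [{"image": "img1.jpg", "text": "open"},
        {"image": "img2.jpg", "text": "Bakery"}]
print(oov_split(train, test))  # only the "Bakery" instance survives
```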

Carles Sanchez, Oriol Ramos Terrades, Patricia Marquez, Enric Marti, Jaume Rocarias and Debora Gil. 2014. Evaluación automática de prácticas en Moodle para el aprendizaje autónomo en Ingenierías [Automatic assessment of practical assignments in Moodle for autonomous learning in Engineering].

Miquel Ferrer, Ernest Valveny, Francesc Serratosa, Kaspar Riesen and Horst Bunke. 2008. An Approximate Algorithm for Median Graph Computation using Graph Embedding. 19th International Conference on Pattern Recognition.

Dimosthenis Karatzas, Marçal Rusiñol, Coen Antens and Miquel Ferrer. 2008. Segmentation Robust to the Vignette Effect for Machine Vision Systems. 19th International Conference on Pattern Recognition.
Abstract: The vignette effect (radial fall-off) is commonly encountered in images obtained through certain image acquisition setups and can seriously hinder automatic analysis processes. In this paper we present a fast and efficient method for dealing with vignetting in the context of object segmentation in an existing industrial inspection setup. The vignette effect is modelled here as a circular, non-linear gradient. The method estimates the gradient parameters and employs them to perform segmentation. Segmentation results on a variety of images indicate that the presented method is able to successfully tackle the vignette effect.
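
As a sketch of the general idea (not the paper's actual estimator), the radial fall-off can be modelled as a polynomial in the distance to the image centre, divided out, and followed by a standard threshold; the polynomial degree and the use of Otsu binarization are assumptions.

```python
import cv2
import numpy as np

def correct_vignette_and_segment(gray):
    """Estimate a radial fall-off profile, flatten it, then threshold."""
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(xx - w / 2.0, yy - h / 2.0)

    # Fit intensity as a polynomial in radius (degree is an assumption);
    # subsample pixels to keep the fit cheap.
    idx = np.random.choice(r.size, size=min(20000, r.size), replace=False)
    coeffs = np.polyfit(r.ravel()[idx],
                        gray.ravel()[idx].astype(np.float64), deg=4)
    falloff = np.clip(np.polyval(coeffs, r), 1e-3, None)

    # Divide out the estimated gradient and rescale to 8 bits.
    flat = gray / falloff
    flat = cv2.normalize(flat, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Otsu threshold on the flattened image.
    _, mask = cv2.threshold(flat, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```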