Dimosthenis Karatzas, V. Poulain d'Andecy and Marçal Rusiñol. 2016. Human-Document Interaction – a new frontier for document image analysis. 12th IAPR Workshop on Document Analysis Systems, 369–374.
Abstract: All indications show that paper documents will not cede in favour of their digital counterparts, but will instead be used increasingly in conjunction with digital information. An open challenge is how to seamlessly link the physical with the digital – how to continue taking advantage of the important affordances of paper without missing out on digital functionality. This paper presents the authors' experience with developing systems for Human-Document Interaction based on augmented document interfaces, and examines new challenges and opportunities arising for the document image analysis field in this area. The system presented combines state-of-the-art camera-based document image analysis techniques with a range of complementary technologies to offer fluid Human-Document Interaction. Both fixed and nomadic setups that have gone through user testing in real-life environments are discussed, and use cases are presented that span the spectrum from business to educational applications.
Dimosthenis Karatzas and Ch. Lioutas. 1998. Software Package Development for Electron Diffraction Image Analysis. Proceedings of the XIV Solid State Physics National Conference.
E. Royer, J. Chazalon, Marçal Rusiñol and F. Bouchara. 2017. Benchmarking Keypoint Filtering Approaches for Document Image Matching. 14th International Conference on Document Analysis and Recognition.
Note: Best Poster Award.
Abstract: Reducing the number of keypoints used to index an image is particularly interesting for controlling processing time and memory usage in real-time document image matching applications, such as augmented documents or smartphone applications. This paper benchmarks two keypoint selection methods on a task consisting of reducing keypoint sets extracted from document images while preserving detection and segmentation accuracy. We first study the different forms of keypoint filtering, and we introduce the use of the CORE selection method on
keypoints extracted from document images. Then, we extend a previously published benchmark by including evaluations of the new method, by adding the SURF-BRISK detection/description scheme, and by reporting processing speeds. Evaluations are conducted on the publicly available dataset of ICDAR2015 SmartDOC challenge 1. Finally, we prove that reducing the original keypoint set is always feasible and can be beneficial
not only to processing speed but also to accuracy.
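The abstract does not detail how CORE selects keypoints, so the sketch below only illustrates the general idea of keypoint filtering with a simple response-based top-k baseline (an assumption for illustration, not the paper's method; the keypoint tuples are hypothetical):

```python
# Toy illustration of keypoint filtering (a simple response-based baseline,
# NOT the CORE method benchmarked in the paper).
# Each keypoint is a (x, y, response) tuple, as a detector might report.

def filter_keypoints(keypoints, budget):
    """Keep at most `budget` keypoints, preferring high detector responses."""
    ranked = sorted(keypoints, key=lambda kp: kp[2], reverse=True)
    return ranked[:budget]

kps = [(10, 12, 0.9), (40, 8, 0.2), (25, 30, 0.7), (5, 5, 0.4)]
print(filter_keypoints(kps, 2))  # → [(10, 12, 0.9), (25, 30, 0.7)]
```

Halving the keypoint set this way directly halves descriptor storage and matching time, which is the trade-off the paper measures against detection and segmentation accuracy.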
Ekta Vats, Anders Hast and Alicia Fornés. 2019. Training-Free and Segmentation-Free Word Spotting using Feature Matching and Query Expansion. 15th International Conference on Document Analysis and Recognition, 1294–1299.
Abstract: Historical handwritten text recognition (HTR) is an interesting yet challenging problem. In recent times, deep learning based methods have achieved significant performance in handwritten text recognition. However, handwriting recognition using deep learning needs training data, and often the text must first be segmented into lines (or even words). These limitations constrain the application of HTR techniques in document collections, because training data or segmented words are not always available. Therefore, this paper proposes a training-free and segmentation-free word spotting approach that can be applied in unconstrained scenarios. The proposed word spotting framework is based on query word expansion and a relaxed feature matching algorithm, which can easily be parallelised. Since handwritten words possess distinct shapes and characteristics, this work uses a combination of different keypoint detectors and Fourier-based descriptors to obtain a sufficient degree of relaxed matching. The effectiveness of the proposed method is empirically evaluated on well-known benchmark datasets using standard evaluation measures. The use of informative features along with query expansion contributed significantly to the efficient performance of the proposed method.
Keywords: Word spotting; Segmentation-free; Training-free; Query expansion; Feature matching
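A minimal sketch of the two ideas the abstract combines — relaxed feature matching and query expansion — under the assumption of toy 2-D descriptors (the paper's actual detectors, Fourier descriptors, and thresholds are not reproduced here; all names are illustrative):

```python
import numpy as np

def match_score(query_desc, word_desc, relax=0.5):
    """Relaxed matching: fraction of query descriptors that have at least
    one neighbour in the candidate word within distance `relax`."""
    hits = 0
    for q in query_desc:
        dists = np.linalg.norm(word_desc - q, axis=1)
        if dists.min() <= relax:
            hits += 1
    return hits / len(query_desc)

def spot(query_desc, words, relax=0.5):
    """Rank candidate words; then expand the query with the descriptors of
    its best match (query expansion) and re-rank with the enlarged query."""
    scores = {w: match_score(query_desc, d, relax) for w, d in words.items()}
    best = max(scores, key=scores.get)
    expanded = np.vstack([query_desc, words[best]])  # query expansion step
    return sorted(words,
                  key=lambda w: match_score(expanded, words[w], relax),
                  reverse=True)

# Toy data: descriptors for the query and for two candidate word images.
query = np.array([[0.0, 0.0], [1.0, 1.0]])
words = {"cat": np.array([[0.1, 0.0], [1.0, 1.1]]),
         "dog": np.array([[5.0, 5.0], [6.0, 6.0]])}
print(spot(query, words))  # → ['cat', 'dog']
```

Because each query descriptor is matched independently, the loop over descriptors (and over candidate words) parallelises trivially, which is the property the abstract highlights.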
Emanuel Indermühle, Volkmar Frinken and Horst Bunke. 2012. Mode Detection in Online Handwritten Documents using BLSTM Neural Networks. 13th International Conference on Frontiers in Handwriting Recognition, 302–307.
Abstract: Mode detection in online handwritten documents refers to the process of distinguishing different types of content, such as text, formulas, diagrams, or tables, from one another. In this paper a new approach to mode detection is proposed that uses bidirectional long short-term memory (BLSTM) neural networks. The BLSTM neural network is a type of recurrent neural network that has been successfully applied in speech and handwriting recognition. In this paper we show that it has the potential to significantly outperform traditional methods for mode detection, which are usually based on stroke classification. As a further advantage over previous approaches, the proposed system is trainable and does not rely on user-defined heuristics. Moreover, it can easily be adapted to new or additional types of modes simply by providing the system with new training data.
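To make the bidirectional framing concrete, here is a minimal sketch of a bidirectional recurrent pass over per-stroke feature vectors, so each stroke's representation sees both past and future context. This uses a plain tanh recurrence with random, untrained weights; the paper's BLSTM additionally uses gated memory cells and is trained, none of which is reproduced here, and the stroke features are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pass(feats, W, U):
    """One vanilla recurrent pass: h_t = tanh(W @ x_t + U @ h_{t-1})."""
    h = np.zeros(U.shape[0])
    out = []
    for x in feats:
        h = np.tanh(W @ x + U @ h)
        out.append(h)
    return out

def bidirectional_states(feats, W, U):
    """Concatenate forward and backward hidden states for each stroke."""
    fwd = rnn_pass(feats, W, U)
    bwd = rnn_pass(feats[::-1], W, U)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

# Toy: 5 strokes, 3 features each (e.g. length, curvature, duration).
strokes = list(rng.normal(size=(5, 3)))
W = rng.normal(size=(4, 3))   # input-to-hidden weights (untrained)
U = rng.normal(size=(4, 4))   # hidden-to-hidden weights (untrained)
states = bidirectional_states(strokes, W, U)
print(len(states), states[0].shape)  # → 5 (8,)
```

A per-stroke mode label (text, formula, diagram, table) would then be predicted from each 8-dimensional state by an output layer, which is where training data enters in the actual system.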
Emanuele Vivoli, Ali Furkan Biten, Andres Mafla, Dimosthenis Karatzas and Lluis Gomez. 2022. MUST-VQA: MUltilingual Scene-text VQA. Proceedings European Conference on Computer Vision Workshops, 345–358. (LNCS).
Abstract: In this paper, we present a framework for Multilingual Scene Text Visual Question Answering that deals with new languages in a zero-shot fashion. Specifically, we consider the task of Scene Text Visual Question Answering (STVQA), in which the question can be asked in different languages and is not necessarily aligned with the scene text language. Thus, we first introduce a natural step towards a more generalized version of STVQA: MUST-VQA. Accounting for this, we discuss two evaluation scenarios in the constrained setting, namely IID and zero-shot, and we demonstrate that the models can perform on par in the zero-shot setting. We further provide extensive experimentation and show the effectiveness of adapting multilingual language models to STVQA tasks.
Keywords: Visual question answering; Scene text; Translation robustness; Multilingual models; Zero-shot transfer; Power of language models
Ernest Valveny, Ricardo Toledo, Ramon Baldrich and Enric Marti. 2002. Combining recognition-based and segmentation-based approaches for graphic symbol recognition using deformable template matching. Proceedings of the Second IASTED International Conference on Visualization, Imaging and Image Processing (VIIP 2002), 502–507.
Ernest Valveny and Antonio Lopez. 2003. Numeral Recognition for Quality Control of Surgical Sachets.
Ernest Valveny and B. Lamiroy. 2002. Automatic Generation of Browsable Technical Documents.
Ernest Valveny and Enric Marti. 2003. A model for image generation and symbol recognition through the deformation of lineal shapes. Pattern Recognition Letters, 24(15), 2857–2867.
Abstract: We describe a general framework for the recognition of distorted images of lineal shapes, which relies on three components: a model to represent lineal shapes and their deformations, a model for the generation of distorted binary images, and the combination of both models in a common probabilistic framework, where the generation of deformations is related to an internal energy, and the generation of binary images to an external energy. Recognition then consists of minimizing a global energy function using the EM algorithm. This general framework has been applied to the recognition of hand-drawn lineal symbols in graphic documents.
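The internal/external energy trade-off can be illustrated with a deliberately tiny example: a one-parameter scaling deformation of a 1-D "shape", fitted by grid search instead of the paper's EM algorithm (the shapes, the weight `lam`, and the single deformation parameter are all assumptions made for illustration):

```python
import numpy as np

# Toy sketch of deformable template matching as energy minimization
# (1 deformation parameter and grid search; NOT the paper's EM framework).
template = np.array([0.0, 1.0, 2.0])   # idealised lineal shape (1-D points)
observed = np.array([0.0, 1.2, 2.4])   # distorted observation of that shape

def energy(scale, lam=0.1):
    deformed = scale * template
    external = np.sum((deformed - observed) ** 2)  # fit to the observed image
    internal = lam * (scale - 1.0) ** 2            # penalise large deformation
    return external + internal

scales = np.linspace(0.5, 1.5, 101)
best = scales[np.argmin([energy(s) for s in scales])]
print(round(float(best), 2))  # → 1.2
```

With a small `lam` the external term dominates and the recovered scale lands near the true distortion (1.2); increasing `lam` pulls the solution back towards the undeformed template, which is exactly the balance the probabilistic framework encodes.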