|
Andres Mafla, Rafael S. Rezende, Lluis Gomez, Diana Larlus, & Dimosthenis Karatzas. (2021). StacMR: Scene-Text Aware Cross-Modal Retrieval. In IEEE Winter Conference on Applications of Computer Vision (pp. 2219–2229).
|
|
|
Lluis Gomez, Anguelos Nicolaou, Marçal Rusiñol, & Dimosthenis Karatzas. (2020). 12 years of ICDAR Robust Reading Competitions: The evolution of reading systems for unconstrained text understanding. In K. Alahari, & C.V. Jawahar (Eds.), Visual Text Interpretation – Algorithms and Applications in Scene Understanding and Document Analysis. Series on Advances in Computer Vision and Pattern Recognition. Springer.
|
|
|
Lluis Gomez, Dena Bazazian, & Dimosthenis Karatzas. (2020). Historical review of scene text detection research. In K. Alahari, & C.V. Jawahar (Eds.), Visual Text Interpretation – Algorithms and Applications in Scene Understanding and Document Analysis. Series on Advances in Computer Vision and Pattern Recognition. Springer.
|
|
|
Jon Almazan, Lluis Gomez, Suman Ghosh, Ernest Valveny, & Dimosthenis Karatzas. (2020). WATTS: A common representation of word images and strings using embedded attributes for text recognition and retrieval. In K. Alahari, & C.V. Jawahar (Eds.), Visual Text Interpretation – Algorithms and Applications in Scene Understanding and Document Analysis. Series on Advances in Computer Vision and Pattern Recognition. Springer.
|
|
|
Raul Gomez, Yahui Liu, Marco de Nadai, Dimosthenis Karatzas, Bruno Lepri, & Nicu Sebe. (2020). Retrieval Guided Unsupervised Multi-domain Image to Image Translation. In 28th ACM International Conference on Multimedia.
Abstract: Image to image translation aims to learn a mapping that transforms an image from one visual domain to another. Recent works assume that image descriptors can be disentangled into a domain-invariant content representation and a domain-specific style representation. Thus, translation models seek to preserve the content of source images while changing the style to a target visual domain. However, synthesizing new images is extremely challenging, especially in multi-domain translation, as the network has to compose content and style to generate reliable and diverse images in multiple domains. In this paper we propose the use of an image retrieval system to assist the image-to-image translation task. First, we train an image-to-image translation model to map images to multiple domains. Then, we train an image retrieval model using real and generated images to find images that are similar in content to a query image but belong to a different domain. Finally, we exploit the image retrieval system to fine-tune the image-to-image translation model and generate higher quality images. Our experiments show the effectiveness of the proposed solution and highlight the contribution of the retrieval network, which can benefit from additional unlabeled data and help image-to-image translation models in the presence of scarce data.
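As a concrete illustration of the retrieval component's role, the cross-domain lookup can be reduced to a nearest-neighbour search over content embeddings that excludes the query's own domain. The sketch below is illustrative only; the function and data names are hypothetical, not the paper's API.

```python
import numpy as np

def retrieve_cross_domain(query_emb, query_domain, pool_embs, pool_domains):
    """Return the index of the pool image closest in content to the query
    embedding but belonging to a different visual domain."""
    sims = pool_embs @ query_emb / (
        np.linalg.norm(pool_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
    sims[pool_domains == query_domain] = -np.inf  # keep only other domains
    return int(np.argmax(sims))

# Toy pool: 100 random embeddings split across 3 domains.
rng = np.random.default_rng(0)
pool = rng.normal(size=(100, 64))
domains = rng.integers(0, 3, size=100)
print(retrieve_cross_domain(pool[0], domains[0], pool, domains))
```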
|
|
|
Minesh Mathew, Dimosthenis Karatzas, & C.V. Jawahar. (2021). DocVQA: A Dataset for VQA on Document Images. In IEEE Winter Conference on Applications of Computer Vision (pp. 2200–2209).
Abstract: We present a new dataset for Visual Question Answering (VQA) on document images called DocVQA. The dataset consists of 50,000 questions defined on 12,000+ document images. A detailed analysis of the dataset in comparison with similar datasets for VQA and reading comprehension is presented. We report several baseline results by adopting existing VQA and reading comprehension models. Although the existing models perform reasonably well on certain types of questions, there is a large performance gap compared to human performance (94.36% accuracy). The models need to improve specifically on questions where understanding the structure of the document is crucial. The dataset, code and leaderboard are available at docvqa.org
|
|
|
Manuel Carbonell, Pau Riba, Mauricio Villegas, Alicia Fornes, & Josep Llados. (2020). Named Entity Recognition and Relation Extraction with Graph Neural Networks in Semi Structured Documents. In 25th International Conference on Pattern Recognition.
Abstract: The use of administrative documents to communicate and leave a record of business information requires methods able to automatically extract and understand the content of such documents in a robust and efficient way. In addition, the semi-structured nature of these reports is especially suited to graph-based representations, which are flexible enough to adapt to the deformations across different document templates. Moreover, Graph Neural Networks provide the proper methodology to learn relations among the data elements in these documents. In this work we study the use of Graph Neural Network architectures to tackle the problem of entity recognition and relation extraction in semi-structured documents. Our approach achieves state-of-the-art results in the three tasks involved in the process. Additionally, the experimentation with two datasets of different nature demonstrates the good generalization ability of our approach.
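To make the graph formulation concrete, here is a minimal, hypothetical sketch of how a semi-structured document can be encoded as a graph for such a GNN: nodes are text boxes and edges connect spatially nearby boxes. The actual graph construction in the paper may differ; this is an illustration of the representation, not the authors' pipeline.

```python
import numpy as np

def knn_document_graph(boxes, k=4):
    """boxes: (n, 4) array of (x0, y0, x1, y1) text boxes. Returns (n, n) adjacency."""
    centres = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    dist = np.linalg.norm(centres[:, None] - centres[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)                 # no self-loops
    adj = np.zeros_like(dist)
    for i, nbrs in enumerate(np.argsort(dist, axis=1)[:, :k]):
        adj[i, nbrs] = adj[nbrs, i] = 1.0          # symmetric k-NN edges
    return adj

# Three boxes on one text line plus one below; each node links to its nearest neighbours.
boxes = np.array([[0, 0, 10, 5], [12, 0, 22, 5], [24, 0, 34, 5], [0, 8, 10, 13]], float)
print(knn_document_graph(boxes, k=2))
```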
|
|
|
Minesh Mathew, Ruben Tito, Dimosthenis Karatzas, R. Manmatha, & C.V. Jawahar. (2020). Document Visual Question Answering Challenge 2020. In 33rd IEEE Conference on Computer Vision and Pattern Recognition – Short paper.
Abstract: This paper presents the results of the Document Visual Question Answering Challenge, organized as part of the “Text and Documents in the Deep Learning Era” workshop at CVPR 2020. The challenge introduces a new problem: Visual Question Answering on document images. The challenge comprised two tasks. The first task concerns asking questions on a single document image, while the second task is set as a retrieval task where the question is posed over a collection of images. For Task 1, a new dataset is introduced comprising 50,000 question-answer pairs defined over 12,767 document images. For Task 2, another dataset has been created comprising 20 questions over 14,362 document images which share the same document template.
|
|
|
Minesh Mathew, Lluis Gomez, Dimosthenis Karatzas, & C.V. Jawahar. (2021). Asking questions on handwritten document collections. IJDAR - International Journal on Document Analysis and Recognition, 24, 235–249.
Abstract: This work addresses the problem of Question Answering (QA) on handwritten document collections. Unlike typical QA and Visual Question Answering (VQA) formulations where the answer is a short text, we aim to locate a document snippet where the answer lies. The proposed approach works without recognizing the text in the documents. We argue that the recognition-free approach is suitable for handwritten documents and historical collections where robust text recognition is often difficult. At the same time, for human users, document image snippets containing answers act as a valid alternative to textual answers. The proposed approach uses an off-the-shelf deep embedding network which can project both textual words and word images into a common sub-space. This embedding bridges the textual and visual domains and helps us retrieve document snippets that potentially answer a question. We evaluate the results of the proposed approach on two new datasets: (i) HW-SQuAD, a synthetic, handwritten document image counterpart of the SQuAD1.0 dataset, and (ii) BenthamQA, a smaller set of QA pairs defined on documents from the popular Bentham manuscripts collection. We also present a thorough analysis of the proposed recognition-free approach compared to a recognition-based approach which uses text recognized from the images using an OCR. Datasets presented in this work are available to download at docvqa.org.
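The recognition-free retrieval step admits a compact sketch. Assuming an embedding network has already projected query words and document word-image snippets into the common sub-space (the paper uses an off-the-shelf word-spotting embedding; the code below is illustrative only, with random stand-in embeddings), answering reduces to a cosine-similarity ranking:

```python
import numpy as np

def rank_snippets(query_emb, snippet_embs):
    """query_emb: (d,); snippet_embs: (n, d). Returns snippet indices, best first."""
    q = query_emb / np.linalg.norm(query_emb)
    s = snippet_embs / np.linalg.norm(snippet_embs, axis=1, keepdims=True)
    return np.argsort(-(s @ q))                    # descending cosine similarity

# Toy data: 100 word-image snippet embeddings, one textual query embedding.
rng = np.random.default_rng(0)
order = rank_snippets(rng.normal(size=64), rng.normal(size=(100, 64)))
print(order[:5])                                   # most promising snippets first
```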
|
|
|
Ruben Tito, Dimosthenis Karatzas, & Ernest Valveny. (2021). Document Collection Visual Question Answering. In 16th International Conference on Document Analysis and Recognition (Vol. 12822, pp. 778–792). LNCS.
Abstract: Current tasks and methods in Document Understanding aim to process documents as single elements. However, documents are usually organized in collections (historical records, purchase invoices) that provide context useful for their interpretation. To address this problem, we introduce Document Collection Visual Question Answering (DocCVQA), a new dataset and related task, where questions are posed over a whole collection of document images and the goal is not only to provide the answer to the given question, but also to retrieve the set of documents that contain the information needed to infer the answer. Along with the dataset we propose a new evaluation metric and baselines which provide further insights into the new dataset and task.
Keywords: Document collection; Visual Question Answering
|
|
|
Ruben Tito, Minesh Mathew, C.V. Jawahar, Ernest Valveny, & Dimosthenis Karatzas. (2021). ICDAR 2021 Competition on Document Visual Question Answering. In 16th International Conference on Document Analysis and Recognition (pp. 635–649).
Abstract: In this report we present the results of the ICDAR 2021 edition of the Document Visual Question Answering Challenge. This edition complements the previous tasks on Single Document VQA and Document Collection VQA with a newly introduced task on Infographics VQA. Infographics VQA is based on a new dataset of more than 5,000 infographics images and 30,000 question-answer pairs. The winning methods scored 0.6120 ANLS in the Infographics VQA task, 0.7743 ANLSL in the Document Collection VQA task and 0.8705 ANLS in Single Document VQA. We present a summary of the datasets used for each task, a description of each of the submitted methods, and the results and analysis of their performance. A summary of the progress made on Single Document VQA since the first edition of the DocVQA 2020 challenge is also presented.
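For reference, the ANLS scores above follow the Average Normalized Levenshtein Similarity metric of the DocVQA challenges. A minimal re-implementation from its published definition (not the official evaluation code) looks like this:

```python
# ANLS averages, over all questions, the best similarity between the predicted
# answer and any ground-truth answer, where similarity is 1 - normalized
# Levenshtein distance, zeroed out when that distance reaches the threshold
# tau (0.5 in the challenge).

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def anls(predictions, ground_truths, tau=0.5):
    """predictions: list of strings; ground_truths: list of lists of strings."""
    total = 0.0
    for pred, answers in zip(predictions, ground_truths):
        best = 0.0
        for ans in answers:
            p, g = pred.lower().strip(), ans.lower().strip()
            nl = levenshtein(p, g) / max(len(p), len(g), 1)
            best = max(best, 1.0 - nl if nl < tau else 0.0)
        total += best
    return total / len(predictions)

print(anls(["12,767"], [["12,767", "12767"]]))  # 1.0 — exact match with one ground truth
```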
|
|
|
Pau Riba, Sounak Dey, Ali Furkan Biten, & Josep Llados. (2021). Localizing Infinity-shaped fishes: Sketch-guided object localization in the wild.
Abstract: This work investigates the problem of sketch-guided object localization (SGOL), where human sketches are used as queries to conduct object localization in natural images. In this cross-modal setting, we first contribute a tough-to-beat baseline that, without any SGOL-specific training, is able to outperform previous works on a fixed set of classes. The baseline is useful to analyze the performance of SGOL approaches based on available simple yet powerful methods. We advance prior art by proposing a sketch-conditioned DETR (DEtection TRansformer) architecture which avoids a hard classification and alleviates the domain gap between sketches and images to localize object instances. Although the main goal of SGOL is object detection, we also explore its natural extension to sketch-guided instance segmentation. This novel task allows moving towards identifying objects at the pixel level, which is of key importance in several applications. We experimentally demonstrate that our model and its variants significantly advance over previous state-of-the-art results. All training and testing code of our model will be released to facilitate future research: https://github.com/priba/sgol_wild.
|
|
|
Albert Suso, Pau Riba, Oriol Ramos Terrades, & Josep Llados. (2021). A Self-supervised Inverse Graphics Approach for Sketch Parametrization. In 16th International Conference on Document Analysis and Recognition (Vol. 12916, pp. 28–42). LNCS.
Abstract: The study of neural generative models of handwritten text and human sketches is a hot topic in the computer vision field. The landmark SketchRNN provided a breakthrough by sequentially generating sketches as a sequence of waypoints, and more recent articles have managed to generate fully vectorial sketches by coding the strokes as Bézier curves. However, previous attempts with this approach all require a ground truth consisting of the sequence of points that make up each stroke, which seriously limits the datasets the model can be trained on. In this work, we present a self-supervised end-to-end inverse graphics approach that learns to embed each image into its best fit of Bézier curves. The self-supervised nature of the training process allows us to train the model on a wider range of datasets, and also to perform better after-training predictions by applying an overfitting process on the input binary image. We report qualitative and quantitative evaluations on the MNIST and the Quick, Draw! datasets.
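The Bézier parametrization the paper fits strokes to is easy to make concrete. The snippet below (a worked example, not the paper's model) samples a cubic curve from four control points; a differentiable version of this sampling is what allows an inverse-graphics model to regress control points directly from a binary image.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample n points on B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3."""
    t = np.linspace(0.0, 1.0, n)[:, None]           # (n, 1) parameter values
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

stroke = cubic_bezier(np.array([0.0, 0.0]), np.array([0.3, 1.0]),
                      np.array([0.7, 1.0]), np.array([1.0, 0.0]))
print(stroke.shape)  # (50, 2) points tracing an arc from (0, 0) to (1, 0)
```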
|
|
|
Manuel Carbonell, Mauricio Villegas, Alicia Fornes, & Josep Llados. (2018). Joint Recognition of Handwritten Text and Named Entities with a Neural End-to-end Model. In 13th IAPR International Workshop on Document Analysis Systems (pp. 399–404).
Abstract: When extracting information from handwritten documents, text transcription and named entity recognition are usually treated as separate subsequent tasks. This has the disadvantage that errors in the first module heavily affect the performance of the second module. In this work we propose to do both tasks jointly, using a single neural network with a common architecture used for plain text recognition. Experimentally, the work has been tested on a collection of historical marriage records. Results of experiments are presented to show the effect on performance of different configurations: different ways of encoding the information, using or not using transfer learning, and processing at text-line or multi-line region level. The results are comparable to the state of the art reported in the ICDAR 2017 Information Extraction competition, even though the proposed technique does not use any dictionaries, language modeling or post-processing.
Keywords: Named entity recognition; Handwritten Text Recognition; neural networks
|
|
|
Pau Riba, Andreas Fischer, Josep Llados, & Alicia Fornes. (2018). Learning Graph Distances with Message Passing Neural Networks. In 24th International Conference on Pattern Recognition (pp. 2239–2244).
Abstract: Graph representations have been widely used in pattern recognition thanks to their powerful representation formalism and rich theoretical background. A number of error-tolerant graph matching algorithms such as graph edit distance have been proposed for computing a distance between two labelled graphs. However, they typically suffer from a high computational complexity, which makes it difficult to apply these matching algorithms in a real scenario. In this paper, we propose an efficient graph distance based on the emerging field of geometric deep learning. Our method employs a message passing neural network to capture the graph structure and learns a metric with a siamese network approach. The performance of the proposed graph distance is validated in two application cases, graph classification and graph retrieval of handwritten words, and shows a promising performance when compared with (approximate) graph edit distance benchmarks.
★Best Paper Award★
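The core idea — embed each labelled graph with a message passing network and compare embeddings siamese-style — can be sketched in a few lines. The following is an illustrative reconstruction under simplifying assumptions (dense adjacency, sum pooling, contrastive loss), not the authors' implementation:

```python
import torch
import torch.nn as nn

class MPNNEmbedder(nn.Module):
    """Embeds a graph (node features x, adjacency adj) into a fixed-size vector."""
    def __init__(self, in_dim, hid_dim, steps=3):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)
        self.msg = nn.Linear(hid_dim, hid_dim)
        self.upd = nn.Linear(2 * hid_dim, hid_dim)
        self.steps = steps

    def forward(self, x, adj):
        h = torch.relu(self.proj(x))                       # (N, hid)
        for _ in range(self.steps):
            m = adj @ self.msg(h)                          # aggregate neighbour messages
            h = torch.relu(self.upd(torch.cat([h, m], -1)))
        return h.sum(dim=0)                                # order-invariant graph embedding

def graph_distance(model, g1, g2):
    """Siamese distance: the same network embeds both graphs, then L2 distance."""
    return torch.norm(model(*g1) - model(*g2), p=2)

def contrastive_loss(d, same, margin=1.0):
    """Pulls same-class pairs together, pushes others at least `margin` apart."""
    return same * d ** 2 + (1 - same) * torch.clamp(margin - d, min=0) ** 2

# Toy usage: two random graphs of different sizes, treated as a negative pair.
model = MPNNEmbedder(in_dim=8, hid_dim=32)
g1 = (torch.randn(5, 8), (torch.rand(5, 5) > 0.5).float())
g2 = (torch.randn(7, 8), (torch.rand(7, 7) > 0.5).float())
loss = contrastive_loss(graph_distance(model, g1, g2), same=0.0)
```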
|
|