|
Pau Riba, Adria Molina, Lluis Gomez, Oriol Ramos Terrades and Josep Llados. 2021. Learning to Rank Words: Optimizing Ranking Metrics for Word Spotting. 16th International Conference on Document Analysis and Recognition, 381–395.
Abstract: In this paper, we explore and evaluate the use of ranking-based objective functions for jointly learning a word string encoder and a word image encoder. We consider retrieval frameworks in which the user expects a retrieval list ranked according to a defined relevance score. In the context of a word spotting problem, the relevance score is set according to the string edit distance from the query string. We experimentally demonstrate the competitive performance of the proposed model on query-by-string word spotting for both handwritten and real-scene word images. We also provide results for query-by-example word spotting, although it is not the main focus of this work.
|
|
|
Y. Patel, Lluis Gomez, Marçal Rusiñol and Dimosthenis Karatzas. 2016. Dynamic Lexicon Generation for Natural Scene Images. 14th European Conference on Computer Vision Workshops, 395–410.
Abstract: Many scene text understanding methods approach the end-to-end recognition problem from a word-spotting perspective and benefit greatly from using small per-image lexicons. Such customized lexicons are normally assumed as given, and their source is rarely discussed. In this paper we propose a method that generates contextualized lexicons for scene images using only visual information. For this, we exploit the correlation between visual and textual information in a dataset consisting of images and the textual content associated with them. Using the topic modeling framework to discover a set of latent topics in such a dataset allows us to re-rank a fixed dictionary in a way that prioritizes the words that are more likely to appear in a given image. Moreover, we train a CNN that is able to reproduce those word rankings using only the raw image pixels as input. We demonstrate that the quality of the automatically obtained custom lexicons is superior to a generic frequency-based baseline.
Keywords: scene text; photo OCR; scene understanding; lexicon generation; topic modeling; CNN
|
|
|
Manuel Carbonell, Mauricio Villegas, Alicia Fornes and Josep Llados. 2018. Joint Recognition of Handwritten Text and Named Entities with a Neural End-to-end Model. 13th IAPR International Workshop on Document Analysis Systems, 399–404.
Abstract: When extracting information from handwritten documents, text transcription and named entity recognition are usually treated as separate, subsequent tasks. This has the disadvantage that errors in the first module heavily affect the performance of the second. In this work we propose to perform both tasks jointly, using a single neural network with a common architecture used for plain text recognition. The approach has been tested experimentally on a collection of historical marriage records. Results are presented to show the effect on performance of different configurations: different ways of encoding the information, whether or not transfer learning is used, and processing at the text-line or multi-line region level. The results are comparable to the state of the art reported in the ICDAR 2017 Information Extraction competition, even though the proposed technique does not use any dictionaries, language modeling or post-processing.
Keywords: Named entity recognition; Handwritten Text Recognition; neural networks
|
|
|
Weijia Wu and 7 others. 2023. ICDAR 2023 Competition on Video Text Reading for Dense and Small Text. 17th International Conference on Document Analysis and Recognition, 405–419. (LNCS).
Abstract: Recently, video text detection, tracking and recognition in natural scenes have become very popular in the computer vision community. However, most existing algorithms and benchmarks focus on common text cases (e.g., normal size and density) and single scenarios, while ignoring extreme video text challenges, i.e., dense and small text in various scenarios. In this competition report, we establish a video text reading benchmark, named DSText, which focuses on the dense and small text reading challenge in videos with various scenarios. Compared with previous datasets, the proposed dataset mainly includes three new challenges: 1) dense video texts, a new challenge for video text spotters; 2) a high proportion of small texts; 3) various new scenarios, e.g., ‘Game’, ‘Sports’, etc. The proposed DSText includes 100 video clips from 12 open scenarios, supporting two tasks: video text tracking (Task 1) and end-to-end video text spotting (Task 2). During the competition period (opened on 15th February 2023 and closed on 20th March 2023), a total of 24 teams participated in the proposed tasks with around 30 valid submissions. In this article, we describe detailed statistical information about the dataset, the tasks, the evaluation protocols and a summary of the results of the ICDAR 2023 DSText competition. Moreover, we hope the benchmark will promote video text research in the community.
Keywords: Video Text Spotting; Small Text; Text Tracking; Dense Text
|
|
|
Francesco Brughi, Debora Gil, Llorenç Badiella, Eva Jove Casabella and Oriol Ramos Terrades. 2014. Exploring the impact of inter-query variability on the performance of retrieval systems. 11th International Conference on Image Analysis and Recognition. Springer International Publishing, 413–420. (LNCS).
Abstract: This paper introduces a framework for evaluating the performance of information retrieval systems. Current evaluation metrics provide an average score that does not consider performance variability across the query set. As a consequence, conclusions lack statistical significance, yielding poor inference to cases outside the query set and possibly unfair comparisons. We propose to apply statistical methods in order to obtain a more informative measure for problems in which different query classes can be identified. In this context, we assess performance variability on two levels: the overall variability across the whole query set and the variability specific to each query class. To this end, we estimate confidence bands for precision-recall curves, and we apply ANOVA in order to assess the significance of the performance differences across query classes.
|
|
|
Francesc Tous, Agnes Borras, Robert Benavente, Ramon Baldrich, Maria Vanrell and Josep Llados. 2002. Textual Descriptions for Browsing People by Visual Appearance. Lecture Notes in Artificial Intelligence. Springer Verlag, 419–429.
Abstract: This paper presents a first approach to building colour and structural descriptors for information retrieval on a people database. Queries are formulated in terms of appearance, which allows searching for people wearing specific clothes of a given colour name or texture. Descriptors are automatically computed in three essential steps: a colour-naming labelling from pixel properties; a region segmentation step based on the colour properties of pixels combined with edge information; and a high-level step that models the region arrangements in order to build the clothes structure. Results are tested on a large set of images from real scenes taken at the entrance desk of a building.
|
|
|
Alicia Fornes, Beata Megyesi and Joan Mas. 2017. Transcription of Encoded Manuscripts with Image Processing Techniques. Digital Humanities Conference, 441–443.
|
|
|
Joan Mas, Gemma Sanchez, Josep Llados and B. Lamiroy. 2007. An Incremental On-line Parsing Algorithm for Recognizing Sketching Diagrams. 9th IEEE International Conference on Document Analysis and Recognition, 452–456.
|
|
|
Jon Almazan, David Fernandez, Alicia Fornes, Josep Llados and Ernest Valveny. 2012. A Coarse-to-Fine Approach for Handwritten Word Spotting in Large Scale Historical Documents Collection. 13th International Conference on Frontiers in Handwriting Recognition, 453–458.
Abstract: In this paper we propose an approach for word spotting in handwritten document images. We state the problem from a focused retrieval perspective, i.e. locating instances of a query word in a large-scale dataset of digitized manuscripts. We combine two approaches, namely one based on word segmentation and another that is segmentation-free. The first approach uses a hashing strategy to coarsely prune word images that are unlikely to be instances of the query word. This process is fast but has low precision due to the errors introduced in the segmentation step. The regions containing candidate words are sent to the second process, which is based on a state-of-the-art technique from the visual object detection field. This discriminative model represents the appearance of the query word and computes a similarity score. In this way we propose a coarse-to-fine approach that achieves a compromise between efficiency and accuracy. The model is validated on a collection of old handwritten manuscripts. We observe a substantial improvement in precision over the previously proposed method, with a low increase in computational cost.
|
|
|
Josep Llados and Gemma Sanchez. 2004. Graph Matching vs. Graph Parsing in Graphics Recognition: A Combined Approach.
|
|