2022

Pau Torras, Arnau Baro, Alicia Fornes and Lei Kang. 2022. Improving Handwritten Music Recognition through Language Model Integration. 4th International Workshop on Reading Music Systems (WoRMS 2022), 42–46.
Abstract: Handwritten Music Recognition, especially in the historical domain, is an inherently challenging endeavour; paper degradation artefacts and the ambiguous nature of handwriting make recognising such scores an error-prone process, even for the current state-of-the-art Sequence to Sequence models. In this work we propose a way of reducing the production of statistically implausible output sequences by fusing a Language Model into a recognition Sequence to Sequence model. The idea is to leverage visually-conditioned and context-conditioned output distributions in order to automatically find and correct any mistakes that would otherwise break context significantly. We have found this approach to improve recognition results to 25.15 SER (%) from a previous best of 31.79 SER (%) in the literature.
Keywords: optical music recognition; historical sources; diversity; music theory; digital humanities
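Illustration: the fusion idea described in the abstract resembles shallow fusion of a language model into a decoder. The sketch below is an assumption-laden toy (the weight `lam`, the vocabulary, and the fusion rule are illustrative, not the paper's):

```python
import numpy as np

def fuse_step(visual_logprobs, lm_logprobs, lam=0.3):
    # Shallow fusion: weight the context-conditioned (LM) distribution
    # into the visually-conditioned decoder distribution. `lam` is a
    # hypothetical fusion weight, not a value from the paper.
    return visual_logprobs + lam * lm_logprobs

# Toy 4-symbol vocabulary: the decoder prefers symbol 0, but the LM
# considers symbol 1 far more plausible in context.
visual = np.log(np.array([0.5, 0.3, 0.1, 0.1]))
lm = np.log(np.array([0.1, 0.6, 0.2, 0.1]))
best = int(np.argmax(fuse_step(visual, lm)))  # LM flips the choice to symbol 1
```

Here the context-conditioned distribution overrules a visually plausible but contextually implausible symbol, which is the error-correction behaviour the abstract describes.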
S.K. Jemni, Mohamed Ali Souibgui, Yousri Kessentini and Alicia Fornes. 2022. Enhance to Read Better: A Multi-Task Adversarial Network for Handwritten Document Image Enhancement. PR, 123, 108370.
Abstract: Handwritten document images can be highly affected by degradation for different reasons: paper ageing, daily-life scenarios (wrinkles, dust, etc.), bad scanning processes and so on. These artifacts raise many readability issues for current Handwritten Text Recognition (HTR) algorithms and severely devalue their efficiency. In this paper, we propose an end-to-end architecture based on Generative Adversarial Networks (GANs) to recover the degraded documents into a clean and readable form. Unlike the most well-known document binarization methods, which try to improve the visual quality of the degraded document, the proposed architecture integrates a handwritten text recognizer that promotes the generated document image to be more readable. To the best of our knowledge, this is the first work to use the text information while binarizing handwritten documents. Extensive experiments conducted on degraded Arabic and Latin handwritten documents demonstrate the usefulness of integrating the recognizer within the GAN architecture, which improves both the visual quality and the readability of the degraded document images. Moreover, after fine-tuning our pre-trained model with synthetically degraded Latin handwritten images, we outperform the state of the art on the H-DIBCO challenges.
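Illustration: the multi-task idea in the abstract (enhance the image while keeping it recognisable) can be sketched as a combined generator objective. Everything here is hypothetical — the paper's actual losses and weighting are not reproduced:

```python
import numpy as np

def generator_objective(d_scores_on_enhanced, recognizer_nll, lam=1.0):
    # Hypothetical multi-task generator objective: fool the discriminator
    # on the enhanced image (adversarial term) while keeping the HTR
    # recognizer's negative log-likelihood on that image low (readability
    # term). `lam` balances the two tasks.
    adv = -np.mean(np.log(d_scores_on_enhanced + 1e-8))  # non-saturating GAN loss
    return adv + lam * recognizer_nll

loss = generator_objective(np.array([0.8, 0.6]), recognizer_nll=1.2, lam=0.5)
```

The readability term is what distinguishes this setup from plain binarization: a visually clean output that the recognizer cannot read still incurs a high loss.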
Sergi Garcia Bordils and 7 others. 2022. Out-of-Vocabulary Challenge Report. Proceedings of the European Conference on Computer Vision Workshops, 359–375. (LNCS.)
Abstract: This paper presents the final results of the Out-Of-Vocabulary 2022 (OOV) challenge. The OOV contest introduces an important aspect that is not commonly studied by Optical Character Recognition (OCR) models, namely, the recognition of scene text instances unseen at training time. The competition compiles a collection of public scene text datasets comprising 326,385 images with 4,864,405 scene text instances, thus covering a wide range of data distributions. A new and independent validation and test set is formed with scene text instances that are out of vocabulary at training time. The competition was structured in two tasks, end-to-end and cropped scene text recognition respectively. A thorough analysis of results from baselines and different participants is presented. Interestingly, current state-of-the-art models show a significant performance gap under the newly studied setting. We conclude that the OOV dataset proposed in this challenge will be an essential resource for developing scene text models that achieve more robust and generalized predictions.
Sergi Garcia Bordils and 6 others. 2022. Read While You Drive - Multilingual Text Tracking on the Road. 15th IAPR International Workshop on Document Analysis Systems, 756–770. (LNCS.)
Abstract: Visual data obtained during driving scenarios usually contain large amounts of text that conveys semantic information necessary to analyse the urban environment and is integral to the traffic control plan. Yet, research on autonomous driving or driver assistance systems typically ignores this information. To advance research in this direction, we present RoadText-3K, a large driving video dataset with fully annotated text. RoadText-3K is three times bigger than its predecessor and contains data from varied geographical locations, unconstrained driving conditions and multiple languages and scripts. We offer a comprehensive analysis of tracking by detection and detection by tracking methods exploring the limits of state-of-the-art text detection. Finally, we propose a new end-to-end trainable tracking model that yields state-of-the-art results on this challenging dataset. Our experiments demonstrate the complexity and variability of RoadText-3K and establish a new, realistic benchmark for scene text tracking in the wild.
Utkarsh Porwal, Alicia Fornes and Faisal Shafait, eds. 2022. Frontiers in Handwriting Recognition: 18th International Conference, ICFHR 2022. Springer. (LNCS.)
2021

Adria Molina, Pau Riba, Lluis Gomez, Oriol Ramos Terrades and Josep Llados. 2021. Date Estimation in the Wild of Scanned Historical Photos: An Image Retrieval Approach. 16th International Conference on Document Analysis and Recognition, 306–320. (LNCS.)
Abstract: This paper presents a novel method for date estimation of historical photographs from archival sources. The main contribution is to formulate date estimation as a retrieval task, where given a query, the retrieved images are ranked in terms of estimated date similarity: the closer their embedded representations, the closer their dates. Contrary to traditional models that design a neural network to learn a classifier or a regressor, we propose a learning objective based on the nDCG ranking metric. We have experimentally evaluated the performance of the method on two different tasks: date estimation and date-sensitive image retrieval, using the public DEW database, outperforming the baseline methods.
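Illustration: the nDCG metric that the abstract's learning objective is built around can be sketched as follows, with a hypothetical graded relevance that decays with the year gap (the relevance definition and `scale` are assumptions, not the paper's):

```python
import numpy as np

def dcg(relevances):
    # Discounted cumulative gain: graded relevance discounted by log rank.
    rel = np.asarray(relevances, dtype=float)
    ranks = np.arange(1, len(rel) + 1)
    return float(np.sum(rel / np.log2(ranks + 1)))

def ndcg(ranked_relevances):
    # Normalise by the DCG of the ideal (relevance-sorted) ordering.
    ideal = np.sort(np.asarray(ranked_relevances, dtype=float))[::-1]
    return dcg(ranked_relevances) / dcg(ideal)

def date_relevance(query_year, years, scale=10.0):
    # Hypothetical graded relevance: decays with the absolute year gap.
    return np.exp(-np.abs(np.asarray(years, dtype=float) - query_year) / scale)

rel = date_relevance(1930, [1932, 1960, 1928, 1990])  # retrieval order
score = ndcg(rel)  # equals 1.0 only if results were sorted by date similarity
```

A ranking loss of this kind rewards placing temporally close photographs near the top of the list, rather than forcing the network to predict an exact year.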
Albert Suso, Pau Riba, Oriol Ramos Terrades and Josep Llados. 2021. A Self-supervised Inverse Graphics Approach for Sketch Parametrization. 16th International Conference on Document Analysis and Recognition, 28–42. (LNCS.)
Abstract: The study of neural generative models of handwritten text and human sketches is a hot topic in the computer vision field. The landmark SketchRNN provided a breakthrough by sequentially generating sketches as a sequence of waypoints, and more recent articles have managed to generate fully vector sketches by coding the strokes as Bézier curves. However, previous attempts with this approach all require a ground truth consisting of the sequence of points that make up each stroke, which seriously limits the datasets the model can be trained on. In this work, we present a self-supervised end-to-end inverse graphics approach that learns to map each image to its best-fitting set of Bézier curves. The self-supervised nature of the training process allows us to train the model on a wider range of datasets, and also to improve predictions after training by overfitting on the input binary image. We report qualitative and quantitative evaluations on the MNIST and the Quick, Draw! datasets.
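Illustration: the stroke primitive at the heart of the parametrization is a Bézier curve; a minimal cubic evaluator is sketched below (the model's curve degree and fitting loss are its own, not shown here):

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, ts):
    # Evaluate a cubic Bezier curve at parameters ts in [0, 1] from its
    # four control points, using the Bernstein polynomial form.
    t = np.asarray(ts, dtype=float)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

ctrl = [np.array([0.0, 0.0]), np.array([0.0, 1.0]),
        np.array([1.0, 1.0]), np.array([1.0, 0.0])]
pts = cubic_bezier(*ctrl, np.linspace(0.0, 1.0, 5))  # 5 points along the arc
```

Fitting an image amounts to finding control points such that the sampled curve points cover the drawn stroke, which is what makes a point-sequence ground truth unnecessary.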
Andres Mafla, Rafael S. Rezende, Lluis Gomez, Diana Larlus and Dimosthenis Karatzas. 2021. StacMR: Scene-Text Aware Cross-Modal Retrieval. IEEE Winter Conference on Applications of Computer Vision, 2219–2229.
Andres Mafla and 6 others. 2021. Real-time Lexicon-free Scene Text Retrieval. PR, 110, 107656.
Abstract: In this work, we address the task of scene text retrieval: given a text query, the system returns all images containing the queried text. The proposed model uses a single shot CNN architecture that predicts bounding boxes and builds a compact representation of spotted words. In this way, the problem can be modeled as a nearest neighbor search of the textual representation of a query over the CNN outputs collected over an entire image database. Our experiments demonstrate that the proposed model outperforms the previous state of the art, while offering a significant increase in processing speed and unmatched expressiveness with samples never seen at training time. Several experiments to assess the generalization capability of the model are conducted on a multilingual dataset, as well as an application of real-time text spotting in videos.
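Illustration: the nearest-neighbor formulation in the abstract can be sketched with a toy word descriptor. The character-histogram embedding below is a stand-in assumption — the paper's CNN predicts a learned, lexicon-free descriptor, not this one:

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def char_embedding(word):
    # Toy L2-normalised character-histogram descriptor standing in for
    # the compact word representation the CNN predicts.
    vec = np.zeros(len(ALPHABET))
    for ch in word.lower():
        if ch in ALPHABET:
            vec[ALPHABET.index(ch)] += 1.0
    n = np.linalg.norm(vec)
    return vec / n if n else vec

def retrieve(query, spotted):
    # Rank images by the nearest-neighbour distance from the query
    # embedding to any spotted-word embedding in the image.
    q = char_embedding(query)
    dist = {img: min(np.linalg.norm(q - e) for e in embs)
            for img, embs in spotted.items()}
    return sorted(dist, key=dist.get)

db = {"img1": [char_embedding(w) for w in ("coffee", "shop")],
      "img2": [char_embedding(w) for w in ("pizza", "bar")]}
ranking = retrieve("coffee", db)  # "img1" ranks first
```

Because retrieval reduces to distance computations over precomputed embeddings, queries never seen at training time are handled the same way as seen ones, which is the lexicon-free property the abstract highlights.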
Andres Mafla, Sounak Dey, Ali Furkan Biten, Lluis Gomez and Dimosthenis Karatzas. 2021. Multi-modal reasoning graph for scene-text based fine-grained image classification and retrieval. IEEE Winter Conference on Applications of Computer Vision, 4022–4032.