Alloy Das, Sanket Biswas, Umapada Pal and Josep Llados. 2024. Diving into the Depths of Spotting Text in Multi-Domain Noisy Scenes. IEEE International Conference on Robotics and Automation (ICRA).
Abstract: When used in a real-world noisy environment, the capacity to generalize to multiple domains is essential for any autonomous scene text spotting system. However, existing state-of-the-art methods employ pretraining and fine-tuning strategies on natural scene datasets, which do not exploit feature interaction across other complex domains. In this work, we investigate the problem of domain-agnostic scene text spotting, i.e., training a model on multi-domain source data such that it can generalize directly to target domains rather than being specialized for a specific domain or scenario. To this end, we present to the community a text spotting validation benchmark called Under-Water Text (UWT) for noisy underwater scenes, establishing an important case study. Moreover, we design an efficient super-resolution-based end-to-end transformer baseline called DA-TextSpotter, which achieves comparable or superior performance to existing text spotting architectures on both regular and arbitrary-shaped scene text spotting benchmarks, in terms of both accuracy and model efficiency. The dataset, code and pre-trained models will be released upon acceptance.
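The abstract pairs a super-resolution front-end with an end-to-end transformer spotter. A minimal PyTorch sketch of that pipeline shape follows; every module, size and vocabulary here is an illustrative assumption, not the released DA-TextSpotter code.

```python
# Minimal sketch of the described pipeline shape (assumptions throughout,
# not the authors' code): a super-resolution stem restores detail in noisy
# input before a transformer predicts text boxes and characters end to end.
import torch
import torch.nn as nn

class SuperResolutionStem(nn.Module):
    """Sub-pixel upsampling to recover detail from noisy, low-res scenes."""
    def __init__(self, scale=2, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.body(x)

class TransformerSpotter(nn.Module):
    """Learned queries attend to image tokens; each query yields one text instance."""
    def __init__(self, d_model=256, num_queries=100, vocab=97):
        super().__init__()
        self.patchify = nn.Conv2d(3, d_model, 16, stride=16)
        self.transformer = nn.Transformer(d_model, batch_first=True)
        self.queries = nn.Embedding(num_queries, d_model)
        self.box_head = nn.Linear(d_model, 4)       # text-region box per query
        self.char_head = nn.Linear(d_model, vocab)  # character logits per query

    def forward(self, x):
        tokens = self.patchify(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        q = self.queries.weight.unsqueeze(0).expand(x.size(0), -1, -1)
        h = self.transformer(tokens, q)
        return self.box_head(h), self.char_head(h)

sr, spotter = SuperResolutionStem(), TransformerSpotter()
boxes, chars = spotter(sr(torch.randn(1, 3, 128, 128)))  # noisy low-res crop
```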
Alloy Das, Sanket Biswas, Ayan Banerjee, Josep Llados, Umapada Pal and Saumik Bhattacharya. 2024. Harnessing the Power of Multi-Lingual Datasets for Pre-training: Towards Enhancing Text Spotting Performance. Winter Conference on Applications of Computer Vision, 718–728.
Abstract: The capability to adapt to a wide range of domains is crucial for scene text spotting models deployed in real-world conditions. However, existing state-of-the-art (SOTA) approaches usually incorporate scene text detection and recognition simply by pretraining on natural scene text datasets, which do not directly exploit the intermediate feature representations between multiple domains. Here, we investigate the problem of domain-adaptive scene text spotting, i.e., training a model on multi-domain source data such that it can adapt directly to target domains rather than being specialized for a specific domain or scenario. Further, we investigate a transformer baseline called Swin-TESTR that solves scene-text spotting for both regular and arbitrary-shaped scene text, along with an exhaustive evaluation. The results clearly demonstrate the potential of intermediate representations to achieve significant performance on text spotting benchmarks across multiple domains (e.g., language, synth-to-real, and documents), both in terms of accuracy and efficiency.
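Since the entry highlights intermediate Swin features shared across detection and recognition, here is a minimal PyTorch sketch of that shape using torchvision's swin_t as the backbone; the decoder layout, dimensions and vocabulary are assumptions rather than the released Swin-TESTR code.

```python
# Minimal sketch (assumptions, not Swin-TESTR itself): shared Swin backbone
# features feed two transformer decoders, one for text-box detection and
# one for character recognition, so both tasks read one representation.
import torch
import torch.nn as nn
from torchvision.models import swin_t

class SwinSpotterSketch(nn.Module):
    def __init__(self, d_model=256, num_queries=100, vocab=97):
        super().__init__()
        self.backbone = swin_t(weights=None).features  # (B, H/32, W/32, 768)
        self.proj = nn.Linear(768, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.det_decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.rec_decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.queries = nn.Embedding(num_queries, d_model)
        self.box_head = nn.Linear(d_model, 4)       # one text box per query
        self.char_head = nn.Linear(d_model, vocab)  # character logits per query

    def forward(self, images):
        feats = self.backbone(images)              # channels-last Swin features
        memory = self.proj(feats.flatten(1, 2))    # (B, h*w, d_model)
        q = self.queries.weight.unsqueeze(0).expand(images.size(0), -1, -1)
        inst = self.det_decoder(q, memory)         # per-instance features
        boxes = self.box_head(inst)
        # The recognition decoder re-reads the same backbone memory,
        # conditioned on the detected instances (a simplification).
        chars = self.char_head(self.rec_decoder(inst, memory))
        return boxes, chars

boxes, chars = SwinSpotterSketch()(torch.randn(1, 3, 224, 224))
```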
Subhajit Maity and 6 others. 2023. SelfDocSeg: A Self-Supervised Vision-Based Approach towards Document Segmentation. 17th International Conference on Document Analysis and Recognition, 342–360.
Abstract: Document layout analysis is a well-known problem in the document research community and has been explored extensively, yielding a multitude of solutions ranging from text mining and recognition to graph-based representation, visual feature extraction, etc. However, most existing works have ignored a crucial fact: the scarcity of labeled data. With growing internet connectivity in personal life, an enormous number of documents have become available in the public domain, making data annotation a tedious task. We address this challenge using self-supervision and, unlike the few existing self-supervised document segmentation approaches that use text mining and textual labels, we take a completely vision-based approach to pre-training, without any ground-truth label or its derivative. Instead, we generate pseudo-layouts from the document images to pre-train an image encoder to learn document object representation and localization in a self-supervised framework before fine-tuning it with an object detection model. We show that our pipeline sets a new benchmark in this context and performs on par with, if not outperforms, existing methods and supervised counterparts. The code is made publicly available at: this https URL
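One plausible reading of the pseudo-layout step is sketched below with classical OpenCV morphology; the thresholds and kernel size are illustrative assumptions, and the paper's actual pseudo-layout generation may differ.

```python
# Minimal sketch (an assumption, not the paper's exact procedure): deriving
# pseudo-layout boxes from a raw document image with classical morphology,
# to serve as label-free supervision for pre-training an image encoder.
import cv2
import numpy as np

def pseudo_layout_boxes(image_path, kernel=(25, 15), min_area=500):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Invert and binarize so ink becomes foreground.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Dilate so nearby words and lines merge into block-level blobs.
    blobs = cv2.dilate(binary, np.ones(kernel, np.uint8))
    contours, _ = cv2.findContours(blobs, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]       # (x, y, w, h)
    return [b for b in boxes if b[2] * b[3] >= min_area]  # drop specks

# Each box acts as a pseudo "document object"; an encoder can be trained to
# localize and represent these regions without any human-made labels.
```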
Sergi Garcia Bordils, Dimosthenis Karatzas and Marçal Rusiñol. 2024. STEP – Towards Structured Scene-Text Spotting. Winter Conference on Applications of Computer Vision, 883–892.
Abstract: We introduce the structured scene-text spotting task, which requires a scene-text OCR system to spot text in the wild according to a query regular expression. Contrary to generic scene text OCR, structured scene-text spotting seeks to dynamically condition both scene text detection and recognition on user-provided regular expressions. To tackle this task, we propose the Structured TExt sPotter (STEP), a model that exploits the provided text structure to guide the OCR process. STEP is able to deal with regular expressions that contain spaces and is not bound to detection at the word-level granularity. Our approach enables accurate zero-shot structured text spotting in a wide variety of real-world reading scenarios and is trained solely on publicly available data. To demonstrate the effectiveness of our approach, we introduce a new challenging test dataset containing several types of out-of-vocabulary structured text, reflecting important reading applications such as prices, dates, serial numbers, and license plates. We demonstrate that STEP can provide specialised OCR performance on demand in all tested scenarios.
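For concreteness, the task interface can be illustrated with a naive post-filtering baseline: run a generic spotter, then keep only transcriptions matching the query expression. Note that STEP itself conditions detection and recognition on the query rather than filtering afterwards, and `run_generic_spotter` below is hypothetical.

```python
# Minimal sketch of the task interface only (not STEP's method): a naive
# baseline that runs a generic spotter and filters its transcriptions with
# the query regular expression. `run_generic_spotter` is a hypothetical
# callable returning (box, transcription) pairs.
import re
from typing import Callable, List, Tuple

Detection = Tuple[Tuple[int, int, int, int], str]  # (box, transcription)

def structured_spot(image, query: str,
                    run_generic_spotter: Callable) -> List[Detection]:
    pattern = re.compile(query)
    detections: List[Detection] = run_generic_spotter(image)
    # Keep only instances whose full transcription matches the query,
    # e.g. query=r"\d{2}/\d{2}/\d{4}" to spot dates in the wild.
    return [(box, text) for box, text in detections
            if pattern.fullmatch(text)]
```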
Josep Llados and Gemma Sanchez. 2004. Graph Matching vs. Graph Parsing in Graphics Recognition: A Combined Approach.
Ernest Valveny and Philippe Dosch. 2006. A general framework for the evaluation of symbol recognition methods.
Josep Llados and Dorothea Blostein (Guest Editors). 2007. Special Issue on Graphics Recognition.
Gemma Sanchez, Alicia Fornes, Joan Mas and Josep Llados. 2007. Computer Vision Tools for Visually Impaired Children Learning.