|
Gemma Sanchez, Josep Llados and Enric Marti. 1997. Segmentation and analysis of linial texture in plans. Actes de la conférence Artificielle et Complexité, Paris.
Abstract: The problem of texture segmentation and interpretation is one of the main concerns in the field of document analysis. Graphical documents often contain areas characterized by a structural texture whose recognition allows both the understanding of the document and its storage in a more compact way. In this work, we focus on structural linial textures of regular repetition contained in plan documents. Starting from an attributed graph which represents the vectorized input image, we develop a method to segment textured areas and recognize their placement rules. We wish to emphasize that the searched textures do not follow a predefined pattern. Minimal closed loops of the input graph are computed, and then hierarchically clustered. In this hierarchical clustering, a distance function between two closed loops is defined in terms of their area difference and boundary resemblance computed by a string matching procedure. Finally, it is noted that, when the texture consists of isolated primitive elements, the same method can be used after computing a Voronoi tessellation of the input graph.
Keywords: Structural Texture, Voronoi, Hierarchical Clustering, String Matching.
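The clustering step described in the abstract combines an area-difference term with a boundary resemblance computed by string matching. A minimal sketch of such a loop distance, using plain Levenshtein edit distance between boundary primitive strings; the normalisation and the equal weighting are hypothetical choices for illustration, not taken from the paper:

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance between two
    # boundary strings (each character encodes one boundary primitive).
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n]


def loop_distance(area_a: float, area_b: float,
                  boundary_a: str, boundary_b: str,
                  w_area: float = 0.5, w_boundary: float = 0.5) -> float:
    # Hypothetical combination: normalised area difference plus
    # normalised boundary edit distance, weighted equally by default.
    area_term = abs(area_a - area_b) / max(area_a, area_b)
    norm = max(len(boundary_a), len(boundary_b))
    boundary_term = edit_distance(boundary_a, boundary_b) / norm
    return w_area * area_term + w_boundary * boundary_term
```

Two loops with identical area and boundary string get distance 0; the distance grows with either area mismatch or boundary edits, which is what the hierarchical clustering needs from its merge criterion.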
|
|
|
Gemma Sanchez, Josep Llados and K. Tombre. 2001. An Algorithm to Recognize Graphical Textured Symbols using String Representations.
|
|
|
Gemma Sanchez, Josep Llados and K. Tombre. 2001. An Error-Correction Graph Grammar to Recognize Textured Symbols.
|
|
|
Gemma Sanchez, Josep Llados and K. Tombre. 2002. A mean string algorithm to compute the average among a set of 2D shapes. Pattern Recognition Letters, 23(1–3), 203–214.
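A common baseline related to the mean string idea in this entry is the *set median* string: the member of the set minimising the summed edit distance to all other members. The sketch below implements that simpler stand-in, not the authors' mean string algorithm, as a way to see what "averaging" boundary strings means:

```python
def edit_distance(a: str, b: str) -> int:
    # Dynamic-programming Levenshtein distance between two strings.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[m][n]


def set_median(strings):
    # Return the string with minimal total edit distance to all
    # others -- a baseline "average" restricted to set members.
    return min(strings,
               key=lambda s: sum(edit_distance(s, t) for t in strings))
```

A true mean string may lie outside the input set (it is built by editing operations rather than selected), which is where the quoted algorithm goes beyond this baseline.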
|
|
|
Gemma Sanchez, Josep Llados and K. Tombre. 2000. A mean string algorithm to compute the average among a set of 2D shapes.
|
|
|
George Tom, Minesh Mathew, Sergi Garcia Bordils, Dimosthenis Karatzas and CV Jawahar. 2023. ICDAR 2023 Competition on RoadText Video Text Detection, Tracking and Recognition. 17th International Conference on Document Analysis and Recognition, 577–586. (LNCS).
Abstract: In this report, we present the final results of the ICDAR 2023 Competition on RoadText Video Text Detection, Tracking and Recognition. The RoadText challenge is based on the RoadText-1K dataset and aims to assess and enhance current methods for scene text detection, recognition, and tracking in videos. The RoadText-1K dataset contains 1000 dash cam videos with annotations for text bounding boxes and transcriptions in every frame. The competition features an end-to-end task, requiring systems to accurately detect, track, and recognize text in dash cam videos. The paper presents a comprehensive review of the submitted methods along with a detailed analysis of the results obtained by the methods. The analysis provides valuable insights into the current capabilities and limitations of video text detection, tracking, and recognition systems for dashcam videos.
|
|
|
George Tom, Minesh Mathew, Sergi Garcia Bordils, Dimosthenis Karatzas and CV Jawahar. 2023. Reading Between the Lanes: Text VideoQA on the Road. 17th International Conference on Document Analysis and Recognition, 137–154. (LNCS).
Abstract: Text and signs around roads provide crucial information for drivers, vital for safe navigation and situational awareness. Scene text recognition in motion is a challenging problem, as textual cues typically appear only for a short time span, and early detection at a distance is necessary. Systems that exploit such information to assist the driver should not only extract and incorporate visual and textual cues from the video stream but also reason over time. To address this issue, we introduce RoadTextVQA, a new dataset for the task of video question answering (VideoQA) in the context of driver assistance. RoadTextVQA consists of 3,222 driving videos collected from multiple countries, annotated with 10,500 questions, all based on text or road signs present in the driving videos. We assess the performance of state-of-the-art video question answering models on our RoadTextVQA dataset, highlighting the significant potential for improvement in this domain and the usefulness of the dataset in advancing research on in-vehicle support systems and text-aware multimodal question answering. The dataset is available at http://cvit.iiit.ac.in/research/projects/cvit-projects/roadtextvqa.
Keywords: VideoQA; scene text; driving videos
|
|
|
Giacomo Magnifico, Beata Megyesi, Mohamed Ali Souibgui, Jialuo Chen and Alicia Fornes. 2022. Lost in Transcription of Graphic Signs in Ciphers. International Conference on Historical Cryptology (HistoCrypt 2022), 153–158.
Abstract: Hand-written Text Recognition techniques with the aim to automatically identify and transcribe hand-written text have been applied to historical sources including ciphers. In this paper, we compare the performance of two machine learning architectures, an unsupervised method based on clustering and a deep learning method with few-shot learning. Both models are tested on seen and unseen data from historical ciphers with different symbol sets consisting of various types of graphic signs. We compare the models and highlight their differences in performance, with their advantages and shortcomings.
Keywords: transcription of ciphers; hand-written text recognition of symbols; graphic signs
|
|
|
Giuseppe De Gregorio and 6 others. 2022. A Few Shot Multi-representation Approach for N-Gram Spotting in Historical Manuscripts. Frontiers in Handwriting Recognition. International Conference on Frontiers in Handwriting Recognition (ICFHR 2022), 3–12. (LNCS).
Abstract: Despite recent advances in automatic text recognition, the performance remains moderate when it comes to historical manuscripts. This is mainly because of the scarcity of available labelled data to train the data-hungry Handwritten Text Recognition (HTR) models. The Keyword Spotting System (KWS) provides a valid alternative to HTR due to the reduction in error rate, but it is usually limited to a closed reference vocabulary. In this paper, we propose a few-shot learning paradigm for spotting sequences of a few characters (N-grams) that requires a small amount of labelled training data. We show that recognition of important n-grams can reduce the system's dependency on vocabulary: an out-of-vocabulary (OOV) word in an input handwritten line image can then be handled as a sequence of n-grams that belong to the lexicon. An extensive experimental evaluation of our proposed multi-representation approach was carried out on a subset of Bentham's historical manuscript collections, yielding promising results in this direction.
Keywords: N-gram spotting; Few-shot learning; Multimodal understanding; Historical handwritten collections
|
|
|
H. Chouaib, Oriol Ramos Terrades, Salvatore Tabbone, F. Cloppet and N. Vincent. 2008. Feature Selection Combining Genetic Algorithm and Adaboost Classifiers. 19th International Conference on Pattern Recognition, 1–4.
|
|