Sergi Garcia Bordils and 6 others. 2022. Read While You Drive - Multilingual Text Tracking on the Road. 15th IAPR International Workshop on Document Analysis Systems, 756–770. (LNCS)
Abstract: Visual data obtained during driving scenarios usually contain large amounts of text that conveys semantic information necessary to analyse the urban environment and is integral to the traffic control plan. Yet, research on autonomous driving or driver assistance systems typically ignores this information. To advance research in this direction, we present RoadText-3K, a large driving video dataset with fully annotated text. RoadText-3K is three times bigger than its predecessor and contains data from varied geographical locations, unconstrained driving conditions and multiple languages and scripts. We offer a comprehensive analysis of tracking-by-detection and detection-by-tracking methods, exploring the limits of state-of-the-art text detection. Finally, we propose a new end-to-end trainable tracking model that yields state-of-the-art results on this challenging dataset. Our experiments demonstrate the complexity and variability of RoadText-3K and establish a new, realistic benchmark for scene text tracking in the wild.
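As a rough illustration of what a tracking-by-detection baseline involves, the sketch below greedily links per-frame text detections to existing tracks by bounding-box overlap. It is a generic Python toy, not the end-to-end model proposed in the paper; all names and the IoU threshold are illustrative assumptions.

    def iou(a, b):
        """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    def update_tracks(tracks, detections, threshold=0.5):
        """Greedy association: tracks maps id -> last box; returns it updated."""
        next_id = max(tracks, default=-1) + 1
        free = dict(tracks)  # tracks not yet matched in this frame
        for det in detections:
            best_id, best_iou = None, threshold
            for tid, box in free.items():
                score = iou(det, box)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:  # no overlap above threshold: start a new track
                best_id, next_id = next_id, next_id + 1
            else:
                free.pop(best_id)
            tracks[best_id] = det
        return tracks

Real systems replace the greedy loop with Hungarian matching and add appearance or recognition cues, which is where the paper's end-to-end model departs from this baseline.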
Alicia Fornés, Josep Lladós, Joan Mas, Joana Maria Pujadas-Mora and Anna Cabré. 2014. A Bimodal Crowdsourcing Platform for Demographic Historical Manuscripts. Digital Access to Textual Cultural Heritage Conference, 103–108.
Abstract: In this paper we present a crowdsourcing web-based application for extracting information from demographic handwritten document images. The proposed application integrates two points of view: the semantic information for demographic research, and the ground-truthing for document analysis research. Concretely, the application has a contents view, where the information is recorded into forms, and a labeling view, with the word labels for evaluating document analysis techniques. The crowdsourcing architecture makes it possible to accelerate the information extraction (many users can work simultaneously), to validate the information, and to easily provide feedback to the users. We finally show how the proposed application can be extended to other kinds of demographic historical manuscripts.
Arnau Baró, Jialuo Chen, Alicia Fornés and Beata Megyesi. 2019. Towards a Generic Unsupervised Method for Transcription of Encoded Manuscripts. 3rd International Conference on Digital Access to Textual Cultural Heritage, 73–78.
Abstract: Historical ciphers, a special type of manuscript, contain encrypted information that is important for the interpretation of our history. The first step towards decipherment is to transcribe the images, either manually or by automatic image processing techniques. Despite the improvements in handwritten text recognition (HTR) thanks to deep learning methodologies, the need for labelled training data is an important limitation. Given that ciphers often use symbol sets drawn from various alphabets, as well as unique symbols without any transcription scheme available, these supervised HTR techniques are not suitable for transcribing ciphers. In this paper we propose an unsupervised method for transcribing encrypted manuscripts based on clustering and label propagation, a technique that has been successfully applied to community detection in networks. We analyze the performance on ciphers with various symbol sets, and discuss the advantages and drawbacks compared to supervised HTR methods.
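To make the clustering-plus-label-propagation idea concrete, here is a minimal Python sketch: a handful of labelled seed symbols spread their labels through a k-nearest-neighbour similarity graph of symbol descriptors. The descriptors, parameters and seeding step are illustrative assumptions, not the authors' pipeline.

    import numpy as np

    def propagate_labels(features, seeds, k=5, iters=20):
        """features: (n, d) symbol descriptors; seeds: dict index -> label id.
        Returns an (n,) array of predicted label ids."""
        norm = features / np.linalg.norm(features, axis=1, keepdims=True)
        sim = norm @ norm.T                       # cosine-similarity graph
        np.fill_diagonal(sim, -np.inf)
        neighbours = np.argsort(sim, axis=1)[:, -k:]

        n_labels = max(seeds.values()) + 1
        scores = np.zeros((len(features), n_labels))
        for idx, lab in seeds.items():
            scores[idx, lab] = 1.0
        for _ in range(iters):
            # every symbol adopts its neighbours' average label distribution,
            # while the seeds are clamped back to their known labels
            scores = scores[neighbours].mean(axis=1)
            for idx, lab in seeds.items():
                scores[idx] = 0.0
                scores[idx, lab] = 1.0
        return scores.argmax(axis=1)

In the unsupervised setting of the paper, the seeds would come from cluster representatives rather than manual annotation.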
Alicia Fornés, Beata Megyesi and Joan Mas. 2017. Transcription of Encoded Manuscripts with Image Processing Techniques. Digital Humanities Conference, 441–443.
Oriol Vicente, Alicia Fornés and Ramon Valdés. 2016. The Digital Humanities Network of the UABCie: a smart structure of research and social transference for the digital humanities. Digital Humanities Centres: Experiences and Perspectives.
Lasse Mårtensson, Anders Hast and Alicia Fornés. 2017. Word Spotting as a Tool for Scribal Attribution. 2nd Conference of the Association of Digital Humanities in the Nordic Countries, 87–89.
Oriol Ramos Terrades, N. Serrano, Albert Gordo, Ernest Valveny and Alfons Juan-Ciscar. 2010. Interactive-predictive detection of handwritten text blocks. 17th Document Recognition and Retrieval Conference, part of the IS&T/SPIE Electronic Imaging Symposium, 75340Q.
Abstract: A method for text block detection is introduced for old handwritten documents. The proposed method takes advantage of sequential book structure, taking into account layout information from pages previously transcribed. This glance at the past is used to predict the position of text blocks in the current page with the help of conventional layout analysis methods. The method is integrated into the GIDOC prototype: a first attempt to provide integrated support for interactive-predictive page layout analysis, text line detection and handwritten text transcription. Results are given in a transcription task on a 764-page Spanish manuscript from 1891.
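A toy Python rendering of the "glance at the past" idea, under the assumption that block positions on already-transcribed pages form a usable prior: detector candidates on the current page are re-ranked by agreement with the running average of past layouts. Purely illustrative, not the GIDOC implementation.

    import numpy as np

    def pick_block(previous_boxes, candidates, weight=0.5):
        """previous_boxes: boxes (x1, y1, x2, y2) from transcribed pages;
        candidates: list of (box, detector_score) for the current page."""
        prior = np.mean(previous_boxes, axis=0)   # average past block position
        def agreement(box):
            # closeness of a candidate to the layout prior, in (0, 1]
            return 1.0 / (1.0 + np.abs(np.subtract(box, prior)).mean())
        score = lambda c: (1 - weight) * c[1] + weight * agreement(c[0])
        return max(candidates, key=score)[0]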
Lluís Gómez, Andrés Mafla, Marçal Rusiñol and Dimosthenis Karatzas. 2018. Single Shot Scene Text Retrieval. 15th European Conference on Computer Vision, 728–744. (LNCS)
Abstract: Textual information found in scene images provides high-level semantic information about the image and its context, and it can be leveraged for better scene understanding. In this paper we address the problem of scene text retrieval: given a text query, the system must return all images containing the queried text. The novelty of the proposed model consists in the usage of a single-shot CNN architecture that predicts, at the same time, bounding boxes and a compact text representation of the words in them. In this way, the text-based image retrieval task can be cast as a simple nearest neighbor search of the query text representation over the outputs of the CNN over the entire image database. Our experiments demonstrate that the proposed architecture outperforms the previous state of the art while offering a significant increase in processing speed.
Keywords: Image retrieval; Scene text; Word spotting; Convolutional Neural Networks; Region Proposal Networks; PHOC
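The retrieval mechanism the abstract describes (nearest-neighbour search of a query's text representation against per-region CNN outputs) can be sketched as follows; the simplified two-level PHOC builder and the cosine ranking are assumptions for illustration, not the paper's exact descriptor.

    import numpy as np

    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

    def phoc(word, levels=(1, 2)):
        """Simplified pyramidal histogram of characters for a word."""
        word = word.lower()
        vec = []
        for level in levels:
            for part in range(level):
                lo, hi = part / level, (part + 1) / level
                hist = np.zeros(len(ALPHABET))
                for pos, ch in enumerate(word):
                    centre = (pos + 0.5) / len(word)
                    if lo <= centre < hi and ch in ALPHABET:
                        hist[ALPHABET.index(ch)] = 1.0
                vec.append(hist)
        return np.concatenate(vec)

    def retrieve(query, region_descriptors, top_k=5):
        """Rank stored word-region descriptors by cosine similarity."""
        q = phoc(query)
        q /= np.linalg.norm(q)
        d = region_descriptors / np.linalg.norm(
            region_descriptors, axis=1, keepdims=True)
        return np.argsort(d @ q)[::-1][:top_k]

In the paper the region descriptors come straight from the single-shot detector, so an entire image database can be searched with one matrix product per query.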
Raúl Gómez, Jaume Gibert, Lluís Gómez and Dimosthenis Karatzas. 2020. Location Sensitive Image Retrieval and Tagging. 16th European Conference on Computer Vision.
Abstract: People from different parts of the globe describe objects and concepts in distinct manners. Visual appearance can thus vary across different geographic locations, which makes location relevant contextual information when analysing visual data. In this work, we address the task of image retrieval related to a given tag conditioned on a certain location on Earth. We present LocSens, a model that learns to rank triplets of images, tags and coordinates by plausibility, and two training strategies to balance the location influence in the final ranking. LocSens learns to fuse textual and location information of multimodal queries to retrieve related images at different levels of location granularity, and successfully utilizes location information to improve image tagging.
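A schematic Python/PyTorch sketch of ranking (image, tag, location) triplets by plausibility, in the spirit of LocSens; the fusion network, feature sizes and margin are illustrative assumptions rather than the published architecture.

    import torch
    import torch.nn as nn

    class TripletScorer(nn.Module):
        """Scores how plausible an (image, tag, location) triplet is."""
        def __init__(self, img_dim=512, tag_dim=300):
            super().__init__()
            # location enters as raw (lat, lon) normalised to [-1, 1]
            self.fuse = nn.Sequential(
                nn.Linear(img_dim + tag_dim + 2, 256), nn.ReLU(),
                nn.Linear(256, 1),
            )

        def forward(self, img, tag, loc):
            return self.fuse(torch.cat([img, tag, loc], dim=-1)).squeeze(-1)

    def ranking_loss(scorer, img, tag, loc, wrong_loc, margin=1.0):
        """Push the true location's score above a mismatched one by a margin."""
        pos = scorer(img, tag, loc)
        neg = scorer(img, tag, wrong_loc)
        return torch.clamp(margin - pos + neg, min=0).mean()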
Lei Kang, Pau Riba, Yaxing Wang, Marçal Rusiñol, Alicia Fornés and Mauricio Villegas. 2020. GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images. 16th European Conference on Computer Vision.
Abstract: Although current image generation methods have reached impressive quality levels, they are still unable to produce plausible yet diverse images of handwritten words. In contrast, when writing by hand, great variability is observed across different writers, and even when analyzing words scribbled by the same individual, involuntary variations are conspicuous. In this work, we take a step closer to producing realistic and varied artificially rendered handwritten words. We propose a novel method that is able to produce credible handwritten word images by conditioning the generative process on both calligraphic style features and textual content. Our generator is guided by three complementary learning objectives: to produce realistic images, to imitate a certain handwriting style and to convey a specific textual content. Our model is not constrained to any predefined vocabulary, being able to render any input word. Given a sample writer, it is also able to mimic the writer's calligraphic features in a few-shot setup. We significantly advance over prior art and demonstrate with qualitative, quantitative and human-based evaluations the realistic aspect of our synthetically produced images.
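The conditioning idea can be sketched as a generator that consumes both a few-shot calligraphic-style vector and a content encoding of the target word; the sizes and the fully-connected decoder below are illustrative stand-ins for the actual GANwriting encoders and its three adversarial objectives.

    import torch
    import torch.nn as nn

    class ConditionedGenerator(nn.Module):
        """Renders a word image from a style vector plus a content vector."""
        def __init__(self, style_dim=128, content_dim=128, img_size=64):
            super().__init__()
            self.img_size = img_size
            self.net = nn.Sequential(
                nn.Linear(style_dim + content_dim, 512), nn.ReLU(),
                nn.Linear(512, img_size * img_size), nn.Tanh(),
            )

        def forward(self, style, content):
            x = self.net(torch.cat([style, content], dim=-1))
            return x.view(-1, 1, self.img_size, self.img_size)

    def style_from_samples(encoder, samples):
        """Few-shot style vector: average the encodings of one writer's words."""
        return encoder(samples).mean(dim=0, keepdim=True)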