2019
Arnau Baró, Jialuo Chen, Alicia Fornés and Beáta Megyesi. 2019. Towards a generic unsupervised method for transcription of encoded manuscripts. 3rd International Conference on Digital Access to Textual Cultural Heritage, 73–78.
Abstract: Historical ciphers, a special type of manuscript, contain encrypted information that is important for the interpretation of our history. The first step towards decipherment is to transcribe the images, either manually or by automatic image processing techniques. Despite the improvements in handwritten text recognition (HTR) thanks to deep learning methodologies, the need for labelled training data remains an important limitation. Given that ciphers often use symbol sets drawn from various alphabets, as well as unique symbols without any transcription scheme available, these supervised HTR techniques are not suitable for transcribing ciphers. In this paper we propose an unsupervised method for transcribing encrypted manuscripts based on clustering and label propagation, which has been successfully applied to community detection in networks. We analyze the performance on ciphers with various symbol sets, and discuss the advantages and drawbacks compared to supervised HTR methods.
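The clustering-and-label-propagation idea in this abstract can be pictured with off-the-shelf tools. Below is a minimal sketch, not the authors' pipeline: it assumes pre-segmented symbol images, and scikit-learn's LabelPropagation stands in for the community-detection-style propagation the paper describes.

```python
# Minimal sketch: cluster-friendly embedding + semi-supervised label propagation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(0)
X = rng.random((200, 32 * 32))   # hypothetical: 200 flattened 32x32 symbol crops
y = np.full(200, -1)             # -1 marks untranscribed symbols
y[:5] = [0, 1, 2, 3, 4]          # a handful of manually transcribed seeds

X_low = PCA(n_components=20, random_state=0).fit_transform(X)  # compact feature space
model = LabelPropagation(kernel="knn", n_neighbors=7).fit(X_low, y)
transcription = model.transduction_  # a propagated symbol id for every image
```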
Arnau Baró, Pau Riba, Jorge Calvo-Zaragoza and Alicia Fornés. 2019. From Optical Music Recognition to Handwritten Music Recognition: A Baseline. PRL, 123, 1–8.
Abstract: Optical Music Recognition (OMR) is the branch of document image analysis that aims to convert images of musical scores into a computer-readable format. Despite decades of research, the recognition of handwritten music scores, specifically in Western notation, is still an open problem, and the few existing works only focus on a specific stage of OMR. In this work, we propose a full Handwritten Music Recognition (HMR) system based on Convolutional Recurrent Neural Networks, data augmentation and transfer learning, which can serve as a baseline for the research community.
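A minimal sketch of the kind of Convolutional Recurrent Neural Network the abstract refers to; the layer sizes here are illustrative assumptions rather than the paper's architecture, with CTC as the training objective.

```python
# CNN feature extractor + bidirectional LSTM + per-frame symbol probabilities.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_symbols: int, img_height: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat = 64 * (img_height // 4)            # channels x reduced height
        self.rnn = nn.LSTM(feat, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, n_symbols + 1)  # +1 for the CTC blank

    def forward(self, x):                        # x: (batch, 1, H, W)
        f = self.cnn(x)                          # (batch, C, H/4, W/4)
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)  # one frame per image column
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(-1)

logits = CRNN(n_symbols=100)(torch.rand(2, 1, 64, 256))
loss_fn = nn.CTCLoss(blank=100)                  # trained with CTC alignment
```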
Asma Bensalah, Pau Riba, Alicia Fornés and Josep Lladós. 2019. Shoot less and Sketch more: An Efficient Sketch Classification via Joining Graph Neural Networks and Few-shot Learning. 13th IAPR International Workshop on Graphics Recognition, 80–85.
Abstract: With the emergence of touchpad devices and drawing tablets, a new era of sketching has started afresh. However, the recognition of sketches is still a tough task due to the variability of drawing styles. Moreover, in some application scenarios there is little labelled data available for training, which imposes a limitation for deep learning architectures. In addition, in many cases there is a need to generate models able to adapt to new classes. In order to cope with these limitations, we propose a method based on few-shot learning and graph neural networks for classifying sketches, aiming for an efficient neural model. We test our approach on several sketch databases, showing promising results.
Keywords: Sketch classification; Convolutional Neural Network; Graph Neural Network; Few-shot learning
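A rough sketch of joining a graph neural network with prototype-based few-shot classification in the spirit of this abstract; the stroke-point graphs, the single message-passing layer and the episode sizes are all illustrative assumptions, not the paper's model.

```python
# Graph embedding via mean-neighbour message passing, then nearest-prototype
# classification in a 3-way / 2-shot episode.
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    def __init__(self, in_dim=2, hid=64):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid)

    def forward(self, x, adj):                   # x: (nodes, in_dim), adj: (n, n)
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        h = torch.relu(self.lin(adj @ x / deg))  # aggregate neighbour features
        return h.mean(0)                         # graph-level embedding

def prototypes(embs, labels, n_way):
    """Class prototype = mean embedding of its few support sketches."""
    return torch.stack([embs[labels == c].mean(0) for c in range(n_way)])

gnn = TinyGNN()
# Hypothetical support set: sketches as random graphs of 10 stroke points.
support = [(torch.rand(10, 2), (torch.rand(10, 10) > 0.5).float()) for _ in range(6)]
labels = torch.tensor([0, 0, 1, 1, 2, 2])
embs = torch.stack([gnn(x, a) for x, a in support])
protos = prototypes(embs, labels, n_way=3)
query_emb = gnn(torch.rand(10, 2), (torch.rand(10, 10) > 0.5).float())
pred = torch.cdist(query_emb[None], protos).argmin().item()  # nearest prototype wins
```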
Chee-Kheng Chng and 13 others. 2019. ICDAR2019 Robust Reading Challenge on Arbitrary-Shaped Text – RRC-ArT. 15th International Conference on Document Analysis and Recognition, 1571–1576.
Abstract: This paper reports the ICDAR2019 Robust Reading Challenge on Arbitrary-Shaped Text – RRC-ArT that consists of three major challenges: i) scene text detection, ii) scene text recognition, and iii) scene text spotting. A total of 78 submissions from 46 unique teams/individuals were received for this competition. The top performing score of each challenge is as follows: i) T1 – 82.65%, ii) T2.1 – 74.3%, iii) T2.2 – 85.32%, iv) T3.1 – 53.86%, and v) T3.2 – 54.91%. Apart from the results, this paper also details the ArT dataset, tasks description, evaluation metrics and participants' methods. The dataset, the evaluation kit as well as the results are publicly available at the challenge website.
Dena Bazazian, Raúl Gómez, Anguelos Nicolaou, Lluís Gómez, Dimosthenis Karatzas and Andrew Bagdanov. 2019. FAST: Facilitated and Accurate Scene Text Proposals through FCN Guided Pruning. PRL, 119, 112–120.
Abstract: Class-specific text proposal algorithms can efficiently reduce the search space for possible text object locations in an image. In this paper we combine the Text Proposals algorithm with Fully Convolutional Networks to efficiently reduce the number of proposals while maintaining the same recall level and thus gaining a significant speed up. Our experiments demonstrate that such text proposal approaches yield significantly higher recall rates than state-of-the-art text localization techniques, while also producing better-quality localizations. Our results on the ICDAR 2015 Robust Reading Competition (Challenge 4) and the COCO-text datasets show that, when combined with strong word classifiers, this recall margin leads to state-of-the-art results in end-to-end scene text recognition.
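The pruning idea admits a compact sketch: score each class-agnostic proposal by the mean "textness" an FCN heatmap assigns inside its box, and keep only the highest-scoring ones. The heatmap and boxes below are random stand-ins, not the paper's outputs.

```python
# Rank proposals by mean heatmap response inside each box; keep the top-k.
import numpy as np

def prune_proposals(boxes, heatmap, keep=100):
    """boxes: (N, 4) rows of [x0, y0, x1, y1]; heatmap: (H, W) scores in [0, 1]."""
    scores = np.array([heatmap[y0:y1, x0:x1].mean() if (y1 > y0 and x1 > x0)
                       else 0.0 for x0, y0, x1, y1 in boxes])
    order = np.argsort(scores)[::-1][:keep]      # highest textness first
    return boxes[order], scores[order]

rng = np.random.default_rng(0)
heat = rng.random((480, 640))                    # stand-in FCN text heatmap
raw = rng.integers(0, 400, size=(10000, 4))      # stand-in raw text proposals
boxes = np.stack([np.minimum(raw[:, 0], raw[:, 2]), np.minimum(raw[:, 1], raw[:, 3]),
                  np.maximum(raw[:, 0], raw[:, 2]) + 1, np.maximum(raw[:, 1], raw[:, 3]) + 1], 1)
kept, s = prune_proposals(boxes, heat, keep=100)  # 10,000 proposals pruned to 100
```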
Ekta Vats, Anders Hast and Alicia Fornés. 2019. Training-Free and Segmentation-Free Word Spotting using Feature Matching and Query Expansion. 15th International Conference on Document Analysis and Recognition, 1294–1299.
Abstract: Historical handwritten text recognition is an interesting yet challenging problem. In recent times, deep learning based methods have achieved significant performance in handwritten text recognition. However, handwriting recognition using deep learning needs training data, and often, text must be previously segmented into lines (or even words). These limitations constrain the application of HTR techniques in document collections, because training data or segmented words are not always available. Therefore, this paper proposes a training-free and segmentation-free word spotting approach that can be applied in unconstrained scenarios. The proposed word spotting framework is based on document query word expansion and a relaxed feature matching algorithm, which can easily be parallelised. Since handwritten words possess distinct shapes and characteristics, this work uses a combination of different keypoint detectors and Fourier-based descriptors to obtain a sufficient degree of relaxed matching. The effectiveness of the proposed method is empirically evaluated on well-known benchmark datasets using standard evaluation measures. The use of informative features along with query expansion contributes significantly to the efficient performance of the proposed method.
Keywords: Word spotting; Segmentation-free; Training-free; Query expansion; Feature matching
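A compact sketch of training-free matching with query expansion in the spirit of this abstract; ORB keypoints stand in for the combination of keypoint detectors and Fourier-based descriptors the paper uses, and the images are synthetic stand-ins.

```python
# Score pages by descriptor matches against a query word, then expand the
# query with descriptors from the best hit and re-score.
import cv2
import numpy as np

rng = np.random.default_rng(0)
fake = lambda h, w: (rng.random((h, w)) * 255).astype(np.uint8)  # stand-in images

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_score(query_des, page_des):
    """Count mutually consistent descriptor matches (the relaxed matching step)."""
    if query_des is None or page_des is None:
        return 0
    return len(matcher.match(query_des, page_des))

query = fake(64, 128)                        # stand-in query word image
pages = [fake(256, 256) for _ in range(5)]   # stand-in document pages
q_des = orb.detectAndCompute(query, None)[1]
p_des = [orb.detectAndCompute(p, None)[1] for p in pages]

scores = [match_score(q_des, d) for d in p_des]
best = int(np.argmax(scores))
# Query expansion: enrich the query with descriptors from the best hit,
# then re-score the whole collection with the expanded descriptor set.
if q_des is not None and p_des[best] is not None:
    expanded = np.vstack([q_des, p_des[best]])
    rescored = [match_score(expanded, d) for d in p_des]
```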
Fernando Vilariño. 2019. 3D Scanning of Capitals at Library Living Lab.
Fernando Vilariño. 2019. Library Living Lab, 3D digitisation of the capitals of the Sant Cugat cloister: citizens co-creating the new digital cultural heritage. Intersectorality and the Living Labs approach, Entretiens Jacques-Cartier.
Fernando Vilariño. 2019. Public Libraries: Exploring how technology transforms the cultural experience of people. Workshop on Social Impact of AI, Open Living Lab Days Conference.
Helena Muñoz, Fernando Vilariño and Dimosthenis Karatzas. 2019. Eye-Movements During Information Extraction from Administrative Documents. International Conference on Document Analysis and Recognition Workshops, 6–9.
Abstract: A key aspect of digital mailroom processes is the extraction of relevant information from administrative documents. More often than not, the extraction process cannot be fully automated, and there is instead an important amount of manual intervention. In this work we study the human process of information extraction from invoice document images. We explore whether the gaze of human annotators during a manual information extraction process could be exploited towards reducing the manual effort and automating the process. To this end, we perform an eye-tracking experiment replicating real-life interfaces for information extraction. Through this pilot study we demonstrate that relevant areas in the document can be identified reliably through automatic fixation classification, and the obtained models generalize well to new subjects. Our findings indicate that it is in principle possible to integrate the human in the document image analysis loop, making use of the scanpath to automate the extraction process or verify extracted information.
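An illustrative sketch of fixation classification in this spirit: a simple classifier labels each fixation as relevant or not from basic gaze features. The features and labels below are synthetic assumptions, not the study's data or protocol.

```python
# Classify fixations from simple gaze features; evaluate with cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-fixation features: x, y (normalised page coordinates),
# duration in ms, and number of revisits to the same region.
X = np.column_stack([rng.random(n), rng.random(n),
                     rng.gamma(2.0, 120.0, n), rng.poisson(1.5, n)])
# Synthetic ground truth: long fixations on the right of the page count as
# "relevant field"; real labels would come from the annotation interface.
y = ((X[:, 2] > 250) & (X[:, 0] > 0.6)).astype(int)

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # fixation-level accuracy
```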