|
Alicia Fornés, Josep Lladós, Oriol Ramos Terrades, & Marçal Rusiñol. (2016). La Visió per Computador com a Eina per a la Interpretació Automàtica de Fonts Documentals [Computer Vision as a Tool for the Automatic Interpretation of Documentary Sources]. Lligall, Revista Catalana d'Arxivística, 20–46.
|
|
|
Josep Lladós, & Enric Martí. (1999). A graph-edit algorithm for hand-drawn graphical document recognition and their automatic introduction into CAD systems. Machine Graphics & Vision, 8, 195–211.
|
|
|
Leonard Rothacker, Marçal Rusiñol, Josep Lladós, & Gernot A. Fink. (2014). A Two-stage Approach to Segmentation-Free Query-by-example Word Spotting. Manuscript Cultures, 47–58.
Abstract: With the ongoing progress in digitization, huge document collections and archives have become available to a broad audience. Scanned document images can be transmitted electronically and studied simultaneously throughout the world. While this is very beneficial, it is often impossible to perform automated searches on these document collections. Optical character recognition usually fails when it comes to handwritten or historic documents. In order to address the need for exploring document collections rapidly, researchers are working on word spotting. In query-by-example word spotting scenarios, the user selects an exemplary occurrence of the query word in a document image. The word spotting system then retrieves all regions in the collection that are visually similar to the given example of the query word. The best matching regions are presented to the user and no actual transcription is required.
An important property of a word spotting system is the computational speed with which queries can be executed. In our previous work, we presented a relatively slow but high-precision method. In the present work, we extend this baseline system to an integrated two-stage approach. In a coarse-grained first stage, we filter document images efficiently in order to identify regions that are likely to contain the query word. In the fine-grained second stage, these regions are analyzed with our previously presented high-precision method. Finally, we report recognition results and query times for the well-known George Washington benchmark in our evaluation. We achieve state-of-the-art recognition results while the query times can be reduced to 50% in comparison with our baseline.
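
The coarse-to-fine idea in this abstract is easy to illustrate. The following is a minimal, hypothetical Python/NumPy sketch: a cheap mean-pooled grid descriptor stands in for the coarse filtering stage, and a finer grid re-ranks the surviving windows, standing in for the high-precision second stage. The descriptors, window step, and candidate budget are illustrative assumptions, not the features used by Rothacker et al.

```python
# Hypothetical sketch of two-stage, segmentation-free query-by-example word
# spotting. The grid descriptors below are toy stand-ins for the paper's
# actual features.
import numpy as np

def grid_descriptor(patch, rows, cols):
    """Mean-pool a grayscale patch onto a rows x cols grid (toy feature)."""
    h, w = patch.shape
    ys = np.linspace(0, h, rows + 1, dtype=int)
    xs = np.linspace(0, w, cols + 1, dtype=int)
    return np.array([patch[ys[i]:ys[i+1], xs[j]:xs[j+1]].mean()
                     for i in range(rows) for j in range(cols)])

def spot(page, query, step=8, coarse=(2, 4), fine=(8, 16), keep=50):
    qh, qw = query.shape
    q_coarse = grid_descriptor(query, *coarse)
    # Stage 1: cheap descriptor at every window position; keep the best regions.
    candidates = []
    for y in range(0, page.shape[0] - qh + 1, step):
        for x in range(0, page.shape[1] - qw + 1, step):
            patch = page[y:y+qh, x:x+qw]
            d = np.linalg.norm(grid_descriptor(patch, *coarse) - q_coarse)
            candidates.append((d, y, x))
    candidates.sort(key=lambda t: t[0])
    # Stage 2: re-rank the surviving regions with a finer, costlier descriptor.
    q_fine = grid_descriptor(query, *fine)
    rescored = [(np.linalg.norm(grid_descriptor(page[y:y+qh, x:x+qw], *fine) - q_fine), y, x)
                for _, y, x in candidates[:keep]]
    return sorted(rescored, key=lambda t: t[0])
```

Given grayscale NumPy arrays for the page and the query exemplar, `spot(page, query)` returns (distance, y, x) triples, best match first; only the `keep` cheapest coarse candidates ever reach the expensive stage, which is the source of the speed-up the abstract claims.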
|
|
|
Palaiahnakote Shivakumara, Anjan Dutta, Chew Lim Tan, & Umapada Pal. (2014). Multi-oriented scene text detection in video based on wavelet and angle projection boundary growing. Multimedia Tools and Applications, 72(1), 515–539.
Abstract: In this paper, we address two complex issues: 1) text frame classification and 2) multi-oriented text detection in video text frames. We first divide a video frame into 16 blocks and propose a combination of wavelet and median-moment features with k-means clustering at the block level to identify probable text blocks. For each probable text block, the method applies the same combination of features with k-means clustering over a sliding window running through the block to identify potential text candidates. We introduce a new idea of symmetry on text candidates in each block, based on the observation that the pixel distribution in text exhibits a symmetric pattern. The method integrates all blocks containing text candidates in the frame, and all text candidates are then mapped onto a Sobel edge map of the original frame to obtain text representatives. To tackle the multi-orientation problem, we present a new method called Angle Projection Boundary Growing (APBG), an iterative algorithm based on a nearest-neighbor concept. APBG is then applied to the text representatives to fix the bounding boxes of multi-oriented text lines in the video frame. Directional information is used to eliminate false positives. Experimental results on a variety of datasets, including non-horizontal and horizontal text, publicly available data (Hua's data), and ICDAR-03 competition data (camera images), show that the proposed method outperforms existing methods proposed for video as well as state-of-the-art methods for scene text.
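
As a rough illustration of the block-level first step this abstract describes, here is a hypothetical Python/NumPy sketch: the frame is split into 16 blocks, each block is described by the energy of its one-level Haar wavelet sub-bands, and plain k-means with k = 2 separates probable text blocks from the rest. The Haar features, the initialization, and the "higher energy means text" rule are assumptions for illustration; the paper's median moments, sliding-window pass, symmetry check, and APBG stage are not reproduced.

```python
# Hypothetical sketch of block-level probable-text selection: wavelet sub-band
# energies per block, clustered with k-means (k = 2).
import numpy as np

def haar_energies(block):
    """Energies of the LH/HL/HH sub-bands of a one-level 2-D Haar transform."""
    b = block[:block.shape[0] // 2 * 2, :block.shape[1] // 2 * 2].astype(float)
    lo_r, hi_r = (b[::2] + b[1::2]) / 2, (b[::2] - b[1::2]) / 2     # row pass
    lh = (lo_r[:, ::2] - lo_r[:, 1::2]) / 2                         # column pass
    hl = (hi_r[:, ::2] + hi_r[:, 1::2]) / 2
    hh = (hi_r[:, ::2] - hi_r[:, 1::2]) / 2
    return np.array([(lh**2).mean(), (hl**2).mean(), (hh**2).mean()])

def probable_text_blocks(frame, iters=20):
    """Return indices (0..15, row-major) of the 4x4 blocks likely to hold text."""
    h, w = frame.shape
    blocks = [frame[i*h//4:(i+1)*h//4, j*w//4:(j+1)*w//4]
              for i in range(4) for j in range(4)]
    feats = np.array([haar_energies(b) for b in blocks])
    # Plain Lloyd's k-means, k = 2, seeded with the lowest- and highest-energy blocks.
    centers = feats[[feats.sum(1).argmin(), feats.sum(1).argmax()]]
    for _ in range(iters):
        labels = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(0)
    text_cluster = centers.sum(1).argmax()   # higher sub-band energy -> likely text
    return [idx for idx, lab in enumerate(labels) if lab == text_cluster]
```

The assumption encoded here is the one the abstract relies on: text regions carry more high-frequency wavelet energy than smooth background, so a two-way clustering of per-block energies separates the probable text blocks without any threshold tuning.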
|
|