Lluis Pere de las Heras. (2014). Relational Models for Visual Understanding of Graphical Documents. Application to Architectural Drawings. (Gemma Sanchez, Ed.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Graphical documents express complex concepts using a visual language. This language consists of a vocabulary (symbols) and a syntax (structural relations between symbols) that articulate a semantic meaning in a certain context. Therefore, the automatic interpretation of this sort of document by computers entails three main steps: the detection of the symbols, the extraction of the structural relations between these symbols, and the modeling of the knowledge that permits the extraction of the semantics. Different domains of graphical documents include architectural and engineering drawings, maps, flowcharts, etc.
Graphics Recognition in particular and Document Image Analysis in general were born from the industrial need to interpret a massive amount of digitized documents after the emergence of the scanner. Although many years have passed, the graphical document understanding problem still seems to be far from being solved. The main reason is that the vast majority of the systems in the literature focus on very specific problems, where the domain of the document dictates the implementation of the interpretation. As a result, it is difficult to reuse these strategies on different data and in different contexts, thus hindering the natural progress in the field.
In this thesis, we face the graphical document understanding problem by proposing several relational models at different levels that are designed from a generic perspective. Firstly, we introduce three different strategies for the detection of symbols. The first method tackles the problem structurally, wherein general knowledge of the domain guides the detection. The second is a statistical method that learns the graphical appearance of the symbols and easily adapts to the large variability of the problem. The third method is a combination of the previous two that inherits their respective strengths, i.e. it copes with the large variability and does not need annotated data. Secondly, we present two relational strategies that tackle the problem of visual context extraction. The first one is a fully bottom-up method that heuristically searches a graph representation for the contextual relations between symbols. Contrarily, the second is a syntactic method that probabilistically models the structure of the documents. It automatically learns the model, which guides the inference algorithm to find the best structural representation for a given input. Finally, we construct a knowledge-based model consisting of an ontological definition of the domain and real data. This model permits contextual reasoning and the detection of semantic inconsistencies within the data. We evaluate the suitability of the proposed contributions in the framework of floor plan interpretation. Since there is no standard for the modeling of these documents, there exists an enormous notation variability from plan to plan in terms of vocabulary and syntax. Therefore, floor plan interpretation is a relevant task in the graphical document understanding problem.
It is also worth mentioning that we make freely available all the resources used in this thesis (the data, the tool used to generate the data, and the evaluation scripts) with the aim of fostering research in the graphical document understanding task.
Lluis Gomez, & Dimosthenis Karatzas. (2014). MSER-based Real-Time Text Detection and Tracking. In 22nd International Conference on Pattern Recognition (pp. 3110–3115).
Abstract: We present a hybrid algorithm for detection and tracking of text in natural scenes that goes beyond full-detection approaches in terms of time performance optimization.
A state-of-the-art scene text detection module based on Maximally Stable Extremal Regions (MSER) is used to detect text asynchronously, while on a separate thread detected text objects are tracked by MSER propagation. The cooperation of these two modules yields real time video processing at high frame rates even on low-resource devices.
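The division of labour described above (a slow detector running asynchronously while a lightweight tracker keeps per-frame results fresh) can be sketched as follows; a minimal Python threading sketch, where `detect_text_regions` and `propagate` are hypothetical stubs standing in for the paper's MSER-based detection and MSER-propagation modules:

```python
import threading
import queue


def detect_text_regions(frame):
    # Stub for the (slow) MSER-based text detector; in the real system
    # this would return bounding boxes of detected text objects.
    return [(10, 10, 40, 20)]


def propagate(regions, frame):
    # Stub for MSER propagation: update each tracked box into the new
    # frame (identity here; the real tracker follows region motion).
    return regions


def detector_worker(frames, out_q, stop):
    # Asynchronous detection thread: re-detects only every few frames,
    # leaving the per-frame work to the cheap tracking step.
    for i, frame in enumerate(frames):
        if stop.is_set():
            break
        if i % 5 == 0:
            out_q.put(detect_text_regions(frame))


def track_video(frames):
    out_q = queue.Queue()
    stop = threading.Event()
    t = threading.Thread(target=detector_worker, args=(frames, out_q, stop))
    t.start()
    tracked, results = [], []
    for frame in frames:
        try:
            tracked = out_q.get_nowait()       # refresh from detector if ready
        except queue.Empty:
            tracked = propagate(tracked, frame)  # otherwise just track
        results.append(list(tracked))
    stop.set()
    t.join()
    return results
```

The main loop never blocks on detection, which is what yields real-time frame rates on low-resource devices.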
Lluis Gomez, & Dimosthenis Karatzas. (2014). Scene Text Recognition: No Country for Old Men? In 1st International Workshop on Robust Reading.
Laura Igual, Xavier Perez Sala, Sergio Escalera, Cecilio Angulo, & Fernando De la Torre. (2014). Continuous Generalized Procrustes Analysis. PR - Pattern Recognition, 47(2), 659–671.
Abstract: Two-dimensional shape models have been successfully applied to solve many problems in computer vision, such as object tracking, recognition, and segmentation. Typically, 2D shape models are learned from a discrete set of image landmarks (corresponding to projections of 3D points of an object), after applying Generalized Procrustes Analysis (GPA) to remove 2D rigid transformations. However, the standard GPA process suffers from three main limitations. Firstly, the 2D training samples do not necessarily cover a uniform sampling of all the 3D transformations of an object. This can bias the estimate of the shape model. Secondly, it can be computationally expensive to learn the shape model by sampling 3D transformations. Thirdly, standard GPA methods use only one reference shape, which might be insufficient to capture the large structural variability of some objects.
To address these drawbacks, this paper proposes continuous generalized Procrustes analysis (CGPA).
CGPA uses a continuous formulation that avoids the need to generate 2D projections from all the rigid 3D transformations. It builds an efficient (in space and time) non-biased 2D shape model from a set of 3D models of objects. A major challenge in CGPA is the need to integrate over the space of 3D rotations, especially when the rotations are parameterized with Euler angles. To address this problem, we introduce the use of the Haar measure. Finally, we extend CGPA to incorporate several reference shapes. Experimental results on synthetic and real data show the benefits of CGPA over GPA.
Keywords: Procrustes analysis; 2D shape model; Continuous approach
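As a point of reference for the limitations discussed above, the standard (discrete) GPA baseline that CGPA generalizes can be sketched in a few lines; a minimal NumPy version, assuming 2D shapes given as n x 2 landmark arrays and omitting scale normalization for brevity:

```python
import numpy as np


def procrustes_align(X, ref):
    # Align shape X (n_points x 2) to ref by an optimal rotation after
    # centering both shapes (orthogonal Procrustes via SVD).
    Xc = X - X.mean(0)
    Rc = ref - ref.mean(0)
    U, _, Vt = np.linalg.svd(Xc.T @ Rc)
    R = U @ Vt
    return Xc @ R


def gpa(shapes, iters=10):
    # Classical GPA: iteratively align every training shape to the
    # current mean shape, then update the mean.
    ref = shapes[0] - shapes[0].mean(0)
    for _ in range(iters):
        aligned = [procrustes_align(s, ref) for s in shapes]
        ref = np.mean(aligned, axis=0)
    return aligned, ref
```

Note that the mean is taken only over the discrete training samples, which is exactly the sampling bias the continuous formulation is designed to avoid.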
L. Rothacker, Marçal Rusiñol, Josep Llados, & G.A. Fink. (2014). A Two-stage Approach to Segmentation-Free Query-by-example Word Spotting. Manuscript Cultures, 47–58.
Abstract: With the ongoing progress in digitization, huge document collections and archives have become available to a broad audience. Scanned document images can be transmitted electronically and studied simultaneously throughout the world. While this is very beneficial, it is often impossible to perform automated searches on these document collections. Optical character recognition usually fails when it comes to handwritten or historic documents. In order to address the need for exploring document collections rapidly, researchers are working on word spotting. In query-by-example word spotting scenarios, the user selects an exemplary occurrence of the query word in a document image. The word spotting system then retrieves all regions in the collection that are visually similar to the given example of the query word. The best matching regions are presented to the user and no actual transcription is required.
An important property of a word spotting system is the computational speed with which queries can be executed. In our previous work, we presented a relatively slow but high-precision method. In the present work, we will extend this baseline system to an integrated two-stage approach. In a coarse-grained first stage, we will filter document images efficiently in order to identify regions that are likely to contain the query word. In the fine-grained second stage, these regions will be analyzed with our previously presented high-precision method. Finally, we will report recognition results and query times for the well-known George Washington
benchmark in our evaluation. We achieve state-of-the-art recognition results while the query times can be reduced to 50% in comparison with our baseline.
Klaus Broelemann, Anjan Dutta, Xiaoyi Jiang, & Josep Llados. (2014). Hierarchical Plausibility-Graphs for Symbol Spotting in Graphical Documents. In Bart Lamiroy, & Jean-Marc Ogier (Eds.), Graphics Recognition. Current Trends and Challenges (Vol. 8746, pp. 25–37). LNCS. Springer Berlin Heidelberg.
Abstract: Graph representations of graphical documents often suffer from noise, such as spurious nodes and edges and their discontinuity. In general, these errors occur during low-level image processing, viz. binarization, skeletonization, vectorization, etc. Hierarchical graph representation is an elegant and efficient way to solve this kind of problem by hierarchically merging node-node and node-edge pairs depending on their distance. But the creation of a hierarchical graph representing the graphical information often uses hard thresholds on the distance to create the hierarchical nodes (next state) from the lower nodes (or states) of a graph. As a result, the representation often loses useful information. This paper introduces plausibilities for the nodes of the hierarchical graph as a function of distance and proposes a modified algorithm for matching subgraphs of the hierarchical graphs. The plausibility-annotated nodes help to improve the performance of the matching algorithm on the two hierarchical structures. To show the potential of this approach, we conduct an experiment with the SESYD dataset.
Juan Ramon Terven Salinas, Joaquin Salas, & Bogdan Raducanu. (2014). New Opportunities for Computer Vision-Based Assistive Technology Systems for the Visually Impaired. COMP - Computer, 47(4), 52–58.
Abstract: Computing advances and increased smartphone use give technology system designers greater flexibility in exploiting computer vision to support visually impaired users. Understanding these users' needs will certainly provide insight for the development of computing devices with improved usability.
Juan Ramon Terven Salinas, Joaquin Salas, & Bogdan Raducanu. (2014). Robust Head Gestures Recognition for Assistive Technology. In Pattern Recognition (Vol. 8495, pp. 152–161). LNCS. Springer International Publishing.
Abstract: This paper presents a system capable of recognizing six head gestures: nodding, shaking, turning right, turning left, looking up, and looking down. The main difference of our system compared to other methods is that the Hidden Markov Models presented in this paper are fully connected and consider all possible states in any given order, providing the following advantages to the system: (1) it allows unconstrained movement of the head, and (2) it can be easily integrated into a wearable device (e.g. glasses, neck-hung devices), in which case it can robustly recognize gestures in the presence of ego-motion. Experimental results show that this approach outperforms common methods that use restricted HMMs for each gesture.
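For context, decoding a gesture from a sequence of head-pose observations with an HMM comes down to the Viterbi algorithm; a minimal sketch, where the state set, transition matrix, and emission model below are illustrative assumptions rather than the paper's learned parameters (the paper's point is that the transition matrix is fully connected, i.e. every entry is nonzero, so states may occur in any order):

```python
import numpy as np


def viterbi(obs, pi, A, B):
    # Most likely state path for observation sequence `obs` under an HMM
    # with initial probabilities pi (S,), transition matrix A (S, S),
    # and discrete emission matrix B (S, n_observations).
    delta = np.log(pi) + np.log(B[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + np.log(A)   # scores[from, to]
        back.append(scores.argmax(0))         # best predecessor per state
        delta = scores.max(0) + np.log(B[:, o])
    path = [int(delta.argmax())]
    for bp in reversed(back):                 # backtrack
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

With a fully connected A, no ordering of poses is ruled out a priori, which is what permits unconstrained head movement.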
Josep Llados, & Marçal Rusiñol. (2014). Graphics Recognition Techniques. In D. Doermann, & K. Tombre (Eds.), Handbook of Document Image Processing and Recognition (Vol. D, pp. 489–521). Springer London.
Abstract: This chapter describes the most relevant approaches for the analysis of graphical documents. The graphics recognition pipeline can be split into three tasks. The low-level or lexical task extracts the basic units composing the document. The syntactic level is focused on the structure, i.e., how graphical entities are constructed, and involves the location and classification of the symbols present in the document. The third level is a functional or semantic level, i.e., it models what the graphical symbols do and what they mean in the context where they appear. This chapter covers the lexical level, while the next two chapters are devoted to the syntactic and semantic levels, respectively. The main problems reviewed in this chapter are raster-to-vector conversion (vectorization algorithms) and the separation of text and graphics components. The research and industrial communities have provided standard methods achieving reasonable performance levels. Hence, graphics recognition techniques can be considered to be in a mature state from a scientific point of view. Additionally, this chapter provides insights into some related problems, namely, the extraction and recognition of dimensions in engineering drawings, and the recognition of hatched and tiled patterns. Both problems are usually associated with, even integrated in, the vectorization process.
Keywords: Dimension recognition; Graphics recognition; Graphic-rich documents; Polygonal approximation; Raster-to-vector conversion; Texture-based primitive extraction; Text-graphics separation
Jose Manuel Alvarez, Antonio Lopez, Theo Gevers, & Felipe Lumbreras. (2014). Combining Priors, Appearance and Context for Road Detection. TITS - IEEE Transactions on Intelligent Transportation Systems, 15(3), 1168–1178.
Abstract: Detecting the free road surface ahead of a moving vehicle is an important research topic in different areas of computer vision, such as autonomous driving or car collision warning.
Current vision-based road detection methods are usually based solely on low-level features. Furthermore, they generally assume structured roads, road homogeneity, and uniform lighting conditions, constraining their applicability in real-world scenarios. In this paper, road priors and contextual information are introduced for road detection. First, we propose an algorithm to estimate road priors online using geographical information, providing relevant initial information about the road location. Then, contextual cues, including horizon lines, vanishing points, lane markings, 3-D scene layout, and road geometry, are used in addition to low-level cues derived from the appearance of roads. Finally, a generative model is used to combine these cues and priors, leading to a road detection method that is, to a large degree, robust to varying imaging conditions, road types, and scenarios.
Keywords: Illuminant invariance; lane markings; road detection; road prior; road scene understanding; vanishing point; 3-D scene layout
Jorge Bernal, Joan M. Nuñez, F. Javier Sanchez, & Fernando Vilariño. (2014). Polyp Segmentation Method in Colonoscopy Videos by means of MSA-DOVA Energy Maps Calculation. In 3rd MICCAI Workshop on Clinical Image-based Procedures: Translational Research in Medical Imaging (Vol. 8680, pp. 41–49).
Abstract: In this paper we present a novel polyp region segmentation method for colonoscopy videos. Our method uses valley information associated with polyp boundaries in order to provide an initial segmentation. This first segmentation is refined to eliminate boundary discontinuities caused by image artifacts or other elements of the scene. Experimental results over a publicly annotated database show that our method outperforms both general and specific segmentation methods by providing more accurate regions rich in polyp content. We also show that image preprocessing is needed to improve the final polyp region segmentation.
Keywords: Image segmentation; Polyps; Colonoscopy; Valley information; Energy maps
Jorge Bernal, Fernando Vilariño, F. Javier Sanchez, M. Arnold, Anarta Ghosh, & Gerard Lacey. (2014). Experts vs Novices: Applying Eye-tracking Methodologies in Colonoscopy Video Screening for Polyp Search. In 2014 Symposium on Eye Tracking Research and Applications (pp. 223–226).
Abstract: We present in this paper a novel study aiming at identifying the differences in visual search patterns between physicians of diverse levels of expertise during the screening of colonoscopy videos. Physicians were clustered into two groups (experts and novices) according to the number of procedures performed, and fixations were captured by an eye-tracker device during the task of polyp search in different video sequences. These fixations were integrated into heat maps, one for each cluster. The obtained maps were validated over a ground truth consisting of a mask of the polyp, and the comparison between experts and novices was performed by using metrics such as reaction time, dwelling time and energy concentration ratio. Experimental results show a statistically significant difference between experts and novices, and the obtained maps prove to be a useful tool for the characterisation of the behaviour of each group.
Jorge Bernal, Debora Gil, Carles Sanchez, & F. Javier Sanchez. (2014). Discarding Non Informative Regions for Efficient Colonoscopy Image Analysis. In 1st MICCAI Workshop on Computer-Assisted and Robotic Endoscopy (Vol. 8899, pp. 1–10). LNCS. Springer International Publishing.
Jorge Bernal. (2014). Polyp Localization and Segmentation in Colonoscopy Images by Means of a Model of Appearance for Polyps. ELCVIA - Electronic Letters on Computer Vision and Image Analysis, 13(2), 9–10.
Abstract: Colorectal cancer is the fourth most common cause of cancer death worldwide, and its survival rate depends on the stage at which it is detected; hence the necessity for early colon screening. There are several screening techniques, but colonoscopy is still nowadays the gold standard, although it has some drawbacks such as the miss rate. Our contribution, in the field of intelligent systems for colonoscopy, aims at providing a polyp localization and a polyp segmentation system based on a model of appearance for polyps. To develop both methods we define a model of appearance for polyps, which describes a polyp as enclosed by intensity valleys. The novelty of our contribution resides in the fact that we include in our model aspects of image formation, and we also consider the presence of other elements from the endoluminal scene, such as specular highlights and blood vessels, which have an impact on the performance of our methods. In order to develop our polyp localization method, we accumulate valley information to generate energy maps, which are also used to guide the polyp segmentation. Our methods achieve promising results in polyp localization and segmentation. As we want to explore the usability of our methods, we present a comparative analysis between physicians' fixations, obtained via an eye-tracking device, and our polyp localization method. The results show that our method is indistinguishable from novice physicians, although it is far from expert physicians.
Keywords: Colonoscopy; polyp localization; polyp segmentation; Eye-tracking
Jon Almazan, Albert Gordo, Alicia Fornes, & Ernest Valveny. (2014). Word Spotting and Recognition with Embedded Attributes. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(12), 2552–2566.
Abstract: This article addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding, attribute learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest-neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low-dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images, showing results comparable to or better than the state of the art on spotting and recognition tasks.
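The retrieval step that the abstract reduces both tasks to (nearest-neighbor search in the common subspace) is simple enough to sketch; a hypothetical minimal version with NumPy, assuming the fixed-length embeddings have already been computed by the attribute/label-embedding pipeline:

```python
import numpy as np


def normalize(v):
    # Project embeddings onto the unit sphere so that a dot product
    # equals cosine similarity.
    n = np.linalg.norm(v, axis=-1, keepdims=True)
    return v / np.clip(n, 1e-12, None)


def spot(query_vec, image_vecs):
    # Word spotting as nearest-neighbor search in the common subspace:
    # rank all word-image embeddings by cosine similarity to the query,
    # which may itself come from an image (query-by-example) or from a
    # text string (query-by-string).
    sims = normalize(image_vecs) @ normalize(query_vec)
    return np.argsort(-sims)        # indices of matches, best first
```

Because comparison is a single dot product over short fixed-length vectors, ranking an entire dataset stays fast, which is the property the abstract emphasizes.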