Lluis Gomez, & Dimosthenis Karatzas. (2016). A fine-grained approach to scene text script identification. In 12th IAPR Workshop on Document Analysis Systems (pp. 192–197).
Abstract: This paper focuses on the problem of script identification in unconstrained scenarios. Script identification is an important prerequisite to recognition, and an indispensable condition for automatic text understanding systems designed for multi-language environments. Although widely studied for document images and handwritten documents, it remains an almost unexplored territory for scene text images. We detail a novel method for script identification in natural images that combines convolutional features and the Naive-Bayes Nearest Neighbor classifier. The proposed framework efficiently exploits the discriminative power of small stroke-parts in a fine-grained classification framework. In addition, we propose a new public benchmark dataset for the evaluation of joint text detection and script identification in natural scenes. Experiments on this new dataset demonstrate that the proposed method yields state-of-the-art results while generalizing well to different datasets and a variable number of scripts. The evidence provided shows that multi-lingual scene text recognition in the wild is a viable proposition. Source code of the proposed method is made available online.
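The Naive-Bayes Nearest Neighbor (NBNN) decision rule the abstract builds on can be sketched in a few lines: an image is classified by summing, over its local descriptors (here, features of small stroke-parts), the squared distance to the nearest descriptor of each candidate script class, and picking the class with the smallest total. This is a minimal illustrative sketch, not the paper's implementation; the descriptors and class names are hypothetical toy data.

```python
import numpy as np

def nbnn_classify(query_descriptors, class_descriptors):
    """Naive-Bayes Nearest Neighbor: for each class, sum the squared
    distance from every query descriptor to its nearest class descriptor;
    return the class label with the smallest total."""
    scores = {}
    for label, descs in class_descriptors.items():
        # pairwise squared distances, shape (n_query, n_class)
        d2 = ((query_descriptors[:, None, :] - descs[None, :, :]) ** 2).sum(-1)
        scores[label] = d2.min(axis=1).sum()
    return min(scores, key=scores.get)
```

Note that NBNN never trains a per-class model: the class descriptor pools themselves act as the "model", which is what makes it attractive for fine-grained problems with little training data per class.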
|
Partha Pratim Roy, Umapada Pal, & Josep Llados. (2010). Query Driven Word Retrieval in Graphical Documents. In 9th IAPR International Workshop on Document Analysis Systems (pp. 191–198).
Abstract: In this paper, we present an approach for the retrieval of words from graphical document images. In graphical documents, word indexing is a challenging task due to the presence of multi-oriented characters in a non-structured layout. The proposed approach uses recognition results of individual components to form character pairs with neighboring components. An indexing scheme is designed to store the spatial description of components and to access them efficiently. Given a query text word (ASCII/Unicode format), the character pairs present in it are searched in the document. Next, the retrieved character pairs are linked sequentially to form character strings. Dynamic programming is applied to find different instances of the query word, using a string edit distance as the objective function for matching. Recognition of multi-scale and multi-oriented character components is done using a Support Vector Machine (SVM) classifier. To handle multi-oriented character strings, the features used in the SVM are invariant to character orientation. Experimental results show that the method efficiently locates query words in multi-oriented text in graphical documents.
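The dynamic-programming matching step the abstract describes scores a linked character string against the query with a string edit distance. The paper's exact cost model is not specified here, so a standard Levenshtein distance with unit insert/delete/substitute costs is assumed in this sketch.

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b, computed with a
    rolling one-row dynamic-programming table (unit edit costs)."""
    m, n = len(a), len(b)
    # dp[j] holds the distance between a[:i] and b[:j] for the current row i
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i          # prev = dp[i-1][0]
        for j in range(1, n + 1):
            cur = dp[j]                 # dp[i-1][j], saved before overwrite
            dp[j] = min(dp[j] + 1,      # delete a[i-1]
                        dp[j - 1] + 1,  # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))  # substitute / match
            prev = cur
    return dp[n]
```

A retrieval system would rank candidate strings by this distance (possibly normalized by query length) and report the lowest-cost instances as word hits.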
|
T.O. Nguyen, Salvatore Tabbone, & Oriol Ramos Terrades. (2008). Symbol Descriptor Based on Shape Context and Vector Model of Information Retrieval. In Proceedings of the 8th IAPR International Workshop on Document Analysis Systems (pp. 191–197).
|
Jorge Bernal, Aymeric Histace, Marc Masana, Quentin Angermann, Cristina Sanchez Montes, Cristina Rodriguez de Miguel, et al. (2019). GTCreator: a flexible annotation tool for image-based datasets. IJCARS - International Journal of Computer Assisted Radiology and Surgery, 14(2), 191–201.
Abstract: Purpose: Methodology evaluation for decision support systems in health is a time-consuming task. To assess the performance of polyp detection methods in colonoscopy videos, clinicians have to deal with the annotation of thousands of images. Current existing tools could be improved in terms of flexibility and ease of use. Methods: We introduce GTCreator, a flexible annotation tool for providing image and text annotations to image-based datasets. It keeps the main basic functionalities of other similar tools while extending other capabilities, such as allowing multiple annotators to work simultaneously on the same task, enhanced dataset browsing, and easy annotation transfer, aiming to speed up annotation processes in large datasets. Results: The comparison with other similar tools shows that GTCreator allows fast and precise annotation of image datasets, being the only one which offers full annotation editing and browsing capabilities. Conclusions: Our proposed annotation tool has proven to be efficient for large image dataset annotation, as well as showing potential for use in other stages of method evaluation, such as experimental setup or results analysis.
Keywords: Annotation tool; Validation Framework; Benchmark; Colonoscopy; Evaluation
|
David Geronimo, Angel Sappa, & Antonio Lopez. (2010). Stereo-based Candidate Generation for Pedestrian Protection Systems. In Binocular Vision: Development, Depth Perception and Disorders (pp. 189–208). NOVA Publishers.
Abstract: This chapter describes a stereo-based algorithm that provides candidate image windows to a later 2D classification stage in an on-board pedestrian detection system. The proposed algorithm, which consists of three stages, is based on the use of both stereo imaging and scene prior knowledge (i.e., pedestrians are on the ground) to reduce the candidate search space. First, a successful road surface fitting algorithm provides estimates of the relative ground-camera pose. This stage directs the search toward the road area, thus avoiding irrelevant regions like the sky. Then, three different schemes are used to scan the estimated road surface with pedestrian-sized windows: (a) uniformly distributed over the road surface (3D); (b) uniformly distributed over the image (2D); (c) not uniformly distributed but according to a quadratic function (combined 2D-3D). Finally, the set of candidate windows is reduced by analyzing their 3D content. Experimental results of the proposed algorithm, together with statistics on search space reduction, are provided.
Keywords: Pedestrian Detection
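Scheme (a) above, uniform sampling of the road surface with pedestrian-sized windows, can be sketched under a simplified pinhole model: the camera sits at a known height above a flat road with its optical axis parallel to the ground, and each 3D window anchored on the road is projected into the image. The focal length, camera height, and pedestrian dimensions below are assumed example values, not the chapter's calibration.

```python
def road_plane_windows(f, cam_height, xs, zs, ped_h=1.8, ped_w=0.6):
    """For each ground position (x, z) on a flat road plane, project a
    pedestrian-sized window into the image under a pinhole model with
    focal length f (pixels) and camera height cam_height (meters).
    Returns (u, v_top, v_bottom, width_px) per position, in image
    coordinates relative to the principal point."""
    windows = []
    for x in xs:
        for z in zs:
            u = f * x / z                         # horizontal position
            v_bottom = f * cam_height / z         # feet (road contact point)
            v_top = f * (cam_height - ped_h) / z  # head
            windows.append((u, v_top, v_bottom, f * ped_w / z))
    return windows
```

The useful property exploited here is that window size in the image is fully determined by depth, so the classifier never has to scan scales that are geometrically impossible.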
|
Patricia Suarez, Dario Carpio, & Angel Sappa. (2023). Boosting Guided Super-Resolution Performance with Synthesized Images. In 17th International Conference on Signal-Image Technology & Internet-Based Systems (pp. 189–195).
Abstract: Guided image processing techniques are widely used for extracting information from a guiding image to aid in the processing of the guided one. These images may be sourced from different modalities, such as 2D and 3D, or different spectral bands, like visible and infrared. In the case of guided cross-spectral super-resolution, features from the two modal images are extracted and efficiently merged to migrate guidance information from one image, usually high-resolution (HR), toward the guided one, usually low-resolution (LR). Different approaches have been recently proposed focusing on the development of architectures for feature extraction and merging in the cross-spectral domains, but none of them accounts for the different nature of the given images. This paper focuses on the specific problem of guided thermal image super-resolution, where an LR thermal image is enhanced by an HR visible spectrum image. To improve existing guided super-resolution techniques, a novel scheme is proposed that maps the original guiding information to a thermal image-like representation that is similar to the output. Experimental results evaluating five different approaches demonstrate that the best results are achieved when the guiding and guided images share the same domain.
|
Fernando Vilariño, Panagiota Spyridonos, Jordi Vitria, C. Malagelada, & Petia Radeva. (2006). A Machine Learning framework using SOMs: Applications in the Intestinal Motility Assessment. In J.P. Martinez-Trinidad et al. (Eds.), 11th Iberoamerican Congress on Pattern Recognition (Vol. 4225, pp. 188–197). LNCS. Berlin-Heidelberg: Springer Verlag.
Abstract: Small Bowel Motility Assessment by means of Wireless Capsule Video Endoscopy constitutes a novel clinical methodology in which a capsule with a micro-camera attached to it is swallowed by the patient, emitting an RF signal which is recorded as a video of its trip through the gut. In order to overcome the main drawbacks associated with this technique (mainly related to the large amount of visualization time required), our efforts have been focused on the development of a machine learning system, built up in sequential stages, which provides the specialists with the useful part of the video, rejecting those parts not valid for analysis. We successfully used Self-Organizing Maps in a general semi-supervised framework with the aim of tackling the different learning stages of our system. The analysis of the diverse types of images and the automatic detection of intestinal contractions is performed from the perspective of intestinal motility assessment in a clinical environment.
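The Self-Organizing Map at the core of such a framework follows Kohonen's classic update: for each training sample, find the best-matching unit (BMU), then pull that unit and its neighbors toward the sample with a Gaussian neighborhood that shrinks over time. The 1-D map, learning-rate schedule, and toy data below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def train_som(data, n_units=5, epochs=50, lr=0.5, sigma=1.0, seed=0):
    """Train a 1-D Self-Organizing Map. Per sample: find the BMU, then
    move every unit toward the sample, weighted by a Gaussian
    neighborhood around the BMU; both the learning rate and the
    neighborhood width decay exponentially over epochs."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(data.min(0), data.max(0), size=(n_units, data.shape[1]))
    idx = np.arange(n_units)
    for t in range(epochs):
        decay = np.exp(-t / epochs)
        for x in data:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))
            h = np.exp(-((idx - bmu) ** 2) / (2 * (sigma * decay) ** 2))
            W += (lr * decay) * h[:, None] * (x - W)
    return W
```

After training, each video frame can be assigned to its BMU, and units (or groups of units) labeled by an expert become the semi-supervised bridge between raw frames and clinically meaningful categories.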
|
Jose Antonio Rodriguez, & Florent Perronnin. (2008). Local Gradient Histogram Features for Word Spotting in Unconstrained Handwritten Documents. In W. Liu, J. Llados, & J.M. Ogier (Eds.), Graphics Recognition: Recent Advances and New Opportunities (Vol. 5046, pp. 188–198). LNCS.
|
Jose Antonio Rodriguez, Gemma Sanchez, & Josep Llados. (2008). Categorization of Digital Ink Elements using Spectral Features. In W. Liu, J. Llados, & J.M. Ogier (Eds.), Graphics Recognition: Recent Advances and New Opportunities (Vol. 5046, pp. 188–198). LNCS. Springer-Verlag.
|
Marçal Rusiñol, Agnes Borras, & Josep Llados. (2010). Relational Indexing of Vectorial Primitives for Symbol Spotting in Line-Drawing Images. PRL - Pattern Recognition Letters, 31(3), 188–201.
Abstract: This paper presents a symbol spotting approach for indexing by content a database of line-drawing images. As line-drawings are digital-born documents designed with vectorial software, instead of using a pixel-based approach, we present a spotting method based on vector primitives. Graphical symbols are represented by a set of vectorial primitives which are described by an off-the-shelf shape descriptor. A relational indexing strategy aims to retrieve symbol locations in the target documents by using a combined numerical-relational description of 2D structures. The zones that are likely to contain the queried symbol are validated by a Hough-like voting scheme. In addition, a performance evaluation framework for symbol spotting in graphical documents is proposed. The presented methodology has been evaluated on a benchmarking set of architectural documents, achieving good performance results.
Keywords: Document image analysis and recognition, Graphics recognition, Symbol spotting, Vectorial representations, Line-drawings
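The Hough-like validation step mentioned in the abstract can be sketched as a spatial accumulator: each primitive matched to the query casts a vote for the symbol location it predicts, and accumulator peaks become candidate zones. The grid cell size and the toy match coordinates below are illustrative assumptions.

```python
import numpy as np

def vote_locations(matches, img_shape, cell=10):
    """Hough-style voting over a coarse spatial grid: each matched
    primitive votes for the (x, y) symbol location it predicts; the
    accumulator peak gives the most supported candidate zone.
    Returns the peak cell center and the raw accumulator."""
    acc = np.zeros((img_shape[0] // cell + 1, img_shape[1] // cell + 1))
    for (x, y) in matches:
        acc[int(y) // cell, int(x) // cell] += 1
    peak = np.unravel_index(np.argmax(acc), acc.shape)
    center = (peak[1] * cell + cell / 2, peak[0] * cell + cell / 2)
    return center, acc
```

The coarse grid is what makes the scheme robust: isolated spurious matches spread their votes thinly, while a true symbol occurrence concentrates several consistent votes in one cell.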
|
Petia Radeva, & Joan Serrat. (1993). Rubber Snake: Implementation on Signed Distance Potential. In Vision Conference (pp. 187–194).
|
Mathieu Nicolas Delalandre, Ernest Valveny, Tony Pridmore, & Dimosthenis Karatzas. (2010). Generation of Synthetic Documents for Performance Evaluation of Symbol Recognition & Spotting Systems. IJDAR - International Journal on Document Analysis and Recognition, 13(3), 187–207.
Abstract: This paper deals with the topic of performance evaluation of symbol recognition & spotting systems. We propose here a new approach to the generation of synthetic graphics documents containing non-isolated symbols in a real context. This approach is based on the definition of a set of constraints that permit us to place the symbols on a pre-defined background according to the properties of a particular domain (architecture, electronics, engineering, etc.). In this way, we can obtain a large amount of images resembling real documents by simply defining the set of constraints and providing a few pre-defined backgrounds. As documents are synthetically generated, the groundtruth (the location and the label of every symbol) becomes automatically available. We have applied this approach to the generation of a large database of architectural drawings and electronic diagrams, which shows the flexibility of the system. Performance evaluation experiments with a symbol localization system show that our approach permits generating documents with different features, which are reflected in variations of the localization results.
|
Albert Ali Salah, Theo Gevers, Nicu Sebe, & Alessandro Vinciarelli. (2011). Computer Vision for Ambient Intelligence. JAISE - Journal of Ambient Intelligence and Smart Environments, 3(3), 187–191.
|
Xavier Perez Sala, Cecilio Angulo, & Sergio Escalera. (2011). Biologically Inspired Turn Control in Robot Navigation. In 14th Congrès Català en Intel·ligencia Artificial (pp. 187–196).
Abstract: An exportable and robust system for turn control using only camera images is proposed for path execution in robot navigation. Robot motion information is extracted in the form of optical flow from SURF robust descriptors of consecutive frames in the image sequence. This information is used to compute the instantaneous rotation angle. Finally, the control loop is closed by correcting robot displacements whenever a turn command is requested. The proposed system has been successfully tested on the four-legged Sony Aibo robot.
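The rotation-angle computation the abstract refers to can be sketched for the simplest case: a pure camera rotation about the vertical axis under a pinhole model. A point at horizontal image coordinate x (relative to the principal point) moves to x', and the yaw increment is atan(x'/f) - atan(x/f); averaging over matched SURF keypoints gives some robustness. The focal length and match coordinates are assumed example values, and this ignores translation, which the actual system must also handle.

```python
import math

def yaw_from_matches(matches, f):
    """Estimate the camera yaw increment between two frames from matched
    keypoints, assuming pure rotation about the vertical axis.
    matches: list of (x1, x2) horizontal coordinates of the same point
    in frame 1 and frame 2; f: focal length in pixels."""
    angles = [math.atan2(x2, f) - math.atan2(x1, f) for (x1, x2) in matches]
    return sum(angles) / len(angles)  # mean over matches
```

In a full pipeline the angle would be integrated over frames and compared against the commanded turn to close the control loop.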
|
Marçal Rusiñol, K. Bertet, Jean-Marc Ogier, & Josep Llados. (2010). Symbol Recognition Using a Concept Lattice of Graphical Patterns. In Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers (Vol. 6020, pp. 187–198). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we propose a new approach to recognize symbols by the use of a concept lattice. We propose to build a concept lattice in terms of graphical patterns. Each model symbol is decomposed into a set of constituent graphical patterns taken as primitives. Each of these primitives is described by boundary moment invariants. The obtained concept lattice relates which graphical patterns compose a given symbol. A Hasse diagram is derived from the context and is used to recognize symbols affected by noise. We present some preliminary results on a variation of the dataset of symbols from the GREC 2005 symbol recognition contest.
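The concept lattice above is built from a binary context relating symbols (objects) to the graphical patterns they contain (attributes). A formal concept is a pair (A, B) where A is exactly the set of symbols sharing every pattern in B, and B is exactly the set of patterns shared by every symbol in A. The brute-force enumeration below is a didactic sketch (exponential in the number of symbols, fine for toy contexts only); the symbol and pattern names are hypothetical.

```python
from itertools import combinations

def formal_concepts(context):
    """Enumerate all formal concepts of a binary context given as
    {object: set(attributes)}. Returns a set of (extent, intent) pairs
    of frozensets, closed in both directions."""
    objects = list(context)
    attrs = set().union(*context.values()) if context else set()
    concepts = set()
    for r in range(len(objects) + 1):
        for subset in combinations(objects, r):
            intent = attrs.copy()
            for o in subset:          # attributes common to the subset
                intent &= context[o]
            # close the extent: all objects carrying every intent attribute
            extent = frozenset(o for o in objects if intent <= context[o])
            concepts.add((extent, frozenset(intent)))
    return concepts
```

Ordering these concepts by extent inclusion yields the Hasse diagram the paper navigates: recognizing a noisy symbol amounts to descending the lattice along the patterns actually detected.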
|