|
Antonio Clavelli, Dimosthenis Karatzas, Josep Llados, Mario Ferraro, & Giuseppe Boccignone. (2013). Towards Modelling an Attention-Based Text Localization Process. In 6th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 7887, pp. 296–303). LNCS. Springer Berlin Heidelberg.
Abstract: This note introduces a visual attention model of text localization in real-world scenes. The core of the model, built upon the proto-object concept, is discussed. It is shown how such a dynamic mid-level representation of the scene can be derived in the framework of an action-perception loop engaging salience, text information value computation, and eye guidance mechanisms. Preliminary results comparing model-generated scanpaths with those eye-tracked from human subjects are presented.
Keywords: text localization; visual attention; eye guidance
|
|
|
Anjan Dutta, Josep Llados, & Umapada Pal. (2013). A symbol spotting approach in graphical documents by hashing serialized graphs. PR - Pattern Recognition, 46(3), 752–768.
Abstract: In this paper we propose a symbol spotting technique in graphical documents. Graphs are used to represent the documents and a (sub)graph matching technique is used to detect the symbols in them. We propose a graph serialization to reduce the usual computational complexity of graph matching. Serialization of graphs is performed by computing acyclic graph paths between each pair of connected nodes. Graph paths are one-dimensional structures of graphs which are less expensive in terms of computation. At the same time they enable robust localization even in the presence of noise and distortion. Indexing in large graph databases involves a computational burden as well. We propose a graph factorization approach to tackle this problem. Factorization is intended to create a unified indexed structure over the database of graphical documents. Once graph paths are extracted, the entire database of graphical documents is indexed in hash tables by locality sensitive hashing (LSH) of shape descriptors of the paths. The hashing data structure aims to execute an approximate k-NN search in sub-linear time. We have performed detailed experiments with various datasets of line drawings and compared our method with the state of the art. The results demonstrate the effectiveness and efficiency of our technique.
Keywords: Symbol spotting; Graphics recognition; Graph matching; Graph serialization; Graph factorization; Graph paths; Hashing
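The LSH indexing step described in the abstract above can be sketched as follows. This is a minimal illustration of random-hyperplane LSH over toy path descriptors, not the authors' implementation; all names, dimensions and the toy data are assumed for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_signature(x, hyperplanes):
    """Binary hash: the sign of the projection onto each random hyperplane."""
    return tuple((hyperplanes @ x) > 0)

# Toy database of path shape descriptors (assumed 32-D vectors).
dim, n_planes = 32, 8
hyperplanes = rng.standard_normal((n_planes, dim))
database = rng.standard_normal((1000, dim))

# Index every descriptor into a hash table keyed by its signature.
table = {}
for i, desc in enumerate(database):
    table.setdefault(lsh_signature(desc, hyperplanes), []).append(i)

def approx_knn(query, k=5):
    """Approximate k-NN: rank only the candidates in the query's bucket."""
    bucket = table.get(lsh_signature(query, hyperplanes), [])
    ranked = sorted(bucket, key=lambda i: np.linalg.norm(database[i] - query))
    return ranked[:k]
```

Similar descriptors tend to share a signature and land in the same bucket, so a query is compared against a small candidate set rather than the full database, which is what gives the sub-linear search time.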
|
|
|
Anjan Dutta, Josep Llados, Horst Bunke, & Umapada Pal. (2013). Near Convex Region Adjacency Graph and Approximate Neighborhood String Matching for Symbol Spotting in Graphical Documents. In 12th International Conference on Document Analysis and Recognition (pp. 1078–1082).
Abstract: This paper deals with a subgraph matching problem in Region Adjacency Graphs (RAGs) applied to symbol spotting in graphical documents. The RAG is an important, efficient and natural way of representing graphical information with a graph, but it is limited to cases where the information is well defined by perfectly delineated regions. What if the information we are interested in is not confined within well-defined regions? This paper addresses this particular problem and solves it by defining near convex groupings of oriented line segments, which result in near convex regions. Pure convexity imposes hard constraints and cannot handle all cases efficiently. Hence, we define a new type of convexity that allows convex regions to have concavity to some extent. We call such regions Near Convex Regions (NCRs). These NCRs are then used to create the Near Convex Region Adjacency Graph (NCRAG), and with this representation we formulate symbol spotting in graphical documents as a subgraph matching problem. For subgraph matching we use the Approximate Edit Distance Algorithm (AEDA) on the neighborhood string, which starts after finding a key node in the input or target graph and iteratively identifies similar nodes of the query graph in the neighborhood of the key node. Experiments are performed on artificial, real and distorted datasets.
|
|
|
Anjan Dutta, Josep Llados, Horst Bunke, & Umapada Pal. (2013). A Product graph based method for dual subgraph matching applied to symbol spotting. In 10th IAPR International Workshop on Graphics Recognition.
Abstract: The product graph has been shown to be an efficient tool for matching subgraphs. This paper reports an extension of the product graph methodology for subgraph matching applied to symbol spotting in graphical documents. It focuses on two major limitations of the previous version of the product graph: (1) spurious nodes and edges in the graph representation and (2) inefficient node and edge attributes. To deal with the noisy information of vectorized graphical documents, we consider a dual graph representation of the original graph representing the graphical information, and the product graph is computed between the dual graphs of the query graph and the input graph. The dual graph, with its redundant edges, provides an efficient and error-tolerant encoding of the structural information of the graphical documents. The adjacency matrix of the product graph locates similar path information of the two graphs, and exponentiating the adjacency matrix finds similar paths of greater lengths. Nodes joining similar paths between the two graphs are found by combining different exponentials of the adjacency matrix. An experimental investigation reveals that the recall obtained by this approach is quite encouraging.
|
|
|
Angel Sappa, & Jordi Vitria. (2013). Multimodal Interaction in Image and Video Applications (Vol. 48). Springer Berlin Heidelberg.
Book series: Intelligent Systems Reference Library.
|
|
|
Andrew Nolan, Daniel Serrano, Aura Hernandez-Sabate, Daniel Ponsa, & Antonio Lopez. (2013). Obstacle mapping module for quadrotors on outdoor Search and Rescue operations. In International Micro Air Vehicle Conference and Flight Competition.
Abstract: Obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs), due to their limited payload capacity for advanced sensors. Unlike larger vehicles, MAVs can only carry lightweight sensors, for instance a camera, which is the main assumption in this work. We explore passive monocular depth estimation and propose a novel method, Position Aided Depth Estimation (PADE). We analyse the performance of PADE and compare it against the extensively used Time To Collision (TTC). We evaluate the accuracy, robustness to noise and speed of three Optical Flow (OF) techniques, combined with both depth estimation methods. Our results show that PADE is more accurate than TTC at depths between 0 and 12 meters and is less sensitive to noise. Our findings highlight the potential of PADE for safe autonomous MAV navigation in unknown and unstructured environments.
Keywords: UAV
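The Time To Collision baseline referenced in the abstract above admits a compact monocular formulation: under pure forward translation, the time to collision at an image point equals its distance from the focus of expansion (FOE) divided by the radial component of its optical flow. A minimal sketch of that standard computation (not the paper's PADE method; the FOE is assumed known here):

```python
import numpy as np

def time_to_collision(points, flows, foe):
    """Per-point TTC under pure forward translation:
    tau = (distance from the FOE) / (radial flow magnitude)."""
    r = points - foe                               # vectors from the FOE
    dist = np.maximum(np.linalg.norm(r, axis=1), 1e-9)
    radial = np.sum(flows * r, axis=1) / dist      # flow projected onto r
    return dist / np.maximum(radial, 1e-9)         # seconds, in frame units
```

A small TTC means the point is expanding quickly in the image and the corresponding obstacle is close, which is why TTC serves as a passive depth proxy for avoidance.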
|
|
|
Andreas Møgelmose, Chris Bahnsen, Thomas B. Moeslund, Albert Clapes, & Sergio Escalera. (2013). Tri-modal Person Re-identification with RGB, Depth and Thermal Features. In 9th IEEE Workshop on Perception beyond the visible Spectrum, Computer Vision and Pattern Recognition (pp. 301–307).
Abstract: Person re-identification is about recognizing people who have passed by a sensor earlier. Previous work is mainly based on RGB data; in this work, for the first time, we present a system that combines RGB, depth and thermal data for re-identification purposes. First, from each of the three modalities we obtain particular features: from RGB data we model color information from different regions of the body, from depth data we compute different soft body biometrics, and from thermal data we extract local structural information. Then, the three information types are combined in a joint classifier. The tri-modal system is evaluated on a new RGB-D-T dataset, showing successful results in re-identification scenarios.
|
|
|
Andreas Fischer, Volkmar Frinken, Horst Bunke, & Ching Y. Suen. (2013). Improving HMM-Based Keyword Spotting with Character Language Models. In 12th International Conference on Document Analysis and Recognition (pp. 506–510).
Abstract: Facing high error rates and slow recognition speed for full text transcription of unconstrained handwriting images, keyword spotting is a promising alternative to locate specific search terms within scanned document images. We have previously proposed a learning-based method for keyword spotting using character hidden Markov models that showed a high performance when compared with traditional template image matching. In the lexicon-free approach pursued, only the text appearance was taken into account for recognition. In this paper, we integrate character n-gram language models into the spotting system in order to provide an additional language context. On the modern IAM database as well as the historical George Washington database, we demonstrate that character language models significantly improve the spotting performance.
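The language-context idea in the abstract above can be illustrated with a smoothed character bigram model whose log-probability is interpolated with an appearance score. This is a generic sketch, not the paper's HMM system; the corpus, smoothing constant and interpolation weight are assumed:

```python
import math
from collections import defaultdict

def train_bigram_lm(corpus, alpha=1.0):
    """Character bigram LM with add-alpha smoothing; returns a scorer."""
    counts = defaultdict(lambda: defaultdict(float))
    vocab = set()
    for word in corpus:
        chars = ["<s>"] + list(word) + ["</s>"]
        vocab.update(chars)
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1

    def logprob(word):
        chars = ["<s>"] + list(word) + ["</s>"]
        lp = 0.0
        for a, b in zip(chars, chars[1:]):
            total = sum(counts[a].values())
            lp += math.log((counts[a][b] + alpha) / (total + alpha * len(vocab)))
        return lp

    return logprob

def spotting_score(appearance_loglik, word, lm_logprob, lam=0.5):
    """Log-linear interpolation of an appearance score with the character LM."""
    return (1 - lam) * appearance_loglik + lam * lm_logprob(word)
```

Candidate words whose character sequences are implausible under the LM are penalized, which is the mechanism by which language context reduces false spotting hits.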
|
|
|
Andreas Fischer, Ching Y. Suen, Volkmar Frinken, Kaspar Riesen, & Horst Bunke. (2013). A Fast Matching Algorithm for Graph-Based Handwriting Recognition. In 9th IAPR – TC15 Workshop on Graph-based Representation in Pattern Recognition (Vol. 7877, pp. 194–203). LNCS. Springer Berlin Heidelberg.
Abstract: The recognition of unconstrained handwriting images is usually based on vectorial representation and statistical classification. Despite their high representational power, graphs are rarely used in this field due to a lack of efficient graph-based recognition methods. Recently, graph similarity features have been proposed to bridge the gap between structural representation and statistical classification by means of vector space embedding. This approach has shown a high performance in terms of accuracy but had shortcomings in terms of computational speed. The time complexity of the Hungarian algorithm that is used to approximate the edit distance between two handwriting graphs is demanding for a real-world scenario. In this paper, we propose a faster graph matching algorithm which is derived from the Hausdorff distance. On the historical Parzival database it is demonstrated that the proposed method achieves a speedup factor of 12.9 without significant loss in recognition accuracy.
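The speedup described in the abstract above comes from replacing the optimal (cubic-time) assignment with independent nearest-neighbor matching, in the spirit of the Hausdorff distance. A minimal sketch of that idea on node attribute sets only (the actual method also handles edges; costs and attributes here are assumed):

```python
import numpy as np

def hausdorff_edit_distance(X, Y, node_del_cost=1.0):
    """Hausdorff-style approximation of graph edit distance on node
    attribute sets: each node is matched to its cheapest counterpart
    independently (O(n*m)), instead of an optimal assignment (O(n^3))."""
    if len(X) == 0 or len(Y) == 0:
        return (len(X) + len(Y)) * node_del_cost
    # Pairwise substitution costs between all nodes of the two graphs.
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    # A substitution never costs more than deleting one node and
    # inserting the other.
    D = np.minimum(D, 2 * node_del_cost)
    return 0.5 * (D.min(axis=1).sum() + D.min(axis=0).sum())
```

Because every node picks its own best match, the result lower-bounds the true edit distance; the paper's finding is that this looser matching barely affects recognition accuracy while being much faster.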
|
|
|
Anastasios Doulamis, Nikolaos Doulamis, Marco Bertini, Jordi Gonzalez, & Thomas B. Moeslund. (2013). Analysis and Retrieval of Tracked Events and Motion in Imagery Streams.
|
|
|
Alvaro Cepero, Albert Clapes, & Sergio Escalera. (2013). Quantitative analysis of non-verbal communication for competence analysis. In 16th Catalan Conference on Artificial Intelligence (Vol. 256, pp. 105–114).
|
|
|
Alicia Fornes, Xavier Otazu, & Josep Llados. (2013). Show through cancellation and image enhancement by multiresolution contrast processing. In 12th International Conference on Document Analysis and Recognition (pp. 200–204).
Abstract: Historical documents suffer from different types of degradation and noise, such as background variation, uneven illumination or dark spots. In double-sided documents, another common problem is that the back side of the document interferes with the front side because of the transparency of the paper or ink bleeding; this effect is called the show-through phenomenon. Many methods have been developed to solve these problems; in the case of show-through, most of them scan and match both the front and back sides of the document. In contrast, our approach uses only one side of the scanned document. We hypothesize that show-through components have low contrast, while foreground components have high contrast. A Multiresolution Contrast (MC) decomposition is presented in order to estimate the contrast of features at different spatial scales. We cancel the show-through phenomenon by thresholding the low-contrast components. This decomposition can also enhance the image by removing shadowed areas through the weighting of spatial scales. Results show that the enhanced images improve the readability of the documents, allowing scholars both to recover unreadable words and to resolve ambiguities.
|
|
|
Alex Pardo, Albert Clapes, Sergio Escalera, & Oriol Pujol. (2013). Actions in Context: System for people with Dementia. In 2nd International Workshop on Citizen Sensor Networks (Citisen2013) at the European Conference on Complex Systems (pp. 3–14). Springer International Publishing.
Abstract: In the next forty years, the number of people living with dementia is expected to triple. In the last stages of the disease, patients become dependent, which hinders their autonomy and has a huge social impact in time, money and effort. Given this scenario, we propose a ubiquitous system capable of recognizing specific daily actions. The system fuses and synchronizes data obtained from two complementary modalities: ambient and egocentric. The ambient approach consists of a fixed RGB-Depth camera for user and object recognition and user-object interaction, whereas the egocentric point of view is given by a personal area network (PAN) formed by a few wearable sensors and a smartphone, used for gesture recognition. The system processes multi-modal data in real time, performing parallel task recognition and modality synchronization, and shows high performance in recognizing subjects, objects and interactions, demonstrating its reliability for application in real-world scenarios.
Keywords: Multi-modal data fusion; Computer vision; Wearable sensors; Gesture recognition; Dementia
|
|
|
Albert Gordo, Marçal Rusiñol, Dimosthenis Karatzas, & Andrew Bagdanov. (2013). Document Classification and Page Stream Segmentation for Digital Mailroom Applications. In 12th International Conference on Document Analysis and Recognition (pp. 621–625).
Abstract: In this paper we present a method for the segmentation of continuous page streams into multipage documents and the simultaneous classification of the resulting documents. We first present an approach to combine the multiple pages of a document into a single feature vector that represents the whole document. Despite its simplicity and low computational cost, the proposed representation yields results comparable to more complex methods in multipage document classification tasks. We then exploit this representation in the context of page stream segmentation. The most plausible segmentation of a page stream into a sequence of multipage documents is obtained by optimizing a statistical model that represents the probability of each segmented multipage document belonging to a particular class. Experimental results are reported on a large sample of real administrative multipage documents.
|
|
|
Albert Gordo, Florent Perronnin, & Ernest Valveny. (2013). Large-scale document image retrieval and classification with runlength histograms and binary embeddings. PR - Pattern Recognition, 46(7), 1898–1905.
Abstract: We present a new document image descriptor based on multi-scale runlength histograms. This descriptor does not rely on layout analysis and can be computed efficiently. We show how this descriptor can achieve state-of-the-art results on two very different public datasets in classification and retrieval tasks. Moreover, we show how we can compress and binarize these descriptors to make them suitable for large-scale applications. We can achieve state-of-the-art results in classification using binary descriptors of as few as 16 to 64 bits.
Keywords: visual document descriptor; compression; large-scale; retrieval; classification
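The core runlength feature in the abstract above can be sketched at a single scale: count horizontal runs of black and white pixels, quantize run lengths logarithmically, and normalize. This is an illustrative single-scale, single-direction sketch, not the paper's full multi-scale descriptor; bin counts and quantization are assumed:

```python
import numpy as np

def runlength_histogram(img, n_bins=8):
    """Normalized histogram of horizontal run lengths in a binary image
    (0 = white, 1 = black), with lengths quantized on a log2 scale."""
    hist = np.zeros(2 * n_bins)
    for row in img:
        # Run boundaries: positions where the pixel value changes.
        boundaries = np.flatnonzero(np.diff(row)) + 1
        starts = np.concatenate(([0], boundaries))
        ends = np.concatenate((boundaries, [len(row)]))
        for s, e in zip(starts, ends):
            b = min(int(np.log2(e - s)), n_bins - 1)
            hist[row[s] * n_bins + b] += 1   # separate bins per pixel value
    return hist / max(hist.sum(), 1)
```

Because the histogram ignores where runs occur, it needs no layout analysis, and the fixed-length vector is straightforward to compress or binarize for large-scale retrieval, as the abstract describes.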
|
|