|
Christophe Rigaud, Dimosthenis Karatzas, Joost Van de Weijer, Jean-Christophe Burie, & Jean-Marc Ogier. (2013). Automatic text localisation in scanned comic books. In Proceedings of the International Conference on Computer Vision Theory and Applications (pp. 814–819).
Abstract: Comic books constitute an important cultural heritage asset in many countries. Digitization combined with subsequent document understanding enables direct content-based search, as opposed to metadata-only search (e.g. album title or author name). Few studies have been done in this direction. In this work we detail a novel approach for automatic text localization in scanned comic book pages, an essential step towards fully automatic comic book understanding. We focus on speech text, as it is semantically important and represents the majority of the text present in comics. The approach is compared with existing text localization methods from the literature, and results are presented.
Keywords: Text localization; comics; text/graphic separation; complex background; unstructured document
|
|
|
Zeynep Yucel, Albert Ali Salah, Çetin Meriçli, Tekin Meriçli, Roberto Valenti, & Theo Gevers. (2013). Joint Attention by Gaze Interpolation and Saliency. IEEE Transactions on Cybernetics, 829–842.
Abstract: Joint attention, the ability to coordinate a common point of reference with the communicating party, emerges as a key factor in various interaction scenarios. This paper presents an image-based method for establishing joint attention between an experimenter and a robot. The precise analysis of the experimenter's eye region requires stability and high-resolution image acquisition, which is not always available. We investigate regression-based interpolation of the gaze direction from the head pose of the experimenter, which is easier to track. Gaussian process regression and neural networks are contrasted as interpolators of the gaze direction. We then combine gaze interpolation with image-based saliency to improve the target point estimates, testing three different saliency schemes. We demonstrate the proposed method on a human-robot interaction scenario. Cross-subject evaluations, as well as experiments under adverse conditions (such as dimmed or artificial illumination, or motion blur), show that our method generalizes well and achieves rapid gaze estimation for establishing joint attention.
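The head-pose-to-gaze interpolation described in this abstract can be sketched with a minimal Gaussian process regressor. This is an illustrative sketch only (numpy, an RBF kernel, and toy calibration pairs); the kernel choice, its parameters, and the data are assumptions, not values from the paper:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=10.0, variance=1.0):
    """Squared-exponential kernel between 1-D angle arrays (degrees)."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(train_x, train_y, test_x, noise=1e-3):
    """Posterior mean of a GP regressor: gaze angle from head-pose angle."""
    K = rbf_kernel(train_x, train_x) + noise * np.eye(len(train_x))
    K_star = rbf_kernel(test_x, train_x)
    return K_star @ np.linalg.solve(K, train_y)

# Toy calibration data: head-pose angle (deg) -> observed gaze direction (deg).
head_pose = np.array([-30.0, -15.0, 0.0, 15.0, 30.0])
gaze = np.array([-40.0, -20.0, 0.0, 20.0, 40.0])

pred = gp_predict(head_pose, gaze, np.array([7.5]))
```

With near-zero noise the posterior mean interpolates the calibration pairs, so predictions at intermediate head poses fall between the neighbouring gaze samples.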
|
|
|
Fadi Dornaika, & Bogdan Raducanu. (2013). Out-of-Sample Embedding for Manifold Learning Applied to Face Recognition. In IEEE International Workshop on Analysis and Modeling of Faces and Gestures (pp. 862–868).
Abstract: Manifold learning techniques are affected by two critical aspects: (i) the design of the adjacency graphs, and (ii) the embedding of new test data---the out-of-sample problem. For the first aspect, the proposed schemes have been heuristically driven. For the second aspect, the difficulty resides in finding an accurate mapping that transfers unseen data samples into an existing manifold. Past works addressing these two aspects were heavily parametric, in the sense that optimal performance is only reached for a suitable parameter choice that should be known in advance. In this paper, we demonstrate that sparse coding theory not only serves for automatic graph reconstruction, as shown in recent works, but also represents an accurate alternative for out-of-sample embedding. Taking Laplacian Eigenmaps as a case study, we apply our method to the face recognition problem. To evaluate the effectiveness of the proposed out-of-sample embedding, experiments are conducted using the k-nearest neighbor (KNN) and Kernel Support Vector Machine (KSVM) classifiers on four public face databases. The experimental results show that the proposed model achieves high categorization effectiveness as well as high consistency with non-linear embeddings/manifolds obtained in batch mode.
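The out-of-sample idea — code a new sample over the training set and transfer the reconstruction weights to the embedding coordinates — can be sketched as follows. A ridge (least-squares) coder stands in here for the paper's sparse coding step, and all names and shapes are illustrative:

```python
import numpy as np

def out_of_sample_embed(X_train, Y_train, x_new, lam=0.1):
    """Embed an unseen sample by coding it over the training set.

    X_train: (n, d) training samples; Y_train: (n, k) their manifold
    coordinates (e.g. from Laplacian Eigenmaps). The new point inherits
    its embedding through the reconstruction weights; a ridge penalty
    stands in for the sparsity prior of the sparse-coding formulation.
    """
    A = X_train @ X_train.T + lam * np.eye(len(X_train))
    w = np.linalg.solve(A, X_train @ x_new)   # reconstruction weights
    return Y_train.T @ w                       # transfer weights to embedding
```

If the new sample coincides with a training sample and the regularizer is small, the recovered embedding is (up to shrinkage) that sample's own manifold coordinate, which is the consistency property the out-of-sample mapping needs.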
|
|
|
Jon Almazan, Albert Gordo, Alicia Fornes, & Ernest Valveny. (2013). Handwritten Word Spotting with Corrected Attributes. In 14th IEEE International Conference on Computer Vision (pp. 1017–1024).
Abstract: We propose an approach to multi-writer word spotting, where the goal is to find a query word in a dataset comprised of document images. We propose an attributes-based approach that leads to a low-dimensional, fixed-length representation of the word images that is fast to compute and, especially, fast to compare. This approach naturally leads to a unified representation of word images and strings, which seamlessly allows one to perform both query-by-example, where the query is an image, and query-by-string, where the query is a string. We also propose a calibration scheme, based on Canonical Correlation Analysis, that corrects the attribute scores and greatly improves the results on a challenging dataset. We test our approach on two public datasets, showing state-of-the-art results.
|
|
|
Jon Almazan, Alicia Fornes, & Ernest Valveny. (2013). A Deformable HOG-based Shape Descriptor. In 12th International Conference on Document Analysis and Recognition (pp. 1022–1026).
Abstract: In this paper we deal with the problem of recognizing handwritten shapes. We present a new deformable feature extraction method that adapts to the shape to be described, dealing in this way with the variability introduced by the handwriting domain. It consists of selecting the regions that best define the shape to be described, followed by the computation of histogram-of-oriented-gradients features over these regions. Our results significantly outperform other descriptors in the literature for the tasks of hand-drawn shape recognition and handwritten word retrieval.
|
|
|
Anjan Dutta, Josep Llados, Horst Bunke, & Umapada Pal. (2013). Near Convex Region Adjacency Graph and Approximate Neighborhood String Matching for Symbol Spotting in Graphical Documents. In 12th International Conference on Document Analysis and Recognition (pp. 1078–1082).
Abstract: This paper deals with a subgraph matching problem in Region Adjacency Graphs (RAGs) applied to symbol spotting in graphical documents. A RAG is a very important, efficient and natural way of representing graphical information as a graph, but it is limited to cases where the information is well defined, with perfectly delineated regions. What if the information we are interested in is not confined within well-defined regions? This paper addresses that problem by defining near convex groupings of oriented line segments, which result in near convex regions. Pure convexity imposes hard constraints and cannot handle all cases efficiently. We therefore define a new type of region convexity, which allows convex regions to have concavity to some extent. We call such regions Near Convex Regions (NCRs). These NCRs are then used to create the Near Convex Region Adjacency Graph (NCRAG), and with this representation we formulate symbol spotting in graphical documents as a subgraph matching problem. For subgraph matching we use the Approximate Edit Distance Algorithm (AEDA) on the neighborhood string, which starts after finding a key node in the input or target graph and iteratively identifies similar nodes of the query graph in the neighborhood of the key node. The experiments are performed on artificial, real and distorted datasets.
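The neighborhood-string comparison at the core of the spotting step relies on string edit distance. A plain Levenshtein distance, shown here as a simple stand-in for the paper's approximate variant (AEDA), illustrates the mechanics:

```python
def edit_distance(s, t):
    """Levenshtein distance between two node-label strings: the minimum
    number of insertions, deletions and substitutions turning s into t."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                      # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[m][n]
```

In the spotting setting, a low distance between the query's neighborhood string and the one grown around a key node signals a candidate symbol occurrence.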
|
|
|
Marçal Rusiñol, T. Benkhelfallah, & V. Poulain d'Andecy. (2013). Field Extraction from Administrative Documents by Incremental Structural Templates. In 12th International Conference on Document Analysis and Recognition (pp. 1100–1104).
Abstract: In this paper we present an incremental framework aimed at extracting field information from administrative document images in the context of a digital mail-room scenario. Given a single training sample in which the user has marked the fields to be extracted from a particular document class, a document model representing structural relationships among words is built. This model is incrementally refined as the system processes more and more documents from the same class. A reformulation of the tf-idf statistic makes it possible to adjust the importance weights of the structural relationships among words. In the experimental section we report results obtained on a large dataset of real invoices.
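The reweighting idea — structural relations seen in every document of a class should count less than distinctive ones — follows the usual tf-idf scheme. The sketch below is a generic form of tf-idf with a common smoothing; the paper's actual reformulation over word relationships is not reproduced, and all names are illustrative:

```python
import math
from collections import Counter

def tfidf(doc_terms, corpus):
    """tf-idf weights for the relations observed in one document,
    scored against a corpus of documents from the same class."""
    tf = Counter(doc_terms)
    n_docs = len(corpus)
    weights = {}
    for term, count in tf.items():
        df = sum(1 for doc in corpus if term in doc)      # document frequency
        idf = math.log((1 + n_docs) / (1 + df)) + 1       # smoothed idf
        weights[term] = (count / len(doc_terms)) * idf
    return weights
```

A relation present in every class document gets idf near its floor, while a rarer, class-specific relation is boosted, which is exactly the behaviour an incremental template needs as more documents arrive.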
|
|
|
David Roche, Debora Gil, & Jesus Giraldo. (2013). Mechanistic analysis of the function of agonists and allosteric modulators: Reconciling two-state and operational models. British Journal of Pharmacology, 169(6), 1189–1202.
Abstract: Two-state and operational models of both agonism and allosterism are compared to identify and characterize common pharmacological parameters. To account for the receptor-dependent basal response, constitutive receptor activity is considered in the operational models. By arranging two-state models as the fraction of active receptors and operational models as the fractional response relative to the maximum effect of the system, a one-by-one correspondence between parameters is found. The comparative analysis allows a better understanding of complex allosteric interactions. In particular, the inclusion of constitutive receptor activity in the operational model of allosterism allows the characterization of modulators able to lower the basal response of the system; that is, allosteric modulators with negative intrinsic efficacy. Theoretical simulations and the overall goodness of fit of the models to simulated data suggest that it is feasible to apply the models to experimental data, which constitutes one step forward in receptor theory formalism.
|
|
|
Christophe Rigaud, Dimosthenis Karatzas, Joost Van de Weijer, Jean-Christophe Burie, & Jean-Marc Ogier. (2013). An active contour model for speech balloon detection in comics. In 12th International Conference on Document Analysis and Recognition (pp. 1240–1244).
Abstract: Comic books constitute an important cultural heritage asset in many countries. Digitization combined with subsequent comic book understanding would enable a variety of new applications, including content-based retrieval and content retargeting. Document understanding in this domain is challenging as comics are semi-structured documents, combining semantically important graphical and textual parts. Few studies have been done in this direction. In this work we detail a novel approach for closed and non-closed speech balloon localization in scanned comic book pages, an essential step towards a fully automatic comic book understanding. The approach is compared with existing methods for closed balloon localization found in the literature and results are presented.
|
|
|
Lluis Pere de las Heras, David Fernandez, Ernest Valveny, Josep Llados, & Gemma Sanchez. (2013). Unsupervised wall detector in architectural floor plans. In 12th International Conference on Document Analysis and Recognition (pp. 1245–1249).
Abstract: Wall detection in floor plans is a crucial step in a complete floor plan recognition system. Walls define the main structure of buildings and convey essential information for the detection of other structural elements. Nevertheless, wall segmentation is a difficult task, mainly because of the lack of a standard graphical notation. Existing approaches are restricted to a small group of similar notations, or require a pre-annotated corpus of input images to learn each new notation. In this paper we present an automatic wall segmentation system able to handle completely different notations without the need for any annotated dataset. It only takes advantage of the general knowledge that walls are repetitive elements, naturally distributed within the plan and commonly modeled by straight parallel lines. The method has been tested on four datasets of real floor plans with different notations and compared with the state of the art. The results show its suitability for different graphical notations, achieving higher recall than the other methods while keeping a high average precision.
|
|
|
Sergio Vera, Debora Gil, Agnes Borras, Marius George Linguraru, & Miguel Angel Gonzalez Ballester. (2013). Geometric Steerable Medial Maps. Machine Vision and Applications, 24(6), 1255–1266.
Abstract: In order to provide more intuitive and easily interpretable representations of complex shapes/organs, medial manifolds should reach a compromise between simplicity in geometry and capability for restoring the anatomy/shape of the organ/volume. Existing morphological methods show excellent results when applied to 2D objects, but their quality drops across dimensions.
This paper contributes to the computation of medial manifolds in two aspects. First, we provide a standard scheme for the computation of medial manifolds that avoids degenerated medial axis segments. Second, we introduce a continuous operator for accurate and efficient computation of medial structures of arbitrary dimension. We evaluate quantitatively the performance of our method with respect to existing approaches, by applying them to synthetic shapes of known medial geometry. We also show its higher performance for medical imaging applications in terms of simplicity of medial structures and capability for reconstructing the anatomical volume.
Keywords: Medial representations; medial manifold comparison; surface reconstruction
|
|
|
L. Rothacker, Marçal Rusiñol, & G.A. Fink. (2013). Bag-of-Features HMMs for segmentation-free word spotting in handwritten documents. In 12th International Conference on Document Analysis and Recognition (pp. 1305–1309).
Abstract: Recent HMM-based approaches to handwritten word spotting require large amounts of learning samples and mostly rely on a prior segmentation of the document. We propose to use Bag-of-Features HMMs, estimated from a single sample, in a patch-based segmentation-free framework. Bag-of-Features HMMs use statistics of local image feature representatives, and can therefore be considered a variant of discrete HMMs that models the observation of a number of features at a point in time. This discrete nature enables us to estimate a query model from only a single example of the query provided by the user, making our method very flexible with respect to the availability of training data. Furthermore, we are able to outperform state-of-the-art results on the George Washington dataset.
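The discrete observations such a model works on come from quantizing local descriptors against a visual-word codebook. A minimal sketch of that quantization step (numpy only; the codebook, descriptor shapes and names are illustrative, not the paper's features):

```python
import numpy as np

def quantize(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word; the
    resulting index sequence is what a discrete HMM would observe."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def bof_histogram(words, n_words):
    """Normalised bag-of-features histogram for one image patch."""
    h = np.bincount(words, minlength=n_words).astype(float)
    return h / max(h.sum(), 1.0)
```

Per-patch histograms like these are the "statistics of local image feature representatives" the abstract refers to, and they can be built from a single query example because no per-class training of the quantizer is needed.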
|
|
|
Miguel Reyes, Albert Clapes, Jose Ramirez, Juan R Revilla, & Sergio Escalera. (2013). Automatic Digital Biometry Analysis based on Depth Maps. Computers in Industry, 64(9), 1316–1325.
Abstract: The World Health Organization estimates that 80% of the world population is affected by back-related disorders at some point during their lives. Current practices for analyzing musculo-skeletal disorders (MSDs) are expensive, subjective, and invasive. In this work, we propose a tool for static body posture analysis and dynamic range-of-movement estimation of the skeleton joints based on 3D anthropometric information from multi-modal data. Given a set of keypoints, RGB and depth data are aligned, the depth surface is reconstructed, keypoints are matched, and accurate measurements of posture and spinal curvature are computed. Given a set of joints, range-of-movement measurements are also obtained. Moreover, gesture recognition based on joint movements is performed to check the correctness of physical exercises. The system provides precise and reliable measurements, making it useful for posture re-education to prevent MSDs, as well as for tracking the posture evolution of patients in rehabilitation treatments.
Keywords: Multi-modal data fusion; Depth maps; Posture analysis; Anthropometric data; Musculo-skeletal disorders; Gesture analysis
|
|
|
Albert Gordo, Alicia Fornes, & Ernest Valveny. (2013). Writer identification in handwritten musical scores with bags of notes. Pattern Recognition, 46(5), 1337–1345.
Abstract: Writer Identification is an important task for the automatic processing of documents. However, the identification of the writer in graphical documents is still challenging. In this work, we adapt the Bag of Visual Words framework to the task of writer identification in handwritten musical scores. A vanilla implementation of this method already performs comparably to the state-of-the-art. Furthermore, we analyze the effect of two improvements of the representation: a Bhattacharyya embedding, which improves the results at virtually no extra cost, and a Fisher Vector representation that very significantly improves the results at the cost of a more complex and costly representation. Experimental evaluation shows results more than 20 points above the state-of-the-art in a new, challenging dataset.
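The Bhattacharyya embedding mentioned in the abstract is essentially a one-liner: take the element-wise square root of the L1-normalised histogram, after which an ordinary dot product between embedded vectors computes the Bhattacharyya coefficient of the original histograms. A sketch with toy histogram values:

```python
import numpy as np

def bhattacharyya_embed(hist):
    """Element-wise square root of an L1-normalised histogram, so that a
    plain dot product between embedded vectors equals the Bhattacharyya
    coefficient sum_i sqrt(p_i * q_i) of the original histograms."""
    h = np.asarray(hist, dtype=float)
    h = h / h.sum()
    return np.sqrt(h)

a = bhattacharyya_embed([4, 1, 0])   # toy bag-of-notes histogram
b = bhattacharyya_embed([1, 1, 2])
coef = float(a @ b)                   # Bhattacharyya coefficient in [0, 1]
```

This is why the embedding "improves the results at virtually no extra cost": the representation keeps its dimensionality and linear comparison, only the values change.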
|
|
|
Bhaskar Chakraborty, Jordi Gonzalez, & Xavier Roca. (2013). Large scale continuous visual event recognition using max-margin Hough transformation framework. Computer Vision and Image Understanding, 117(10), 1356–1368.
Abstract: In this paper we propose a novel method for continuous visual event recognition (CVER) on a large-scale video dataset using a max-margin Hough transformation framework. Due to the high scalability, diverse real environmental conditions and wide scene variability, direct application of action recognition/detection methods, such as spatio-temporal interest point (STIP) local-feature-based techniques, to the whole dataset is practically infeasible. To address this problem, we apply a motion region extraction technique, based on motion segmentation and region clustering, to identify candidate "events of interest" as a preprocessing step. On these candidate regions a STIP detector is applied and local motion features are computed. For activity representation we use a generalized Hough transform framework in which each feature point casts a weighted vote for a possible activity class centre. A max-margin framework is applied to learn the feature codebook weights. For activity detection, peaks in the Hough voting space are taken into account and an initial event hypothesis is generated using the spatio-temporal information of the participating STIPs. For event recognition a verification Support Vector Machine is used. An extensive evaluation on a benchmark large-scale video surveillance dataset (VIRAT), as well as on a small-scale benchmark dataset (MSR), shows that the proposed method is applicable to a wide range of continuous visual event recognition applications under extremely challenging conditions.
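The weighted Hough voting step — each local feature votes for a possible event centre with a learned codebook weight, and detections are read off as accumulator peaks — can be sketched in 2D as follows (numpy only; the grid size, offsets and weights below are toy values, not learned ones):

```python
import numpy as np

def hough_vote(points, offsets, weights, grid_shape):
    """Accumulate weighted votes for an event centre: the feature at
    points[i] votes for points[i] + offsets[i] with weight weights[i];
    the peak of the accumulator is the detection hypothesis."""
    acc = np.zeros(grid_shape)
    for (x, y), (dx, dy), w in zip(points, offsets, weights):
        vx, vy = int(x + dx), int(y + dy)
        if 0 <= vx < grid_shape[0] and 0 <= vy < grid_shape[1]:
            acc[vx, vy] += w
    return tuple(np.unravel_index(acc.argmax(), acc.shape))

# Three features whose offsets all point at the same centre cell.
peak = hough_vote([(3, 3), (7, 7), (5, 2)],
                  [(2, 2), (-2, -2), (0, 3)],
                  [1.0, 1.0, 1.0], (10, 10))
```

In the max-margin variant the per-codeword weights are learned so that votes from discriminative features dominate the accumulator, rather than all features counting equally as here.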
|
|