Marçal Rusiñol, & Josep Llados. (2012). The Role of the Users in Handwritten Word Spotting Applications: Query Fusion and Relevance Feedback. In 13th International Conference on Frontiers in Handwriting Recognition (pp. 55–60).
Abstract: In this paper we present the importance of including the user in the loop in a handwritten word spotting framework. Several off-the-shelf query fusion and relevance feedback strategies have been tested in the handwritten word spotting context. The increase in terms of precision when the user is included in the loop is assessed using two datasets of historical handwritten documents and a baseline word spotting approach based on a bag-of-visual-words model.
|
Volkmar Frinken, Markus Baumgartner, Andreas Fischer, & Horst Bunke. (2012). Semi-Supervised Learning for Cursive Handwriting Recognition using Keyword Spotting. In 13th International Conference on Frontiers in Handwriting Recognition (pp. 49–54).
Abstract: State-of-the-art handwriting recognition systems are learning-based systems that require large sets of training data. The creation of training data, and consequently of a well-performing recognition system, therefore requires a substantial amount of human work. This can be reduced with semi-supervised learning, which uses unlabeled text lines for training as well. Current approaches estimate the correct transcription of the unlabeled data via handwriting recognition, which is not only extremely demanding in terms of computational cost but also requires a good model of the target language. In this paper, we propose a different approach that makes use of keyword spotting, which is significantly faster and does not need any language model. In a set of experiments we demonstrate its superiority over existing approaches.
|
Emanuel Indermühle, Volkmar Frinken, & Horst Bunke. (2012). Mode Detection in Online Handwritten Documents using BLSTM Neural Networks. In 13th International Conference on Frontiers in Handwriting Recognition (pp. 302–307).
Abstract: Mode detection in online handwritten documents refers to the process of distinguishing different types of contents, such as text, formulas, diagrams, or tables, one from another. In this paper a new approach to mode detection is proposed that uses bidirectional long short-term memory (BLSTM) neural networks. The BLSTM neural network is a novel type of recurrent neural network that has been successfully applied in speech and handwriting recognition. In this paper we show that it has the potential to significantly outperform traditional methods for mode detection, which are usually based on stroke classification. As a further advantage over previous approaches, the proposed system is trainable and does not rely on user-defined heuristics. Moreover, it can be easily adapted to new or additional types of modes by simply providing the system with new training data.
|
Ignasi Rius, Dani Rowe, Jordi Gonzalez, & Xavier Roca. (2005). 3D Action Modeling and Reconstruction for 2D Human Body Tracking.
|
Dani Rowe, Ignasi Rius, Jordi Gonzalez, & Juan J. Villanueva. (2005). Improving Tracking by Handling Occlusions.
|
Quan-sen Sun, Zhong Jin, Pheng-ann Heng, & De-shen Xia. (2005). A novel feature fusion method based on partial least squares regression. In Pattern Recognition and Data Mining (Lecture Notes in Computer Science, Vol. 3686, pp. 268–277).
|
Lluis Pere de las Heras, Joan Mas, Gemma Sanchez, & Ernest Valveny. (2011). Wall Patch-Based Segmentation in Architectural Floorplans. In 11th International Conference on Document Analysis and Recognition (pp. 1270–1274).
Abstract: Segmentation of architectural floor plans is a challenging task, mainly because of the large variability in notation between different plans. In general, traditional techniques, usually based on analyzing and grouping structural primitives obtained by vectorization, are only able to handle a reduced range of similar notations. In this paper we propose an alternative patch-based segmentation approach working at the pixel level, without the need for vectorization. The image is divided into a set of patches and a set of features is extracted for every patch. Then, each patch is assigned to a visual word of a previously learned vocabulary and given a probability of belonging to each class of objects. Finally, a post-process assigns the final label for every pixel. This approach has been applied to the detection of walls on two datasets of architectural floor plans with different notations, achieving high accuracy rates.
|
Fadi Dornaika, & Angel Sappa. (2005). Appearance-based 3D Face Tracker: An Evaluation Study.
|
Fadi Dornaika, & Franck Davoine. (2005). Simultaneous Facial Action Tracking and Expression Recognition using a Particle Filter.
|
Zhong Jin, Jing-Yu Yang, & Zhen Lou. (2005). A luminance-conditional distribution model of skin color information.
|
Sergio Escalera, Petia Radeva, Jordi Vitria, Xavier Baro, & Bogdan Raducanu. (2010). Modelling and Analyzing Multimodal Dyadic Interactions Using Social Networks. In 12th International Conference on Multimodal Interfaces and 7th Workshop on Machine Learning for Multimodal Interaction.
Abstract: Social network analysis has become a common technique for modelling and quantifying the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. First, speech detection is performed through an audio/visual fusion scheme based on stacked sequential learning. In the audio domain, speech is detected through clustering of audio features; clusters are modelled by means of a one-state Hidden Markov Model containing a diagonal-covariance Gaussian Mixture Model. In the visual domain, speech detection is performed through differential-based feature extraction from the segmented mouth region and a dynamic programming matching procedure. Second, in order to model the dyadic interactions, we employ the Influence Model, whose states encode the previously integrated audio/visual data. Third, the social network is extracted based on the estimated influences. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog. The results are reported both in terms of accuracy of the audio/visual data fusion and centrality measures used to characterize the social network.
Keywords: Social interaction; Multimodal fusion; Influence model; Social network analysis
|
Aura Hernandez-Sabate, David Rotger, & Debora Gil. (2008). Image-based ECG sampling of IVUS sequences. In Proc. IEEE Ultrasonics Symp. IUS 2008 (pp. 1330–1333).
Abstract: Longitudinal motion artifacts in IntraVascular UltraSound (IVUS) sequences hinder proper 3D reconstruction and vessel measurements. Most current techniques rely on the ECG signal to obtain a gated pullback free of the longitudinal artifact, either by using specific hardware or by processing the ECG signal itself. The potential of IVUS image processing for phase retrieval still remains little explored. In this paper, we present a fast forward image-based algorithm to approach ECG sampling. Inspired by the fact that maximum and minimum lumen areas are related to end-systole and end-diastole, our cardiac phase retrieval is based on the analysis of tissue density of mass along the sequence. The comparison between automatic and manual phase retrieval (0.07 ± 0.07 mm of error) encourages a deeper validation against ECG signals.
Keywords: Longitudinal Motion; Image-based ECG-gating; Fourier analysis
|
Jose Manuel Alvarez, & Antonio Lopez. (2008). Novel Index for Objective Evaluation of Road Detection Algorithms. In 11th International IEEE Conference on Intelligent Transportation Systems (pp. 815–820).
|
Marçal Rusiñol, David Aldavert, Ricardo Toledo, & Josep Llados. (2011). Browsing Heterogeneous Document Collections by a Segmentation-Free Word Spotting Method. In 11th International Conference on Document Analysis and Recognition (pp. 63–67).
Abstract: In this paper, we present a segmentation-free word spotting method that is able to deal with heterogeneous document image collections. We propose a patch-based framework where patches are represented by a bag-of-visual-words model powered by SIFT descriptors. A later refinement of the feature vectors is performed by applying the latent semantic indexing technique. The proposed method performs well on both handwritten and typewritten historical document images. We have also tested our method on documents written in non-Latin scripts.
|
Volkmar Frinken, Andreas Fischer, Horst Bunke, & Alicia Fornes. (2011). Co-training for Handwritten Word Recognition. In 11th International Conference on Document Analysis and Recognition (pp. 314–318).
Abstract: To cope with the tremendous variations of writing styles encountered between different individuals, unconstrained automatic handwriting recognition systems need to be trained on large sets of labeled data. Traditionally, the training data has to be labeled manually, which is a laborious and costly process. Semi-supervised learning techniques offer methods to utilize unlabeled data, which can be obtained cheaply in large amounts, in order to reduce the need for labeled data. In this paper, we propose the use of Co-Training for improving the recognition accuracy of two weakly trained handwriting recognition systems. The first one is based on Recurrent Neural Networks while the second one is based on Hidden Markov Models. On the IAM off-line handwriting database we demonstrate that a significant increase in recognition accuracy can be achieved with Co-Training for single word recognition.
|