Alicia Fornes, Josep Llados, Gemma Sanchez, Xavier Otazu, & Horst Bunke. (2010). A Combination of Features for Symbol-Independent Writer Identification in Old Music Scores. IJDAR - International Journal on Document Analysis and Recognition, 13(4), 243–259.
Abstract: The aim of writer identification is to determine the writer of a piece of handwriting from a set of known writers. In this paper, we present an architecture for writer identification in old handwritten music scores. Even though a significant number of music compositions contain handwritten text, the aim of our work is to use only the music notation to determine the author. The main contribution is therefore the use of features extracted from graphical alphabets. Our proposal consists of combining the identification results of two different approaches, based on line and textural features. The steps of the ensemble architecture are the following. First, the music sheet is preprocessed to remove the staff lines. Then, music lines and texture images are generated to compute line and textural features. Finally, the classification results are combined to identify the writer. The proposed method has been tested on a database of old music scores from the seventeenth to nineteenth centuries, achieving a recognition rate of about 92% with 20 writers.
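The combination of the two classifiers' results can be sketched as a score-level fusion. This is a minimal illustration only: the weighted sum rule, the min-max normalization and the `w` parameter are assumptions, not the paper's actual combination scheme.

```python
import numpy as np

def combine_scores(scores_line, scores_texture, w=0.5):
    """Late fusion of two writer-identification classifiers by a weighted
    sum rule (illustrative sketch; `w` is a hypothetical mixing weight)."""
    s1 = np.asarray(scores_line, dtype=float)
    s2 = np.asarray(scores_texture, dtype=float)

    def norm(s):
        # Min-max normalize so the two feature sets are comparable.
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    fused = w * norm(s1) + (1 - w) * norm(s2)
    # The predicted writer is the one with the highest fused score.
    return int(np.argmax(fused)), fused
```
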
|
Sergio Escalera, Oriol Pujol, Eric Laciar, Jordi Vitria, Esther Pueyo, & Petia Radeva. (2010). Classification of Coronary Damage in Chronic Chagasic Patients. In V. Sgurev, M. Hadjiski, & J. Kacprzyk (Eds.), Intelligent Systems – From Theory to Practice. Studies in Computational Intelligence (Vol. 299, pp. 461–478). Springer-Verlag.
Note: Post-conference volume of IEEE-IS 2008.
Abstract: Chagas' disease is endemic throughout Latin America, affecting millions of people on the continent. In order to diagnose and treat Chagas' disease, it is important to detect and measure the coronary damage of the patient. In this paper, we analyze and categorize patients into different groups based on the coronary damage produced by the disease. Based on features of the heart cycle extracted using high-resolution ECG, a multi-class scheme of Error-Correcting Output Codes (ECOC) is formulated and successfully applied. The results show that the proposed scheme obtains significant performance improvements compared to previous works and state-of-the-art ECOC designs.
Keywords: Chagas disease; Error-Correcting Output Codes; High resolution ECG; Decoding
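The ECOC framework named above can be sketched in a few lines. This is a generic one-vs-all coding matrix with Hamming decoding, shown only to illustrate the framework; the paper's actual problem-dependent design is not reproduced here.

```python
import numpy as np

def ecoc_decode(code_matrix, binary_outputs):
    """Return the class whose codeword (row) is closest, in Hamming
    distance, to the vector of binary classifier outputs (+1/-1)."""
    dists = np.sum(code_matrix != np.asarray(binary_outputs), axis=1)
    return int(np.argmin(dists))

# One-vs-all code matrix for 3 classes: one dichotomizer per class.
M = np.array([[ 1, -1, -1],
              [-1,  1, -1],
              [-1, -1,  1]])
```

For example, if the three binary classifiers output `[-1, 1, -1]`, decoding selects class 1, whose codeword matches exactly.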
|
Josep Llados, Ernest Valveny, Gemma Sanchez, & Enric Marti. (2002). Symbol recognition: current advances and perspectives. In Dorothea Blostein & Young-Bin Kwon (Eds.), Graphics Recognition Algorithms And Applications (Vol. 2390, pp. 104–128). LNCS. Springer-Verlag.
Abstract: The recognition of symbols in graphic documents is an intensive research activity in the pattern recognition and document analysis community. A key issue in the interpretation of maps, engineering drawings, diagrams, etc. is the recognition of domain-dependent symbols according to a symbol database. In this work we first review the most outstanding symbol recognition methods from two different points of view: application domains and pattern recognition methods. In the second part of the paper, open and unaddressed problems involved in symbol recognition are described, analyzing their current state of the art and discussing future research challenges. Thus, issues such as symbol representation, matching, segmentation, learning, scalability of recognition methods and performance evaluation are addressed in this work. Finally, we discuss the perspectives of symbol recognition concerning new paradigms such as user interfaces in handheld computers or document database and WWW indexing by graphical content.
|
Pierluigi Casale, Oriol Pujol, & Petia Radeva. (2012). Personalization and User Verification in Wearable Systems using Biometric Walking Patterns. PUC - Personal and Ubiquitous Computing, 16(5), 563–580.
Abstract: In this article, a novel technique for user authentication and verification using gait as an unobtrusive biometric pattern is proposed. The method is based on a two-stage pipeline. First, a general activity recognition classifier is personalized for a specific user using a small sample of her/his walking pattern. As a result, the system is much more selective with respect to the new walking pattern. A second stage verifies whether the user is an authorized one or not. This stage is defined as a one-class classification problem. In order to solve this problem, a four-layer architecture is built around the geometric concept of the convex hull. This architecture improves robustness to outliers, models non-convex shapes, and takes into account temporal coherence information. Two different scenarios are proposed for validation with two different wearable systems. First, a custom high-performance wearable system is built and used in a free environment. A second dataset is acquired from an Android-based commercial device in a 'wild' scenario with rough terrain, adversarial conditions, crowded places and obstacles. Results on both systems and datasets are very promising, reducing the verification error rates by an order of magnitude with respect to state-of-the-art technologies.
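The convex-hull acceptance idea at the core of the verification stage can be sketched with a 2-D point-in-convex-polygon test. This is only a stand-in under strong assumptions (2-D features, vertices given in counter-clockwise order); the paper's four-layer architecture additionally handles outliers, non-convex shapes and temporal coherence.

```python
def inside_convex_polygon(vertices, point):
    """Accept `point` if it lies inside the convex polygon whose `vertices`
    are listed in counter-clockwise order — a toy acceptance region for a
    one-class verification test."""
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Cross product of the edge vector and the vector to the point;
        # a negative value means the point falls outside this edge.
        if (x2 - x1) * (point[1] - y1) - (y2 - y1) * (point[0] - x1) < 0:
            return False
    return True
```

A query walking-pattern feature vector inside the enrolled user's hull would be accepted; one outside would be rejected.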
|
Jordi Vitria, Joao Sanchez, Miguel Raposo, & Mario Hernandez (Eds.). (2011). Pattern Recognition and Image Analysis (Vol. 6669). Berlin: Springer-Verlag.
|
Jon Almazan, Ernest Valveny, & Alicia Fornes. (2011). Deforming the Blurred Shape Model for Shape Description and Recognition. In Jordi Vitria, Joao Sanchez, Miguel Raposo, & Mario Hernandez (Eds.), 5th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 6669, pp. 1–8). LNCS. Berlin: Springer-Verlag.
Abstract: This paper presents a new model for the description and recognition of distorted shapes, where the image is represented by a pixel density distribution based on the Blurred Shape Model (BSM) combined with a non-linear image deformation model. This leads to an adaptive structure able to capture elastic deformations in shapes. The method has been evaluated on three different datasets where deformations are present, showing the robustness and good performance of the new model. Moreover, we show that, by incorporating deformation and flexibility, the new model outperforms the original BSM approach when classifying shapes with high variability of appearance.
|
Maria Vanrell, Naila Murray, Robert Benavente, C. Alejandro Parraga, Xavier Otazu, & Ramon Baldrich. (2011). Perception Based Representations for Computational Colour. In Raimondo Schettini, Shoji Tominaga, & Alain Trémeau (Eds.), 3rd International Workshop on Computational Color Imaging (Vol. 6626, pp. 16–30). LNCS. Springer-Verlag.
Abstract: The perceived colour of a stimulus depends on multiple factors stemming either from the context of the stimulus or from idiosyncrasies of the observer. The complexity involved in combining these multiple effects is the main reason for the gap between classical calibrated colour spaces from colour science and the colour representations used in computer vision, where colour is just one more visual cue immersed in a digital image in which surfaces, shadows and illuminants interact seemingly out of control. With the aim of advancing a few steps towards bridging this gap, we present some results on computational representations of colour for computer vision. They have been developed by introducing perceptual considerations derived from the interaction of the colour of a point with its context. We show some techniques to represent the colour of a point influenced by assimilation and contrast effects due to the image surround, and we show some results on how colour saliency can be derived in real images. We outline a model for the automatic assignment of colour names to image points, directly trained on psychophysical data. We show how colour segments can be perceptually grouped in the image by imposing shading coherence in the colour space.
Keywords: colour perception, induction, naming, psychophysical data, saliency, segmentation
|
Patricia Marquez, Debora Gil, & Aura Hernandez-Sabate. (2012). A Complete Confidence Framework for Optical Flow. In Andrea Fusiello, Vittorio Murino, & Rita Cucchiara (Eds.), 12th European Conference on Computer Vision – Workshops and Demonstrations (Vol. 7584, pp. 124–133). LNCS. Florence, Italy, October 7-13, 2012: Springer-Verlag.
Abstract: Medial representations are powerful tools for describing and parameterizing the volumetric shape of anatomical structures. Existing methods show excellent results when applied to 2D objects, but their quality drops across dimensions. This paper contributes to the computation of medial manifolds in two aspects. First, we provide a standard scheme for the computation of medial manifolds that avoids degenerate medial axis segments; second, we introduce an energy-based method which performs independently of the dimension. We evaluate the performance of our method quantitatively with respect to existing approaches, by applying them to synthetic shapes of known medial geometry. Finally, we show results on shape representation of multiple abdominal organs, exploring the use of medial manifolds for the representation of multi-organ relations.
Keywords: Optical flow, confidence measures, sparsification plots, error prediction plots
|
Eloi Puertas, Sergio Escalera, & Oriol Pujol. (2015). Generalized Multi-scale Stacked Sequential Learning for Multi-class Classification. PAA - Pattern Analysis and Applications, 18(2), 247–261.
Abstract: In many classification problems, neighboring data labels have inherent sequential relationships. Sequential learning algorithms benefit from these relationships in order to improve generalization. In this paper, we revise the multi-scale sequential learning approach (MSSL) to apply it to the multi-class case (MMSSL). We introduce the error-correcting output codes (ECOC) framework into the MSSL classifiers and propose a formulation for calculating confidence maps from the margins of the base classifiers. In addition, we propose an MMSSL compression approach which reduces the number of features in the extended data set without a loss in performance. The proposed methods are tested on several databases, showing significant performance improvement compared to classical approaches.
Keywords: Stacked sequential learning; Multi-scale; Error-correcting output codes (ECOC); Contextual classification
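One plausible way to turn a base classifier's margin into a confidence value, as the abstract describes, is a logistic squashing function. This is an illustrative assumption, not the paper's formulation; the `beta` temperature parameter is hypothetical.

```python
import math

def margin_to_confidence(margin, beta=1.0):
    """Squash a base-classifier margin into a [0, 1] confidence value with
    a logistic function (a hedged sketch of building confidence maps from
    margins; `beta` is a hypothetical sharpness parameter)."""
    return 1.0 / (1.0 + math.exp(-beta * margin))
```

A margin of 0 maps to confidence 0.5, while large positive or negative margins saturate towards 1 or 0.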
|
Bogdan Raducanu, & Fadi Dornaika. (2013). Texture-independent recognition of facial expressions in image snapshots and videos. MVA - Machine Vision and Applications, 24(4), 811–820.
Abstract: This paper addresses the static and dynamic recognition of basic facial expressions. It has two main contributions. First, we introduce a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker. We represent the learned facial actions associated with different facial expressions by time series. Second, we compare this dynamic scheme with a static one based on analyzing individual snapshots and show that the former performs better than the latter. We provide evaluations of performance using three subspace learning techniques: linear discriminant analysis, non-parametric discriminant analysis and support vector machines.
|
Carlo Gatta, Simone Balocco, Francesco Ciompi, R. Hemetsberger, Oriol Rodriguez-Leor, & Petia Radeva. (2010). Real-time gating of IVUS sequences based on motion blur analysis: Method and quantitative validation. In 13th international conference on Medical image computing and computer-assisted intervention (Vol. II, pp. 59–67). Springer-Verlag Berlin.
Abstract: Intravascular Ultrasound (IVUS) is an image-guided technique for cardiovascular diagnosis, providing cross-sectional images of vessels. During the acquisition, the catheter is pulled back (pullback) at a constant speed in order to acquire spatially consecutive images of the artery. However, during this procedure, the twisting of the heart produces a swinging fluctuation of the probe position along the vessel axis. In this paper we propose a real-time gating algorithm based on the analysis of motion blur variations during the IVUS sequence. Quantitative tests performed on an in-vitro ground-truth database showed that our method is superior to state-of-the-art algorithms in both computational speed and accuracy.
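A per-frame blur measure is the building block of such a gating scheme. The sketch below uses the variance of a discrete Laplacian as a sharpness proxy; this is a common stand-in and an assumption here, not the blur measure the paper actually proposes.

```python
import numpy as np

def blur_score(frame):
    """Sharpness proxy for one frame: variance of a discrete 4-neighbour
    Laplacian over the interior pixels. Lower values indicate stronger
    motion blur. (Illustrative only; not the paper's measure.)"""
    f = np.asarray(frame, dtype=float)
    lap = (-4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return lap.var()
```

Tracking this score along the pullback would yield a quasi-periodic signal whose extrema could be used to gate frames to the cardiac phase.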
|
F. Guirado, Ana Ripoll, C. Roig, Aura Hernandez-Sabate, & Emilio Luque. (2006). Exploiting Throughput for Pipeline Execution in Streaming Image Processing Applications. In W. E. Nagel, W. V. Walter, & W. Lehner (Eds.), Euro-Par 2006 Parallel Processing (Vol. 4128, pp. 1095–1105). Lecture Notes in Computer Science. Dresden, Germany: Springer-Verlag Berlin Heidelberg.
Abstract: There is a large range of image processing applications that act on an input sequence of image frames that are continuously received. Throughput is a key performance measure to be optimized when executing them. In this paper we propose a new task replication methodology for optimizing throughput for an image processing application in the field of medicine. The results show that by applying the proposed methodology we are able to achieve the desired throughput in all cases, in such a way that the input frames can be processed at any given rate.
Note: 12th International Euro-Par Conference.
|
Aura Hernandez-Sabate, Debora Gil, David Roche, Monica M. S. Matsumoto, & Sergio S. Furuie. (2011). Inferring the Performance of Medical Imaging Algorithms. In Pedro Real, Daniel Diaz-Pernil, Helena Molina-Abril, Ainhoa Berciano, & Walter Kropatsch (Eds.), 14th International Conference on Computer Analysis of Images and Patterns (Vol. 6854, pp. 520–528). LNCS. Berlin: Springer-Verlag Berlin Heidelberg.
Abstract: Evaluation of the performance and limitations of medical imaging algorithms is essential to estimate their impact in social, economic or clinical aspects. However, validation of medical imaging techniques is a challenging task due to the variety of imaging and clinical problems involved, as well as the difficulties of systematically extracting a reliable ground truth. Although specific validation protocols are reported in any medical imaging paper, there are still two major concerns: definition of standardized methodologies transversal to all problems and generalization of conclusions to the whole clinical data set.
We claim that both issues would be fully solved if we had a statistical model relating ground truth and the output of computational imaging techniques. Such a statistical model could conclude to what extent the algorithm behaves like the ground truth from the analysis of a sampling of the validation data set. We present a statistical inference framework reporting the agreement and describing the relationship between two quantities. We show its transversality by applying it to the validation of two different tasks: contour segmentation and landmark correspondence.
Keywords: Validation, Statistical Inference, Medical Imaging Algorithms.
|
Miguel Angel Bautista, Oriol Pujol, Xavier Baro, & Sergio Escalera. (2011). Introducing the Separability Matrix for Error Correcting Output Codes Coding. In Carlo Sansone, Josef Kittler, & Fabio Roli (Eds.), 10th International conference on Multiple Classifier Systems (Vol. 6713, pp. 227–236). LNCS. Springer-Verlag Berlin Heidelberg.
Abstract: Error-Correcting Output Codes (ECOC) have been shown to be a powerful tool for treating multi-class problems. Nevertheless, predefined ECOC designs may not benefit from error-correcting principles for particular multi-class data. In this paper, we introduce the Separability matrix as a tool to study and enhance designs for ECOC coding. In addition, a novel problem-dependent coding design based on the Separability matrix is tested over a wide set of challenging multi-class problems, obtaining very satisfactory results.
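The quantity underlying a Separability matrix can be sketched as the pairwise Hamming distances between the codewords (rows) of an ECOC coding matrix; small off-diagonal entries flag class pairs with weak error-correcting capability. This is a sketch of the underlying computation, not the paper's exact definition.

```python
import numpy as np

def separability_matrix(code_matrix):
    """Pairwise Hamming distances between the codewords (rows) of an
    ECOC coding matrix. Entry (i, j) counts the dichotomizers on which
    classes i and j receive different bits."""
    M = np.asarray(code_matrix)
    n = M.shape[0]
    S = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            S[i, j] = int(np.sum(M[i] != M[j]))
    return S
```

For a one-vs-all matrix with 3 classes, every pair of codewords differs in exactly 2 positions, so all off-diagonal entries are 2.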
|
Patricia Marquez, Debora Gil, & Aura Hernandez-Sabate. (2012). Error Analysis for Lucas-Kanade Based Schemes. In 9th International Conference on Image Analysis and Recognition (Vol. 7324, pp. 184–191). LNCS. Springer-Verlag Berlin Heidelberg.
Abstract: Optical flow is a valuable tool for motion analysis in medical imaging sequences. A reliable application requires determining the accuracy of the computed optical flow. This is a major challenge given the absence of ground truth in medical sequences. This paper presents an error analysis of Lucas-Kanade schemes in terms of intrinsic design errors and numerical stability of the algorithm. Our analysis provides a confidence measure that is naturally correlated to the accuracy of the flow field. Our experiments show the higher predictive value of our confidence measure compared to existing measures.
Keywords: Optical flow, Confidence measure, Lucas-Kanade, Cardiac Magnetic Resonance
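A classical confidence measure tied to the numerical stability of Lucas-Kanade is the smallest eigenvalue of the structure tensor built from the window's image gradients: a small value signals an ill-conditioned system (the aperture problem) and hence unreliable flow. The sketch below shows this classical measure only; the paper's own refined measure is not reproduced here.

```python
import numpy as np

def lk_confidence(Ix, Iy):
    """Confidence for one Lucas-Kanade window: the smallest eigenvalue of
    the 2x2 structure tensor A^T A assembled from the horizontal (Ix) and
    vertical (Iy) image gradients inside the window."""
    Ix = np.asarray(Ix, dtype=float).ravel()
    Iy = np.asarray(Iy, dtype=float).ravel()
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    # eigvalsh returns eigenvalues in ascending order for symmetric A.
    return float(np.linalg.eigvalsh(A)[0])
```

Gradients pointing in a single direction (pure edge) yield a confidence of zero, while gradients spanning both directions (corner-like texture) yield a positive value.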
|