|
Jaume Garcia, Debora Gil, & Aura Hernandez-Sabate. (2010). Endowing Canonical Geometries to Cardiac Structures. In O. Camara, M. Pop, K. Rhode, M. Sermesant, N. Smith, & A. Young (Eds.), Statistical Atlases And Computational Models Of The Heart (Vol. 6364, pp. 124–133). LNCS. Springer Berlin Heidelberg.
Note: International conference on Cardiac electrophysiological simulation challenge
Abstract: In this paper, we show that canonical (shape-based) geometries can be endowed to cardiac structures using tubular coordinates defined over their medial axis. We give an analytic formulation of these geometries by means of B-Splines. Since B-Splines present a vector space structure, PCA can be applied to their control points, and statistical models relating the boundaries and the interior of anatomical structures can be derived. We demonstrate the applicability on two cardiac structures: the 3D Left Ventricular volume, and the Left-Right ventricle set in 2D Short Axis view.
|
|
|
Salim Jouili, Salvatore Tabbone, & Ernest Valveny. (2010). Comparing Graph Similarity Measures for Graphical Recognition. In Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers (Vol. 6020, pp. 37–48). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we evaluate four graph distance measures. The analysis is performed for document retrieval tasks. To this end, different kinds of documents are used, including line drawings (symbols), ancient documents (ornamental letters), shapes, and trademark-logos. The experimental results show that the performance of each graph distance measure depends on the kind of data and on the graph representation technique.
|
|
|
Mathieu Nicolas Delalandre, Jean-Yves Ramel, Ernest Valveny, & Muhammad Muzzamil Luqman. (2010). A Performance Characterization Algorithm for Symbol Localization. In Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers (Vol. 6020, pp. 260–271). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we present an algorithm for performance characterization of symbol localization systems. This algorithm aims to be a more “reliable” and “open” solution for characterizing performance. To achieve that, it exploits only single points as the result of localization and offers the possibility to reconsider the localization results provided by a system. We use the information about context in the groundtruth, together with the overall localization results, to detect ambiguous localization results. A probability score is computed for each matching between a localization point and a groundtruth region, depending on the spatial distribution of the other regions in the groundtruth. The final characterization is given as detection rate/probability score plots, describing the sets of possible interpretations of the localization results according to a given confidence rate. We present experimentation details along with results for the symbol localization system of [1], exploiting a synthetic dataset of architectural floorplans and electrical diagrams (composed of 200 images and 3861 symbols).
|
|
|
Partha Pratim Roy, Umapada Pal, & Josep Llados. (2010). Touching Text Character Localization in Graphical Documents using SIFT. In Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers (Vol. 6020, pp. 199–211). LNCS. Springer Berlin Heidelberg.
Abstract: Interpretation of graphical document images is a challenging task, as it requires a proper understanding of the text/graphics symbols present in such documents. Difficulties arise in graphical document recognition when text and symbols overlap or touch. Intersections of text and symbols with graphical lines and curves occur frequently in graphical documents, and hence separation of such symbols is very difficult.
Several pattern recognition and classification techniques exist to recognize isolated text/symbols, but touching/overlapping text and symbol recognition has not yet been dealt with successfully. An interesting technique, the Scale Invariant Feature Transform (SIFT), originally devised for object recognition, can take care of overlapping problems. Although SIFT features have emerged as very powerful object descriptors, their employment in the context of graphical documents has not been investigated much. In this paper we present the adaptation of the SIFT approach to the context of text character localization (spotting) in graphical documents. We evaluate the applicability of this technique to such documents and discuss the scope for improvement by combining some state-of-the-art approaches.
Keywords: Support Vector Machine; Text Component; Graphical Line; Document Image; Scale Invariant Feature Transform
|
|
|
Marçal Rusiñol, K. Bertet, Jean-Marc Ogier, & Josep Llados. (2010). Symbol Recognition Using a Concept Lattice of Graphical Patterns. In Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers (Vol. 6020, pp. 187–198). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we propose a new approach to recognizing symbols by the use of a concept lattice. We propose to build a concept lattice in terms of graphical patterns. Each model symbol is decomposed into a set of composing graphical patterns taken as primitives. Each of these primitives is described by boundary moment invariants. The obtained concept lattice relates which graphical patterns compose a given symbol. A Hasse diagram is derived from the context and is used to recognize symbols affected by noise. We present some preliminary results on a variation of the dataset of symbols from the GREC 2005 symbol recognition contest.
|
|
|
Joan Mas, Gemma Sanchez, & Josep Llados. (2010). SSP: Sketching Slide Presentations, a Syntactic Approach. In Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers (Vol. 6020, pp. 118–129). LNCS. Springer Berlin Heidelberg.
Abstract: The design of a slide presentation is a creative process. In this process, humans first visualize in their minds what they want to explain. Then, they have to be able to represent this knowledge in an understandable way. A lot of commercial software exists that allows users to create their own slide presentations, but the creativity of the user is rather limited. In this article we present an application that allows the user to create and visualize a slide presentation from a sketch. A slide may be seen as a graphical document or a diagram where elements are placed in a particular spatial arrangement. To describe and recognize slides, a syntactic approach is proposed. This approach is based on an Adjacency Grammar and a parsing methodology that copes with this kind of grammar. The experimental evaluation shows the performance of our methodology from a qualitative and a quantitative point of view. Six different slides, containing between 4 and 7 symbols, were given to users, who drew them without restrictions on the order of the elements. The quantitative results give an idea of how suitable our methodology is for describing and recognizing the different elements in a slide.
|
|
|
Sergio Escalera, Oriol Pujol, Eric Laciar, Jordi Vitria, Esther Pueyo, & Petia Radeva. (2010). Classification of Coronary Damage in Chronic Chagasic Patients. In V. Sgurev et al. (Eds.), Intelligent Systems – From Theory to Practice. Studies in Computational Intelligence (Vol. 299, pp. 461–478). Springer-Verlag.
Note: Post Conference IEEE-IS 2008
Abstract: Chagas disease is endemic throughout Latin America, affecting millions of people on the continent. In order to diagnose and treat Chagas disease, it is important to detect and measure the coronary damage of the patient. In this paper, we analyze and categorize patients into different groups based on the coronary damage produced by the disease. Based on the features of the heart cycle extracted using high resolution ECG, a multi-class scheme of Error-Correcting Output Codes (ECOC) is formulated and successfully applied. The results show that the proposed scheme obtains significant performance improvements compared to previous works and state-of-the-art ECOC designs.
Keywords: Chagas disease; Error-Correcting Output Codes; High resolution ECG; Decoding
|
|
|
Debora Gil, Oriol Rodriguez-Leor, Petia Radeva, & Aura Hernandez-Sabate. (2007). Assessing Artery Motion Compensation in IVUS. In Computer Analysis Of Images And Patterns (Vol. 4673, pp. 213–220). Lecture Notes in Computer Science. Springer Berlin Heidelberg.
Abstract: Cardiac dynamics suppression is a main issue for visual improvement and for the computation of tissue mechanical properties in IntraVascular UltraSound (IVUS). Although several motion compensation techniques have arisen in recent times, there is a lack of objective evaluation of motion reduction in in vivo pullbacks. We consider that the assessment protocol deserves special attention in order to make clinical applicability as reliable as possible. Our work focuses on defining a quality measure and a validation protocol for assessing IVUS motion compensation. On the grounds of continuum mechanics laws, we introduce a novel score measuring motion reduction in in vivo sequences. Synthetic experiments validate the proposed score as a measure of the accuracy of motion parameters, while results on in vivo pullbacks show its reliability in clinical cases.
Keywords: validation standards; quality measures; IVUS motion compensation; conservation laws; Fourier development
|
|
|
Agnes Borras, & Josep Llados. (2007). Similarity-Based Object Retrieval Using Appearance and Geometric Feature Combination. In J. Marti et al. (Eds.), 3rd Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2007) (Vol. 4478, pp. 33–39). LNCS. Springer Berlin Heidelberg.
Abstract: This work presents a general-purpose content-based image retrieval system that deals with cluttered scenes containing a given query object. The system is flexible enough to work with a single image of an object despite rotation, translation and scale variations. The image content is divided into parts that are described with a combination of features based on geometrical and color properties. The idea behind the feature combination is to benefit from a fuzzy similarity computation that provides robustness and tolerance to the retrieval process. The features can be independently computed, and the image parts can be easily indexed by using a table structure on every feature value. Finally, a process inspired by alignment strategies is used to check the coherence of the object parts found in a scene. Our work presents an easy-to-implement system that uses an open set of features and can suit a wide variety of applications.
|
|
|
Antonio Lopez, Jiaolong Xu, Jose Luis Gomez, David Vazquez, & German Ros. (2017). From Virtual to Real World Visual Perception using Domain Adaptation -- The DPM as Example. In Gabriela Csurka (Ed.), Domain Adaptation in Computer Vision Applications (pp. 243–258). Springer.
Abstract: Supervised learning tends to produce more accurate classifiers than unsupervised learning in general. This implies that annotated training data is preferred. When addressing visual perception challenges, such as localizing certain object classes within an image, the learning of the involved classifiers turns out to be a practical bottleneck. The reason is that, at the least, we have to frame object examples with bounding boxes in thousands of images. A priori, the more complex the model is regarding its number of parameters, the more annotated examples are required. This annotation task is performed by human oracles, which ends up producing inaccuracies and errors in the annotations (aka ground truth), since the task is inherently very cumbersome and sometimes ambiguous. As an alternative, we have pioneered the use of virtual worlds for collecting such annotations automatically and with high precision. However, since the models learned with virtual data must operate in the real world, we still need to perform domain adaptation (DA). In this chapter we revisit the DA of a deformable part-based model (DPM) as an exemplifying case of virtual-to-real-world DA. As a use case, we address the challenge of vehicle detection for driver assistance, using different publicly available virtual-world data. While doing so, we investigate questions such as: how does the domain gap behave due to virtual-vs-real data with respect to dominant object appearance per domain, and what is the role of photo-realism in the virtual world.
Keywords: Domain Adaptation
|
|
|
Maryam Asadi-Aghbolaghi, Albert Clapes, Marco Bellantonio, Hugo Jair Escalante, Victor Ponce, Xavier Baro, et al. (2017). Deep Learning for Action and Gesture Recognition in Image Sequences: A Survey. In Gesture Recognition (pp. 539–578).
Abstract: Interest in automatic action and gesture recognition has grown considerably in the last few years. This is due in part to the large number of application domains for this type of technology. As in many other computer vision areas, deep learning based methods have quickly become a reference methodology for obtaining state-of-the-art performance in both tasks. This chapter is a survey of current deep learning based methodologies for action and gesture recognition in sequences of images. The survey reviews both fundamental and cutting edge methodologies reported in the last few years. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. Details of the proposed architectures, fusion strategies, main datasets, and competitions are reviewed. Also, we summarize and discuss the main works proposed so far with particular interest in how they treat the temporal dimension of data, their highlighting features, and opportunities and challenges for future research. To the best of our knowledge this is the first survey on the topic. We foresee this survey will become a reference in this ever dynamic field of research.
Keywords: Action recognition; Gesture recognition; Deep learning architectures; Fusion strategies
|
|
|
Hana Jarraya, Muhammad Muzzamil Luqman, & Jean-Yves Ramel. (2017). Improving Fuzzy Multilevel Graph Embedding Technique by Employing Topological Node Features: An Application to Graphics Recognition. In B. Lamiroy, & R. Dueire Lins (Eds.), Graphics Recognition. Current Trends and Challenges (Vol. 9657). LNCS. Springer.
|
|
|
Debora Gil, F. Javier Sanchez, Gloria Fernandez Esparrach, & Jorge Bernal. (2015). 3D Stable Spatio-temporal Polyp Localization in Colonoscopy Videos. In Computer-Assisted and Robotic Endoscopy. Revised selected papers of the Second International Workshop, CARE 2015, Held in Conjunction with MICCAI 2015 (Vol. 9515, pp. 140–152). LNCS. Springer International Publishing.
Abstract: Computational intelligent systems could reduce the polyp miss rate in colonoscopy for colon cancer diagnosis and, thus, increase the efficiency of the procedure. One of the main problems of existing polyp localization methods is a lack of spatio-temporal stability in their response. We propose to explore the response of a given polyp localization method across temporal windows in order to select those image regions presenting the highest stable spatio-temporal response. Spatio-temporal stability is achieved by extracting 3D watershed regions on the temporal window. Stability in the localization response is statistically determined by analysis of the variance of the output of the localization method inside each 3D region. We have explored the benefits of considering spatio-temporal stability in two different tasks: polyp localization and polyp detection. Experimental results indicate an average improvement of 21.5% in polyp localization and 43.78% in polyp detection.
Keywords: Colonoscopy; Polyp Detection; Polyp Localization; Region Extraction; Watersheds
|
|
|
Hanne Kause, Aura Hernandez-Sabate, Patricia Marquez, Andrea Fuster, Luc Florack, Hans van Assen, et al. (2015). Confidence Measures for Assessing the HARP Algorithm in Tagged Magnetic Resonance Imaging. In Statistical Atlases and Computational Models of the Heart: Imaging and Modelling Challenges. Revised selected papers of the 6th International Workshop, STACOM 2015, Held in Conjunction with MICCAI 2015 (Vol. 9534, pp. 69–79). LNCS. Springer International Publishing.
Abstract: Cardiac deformation, and changes therein, have been linked to pathologies. Both can be extracted in detail from tagged Magnetic Resonance Imaging (tMRI) using harmonic phase (HARP) images. Although point tracking algorithms have been shown to achieve high accuracy on HARP images, this accuracy varies with position. Detecting and discarding areas with unreliable results is crucial for use in clinical support systems. This paper assesses the capability of two confidence measures (CMs), based on energy and image structure, for detecting locations with reduced accuracy in motion tracking results. These CMs were tested on a database of simulated tMRI images containing the most common artifacts that may affect tracking accuracy. CM performance is assessed based on its capability to bound the HARP tracking error, and compared in terms of significant differences detected using a multi-comparison analysis of variance that takes into account the most influential factors on HARP tracking performance. Results showed that the CM based on image structure was better suited to detect unreliable optical flow vectors. In addition, it was shown that CMs can be used to detect optical flow vectors with large errors in order to improve the optical flow obtained with the HARP tracking algorithm.
|
|
|
Juan Ramon Terven Salinas, Joaquin Salas, & Bogdan Raducanu. (2014). Robust Head Gestures Recognition for Assistive Technology. In Pattern Recognition (Vol. 8495, pp. 152–161). LNCS. Springer International Publishing.
Abstract: This paper presents a system capable of recognizing six head gestures: nodding, shaking, turning right, turning left, looking up, and looking down. The main difference of our system compared to other methods is that the Hidden Markov Models presented in this paper are fully connected and consider all possible states in any given order, providing the following advantages: (1) it allows unconstrained movement of the head, and (2) it can be easily integrated into a wearable device (e.g. glasses, neck-hung devices), in which case it can robustly recognize gestures in the presence of ego-motion. Experimental results show that this approach outperforms common methods that use restricted HMMs for each gesture.
|
|