|
Jaume Garcia, Debora Gil, & Aura Hernandez-Sabate. (2010). Endowing Canonical Geometries to Cardiac Structures. In O. Camara, M. Pop, K. Rhode, M. Sermesant, N. Smith, & A. Young (Eds.), Statistical Atlases and Computational Models of the Heart (Vol. 6364, pp. 124–133). LNCS. Springer Berlin Heidelberg.
Notes: International conference; cardiac electrophysiological simulation challenge.
Abstract: In this paper, we show that canonical (shape-based) geometries can be endowed to cardiac structures using tubular coordinates defined over their medial axis. We give an analytic formulation of these geometries by means of B-Splines. Since B-Splines present a vector space structure, PCA can be applied to their control points, and statistical models relating boundaries and the interior of the anatomical structures can be derived. We demonstrate the applicability on two cardiac structures: the 3D Left Ventricular volume and the 2D Left-Right ventricle set in Short Axis view.
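The abstract's key observation is that B-spline control points form a vector space, so ordinary PCA applies to them directly. A minimal sketch of that step (illustrative data and variable names, not the authors' code), assuming every training shape is expressed with the same number of control points over a common knot vector:

```python
import numpy as np

# Hypothetical training set: S shapes, each parametrized by K control
# points in 3D (e.g. tubular coordinates over the medial axis).
S, K = 20, 50
rng = np.random.default_rng(0)
shapes = rng.normal(size=(S, K, 3))  # stand-in for real control points

# Flatten each shape into one vector: PCA works because control points
# of B-Splines sharing a knot vector form a vector space.
X = shapes.reshape(S, -1)            # (S, 3K)
mean = X.mean(axis=0)
Xc = X - mean

# PCA via SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
modes = Vt                           # principal deformation modes
variances = s**2 / (S - 1)

# A new shape instance: mean plus a weighted sum of the first modes.
b = np.array([1.5, -0.5])            # mode coefficients (illustrative)
new_shape = (mean + b @ modes[:2]).reshape(K, 3)
```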
|
|
|
Debora Gil, Jaume Garcia, Aura Hernandez-Sabate, & Enric Marti. (2010). Manifold parametrization of the left ventricle for a statistical modelling of its complete anatomy. In Medical Imaging (Vol. 7623, p. 304). SPIE.
Abstract: Distortion of Left Ventricle (LV) external anatomy is related to some dysfunctions, such as hypertrophy. The architecture of myocardial fibers determines LV electromechanical activation patterns as well as mechanics. Thus, their joint modelling would allow the design of specific interventions (such as pacemaker implantation and LV remodelling) and therapies (such as resynchronization). On one hand, accurate modelling of external anatomy requires either a dense sampling or a continuous infinite-dimensional approach, which requires non-Euclidean statistics. On the other hand, computation of fiber models requires statistics on Riemannian spaces. Most approaches compute separate statistical models for external anatomy and fiber architecture. In this work we propose a general mathematical framework based on differential geometry concepts for computing a statistical model including both external and fiber anatomy. Our framework provides a continuous approach to external anatomy supporting standard statistics. We also provide a straightforward formula for the computation of the Riemannian fiber statistics. We have applied our methodology to the computation of a complete anatomical atlas of canine hearts from diffusion tensor studies. The orientation of fibers over the average external geometry agrees with the segmental description of orientations reported in the literature.
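Fiber orientations live on a sphere, so the Riemannian statistics the paper refers to replace the arithmetic mean with an intrinsic one. A toy sketch of a Fréchet mean on S² via the usual log/exp iteration (a generic construction, not the paper's formula; real fiber data is axial, so d and -d would need to be identified first):

```python
import numpy as np

def sphere_log(p, q):
    """Log map at p: tangent vector pointing from p towards q."""
    d = q - np.dot(p, q) * p
    nd = np.linalg.norm(d)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.zeros_like(p) if nd < 1e-12 else theta * d / nd

def sphere_exp(p, v):
    """Exp map at p: walk along the geodesic with initial velocity v."""
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

def frechet_mean(dirs, iters=50):
    """Intrinsic (Fréchet) mean of unit vectors on the sphere."""
    mu = dirs[0] / np.linalg.norm(dirs[0])
    for _ in range(iters):
        v = np.mean([sphere_log(mu, d) for d in dirs], axis=0)
        mu = sphere_exp(mu, v)
    return mu

# Toy fiber directions clustered around the z-axis (illustrative data).
rng = np.random.default_rng(1)
dirs = rng.normal([0.0, 0.0, 1.0], 0.1, size=(100, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(frechet_mean(dirs))
```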
|
|
|
Aura Hernandez-Sabate, Monica Mitiko, Sergio Shiguemi, & Debora Gil. (2010). A validation protocol for assessing cardiac phase retrieval in IntraVascular UltraSound. In Computing in Cardiology (Vol. 37, pp. 899–902). IEEE.
Abstract: A good, reliable approach to cardiac triggering is of utmost importance in obtaining accurate quantitative results of atherosclerotic plaque burden from the analysis of IntraVascular UltraSound. Although research on methods for retrospective gating has increased in recent years, there is no general consensus on a validation protocol. Many methods are based on quality assessment of longitudinal cut appearance, and those reporting quantitative numbers do not follow a standard protocol. Such heterogeneity in validation protocols makes faithful comparison across methods a difficult task. We propose a validation protocol based on the variability of the retrieved cardiac phase and explore the capability of several quality measures for quantifying such variability. An ideal detector, suitable for application in clinical practice, should produce stable phases; that is, it should always sample the same cardiac cycle fraction. In this context, one should measure the variability (variance) of a candidate sampling with respect to a ground truth (reference) sampling, since the variance indicates how scattered our shots at a target are. In order to quantify the deviation between the sampling and the ground truth, we have considered two quality scores reported in the literature: the signed distance to the closest reference sample and the distance to the right of each reference sample. We have also considered the residuals of the regression line of reference against candidate sampling. The performance of the measures has been explored on a set of synthetic samplings covering different cardiac cycle fractions and variabilities. From our simulations, we conclude that the metrics related to distances are sensitive to the shift considered, while the residuals are robust against fraction and variabilities as long as one can establish a pair-wise correspondence between candidate and reference. We will further investigate the impact of false positive and false negative detections in experimental data.
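The three variability measures the protocol compares are easy to state in code. A sketch with synthetic samplings (variable names are mine; the scores follow the abstract's description):

```python
import numpy as np

def signed_closest_distance(candidate, reference):
    """Signed distance from each reference sample to the closest
    candidate sample (first score from the literature)."""
    diffs = candidate[None, :] - reference[:, None]
    idx = np.abs(diffs).argmin(axis=1)
    return diffs[np.arange(len(reference)), idx]

def distance_to_right(candidate, reference):
    """Distance from each reference sample to the next candidate
    sample on its right (second score)."""
    out = []
    for r in reference:
        right = candidate[candidate >= r]
        out.append(right.min() - r if right.size else np.nan)
    return np.array(out)

def regression_residuals(candidate, reference):
    """Residuals of the regression line of reference against candidate,
    assuming a pair-wise correspondence between the two samplings."""
    a, b = np.polyfit(candidate, reference, deg=1)
    return reference - (a * candidate + b)

# Toy samplings: a candidate phase detector firing slightly late.
reference = np.arange(10, 200, 20.0)   # ground-truth cardiac phases
noise = np.random.default_rng(2).normal(0, 1, reference.size)
candidate = reference + 3 + noise
print(np.var(regression_residuals(candidate, reference)))
```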
|
|
|
Patricia Marquez. (2010). Conditions Ensuring Accuracy of Local Optical Flow Schemes (Vol. 157). Master's thesis, Bellaterra 08193, Barcelona, Spain.
Abstract: Accurate computation of optical flow is a key point in many image processing fields, such as detection of anomalous and unpredicted agents (pedestrians, bikers or cars) in urban scenes, or pathology discrimination in medical imaging sequences, to mention just two. These kinds of sequences present two main difficulties for standard optical flow techniques. On one hand, variability in acquisition conditions (illumination, medical imaging modality, ...) forces an alternative representation for images fulfilling the brightness constancy constraint. On the other hand, current variational schemes produce oversmoothed fields unable to properly model discontinuous behaviours such as collisions or functionless pathological areas. This master project explores the abilities and limitations of local and global optical flow approaches. The master student will put special emphasis on the theoretical grounds behind them in order to design a variational framework combining the theoretical advantages of the considered techniques. In particular, an optical flow based on Gabor phase tracking (developed in the group for medical imaging) will be generalized to urban scenes.
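For context on what a "local" scheme means here, the classic Lucas-Kanade estimator solves the brightness constancy constraint by least squares over a small window. A minimal sketch (a textbook baseline, not the thesis's Gabor phase method):

```python
import numpy as np

def lucas_kanade(I0, I1, x, y, win=9):
    """Estimate the flow (u, v) at pixel (x, y) from frames I0 -> I1
    by least squares over a (win x win) window (Lucas-Kanade)."""
    Iy, Ix = np.gradient(I0.astype(float))   # spatial gradients
    It = I1.astype(float) - I0.astype(float) # temporal difference
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # The 2x2 system is well posed only where the window has gradient
    # structure in two directions (corner-like neighbourhoods).
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v)

# Toy example: a Gaussian blob shifted one pixel to the right;
# the estimate should come out close to (1, 0).
yy, xx = np.mgrid[0:32, 0:32]
I0 = np.exp(-((xx - 16)**2 + (yy - 16)**2) / 20.0)
I1 = np.roll(I0, 1, axis=1)
print(lucas_kanade(I0, I1, 16, 16))
```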
|
|
|
Ferran Poveda, Jaume Garcia, Enric Marti, & Debora Gil. (2010). Validation of the myocardial architecture in DT-MRI tractography. In Medical Image Computing in Catalunya: Graduate Student Workshop (pp. 29–30). Girona (Spain).
Abstract: Deep understanding of myocardial structure may help to link form and function of the heart, unraveling crucial knowledge for medical and surgical clinical procedures and studies. In this work we introduce two visualization techniques, based on DT-MRI streamlining, able to reveal interesting properties of the architectural organization of the heart.
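DT-MRI streamlining integrates curves along the principal eigenvector of the diffusion tensor field. A toy illustration of that integration (synthetic helical field and Euler stepping; the authors' pipeline is not described at this level of detail):

```python
import numpy as np

def principal_direction(D):
    """Unit eigenvector of the largest eigenvalue of a 3x3 tensor."""
    w, V = np.linalg.eigh(D)
    return V[:, -1]

def trace_streamline(tensor_field, seed, step=0.5, n_steps=200):
    """Euler integration along the principal diffusion direction.
    tensor_field: callable p -> 3x3 symmetric tensor at position p."""
    p = np.asarray(seed, dtype=float)
    prev = None
    path = [p.copy()]
    for _ in range(n_steps):
        d = principal_direction(tensor_field(p))
        # Eigenvectors carry a sign ambiguity: keep the direction
        # coherent with the previous step so the line does not fold.
        if prev is not None and np.dot(d, prev) < 0:
            d = -d
        p = p + step * d
        prev = d
        path.append(p.copy())
    return np.array(path)

# Synthetic field: fibers wrapping around the z-axis (helix-like).
def toy_field(p):
    t = np.array([-p[1], p[0], 0.3])
    t /= np.linalg.norm(t)
    return 2.0 * np.outer(t, t) + 0.1 * np.eye(3)

print(trace_streamline(toy_field, seed=[5.0, 0.0, 0.0])[:3])
```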
|
|
|
Fernando Vilariño, Panagiota Spyridonos, Petia Radeva, Jordi Vitria, Fernando Azpiroz, & Juan Malagelada. (2010). Method for automatic classification of in vivo images.
Abstract: A method for automatically detecting a post-duodenal boundary in an image stream of the gastrointestinal (GI) tract. The image stream is sampled to obtain a reduced set of images for processing. The reduced set of images is filtered to remove non-valid frames or non-valid portions of frames, thereby generating a filtered set of valid images. A polar representation of the valid images is generated. Textural features of the polar representation are processed to detect the post-duodenal boundary of the GI tract.
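The polar representation step in the claim is a standard cartesian-to-polar resampling around the image center. A generic sketch (parameters are illustrative, not taken from the patent):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, center=None, n_radii=64, n_angles=128):
    """Resample a 2D image onto a (radius x angle) grid around center."""
    h, w = img.shape
    cy, cx = center if center is not None else (h / 2.0, w / 2.0)
    max_r = min(cy, cx, h - cy, w - cx)
    radii = np.linspace(0, max_r, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    ys = cy + r * np.sin(a)
    xs = cx + r * np.cos(a)
    # Bilinear interpolation at the sampled cartesian coordinates.
    return map_coordinates(img, [ys, xs], order=1)

img = np.random.default_rng(3).random((240, 240))
polar = to_polar(img)          # rows = radius, columns = angle
print(polar.shape)             # (64, 128)
```

Textural features would then be extracted from this (radius x angle) grid to locate the post-duodenal boundary.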
|
|
|
Miguel Angel Bautista, Sergio Escalera, Xavier Baro, Oriol Pujol, Jordi Vitria, & Petia Radeva. (2010). Compact Evolutive Design of Error-Correcting Output Codes. In Supervised and Unsupervised Ensemble Methods and Applications workshop, European Conference on Machine Learning (Vol. I, pp. 119–128).
|
|
|
Carolina Malagelada, F. De Lorio, Fernando Azpiroz, Santiago Segui, Petia Radeva, Anna Accarino, et al. (2010). Intestinal Dysmotility in Patients with Functional Intestinal Disorders Demonstrated by Computer Vision Analysis of Capsule Endoscopy Images. In 18th United European Gastroenterology Week (Vol. 56, pp. A19–A20).
|
|
|
Sophie Wuerger, Kaida Xiao, Chenyang Fu, & Dimosthenis Karatzas. (2010). Colour-opponent mechanisms are not affected by age-related chromatic sensitivity changes. OPO - Ophthalmic and Physiological Optics, 30(5), 635–659.
Abstract: The purpose of this study was to assess whether age-related chromatic sensitivity changes are associated with corresponding changes in hue perception in a large sample of colour-normal observers over a wide age range (n = 185; age range: 18-75 years). In these observers we determined both the sensitivity along the protan, deutan and tritan lines, and the settings for the four unique hues, from which the characteristics of the higher-order colour mechanisms can be derived. We found a significant decrease in chromatic sensitivity due to ageing, in particular along the tritan line. From the unique hue settings we derived the cone weightings associated with the colour mechanisms that are at equilibrium for the four unique hues. We found that the relative cone weightings (w_L/w_M and w_L/w_S) associated with the unique hues were independent of age. Our results are consistent with previous findings that the unique hues are rather constant with age while chromatic sensitivity declines. They also provide evidence in favour of the hypothesis that higher-order colour mechanisms are equipped with flexible cone weightings, as opposed to fixed weights. The mechanism underlying this compensation is still poorly understood.
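For readers unfamiliar with the notation, the cone-weighting model behind these ratios can be summarized as follows (my paraphrase of the standard unique-hue equilibrium formulation, not the paper's exact equations):

```latex
% A higher-order colour mechanism receives weighted cone inputs and is
% at equilibrium (zero response) for stimuli perceived as a unique hue:
\[
  w_L L + w_M M + w_S S = 0 .
\]
% Only the ratios of the weights matter, so normalizing yields the
% quantities reported in the study, e.g. $w_L / w_M$ and $w_L / w_S$,
% which were found to be independent of observer age.
```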
|
|
|
Koen E.A. van de Sande, Theo Gevers, & C.G.M. Snoek. (2010). Evaluating Color Descriptors for Object and Scene Recognition. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9), 1582–1596.
Notes: Impact factor: 5.308
Abstract: Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge.
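The opponent colour transform underlying the recommended OpponentSIFT descriptor is given in the paper's taxonomy; SIFT is then computed per channel and concatenated. A sketch of the channel decomposition:

```python
import numpy as np

def opponent_channels(rgb):
    """Opponent colour transform used by OpponentSIFT.
    rgb: float array of shape (H, W, 3)."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    O1 = (R - G) / np.sqrt(2)            # red-green opponency
    O2 = (R + G - 2 * B) / np.sqrt(6)    # yellow-blue opponency
    O3 = (R + G + B) / np.sqrt(3)        # intensity
    return np.stack([O1, O2, O3], axis=-1)

# SIFT descriptors would then be computed on each channel; here we
# only show the channel decomposition itself.
img = np.random.default_rng(4).random((8, 8, 3))
print(opponent_channels(img).shape)      # (8, 8, 3)
```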
|
|
|
N. Serrano, L. Tarazon, D. Perez, Oriol Ramos Terrades, & S. Juan. (2010). The GIDOC Prototype. In 10th International Workshop on Pattern Recognition in Information Systems (pp. 82–89).
Abstract: Transcription of handwritten text in (old) documents is an important, time-consuming task for digital libraries. It might be carried out by first processing all document images off-line, and then manually supervising system transcriptions to edit incorrect parts. However, current techniques for automatic page layout analysis, text line detection and handwriting recognition are still far from perfect, and thus post-editing system output is not clearly better than simply ignoring it.
A more effective approach to transcribing old text documents is to follow an interactive-predictive paradigm in which the system is guided by the user and the user is assisted by the system to complete the transcription task as efficiently as possible. Following this approach, a system prototype called GIDOC (Gimp-based Interactive transcription of old text DOCuments) has been developed to provide user-friendly, integrated support for interactive-predictive layout analysis, line detection and handwriting transcription.
GIDOC is designed to work with (large) collections of homogeneous documents, that is, of similar structure and writing styles. They are annotated sequentially, by (partially) supervising hypotheses drawn from statistical models that are constantly updated with an increasing number of available annotated documents. This is done at different annotation levels. For instance, at the level of page layout analysis, GIDOC uses a novel text block detection method in which conventional, memoryless techniques are improved with a “history” model of text block positions. Similarly, at the level of text line image transcription, GIDOC includes a handwriting recognizer which is steadily improved with a growing number of (partially) supervised transcriptions.
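The interactive-predictive loop described above can be summarized schematically (this is a paraphrase of the paradigm, not GIDOC's code; `recognizer` and `user` are hypothetical interfaces):

```python
def interactive_transcription(lines, recognizer, user):
    """Schematic interactive-predictive loop: the system proposes,
    the user corrects a prefix, the model is retrained as data grows."""
    corpus = []
    for line_image in lines:
        hypothesis = recognizer.transcribe(line_image)
        # The user validates a prefix and fixes the first error; the
        # system re-predicts the suffix conditioned on that prefix.
        while not user.accepts(hypothesis):
            prefix = user.correct_prefix(hypothesis)
            hypothesis = prefix + recognizer.complete(line_image, prefix)
        corpus.append((line_image, hypothesis))
        recognizer.update(corpus)   # models constantly updated
    return corpus
```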
|
|
|
Monica Piñol. (2010). Adaptative Vocabulary Tree for Image Classification using Reinforcement Learning (Vol. 162). Master's thesis.
|
|
|
Jean-Marc Ogier, Wenyin Liu, & Josep Llados (Eds.). (2010). Graphics Recognition: Achievements, Challenges, and Evolution (Vol. 6020). LNCS. Springer.
|
|
|
Joan Arnedo-Moreno, & Agata Lapedriza. (2010). Visualizing key authenticity: turning your face into your public key. In 6th China International Conference on Information Security and Cryptology (pp. 605–618). LNCS.
Abstract: Biometric information has become a technology complementary to cryptography, allowing to conveniently manage cryptographic data. Two important needs are fulfilled: first of all, making such data always readily available, and additionally, making its legitimate owner easily identifiable. In this work we propose a signature system which integrates face recognition biometrics with an identity-based signature scheme, so the user's face effectively becomes his public key and system ID. Thus, other users may verify messages using photos of the claimed sender, providing a reasonable trade-off between system security and usability, as well as a much more straightforward public key authenticity and distribution process.
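The flow of the proposed scheme: derive an identity string from the sender's face, then verify with an identity-based signature keyed to that identity. The sketch below is purely schematic; `face_identity` and the hash-based primitives are toy stand-ins, since a real identity-based signature relies on pairing-based cryptography and lets anyone verify from public parameters alone:

```python
import hashlib

def face_identity(photo_bytes: bytes) -> bytes:
    """Stand-in for a robust face embedding quantized to a stable ID.
    A real system must map different photos of one face to one ID."""
    return hashlib.sha256(photo_bytes).digest()

class KeyGenerationCenter:
    """Trusted authority of an identity-based scheme: derives each
    user's private key from their identity and a master secret."""
    def __init__(self, master_secret: bytes):
        self.master_secret = master_secret

    def extract_private_key(self, identity: bytes) -> bytes:
        return hashlib.sha256(self.master_secret + identity).digest()

def sign(private_key: bytes, message: bytes) -> bytes:
    # Toy MAC-style tag; a real IBS scheme would use pairings here.
    return hashlib.sha256(private_key + message).digest()

def verify(kgc: KeyGenerationCenter, photo: bytes,
           message: bytes, signature: bytes) -> bool:
    """Real IBS verification needs only public parameters plus the
    identity; the KGC appears here only because of the toy MAC."""
    identity = face_identity(photo)
    key = kgc.extract_private_key(identity)
    return sign(key, message) == signature
```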
|
|
|
Joan Mas, Gemma Sanchez, & Josep Llados. (2010). SSP: Sketching slide Presentations, a Syntactic Approach. In Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers (Vol. 6020, pp. 118–129). LNCS. Springer Berlin Heidelberg.
Abstract: The design of a slide presentation is a creative process. In this process, humans first visualize in their minds what they want to explain; then they have to be able to represent this knowledge in an understandable way. There exists a lot of commercial software that allows users to create their own slide presentations, but the creativity of the user is rather limited. In this article we present an application that allows the user to create and visualize a slide presentation from a sketch. A slide may be seen as a graphical document or a diagram where its elements are placed in a particular spatial arrangement. To describe and recognize slides, a syntactic approach is proposed. This approach is based on an Adjacency Grammar and a parsing methodology to cope with this kind of grammar. The experimental evaluation shows the performance of our methodology from a qualitative and a quantitative point of view. Six different slides containing different numbers of symbols, from 4 to 7, were given to the users, who drew them without restrictions on the order of the elements. The quantitative results give an idea of how suitable our methodology is for describing and recognizing the different elements in a slide.
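An adjacency grammar describes a diagram by productions whose terminals must satisfy spatial relations. A toy sketch of one production and its adjacency check (the relation and threshold are illustrative, not the paper's grammar):

```python
from dataclasses import dataclass

@dataclass
class Symbol:
    label: str
    box: tuple  # (x_min, y_min, x_max, y_max) bounding box

def adjacent(a: Symbol, b: Symbol, gap: float = 10.0) -> bool:
    """Loose spatial-adjacency predicate: bounding boxes closer
    than `gap` pixels (a stand-in for the grammar's constraints)."""
    ax0, ay0, ax1, ay1 = a.box
    bx0, by0, bx1, by1 = b.box
    dx = max(bx0 - ax1, ax0 - bx1, 0)
    dy = max(by0 - ay1, ay0 - by1, 0)
    return (dx**2 + dy**2) ** 0.5 <= gap

# One toy production: a titled_list is a title with a text block
# adjacent below it. Real adjacency grammars attach richer relations.
def match_titled_list(symbols):
    for t in symbols:
        for body in symbols:
            if (t.label == "title" and body.label == "text"
                    and adjacent(t, body) and t.box[3] <= body.box[1]):
                return ("titled_list", t, body)
    return None

slide = [Symbol("title", (10, 10, 200, 40)),
         Symbol("text", (10, 45, 200, 120))]
print(match_titled_list(slide) is not None)   # True
```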
|
|