|
Jaume Garcia, Debora Gil, Luis Badiella, Aura Hernandez-Sabate, Francesc Carreras, Sandra Pujades, et al. (2010). A Normalized Framework for the Design of Feature Spaces Assessing the Left Ventricular Function. TMI - IEEE Transactions on Medical Imaging, 29(3), 733–745.
Abstract: A thorough description of left ventricle functionality requires combining complementary regional scores. A main limitation is the lack of multiparametric normality models oriented to the assessment of regional wall motion abnormalities (RWMA). This paper covers two main topics involved in RWMA assessment. We propose a general framework allowing the fusion and comparison of different regional scores across subjects. Our framework is used to explore which combination of regional scores (including 2-D motion and strains) is best suited for RWMA detection. Our statistical analysis indicates that, for a proper (within interobserver variability) identification of RWMA, models should consider motion and extreme strains.
|
|
|
Jaume Garcia, Albert Andaluz, Debora Gil, & Francesc Carreras. (2010). Decoupled External Forces in a Predictor-Corrector Segmentation Scheme for LV Contours in Tagged MR Images. In 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 4805–4808).
Abstract: Computation of functional regional scores requires proper identification of LV contours. On one hand, manual segmentation is robust, but it is time consuming and requires high expertise. On the other hand, the tag pattern in TMR sequences is a problem for automatic segmentation of LV boundaries. We propose a segmentation method based on a predictor-corrector (Active Contours – Shape Models) scheme. Special stress is put on the definition of the AC external forces. First, we introduce a semantic description of the LV that discriminates myocardial tissue by using texture and motion descriptors. Second, in order to ensure convergence regardless of the initial contour, the external energy is decoupled according to the orientation of the edges in the image potential. We have validated the model in terms of error in segmented contours and accuracy of regional clinical scores.
|
|
|
Jaume Garcia, Debora Gil, & Aura Hernandez-Sabate. (2010). Endowing Canonical Geometries to Cardiac Structures. In O. Camara, M. Pop, K. Rhode, M. Sermesant, N. Smith, & A. Young (Eds.), Statistical Atlases And Computational Models Of The Heart (Vol. 6364, pp. 124–133). LNCS. Springer Berlin / Heidelberg.
Abstract: In this paper, we show that canonical (shape-based) geometries can be endowed to cardiac structures using tubular coordinates defined over their medial axis. We give an analytic formulation of these geometries by means of B-Splines. Since B-Splines have a vector space structure, PCA can be applied to their control points, and statistical models relating the boundaries and the interior of anatomical structures can be derived. We demonstrate the applicability on two cardiac structures: the 3D Left Ventricular volume, and the Left-Right ventricle set in 2D Short Axis view.
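The PCA-over-control-points idea summarized in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each segmented structure is already represented by the same fixed number of B-spline control points, and all array shapes and function names are hypothetical.

```python
import numpy as np

def control_point_pca(shapes):
    """PCA on B-spline control points (illustrative sketch).

    shapes: array (n_subjects, n_points, dim) -- each row holds the
    control-point set of one segmented cardiac structure.
    Returns the mean shape, the principal modes of variation, and the
    variance explained by each mode.
    """
    n, p, d = shapes.shape
    X = shapes.reshape(n, p * d)          # flatten each control-point set
    mean = X.mean(axis=0)                 # mean shape vector
    Xc = X - mean                         # center the data
    # SVD of the centered data matrix gives the PCA modes directly
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)
    variances = s ** 2 / (n - 1)          # variance per mode (descending)
    return mean.reshape(p, d), vt, variances

# toy example: 20 noisy 2-D contours with 8 control points each
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 8, endpoint=False)
base = np.stack([np.cos(t), np.sin(t)], axis=1)
shapes = base + 0.05 * rng.standard_normal((20, 8, 2))
mean, modes, var = control_point_pca(shapes)
```

New shapes can then be synthesized as the mean plus a linear combination of the leading modes, which is what makes the vector-space structure of the control points useful for statistical modelling.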
|
|
|
Debora Gil, Jaume Garcia, Aura Hernandez-Sabate, & Enric Marti. (2010). Manifold parametrization of the left ventricle for a statistical modelling of its complete anatomy. In 8th Medical Imaging (Vol. 7623, p. 304). SPIE.
Abstract: Distortion of Left Ventricle (LV) external anatomy is related to some dysfunctions, such as hypertrophy. The architecture of myocardial fibers determines LV electromechanical activation patterns as well as mechanics. Thus, their joint modelling would allow the design of specific interventions (such as pacemaker implantation and LV remodelling) and therapies (such as resynchronization). On one hand, accurate modelling of external anatomy requires either a dense sampling or a continuous infinite dimensional approach, which requires non-Euclidean statistics. On the other hand, computation of fiber models requires statistics on Riemannian spaces. Most approaches compute separate statistical models for external anatomy and fiber architecture. In this work we propose a general mathematical framework based on differential geometry concepts for computing a statistical model including both external and fiber anatomy. Our framework provides a continuous approach to external anatomy supporting standard statistics. We also provide a straightforward formula for the computation of the Riemannian fiber statistics. We have applied our methodology to the computation of a complete anatomical atlas of canine hearts from diffusion tensor studies. The orientation of fibers over the average external geometry agrees with the segmental description of orientations reported in the literature.
|
|
|
Aura Hernandez-Sabate, Monica Mitiko, Sergio Shiguemi, & Debora Gil. (2010). A validation protocol for assessing cardiac phase retrieval in IntraVascular UltraSound. In Computing in Cardiology (Vol. 37, pp. 899–902). IEEE.
Abstract: A reliable approach to cardiac triggering is of utmost importance for obtaining accurate quantitative results of atherosclerotic plaque burden from the analysis of IntraVascular UltraSound. Although research on retrospective gating methods has increased in recent years, there is no general consensus on a validation protocol. Many methods are based on quality assessment of the appearance of longitudinal cuts, and those reporting quantitative numbers do not follow a standard protocol. Such heterogeneity in validation protocols makes faithful comparison across methods a difficult task. We propose a validation protocol based on the variability of the retrieved cardiac phase and explore the capability of several quality measures for quantifying such variability. An ideal detector, suitable for application in clinical practice, should produce stable phases; that is, it should always sample the same fraction of the cardiac cycle. In this context, one should measure the variability (variance) of a candidate sampling with respect to a ground truth (reference) sampling, since the variance indicates how spread the samples are around the target. In order to quantify the deviation between the sampling and the ground truth, we have considered two quality scores reported in the literature: the signed distance to the closest reference sample, and the distance to the right of each reference sample. We have also considered the residuals of the regression line of reference against candidate sampling. The performance of the measures has been explored on a set of synthetic samplings covering different cardiac cycle fractions and variabilities. From our simulations, we conclude that the distance-based metrics are sensitive to the shift considered, while the residuals are robust against fractions and variabilities as long as a pair-wise correspondence between candidate and reference can be established.
We will further investigate the impact of false positive and negative detections in experimental data.
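The regression-residual score mentioned in the abstract above can be sketched as follows. The exact formulation used in the paper is not given here, so this is an illustrative guess: a perfectly stable phase detector that samples a constant offset from the reference should yield zero residual spread.

```python
import numpy as np

def sampling_residuals(reference, candidate):
    """Residual spread of the regression line of candidate against
    reference cardiac-phase samplings (illustrative sketch).

    reference, candidate: 1-D arrays of frame indices, pair-wise matched.
    Returns the residual standard deviation -- lower means a more
    stable phase detector.
    """
    ref = np.asarray(reference, dtype=float)
    cand = np.asarray(candidate, dtype=float)
    a, b = np.polyfit(ref, cand, 1)       # least-squares line cand ~ a*ref + b
    residuals = cand - (a * ref + b)
    return residuals.std()

# a detector with a constant shift from the reference is perfectly stable
ref = np.arange(0, 300, 30)
score = sampling_residuals(ref, ref + 7)
print(round(score, 6))  # → 0.0
```

Note that this score is invariant to the shift between candidate and reference, which is what makes it robust against the cardiac-cycle fraction being sampled, unlike the two distance-based metrics.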
|
|
|
Patricia Marquez. (2010). Conditions Ensuring Accuracy of Local Optical Flow Schemes (Vol. 157). Master's thesis, Bellaterra 08193, Barcelona, Spain.
Abstract: Accurate computation of optical flow is a key point in many image processing fields: detection of anomalous and unpredicted agents (such as pedestrians, bikers or cars) in urban scenes, or pathology discrimination in medical imaging sequences, to mention just two. These kinds of sequences present two main difficulties for standard optical flow techniques. On one hand, variability in acquisition conditions (illuminance, medical imaging modality, ...) forces an alternative representation for images fulfilling the brightness constancy constraint. On the other hand, current variational schemes produce oversmoothed fields unable to properly model discontinuous behaviours such as collisions or functionless pathological areas. This master project explores the abilities and limitations of local and global optical flow approaches. The master student will put special emphasis on the theoretical grounds behind them in order to design a variational framework combining the theoretical advantages of the considered techniques. In particular, an optical flow based on Gabor phase tracking (developed in the group for medical imaging) will be generalized to urban scenes.
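The local optical flow schemes discussed in the abstract above can be illustrated with the classic Lucas-Kanade estimator, which solves the linearized brightness constancy constraint over a small window. This is a didactic sketch of the standard local approach, not the thesis' Gabor-phase method; all names and window sizes are choices made here for illustration.

```python
import numpy as np

def lucas_kanade_point(I0, I1, x, y, win=7):
    """Local optical flow at pixel (x, y) via the Lucas-Kanade scheme.

    I0, I1: consecutive grayscale frames as 2-D float arrays.
    Returns (u, v) minimizing the linearized brightness-constancy
    error  sum (Ix*u + Iy*v + It)^2  over a win x win window.
    """
    h = win // 2
    Iy, Ix = np.gradient(I0)              # spatial gradients (rows, cols)
    It = I1 - I0                          # temporal derivative
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    A = np.stack([ix, iy], axis=1)
    uv, *_ = np.linalg.lstsq(A, -it, rcond=None)  # normal equations
    return uv

# toy example: a Gaussian blob translated by 1 pixel along x
yy, xx = np.mgrid[0:64, 0:64].astype(float)
I0 = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)
I1 = np.exp(-((xx - 33) ** 2 + (yy - 32) ** 2) / 50.0)
u, v = lucas_kanade_point(I0, I1, 30, 32)   # u close to 1, v close to 0
```

The oversmoothing issue raised in the abstract does not arise here because the estimate is purely local, but the same locality makes the scheme fail where the window's gradient structure is degenerate, which is precisely the accuracy question the thesis studies.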
|
|
|
Ferran Poveda, Jaume Garcia, Enric Marti, & Debora Gil. (2010). Validation of the myocardial architecture in DT-MRI tractography. In Medical Image Computing in Catalunya: Graduate Student Workshop (pp. 29–30). Girona (Spain).
Abstract: A deep understanding of myocardial structure may help to link form and function of the heart, unraveling crucial knowledge for medical and surgical clinical procedures and studies. In this work we introduce two visualization techniques based on DT-MRI streamlining, able to decipher interesting properties of the architectural organization of the heart.
|
|
|
Fernando Vilariño, Panagiota Spyridonos, Petia Radeva, Jordi Vitria, Fernando Azpiroz, & Juan Malagelada. (2010). Method for automatic classification of in vivo images.
Abstract: A method for automatically detecting a post-duodenal boundary in an image stream of the gastrointestinal (GI) tract. The image stream is sampled to obtain a reduced set of images for processing. The reduced set of images is filtered to remove non-valid frames or non-valid portions of frames, thereby generating a filtered set of valid images. A polar representation of the valid images is generated. Textural features of the polar representation are processed to detect the post-duodenal boundary of the GI tract.
|
|
|
Miguel Angel Bautista, Sergio Escalera, Xavier Baro, Oriol Pujol, Jordi Vitria, & Petia Radeva. (2010). Compact Evolutive Design of Error-Correcting Output Codes. In Supervised and Unsupervised Ensemble Methods and Applications, European Conference on Machine Learning (Vol. I, pp. 119–128).
|
|
|
Carolina Malagelada, F. De Lorio, Fernando Azpiroz, Santiago Segui, Petia Radeva, Anna Accarino, et al. (2010). Intestinal Dysmotility in Patients with Functional Intestinal Disorders Demonstrated by Computer Vision Analysis of Capsule Endoscopy Images. In 18th United European Gastroenterology Week (Vol. 56, pp. A19–A20).
|
|
|
Sophie Wuerger, Kaida Xiao, Chenyang Fu, & Dimosthenis Karatzas. (2010). Colour-opponent mechanisms are not affected by age-related chromatic sensitivity changes. OPO - Ophthalmic and Physiological Optics, 30(5), 635–659.
Abstract: The purpose of this study was to assess whether age-related chromatic sensitivity changes are associated with corresponding changes in hue perception in a large sample of colour-normal observers over a wide age range (n = 185; age range: 18-75 years). In these observers we determined both the sensitivity along the protan, deutan and tritan line; and settings for the four unique hues, from which the characteristics of the higher-order colour mechanisms can be derived. We found a significant decrease in chromatic sensitivity due to ageing, in particular along the tritan line. From the unique hue settings we derived the cone weightings associated with the colour mechanisms that are at equilibrium for the four unique hues. We found that the relative cone weightings (w(L) /w(M) and w(L) /w(S)) associated with the unique hues were independent of age. Our results are consistent with previous findings that the unique hues are rather constant with age while chromatic sensitivity declines. They also provide evidence in favour of the hypothesis that higher-order colour mechanisms are equipped with flexible cone weightings, as opposed to fixed weights. The mechanism underlying this compensation is still poorly understood.
|
|
|
Koen E.A. van de Sande, Theo Gevers, & C.G.M. Snoek. (2010). Evaluating Color Descriptors for Object and Scene Recognition. TPAMI - IEEE Transaction on Pattern Analysis and Machine Intelligence, 32(9), 1582–1596.
Abstract: Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge.
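The OpponentSIFT descriptor recommended in the abstract above computes SIFT over the opponent colour space, whose standard formulation can be sketched as follows. The channel naming and axis conventions here are the commonly cited ones, not taken from the paper itself.

```python
import numpy as np

def to_opponent(rgb):
    """Convert RGB values to the opponent colour space underlying
    OpponentSIFT (standard formulation; illustrative sketch).

    rgb: array (..., 3) with R, G, B in the last axis.
    O1 and O2 carry chromatic information; O3 is the intensity.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2.0)               # red-green channel
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)     # yellow-blue channel
    o3 = (r + g + b) / np.sqrt(3.0)           # intensity channel
    return np.stack([o1, o2, o3], axis=-1)

# for achromatic (gray) input the chromatic channels vanish
opp = to_opponent(np.array([0.2, 0.2, 0.2]))
```

Because O3 is pure intensity, light intensity changes concentrate in one channel, which is the property the paper's invariance taxonomy exploits when comparing colour descriptors.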
|
|
|
N. Serrano, L. Tarazon, D. Perez, Oriol Ramos Terrades, & S. Juan. (2010). The GIDOC Prototype. In 10th International Workshop on Pattern Recognition in Information Systems (pp. 82–89).
Abstract: Transcription of handwritten text in (old) documents is an important, time-consuming task for digital libraries. It might be carried out by first processing all document images off-line, and then manually supervising system transcriptions to edit incorrect parts. However, current techniques for automatic page layout analysis, text line detection and handwriting recognition are still far from perfect, and thus post-editing system output is not clearly better than simply ignoring it.
A more effective approach to transcribe old text documents is to follow an interactive- predictive paradigm in which both, the system is guided by the user, and the user is assisted by the system to complete the transcription task as efficiently as possible. Following this approach, a system prototype called GIDOC (Gimp-based Interactive transcription of old text DOCuments) has been developed to provide user-friendly, integrated support for interactive-predictive layout analysis, line detection and handwriting transcription.
GIDOC is designed to work with (large) collections of homogeneous documents, that is, of similar structure and writing styles. They are annotated sequentially, by (partially) supervising hypotheses drawn from statistical models that are constantly updated with an increasing number of available annotated documents. And this is done at different annotation levels. For instance, at the level of page layout analysis, GIDOC uses a novel text block detection method in which conventional, memoryless techniques are improved with a “history” model of text block positions. Similarly, at the level of text line image transcription, GIDOC includes a handwriting recognizer which is steadily improved with a growing number of (partially) supervised transcriptions.
|
|
|
Monica Piñol. (2010). Adaptative Vocabulary Tree for Image Classification using Reinforcement Learning (Vol. 162). Master's thesis.
|
|
|
Jean-Marc Ogier, Wenyin Liu, & Josep Llados (Eds.). (2010). Graphics Recognition: Achievements, Challenges, and Evolution (Vol. 6020). LNCS. Springer.
|
|