|
Jose Manuel Alvarez, Felipe Lumbreras, Antonio Lopez, & Theo Gevers. (2012). Understanding Road Scenes using Visual Cues.
|
|
|
Matthias S. Keil, Agata Lapedriza, David Masip, & Jordi Vitria. (2008). Preferred Spatial Frequencies for Human Face Processing Are Associated with Optimal Class Discrimination in the Machine. PLoS ONE 3(7):e2590, DOI:10.1371/journal.pone.0002590.
|
|
|
Oriol Ramos Terrades, Ernest Valveny, & Salvatore Tabbone. (2008). On the Combination of Ridgelets Descriptors for Symbol Recognition. In Graphics Recognition: Recent Advances and New Opportunities, W. Liu, J. Llados, & J.M. Ogier (Eds.), LNCS 5046:104–113.
|
|
|
Debora Gil, Jaume Garcia, Mariano Vazquez, Ruth Aris, & Guillaume Houzeaux. (2008). Patient-Sensitive Anatomic and Functional 3D Model of the Left Ventricle Function. In 8th World Congress on Computational Mechanics (WCCM8).
Abstract: Early diagnosis and accurate treatment of Left Ventricle (LV) dysfunction significantly increase patient survival. Impairment of LV contractility due to cardiovascular diseases is reflected in its motion patterns. Recent advances in medical imaging, such as Magnetic Resonance (MR), have encouraged research on 3D simulation and modelling of LV dynamics. Most of the existing 3D models [1] consider just the gross anatomy of the LV and recover a truncated ellipsoid that deforms along the cardiac cycle. The contraction mechanics of any muscle strongly depend on the spatial orientation of its muscular fibers, since the motion that the muscle undergoes mainly takes place along the fibers. It follows that such simplified models do not allow evaluation of the heart's electromechanical function and coupling, which has recently emerged as the key point for understanding LV functionality [2]. In order to thoroughly understand LV mechanics, it is necessary to consider the complete anatomy of the LV given by the orientation of the myocardial fibers in 3D space, as described by Torrent Guasp [3].
We propose developing a 3D patient-sensitive model of the LV integrating, for the first time, the ventricular band anatomy (fiber orientation), the LV gross anatomy, and its functionality. Such a model represents the LV function as a natural consequence of its own ventricular band anatomy. This might be decisive for restoring proper LV contraction in patients undergoing pacemaker treatment.
The LV function is defined as soon as the propagation of the contractile electromechanical pulse has been modelled. In our experiments we have used the wave equation for the propagation of the electric pulse. The electromechanical wave moves on the myocardial surface and should have a conductivity tensor oriented along the muscular fibers. Thus, whatever mathematical model for electric pulse propagation [4] we consider, the complete anatomy of the LV should be extracted.
The LV gross anatomy is obtained by processing multi-slice MR images recorded for each patient. Information about the myocardial fiber distribution can only be extracted by Diffusion Tensor Imaging (DTI), which cannot provide in vivo information for each patient. As a first approach, we have computed an average fiber model from several DTI studies of canine hearts. This rough anatomy is the input to our electromechanical propagation model simulating LV dynamics. The average fiber orientation is updated until the simulated LV motion agrees with the experimental evidence provided by the LV motion observed in tagged MR (TMR) sequences. Experimental LV motion is recovered by applying image processing, differential geometry, and interpolation techniques to 2D TMR slices [5]. The pipeline in figure 1 outlines the interaction between simulations and experimental data leading to our patient-tailored model.
Figure 1: Scheme for the Left Ventricle Patient-Sensitive Model.
Keywords: Left Ventricle, Electromechanical Models, Image Processing, Magnetic Resonance.
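The propagation model the abstract describes, a wave equation whose conductivity tensor is aligned with the local fiber direction, can be illustrated with a minimal 2D finite-difference sketch. All function names, grid sizes, and parameter values here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def conductivity(theta, d_par=1.0, d_perp=0.1):
    # Tensor D = d_par * f f^T + d_perp * (I - f f^T) for fiber direction
    # f = (cos theta, sin theta): conduction is fast along the fiber, slow across it.
    fx, fy = np.cos(theta), np.sin(theta)
    dxx = d_par * fx * fx + d_perp * fy * fy
    dxy = (d_par - d_perp) * fx * fy
    dyy = d_par * fy * fy + d_perp * fx * fx
    return dxx, dxy, dyy

def step(u, u_prev, theta, dt=0.1, h=1.0):
    # One leapfrog step of the anisotropic wave equation u_tt = div(D grad u)
    # on a periodic 2D grid, using central differences.
    gx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * h)
    gy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * h)
    dxx, dxy, dyy = conductivity(theta)
    fx = dxx * gx + dxy * gy   # flux components of D grad u
    fy = dxy * gx + dyy * gy
    div = (np.roll(fx, -1, axis=1) - np.roll(fx, 1, axis=1)) / (2 * h) \
        + (np.roll(fy, -1, axis=0) - np.roll(fy, 1, axis=0)) / (2 * h)
    return 2 * u - u_prev + dt ** 2 * div

# Toy run: a Gaussian pulse spreads faster along the (horizontal) fiber direction.
n = 64
theta = np.zeros((n, n))                  # all fibers horizontal
ys, xs = np.mgrid[0:n, 0:n]
u = np.exp(-((xs - n // 2) ** 2 + (ys - n // 2) ** 2) / 4.0)
u_prev = u.copy()
for _ in range(50):
    u, u_prev = step(u, u_prev, theta), u
```

This is only a structural sketch: the actual model runs on the 3D myocardial geometry segmented from MR, with fiber orientations initialized from the averaged canine DTI data and iteratively corrected against TMR motion.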
|
|
|
X. Varona, Jordi Gonzalez, Ignasi Rius, & Juan J. Villanueva. (2008). Importance of Detection for Video Surveillance Applications. Optical Engineering, vol. 47(8), 087201/1–9.
|
|
|
Jordi Gonzalez, & Thomas B. Moeslund. (2008). Tracking Humans for the Evaluation of their Motion in Image Sequences.
|
|
|
Juan J. Villanueva. (2008). Visualization, Imaging, and Image Processing.
|
|
|
Paramveer S. Dhillon, Francisco Javier Orozco, & Jordi Gonzalez. (2008). Real-Time Monocular Face Tracking Using an Active Camera.
|
|
|
Eduard Vazquez, & Maria Vanrell. (2008). Tools for the Development of Engineering Competencies in an Artificial Intelligence Course.
|
|
|
Liu Wenyin, Josep Llados, & Jean-Marc Ogier. (2008). Graphics Recognition: Recent Advances and New Opportunities (LNCS Vol. 5046).
|
|
|
Carles Fernandez, Xavier Roca, & Jordi Gonzalez. (2008). Providing Automatic Multilingual Text Generation to Artificial Cognitive Systems.
|
|
|
X. Varona, Antoni Jaume-i-Capo, Jordi Gonzalez, & Francisco Jose Perales. (2008). Toward Natural Interaction through Visual Recognition of Body Gestures in Real-Time. Interacting with Computers, DOI:10.1016/j.intcom.2008.10.001, available online.
|
|
|
Robert Benavente, Ernest Valveny, Jaume Garcia, Agata Lapedriza, Miquel Ferrer, & Gemma Sanchez. (2008). An Experience of Adapting the Programming Courses in Computer Engineering to the European Higher Education Area (EHEA).
|
|
|
O. Rodriguez-Leor, Carlo Gatta, E. Fernandez-Nofrerias, Oriol Pujol, Neus Salvatella, C. Bosch, et al. (2008). Computationally Efficient Image-based IVUS Pullbacks Gating. European Heart Journal, ESC Supplement, Munich, 2008, p. 775.
|
|
|
Aymen Azaza. (2018). Context, Motion and Semantic Information for Computational Saliency (Joost Van de Weijer & Ali Douik, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: The main objective of this thesis is to highlight the salient object in an image or in a video sequence. We address three important but, in our opinion, insufficiently investigated aspects of saliency detection. Firstly, we extend previous research on saliency by explicitly modelling the information provided by the context, and we show the importance of explicit context modelling for saliency estimation. Several important works in saliency are based on the usage of object proposals; however, these methods focus on the saliency of the object proposal itself and ignore the context. To introduce context into such saliency approaches, we couple every object proposal with its direct context. This allows us to evaluate the importance of the immediate surround (context) for its saliency. We propose several saliency features computed from the context proposals, including features based on omni-directional and horizontal context continuity. Secondly, we investigate the usage of top-down methods (high-level semantic information) for the task of saliency prediction, since most computational methods are bottom-up or include only a few semantic classes; we propose to consider a wider group of object classes. These objects represent important semantic information which we exploit in our saliency prediction approach. Thirdly, we develop a method to detect video saliency by computing saliency from supervoxels and optical flow. In addition, we apply the context features developed in this thesis to video saliency detection. The method combines shape and motion features with our proposed context features. To summarize, we prove that extending object proposals with their direct context improves the task of saliency detection in both image and video data. We also evaluate the importance of semantic information in saliency estimation. Finally, we propose a new motion feature to detect saliency in video data. The three proposed novelties are evaluated on standard saliency benchmark datasets and are shown to improve over the state of the art.
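The coupling of an object proposal with its direct context can be sketched as follows. The dilation factor, the histogram-contrast measure, and all function names are illustrative assumptions, not the actual features proposed in the thesis:

```python
import numpy as np

def context_box(box, scale, h, w):
    # Dilate a proposal box (x0, y0, x1, y1) by `scale` to obtain its direct
    # context region, clipped to the image bounds (h rows, w columns).
    x0, y0, x1, y1 = box
    dx = int((x1 - x0) * (scale - 1) / 2)
    dy = int((y1 - y0) * (scale - 1) / 2)
    return (max(0, x0 - dx), max(0, y0 - dy), min(w, x1 + dx), min(h, y1 + dy))

def context_contrast(img, box, scale=2.0, bins=16):
    # Chi-square distance between the gray-level histograms of a proposal and
    # its surrounding ring: high contrast with the surround suggests saliency.
    h, w = img.shape
    cx0, cy0, cx1, cy1 = context_box(box, scale, h, w)
    x0, y0, x1, y1 = box
    inner = img[y0:y1, x0:x1]
    ring = img[cy0:cy1, cx0:cx1].astype(float).copy()
    ring[y0 - cy0:y1 - cy0, x0 - cx0:x1 - cx0] = np.nan   # mask out the proposal
    ring = ring[~np.isnan(ring)]
    hi, _ = np.histogram(inner, bins=bins, range=(0, 255), density=True)
    hr, _ = np.histogram(ring, bins=bins, range=(0, 255), density=True)
    return 0.5 * np.sum((hi - hr) ** 2 / (hi + hr + 1e-9))

# Toy example: a bright square on a dark background contrasts strongly
# with its surround, while a uniform background patch does not.
img = np.zeros((100, 100))
img[40:60, 40:60] = 200.0
```

In this sketch the score of the proposal covering the bright square exceeds that of a background patch, which is the basic behavior context-based saliency features rely on.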
|
|