|
J. Mauri, Eduard Fernandez-Nofrerias, B. Garcia del Blanco, E. Iraculis, J.A. Gomez-Hospital, J. Comin, et al. (2000). Vessel movement in intracoronary ultrasound image analysis: a mathematical model. In Congrés de la Societat Catalana de Cardiologia.
|
|
|
David Lloret, Joan Serrat, Antonio Lopez, & Juan J. Villanueva. (2002). Motion-induced error correction in ultrasound imaging.
|
|
|
Carme Julia. (2004). Motion segmentation through factorization. Application to night driving assistance.
|
|
|
Carme Julia, Joan Serrat, Antonio Lopez, Felipe Lumbreras, & Daniel Ponsa. (2006). Motion segmentation through factorization. Application to night driving assistance.
|
|
|
Carme Julia, Angel Sappa, Felipe Lumbreras, Joan Serrat, & Antonio Lopez. (2007). Motion Segmentation from Feature Trajectories with Missing Data. In J. Marti et al. (Eds.), 3rd Iberian Conference on Pattern Recognition and Image Analysis (LNCS 4477, pp. 483–490).
|
|
|
Ignasi Rius. (2010). Motion Priors for Efficient Bayesian Tracking in Human Sequence Evaluation (Jordi Gonzalez, & Xavier Roca, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Recovering human motion by visual analysis is a challenging computer vision research area with many potential applications. Model-based tracking approaches, and in particular particle filters, formulate the problem as a Bayesian inference task whose aim is to sequentially estimate the distribution of the parameters of a human body model over time. These approaches strongly rely on good dynamical and observation models to predict and update configurations of the human body according to measurements from the image data. However, it is very difficult to design observation models which extract useful and reliable information from image sequences robustly. This is especially challenging in monocular tracking, given that only one viewpoint of the scene is available. Therefore, to overcome these limitations, strong motion priors are needed to guide the exploration of the state space.
The work presented in this Thesis aims to retrieve the 3D motion parameters of a human body model from incomplete and noisy measurements of a monocular image sequence. These measurements consist of the 2D positions of a reduced set of joints in the image plane. Towards this end, we present a novel action-specific model of human motion which is trained from several databases of real motion-captured performances of an action, and is used as a priori knowledge within a particle filtering scheme.
Body postures are represented by means of a simple and compact stick figure model which uses direction cosines to represent the direction of body limbs in 3D Cartesian space. Then, for a given action, Principal Component Analysis is applied to the training data to perform dimensionality reduction over the highly correlated input data. Before the learning stage of the action model, the input motion performances are synchronized by means of a novel dense matching algorithm based on Dynamic Programming. The algorithm synchronizes all the motion sequences of the same action class, finding an optimal solution in real time.
Then, a probabilistic action model is learnt from the synchronized motion examples which captures the variability and temporal evolution of full-body motion within a specific action. In particular, for each action, the parameters learnt are: a representative manifold for the action consisting of its mean performance, the standard deviation from the mean performance, the mean observed direction vectors from each motion subsequence of a given length, and the expected error at a given time instant.
Subsequently, the action-specific model is used as a priori knowledge on human motion which improves the efficiency and robustness of the overall particle filtering tracking framework. First, the dynamic model guides the particles according to similar situations previously learnt. Then, the state space is constrained so that only feasible human postures are accepted as valid solutions at each time step. As a result, the state space is explored more efficiently, as the particle set covers the most probable body postures.
Finally, experiments are carried out using test sequences from several motion databases. Results show that our tracking scheme is able to estimate the rough 3D configuration of a full-body model given only the 2D positions of a reduced set of joints. Separate tests on the sequence synchronization method and the subsequence probabilistic matching technique are also provided.
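As context for the dimensionality-reduction step the abstract describes, a minimal numpy sketch of PCA on posture vectors. The data, its dimensions, and the 95% variance threshold are hypothetical illustrations, not taken from the thesis:

```python
import numpy as np

# Hypothetical training set: 500 postures, each a 66-D vector of
# limb direction cosines (22 limbs x 3 components). Synthetic values.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 66))

# Centre the data and obtain principal components via SVD.
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Keep enough components to explain ~95% of the variance.
var = S**2 / np.sum(S**2)
k = int(np.searchsorted(np.cumsum(var), 0.95)) + 1

# Project postures onto the k-D manifold and reconstruct them.
coeffs = Xc @ Vt[:k].T          # low-dimensional representation
recon = coeffs @ Vt[:k] + mean  # back-projection to posture space
```

On real motion-capture data the joint trajectories are highly correlated, so far fewer components are typically needed than for this synthetic example.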
|
|
|
Michal Drozdzal, Santiago Segui, Petia Radeva, Carolina Malagelada, Fernando Azpiroz, & Jordi Vitria. (2015). Motility bar: a new tool for motility analysis of endoluminal videos. CBM - Computers in Biology and Medicine, 65, 320–330.
Abstract: Wireless Capsule Endoscopy (WCE) provides a new perspective of the small intestine, since it enables, for the first time, visualization of the entire organ. However, the long visual video analysis time, due to the large amount of data in a single WCE study, has been an important factor impeding the widespread use of the capsule as a tool for intestinal abnormality detection. Therefore, the introduction of WCE triggered a new field for the application of computational methods, and in particular, of computer vision. In this paper, we follow the computational approach and come up with a new perspective on the small intestine motility problem. Our approach consists of three steps: first, we review a tool for the visualization of the motility information contained in WCE video; second, we propose algorithms for the characterization of two motility building blocks: a contraction detector and lumen size estimation; finally, we introduce an approach to detect segments of stable motility behavior. Our claims are supported by an evaluation performed with 10 WCE videos, suggesting that our methods ably capture the intestinal motility information.
Keywords: Small intestine; Motility; WCE; Computer vision; Image classification
|
|
|
Partha Pratim Roy, Umapada Pal, & Josep Llados. (2008). Morphology Based Handwritten Line Segmentation using Foreground and Background Information. In International Conference on Frontiers in Handwriting Recognition (pp. 241–246).
|
|
|
Joan Serrat, Jordi Vitria, & J. Pladellorens. (1991). Morphological Segmentation of Heart Scintigraphic Image Sequences. In Computer Assisted Radiology.
|
|
|
Jordi Vitria, C. Gratin, D. Seron, & F. Moreso. (1995). Morphological image analysis for quantification of renal damage.
|
|
|
D. Seron, F. Moreso, C. Gratin, & Jordi Vitria. (1995). Morphological Granulometries and Quantification of Interstitial Chronic Renal Damage.
|
|
|
Jordi Vitria, X. Binefa, & Juan J. Villanueva. (1992). Morphological Algorithms for Visual Analysis of Integrated Circuits.
|
|
|
Angel Sappa, Cristhian A. Aguilera-Carrasco, Juan A. Carvajal Ayala, Miguel Oliveira, Dennis Romero, Boris X. Vintimilla, et al. (2016). Monocular visual odometry: A cross-spectral image fusion based approach. RAS - Robotics and Autonomous Systems, 85, 26–36.
Abstract: This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using data sets obtained with two different platforms are presented. Additionally, comparisons with a previous approach, as well as with the monocular visible/infrared spectra, are provided, showing the advantages of the proposed scheme.
Keywords: Monocular visual odometry; LWIR-RGB cross-spectral imaging; Image fusion
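To illustrate the kind of DWT-based fusion the abstract refers to, a minimal numpy sketch of one-level Haar wavelet fusion of two single-channel images. The fusion rule here (average the approximation subbands, keep the stronger detail coefficient) is a common heuristic, not the paper's mutual-information-tuned setup, and the function names are hypothetical:

```python
import numpy as np

def haar2d(img):
    # One-level 2-D Haar transform: approximation (LL) plus detail
    # (LH, HL, HH) subbands. Assumes even image dimensions.
    a = (img[0::2] + img[1::2]) / 2.0   # row-wise low-pass
    d = (img[0::2] - img[1::2]) / 2.0   # row-wise high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d.
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2] = a + d
    img[1::2] = a - d
    return img

def fuse(visible, infrared):
    # Average the low-frequency content of both spectra; for each
    # detail coefficient keep whichever input has the larger response.
    A, B = haar2d(visible), haar2d(infrared)
    ll = (A[0] + B[0]) / 2.0
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(A[1:], B[1:])]
    return ihaar2d(ll, *details)
```

The fused image can then be fed to any monocular visual odometry pipeline in place of the single-spectrum input.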
|
|
|
Diego Alejandro Cheda. (2009). Monocular egomotion estimation for ADAS application (Vol. 148). Ph.D. thesis, Bellaterra, Barcelona.
|
|
|
Diego Cheda, Daniel Ponsa, & Antonio Lopez. (2012). Monocular Egomotion Estimation based on Image Matching. In 1st International Conference on Pattern Recognition Applications and Methods (pp. 425–430).
|
|