Carles Sanchez, Jorge Bernal, F. Javier Sanchez, Antoni Rosell, Marta Diez-Ferrer, & Debora Gil. (2015). Towards On-line Quantification of Tracheal Stenosis from Videobronchoscopy. IJCARS - International Journal of Computer Assisted Radiology and Surgery, 10(6), 935–945.
R. Clariso, David Masip, & A. Rius. (2014). Student projects empowering mobile learning in higher education. RUSC - Revista de Universidad y Sociedad del Conocimiento, 192–207.
Michal Drozdzal, Santiago Segui, Petia Radeva, Carolina Malagelada, Fernando Azpiroz, & Jordi Vitria. (2015). Motility bar: a new tool for motility analysis of endoluminal videos. CBM - Computers in Biology and Medicine, 65, 320–330.
Abstract: Wireless Capsule Endoscopy (WCE) provides a new perspective on the small intestine, since it enables, for the first time, visualization of the entire organ. However, the long visual analysis time, due to the large amount of data in a single WCE study, has been an important factor impeding the widespread use of the capsule as a tool for detecting intestinal abnormalities. The introduction of WCE therefore triggered a new field for the application of computational methods, and in particular of computer vision. In this paper, we follow the computational approach and offer a new perspective on the small intestine motility problem. Our approach consists of three steps: first, we review a tool for visualizing the motility information contained in WCE video; second, we propose algorithms for characterizing two motility building blocks: a contraction detector and a lumen size estimator; finally, we introduce an approach to detect segments of stable motility behavior. Our claims are supported by an evaluation performed on 10 WCE videos, suggesting that our methods ably capture intestinal motility information.
Keywords: Small intestine; Motility; WCE; Computer vision; Image classification
Carolina Malagelada, Michal Drozdzal, Santiago Segui, Sara Mendez, Jordi Vitria, Petia Radeva, et al. (2015). Classification of functional bowel disorders by objective physiological criteria based on endoluminal image analysis. AJPGI - American Journal of Physiology-Gastrointestinal and Liver Physiology, 309(6), G413–G419.
Abstract: We have previously developed an original method to evaluate small bowel motor function based on computer vision analysis of endoluminal images obtained by capsule endoscopy. Our aim was to demonstrate intestinal motor abnormalities in patients with functional bowel disorders by endoluminal vision analysis. Patients with functional bowel disorders (n = 205) and healthy subjects (n = 136) ingested the endoscopic capsule (Pillcam-SB2, Given-Imaging) after an overnight fast, and 45 min after gastric exit of the capsule a liquid meal (300 ml, 1 kcal/ml) was administered. Endoluminal image analysis was performed with computer vision and machine learning techniques to define the normal range and to identify clusters of abnormal function. After training the algorithm, we used 196 patients and 48 healthy subjects, all completely naive, as the test set. In the test set, 51 patients (26%) were detected outside the normal range (P < 0.001 vs. 3 healthy subjects) and clustered into hypo- and hyperdynamic subgroups compared with healthy subjects. Patients with hypodynamic behavior (n = 38) exhibited fewer luminal closure sequences (41 ± 2% of the recording time vs. 61 ± 2%; P < 0.001) and more static sequences (38 ± 3 vs. 20 ± 2%; P < 0.001); in contrast, patients with hyperdynamic behavior (n = 13) had an increased proportion of luminal closure sequences (73 ± 4 vs. 61 ± 2%; P = 0.029) and more high-motion sequences (3 ± 1 vs. 0.5 ± 0.1%; P < 0.001). Applying an original methodology, we have developed a novel classification of functional gut disorders based on objective, physiological criteria of small bowel function.
Keywords: capsule endoscopy; computer vision analysis; functional bowel disorders; intestinal motility; machine learning
Juan Ramon Terven Salinas, Bogdan Raducanu, Maria Elena Meza-de-Luna, & Joaquin Salas. (2016). Head-gestures mirroring detection in dyadic social interactions with computer vision-based wearable devices. NEUCOM - Neurocomputing, 175(B), 866–876.
Abstract: During face-to-face human interaction, nonverbal communication plays a fundamental role. A relevant aspect of social interactions is mirroring, in which a person tends to mimic the nonverbal behavior (head and body gestures, vocal prosody, etc.) of the counterpart. In this paper, we introduce a computer vision-based system to detect mirroring in dyadic social interactions using a wearable platform. In our context, mirroring is inferred as simultaneous head nodding displayed by the interlocutors. Our approach consists of the following steps: (1) facial feature extraction; (2) facial feature stabilization; (3) head nodding recognition; and (4) mirroring detection. Our system achieves a mirroring detection accuracy of 72% on a custom mirroring dataset.
Keywords: Head gestures recognition; Mirroring detection; Dyadic social interaction analysis; Wearable devices