Author: Jordi Vitria; J. Llacer
Title: Reconstructing 3D light microscopic images using the EM algorithm
Type: Journal
Year: 1996
Publication: Pattern Recognition Letters
Volume: 17
Issue: 14
Pages: 1491–1498
Notes: OR; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ ViL1996
Serial: 74
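A note for readers of this record: the EM formulation referenced in the title is closely related to the multiplicative Richardson-Lucy update for image restoration under a Poisson noise model. The following is only a generic 2D sketch of that iteration under assumed inputs (an observed image and a known point-spread function psf), not the paper's 3D implementation.

    import numpy as np
    from scipy.signal import fftconvolve

    def em_deconvolve(observed, psf, n_iter=50, eps=1e-12):
        """Generic EM (Richardson-Lucy style) deconvolution sketch for a 2D image."""
        psf_mirror = psf[::-1, ::-1]                 # flipped PSF for the correlation step
        estimate = np.full_like(observed, observed.mean(), dtype=float)
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")        # predicted image
            ratio = observed / (blurred + eps)                       # data / prediction
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")  # multiplicative update
        return estimate

The multiplicative form keeps the estimate non-negative, which is the main practical appeal of the EM view of deconvolution.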
 

 
Author: Sergio Escalera; Jordi Gonzalez; Xavier Baro; Jamie Shotton
Title: Guest Editor Introduction to the Special Issue on Multimodal Human Pose Recovery and Behavior Analysis
Type: Journal Article
Year: 2016
Publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
Abbreviated Journal: TPAMI
Volume: 38
Pages: 1489–1491
Abstract: The sixteen papers in this special section focus on human pose recovery and behavior analysis (HuPBA). This is one of the most challenging topics in computer vision, pattern analysis, and machine learning. It is of critical importance for application areas that include gaming, computer interaction, human-robot interaction, security, commerce, assistive technologies and rehabilitation, sports, sign language recognition, and driver assistance technology, to mention just a few. In essence, HuPBA requires dealing with the articulated nature of the human body, changes in appearance due to clothing, and the inherent problems of cluttered scenes, such as background artifacts, occlusions, and illumination changes. These papers represent the most recent research in this field, including new methods considering still images, image sequences, depth data, stereo vision, 3D vision, audio, and IMUs, among others.
Notes: HuPBA; ISE; MV
Approved: no
Call Number: Admin @ si @
Serial: 2851
 

 
Author: Santiago Segui; Michal Drozdzal; Fernando Vilariño; Carolina Malagelada; Fernando Azpiroz; Petia Radeva; Jordi Vitria
Title: Categorization and Segmentation of Intestinal Content Frames for Wireless Capsule Endoscopy
Type: Journal Article
Year: 2012
Publication: IEEE Transactions on Information Technology in Biomedicine
Abbreviated Journal: TITB
Volume: 16
Issue: 6
Pages: 1341–1352
Abstract: Wireless capsule endoscopy (WCE) is a device that allows direct visualization of the gastrointestinal tract with minimal discomfort for the patient, but at the price of a large amount of screening time. To reduce this time, several works have proposed to automatically remove all frames showing intestinal content. These methods label frames as {intestinal content, clear} without discriminating between types of content (with different physiological meaning) or the portion of the image covered. In addition, since the presence of intestinal content has been identified as an indicator of intestinal motility, its accurate quantification has potential clinical relevance. In this paper, we present a method for the robust detection and segmentation of intestinal content in WCE images, together with its further discrimination between turbid liquid and bubbles. Our proposal is based on a twofold system. First, frames presenting intestinal content are detected by a support vector machine classifier using color and textural information. Second, intestinal content frames are segmented into {turbid, bubbles, and clear} regions. We show a detailed validation using a large dataset. Our system outperforms previous methods and, for the first time, discriminates between turbid and bubble media.
ISSN: 1089-7771
Area: 800
Notes: MILAB; MV; OR; SIAI
Approved: no
Call Number: Admin @ si @ SDV2012
Serial: 2124
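As a rough illustration of the first stage described in the abstract above (a support vector machine deciding whether a frame shows intestinal content from color and textural cues), the sketch below trains a classifier on per-channel color histograms plus a gradient-magnitude texture histogram. The feature design, the SVM parameters and the frames/labels inputs are assumptions made for illustration, not the published pipeline.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def frame_features(frame):
        """Simple color + texture descriptor for one RGB frame (H x W x 3, uint8)."""
        # Normalized per-channel color histograms.
        color = np.concatenate(
            [np.histogram(frame[..., c], bins=16, range=(0, 256), density=True)[0]
             for c in range(3)]
        )
        # Texture: histogram of gradient magnitudes on the grayscale image.
        gray = frame.mean(axis=2)
        gy, gx = np.gradient(gray)
        texture = np.histogram(np.hypot(gx, gy), bins=16, density=True)[0]
        return np.concatenate([color, texture])  # 48 color bins + 16 texture bins

    def train_content_detector(frames, labels):
        """frames: list of RGB frames; labels: 1 = intestinal content, 0 = clear (assumed)."""
        X = np.vstack([frame_features(f) for f in frames])
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        clf.fit(X, np.asarray(labels))
        return clf

Once trained, clf.predict(frame_features(f).reshape(1, -1)) would label a new frame; the paper's second stage then segments the content frames into turbid, bubble and clear regions.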
 

 
Author: Fosca De Iorio; C. Malagelada; Fernando Azpiroz; M. Maluenda; C. Violanti; Laura Igual; Jordi Vitria; Juan R. Malagelada
Title: Intestinal motor activity, endoluminal motion and transit
Type: Journal Article
Year: 2009
Publication: Neurogastroenterology & Motility
Abbreviated Journal: NEUMOT
Volume: 21
Issue: 12
Pages: 1264–e119
Abstract: A programme for the evaluation of intestinal motility has recently been developed based on endoluminal image analysis using computer vision methodology and machine learning techniques. Our aim was to determine the effect of intestinal muscle inhibition on wall motion, dynamics of luminal content and transit in the small bowel. Fourteen healthy subjects ingested the endoscopic capsule (Pillcam, Given Imaging) in fasting conditions. Seven of them received glucagon (4.8 microg kg(-1) bolus followed by a 9.6 microg kg(-1) h(-1) infusion during 1 h), and in the other seven, fasting activity was recorded as controls. This dose of glucagon has previously been shown to inhibit both tonic and phasic intestinal motor activity. Endoluminal image and displacement were analyzed by means of a computer vision programme specifically developed for the evaluation of muscular activity (contractile and non-contractile patterns), intestinal contents, endoluminal motion and transit. Thirty-minute periods before, during and after glucagon infusion were analyzed and compared with equivalent periods in controls. No differences were found in the parameters measured during the baseline (pretest) periods when comparing glucagon and control experiments. During glucagon infusion, there was a significant reduction in contractile activity (0.2 +/- 0.1 vs 4.2 +/- 0.9 luminal closures per min, P < 0.05; 0.4 +/- 0.1 vs 3.4 +/- 1.2% of images with radial wrinkles, P < 0.05) and a significant reduction of endoluminal motion (82 +/- 9 vs 21 +/- 10% of static images, P < 0.05). Endoluminal image analysis, by means of computer vision and machine learning techniques, can reliably detect reduced intestinal muscle activity and motion.
Notes: OR; MILAB; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DMA2009
Serial: 1251
 

 
Author: Jorge Bernal; Nima Tajkbaksh; F. Javier Sanchez; Bogdan J. Matuszewski; Hao Chen; Lequan Yu; Quentin Angermann; Olivier Romain; Bjorn Rustad; Ilangko Balasingham; Konstantin Pogorelov; Sungbin Choi; Quentin Debard; Lena Maier Hein; Stefanie Speidel; Danail Stoyanov; Patrick Brandao; Henry Cordova; Cristina Sanchez Montes; Suryakanth R. Gurudu; Gloria Fernandez Esparrach; Xavier Dray; Jianming Liang; Aymeric Histace
Title: Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results from the MICCAI 2015 Endoscopic Vision Challenge
Type: Journal Article
Year: 2017
Publication: IEEE Transactions on Medical Imaging
Abbreviated Journal: TMI
Volume: 36
Issue: 6
Pages: 1231–1249
Keywords: Endoscopic vision; Polyp Detection; Handcrafted features; Machine Learning; Validation Framework
Abstract: Colonoscopy is the gold standard for colon cancer screening, though some polyps are still missed, thus preventing early disease detection and treatment. Several computational systems have been proposed to assist polyp detection during colonoscopy, but so far without consistent evaluation. The lack of publicly available annotated databases has made it difficult to compare methods and to assess whether they achieve performance levels acceptable for clinical use. The Automatic Polyp Detection subchallenge, conducted as part of the Endoscopic Vision Challenge (http://endovis.grand-challenge.org) at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2015, was an effort to address this need. In this paper, we report the results of this comparative evaluation of polyp detection methods, and we describe additional experiments to further explore differences between methods. We define performance metrics and provide evaluation databases that allow comparison of multiple methodologies. Results show that convolutional neural networks (CNNs) are the state of the art. Nevertheless, it is also demonstrated that combining different methodologies can lead to an improved overall performance.
Notes: MV; 600.096; 600.075
Approved: no
Call Number: Admin @ si @ BTS2017
Serial: 2949
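The challenge reported in this record compares detectors through per-frame performance metrics. As a minimal sketch, assuming detections have already been matched against ground-truth polyps, precision, recall and F1 can be computed from the matched counts as follows; the matching protocol itself is not reproduced here.

    def detection_scores(tp, fp, fn):
        """Precision, recall and F1 from matched detection counts.

        tp: detections matched to a ground-truth polyp
        fp: detections with no matching polyp
        fn: ground-truth polyps with no matching detection
        """
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        return precision, recall, f1

    # Example: 180 true positives, 20 false alarms, 40 missed polyps.
    print(detection_scores(180, 20, 40))  # -> approximately (0.90, 0.82, 0.86)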