Author Patrick Brandao; O. Zisimopoulos; E. Mazomenos; G. Ciuti; Jorge Bernal; M. Visentini-Scarzanella; A. Menciassi; P. Dario; A. Koulaouzidis; A. Arezzo; D.J. Hawkes; D. Stoyanov
  Title Towards a computer-aided diagnosis system in colonoscopy: Automatic polyp segmentation using convolution neural networks Type Journal
  Year 2018 Publication Journal of Medical Robotics Research Abbreviated Journal JMRR  
  Volume 3 Issue 2 Pages  
  Keywords convolutional neural networks; colonoscopy; computer aided diagnosis  
  Abstract Early diagnosis is essential for the successful treatment of bowel cancers, including colorectal cancer (CRC), and capsule endoscopic imaging with robotic actuation can be a valuable diagnostic tool when combined with automated image analysis. We present a deep learning rooted detection and segmentation framework for recognizing lesions in colonoscopy and capsule endoscopy images. We restructure established convolution architectures, such as VGG and ResNets, by converting them into fully convolutional networks (FCNs), fine-tune them and study their capabilities for polyp segmentation and detection. We additionally use Shape-from-Shading (SfS) to recover depth and provide a richer representation of the tissue's structure in colonoscopy images. Depth is incorporated into our network models as an additional input channel to the RGB information and we demonstrate that the resulting network yields improved performance. Our networks are tested on publicly available datasets and the most accurate segmentation model achieved a mean segmentation IU of 47.78% and 56.95% on the ETIS-Larib and CVC-Colon datasets, respectively. For polyp detection, the top performing models we propose surpass the current state of the art with detection recalls superior to 90% for all datasets tested. To our knowledge, we present the first work to use FCNs for polyp segmentation in addition to proposing a novel combination of SfS and RGB that boosts performance.
  Notes MV; no menciona Approved no  
  Call Number BZM2018 Serial 2976  
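For readers who want a concrete picture of the approach summarized in the record above, here is a minimal, hypothetical sketch (assuming PyTorch and torchvision; it is not the paper's code): a VGG-style backbone is turned into a fully convolutional segmenter, and its first convolution is widened to four input channels so a Shape-from-Shading depth map can be stacked onto the RGB image.

```python
# Minimal sketch, not the paper's implementation: VGG16 backbone used as a
# fully convolutional segmenter with an RGB + depth (4-channel) input.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PolypFCN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        backbone = vgg16(weights=None).features
        # Replace the first conv so it accepts RGB plus a depth channel.
        old = backbone[0]
        backbone[0] = nn.Conv2d(4, old.out_channels, kernel_size=3, padding=1)
        self.encoder = backbone                       # downsamples by 32x
        self.score = nn.Conv2d(512, n_classes, kernel_size=1)

    def forward(self, x):                             # x: (B, 4, H, W)
        h, w = x.shape[2:]
        logits = self.score(self.encoder(x))
        # Upsample the coarse scores back to input resolution (FCN-32s style).
        return nn.functional.interpolate(logits, size=(h, w),
                                         mode="bilinear", align_corners=False)

rgbd = torch.randn(1, 4, 224, 224)                    # RGB stacked with SfS depth
print(PolypFCN()(rgbd).shape)                         # torch.Size([1, 2, 224, 224])
```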
 

 
Author Santiago Segui; Michal Drozdzal; Guillem Pascual; Petia Radeva; Carolina Malagelada; Fernando Azpiroz; Jordi Vitria
  Title Generic Feature Learning for Wireless Capsule Endoscopy Analysis Type Journal Article
  Year 2016 Publication Computers in Biology and Medicine Abbreviated Journal CBM  
  Volume 79 Issue Pages 163-172  
  Keywords Wireless capsule endoscopy; Deep learning; Feature learning; Motility analysis  
  Abstract The interpretation and analysis of wireless capsule endoscopy (WCE) recordings is a complex task which requires sophisticated computer-aided decision (CAD) systems to help physicians with video screening and, finally, with the diagnosis. Most CAD systems used in capsule endoscopy share a common system design, but use very different image and video representations. As a result, each time a new clinical application of WCE appears, a new CAD system has to be designed from scratch. This makes the design of new CAD systems very time-consuming. Therefore, in this paper we introduce a system for small intestine motility characterization, based on Deep Convolutional Neural Networks, which circumvents the laborious step of designing specific features for individual motility events. Experimental results show the superiority of the learned features over alternative classifiers constructed using state-of-the-art handcrafted features. In particular, it reaches a mean classification accuracy of 96% for six intestinal motility events, outperforming the other classifiers by a large margin (a 14% relative performance increase).  
  Notes OR; MILAB;MV; Approved no  
  Call Number Admin @ si @ SDP2016 Serial 2836  
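As an illustration of the generic feature-learning idea in the record above, the following is a minimal sketch (PyTorch assumed; the architecture is hypothetical and not the one in the paper): a small CNN learns features directly from capsule endoscopy frames and maps them to six motility event classes, in place of handcrafted descriptors.

```python
# Hypothetical sketch: a compact CNN classifier for six motility events.
import torch
import torch.nn as nn

class MotilityCNN(nn.Module):
    def __init__(self, n_events=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling -> learned descriptor
        )
        self.classifier = nn.Linear(128, n_events)

    def forward(self, x):                       # x: (B, 3, H, W) capsule frames
        return self.classifier(self.features(x).flatten(1))

frames = torch.randn(8, 3, 128, 128)
print(MotilityCNN()(frames).shape)              # torch.Size([8, 6])
```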
 

 
Author Mario Rojas; David Masip; A. Todorov; Jordi Vitria
  Title Automatic Prediction of Facial Trait Judgments: Appearance vs. Structural Models Type Journal Article
  Year 2011 Publication PLoS ONE Abbreviated Journal Plos  
  Volume 6 Issue 8 Pages e23323  
  Abstract JCR Impact Factor 2010: 4.411
Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by a certain type of holistic description of the face appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
  Publisher Public Library of Science Place of Publication Editor  
  Notes OR;MV Approved no  
  Call Number Admin @ si @ RMT2011 Serial 1883  
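The "structural model" contrasted in the record above can be illustrated with a short, hypothetical sketch (scikit-learn assumed; this is not the authors' pipeline): each face is represented by the pairwise distances between its facial salient points, and a standard regressor predicts a perceived trait score such as dominance.

```python
# Hypothetical sketch: structural (landmark-distance) features + a standard regressor.
import numpy as np
from itertools import combinations
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def structural_features(landmarks):
    # landmarks: (n_points, 2) salient points; features: all pairwise distances.
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

rng = np.random.default_rng(0)
faces = rng.random((50, 20, 2))                         # 50 faces, 20 landmarks each
X = np.stack([structural_features(f) for f in faces])   # (50, 190) distance features
y = rng.random(50)                                      # trait judgments from raters

model = make_pipeline(StandardScaler(), SVR()).fit(X, y)
print(model.predict(X[:3]))
```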
 

 
Author Victor Ponce; Mario Gorga; Xavier Baro; Petia Radeva; Sergio Escalera
  Title Análisis de la expresión oral y gestual en proyectos fin de carrera vía un sistema de visión artificial Type Journal Article
  Year 2011 Publication ReVisión Abbreviated Journal  
  Volume 4 Issue 1 Pages  
  Abstract Oral communication and expression is a competence of particular relevance within the EHEA (European Higher Education Area). However, in many higher-education degrees the practice of this competence has been relegated mainly to the presentation of final-year projects. Within a teaching-innovation project, a software tool has been developed to extract objective information for the analysis of students' oral and gestural expression. The goal is to give students feedback that lets them improve the quality of their presentations. The initial prototype presented in this work automatically extracts audiovisual information and analyses it using learning techniques. The system has been applied to 15 final-year projects and 15 presentations within a fourth-year course. The results obtained show the viability of the system for suggesting factors that help both the success of the communication and the evaluation criteria.  
  ISSN 1989-1199 ISBN Medium  
  Notes HuPBA; MILAB;MV Approved no  
  Call Number Admin @ si @ PGB2011d Serial 2514  
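As a toy illustration of one objective cue a tool like the one described above might extract, here is a minimal sketch (NumPy and SciPy assumed; it is not the authors' tool): the fraction of time the speaker is actually talking, estimated with a crude short-term energy threshold over a mono WAV recording of the presentation.

```python
# Hypothetical sketch: fraction of "voiced" frames in a mono WAV presentation recording.
import numpy as np
from scipy.io import wavfile

def speaking_ratio(wav_path, frame_ms=30, energy_factor=0.5):
    rate, signal = wavfile.read(wav_path)           # mono 16-bit PCM assumed
    signal = signal.astype(np.float64)
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)             # short-term energy per frame
    threshold = energy_factor * energy.mean()       # crude adaptive threshold
    return float((energy > threshold).mean())       # fraction of frames above it

# Usage (hypothetical file): print(speaking_ratio("presentation.wav"))
```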
 

 
Author Jorge Bernal
  Title Polyp Localization and Segmentation in Colonoscopy Images by Means of a Model of Appearance for Polyps Type Journal Article
  Year 2014 Publication Electronic Letters on Computer Vision and Image Analysis Abbreviated Journal ELCVIA  
  Volume 13 Issue 2 Pages 9-10  
  Keywords Colonoscopy; polyp localization; polyp segmentation; Eye-tracking  
  Abstract Colorectal cancer is the fourth most common cause of cancer death worldwide, and its survival rate depends on the stage at which it is detected, hence the necessity for early colon screening. There are several screening techniques, but colonoscopy is still the gold standard nowadays, although it has some drawbacks such as the miss rate. Our contribution, in the field of intelligent systems for colonoscopy, aims at providing a polyp localization and a polyp segmentation system based on a model of appearance for polyps. To develop both methods we define a model of appearance for polyps, which describes a polyp as enclosed by intensity valleys. The novelty of our contribution resides in the fact that we include in our model aspects of the image formation and we also consider the presence of other elements from the endoluminal scene such as specular highlights and blood vessels, which have an impact on the performance of our methods. To develop our polyp localization method we accumulate valley information to generate energy maps, which are also used to guide the polyp segmentation. Our methods achieve promising results in polyp localization and segmentation. As we want to explore the usability of our methods we present a comparative analysis between physicians' fixations obtained via an eye-tracking device and our polyp localization method. The results show that our method is indistinguishable from novice physicians, although it is still far from expert physicians.  
  Publisher Place of Publication Editor Alicia Fornes; Volkmar Frinken  
  Notes MV Approved no  
  Call Number Admin @ si @ Ber2014 Serial 2487  
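A heavily simplified, hypothetical sketch of the valley-accumulation idea in the record above (NumPy and SciPy assumed; not the author's method): intensity valleys are highlighted with a morphological closing residue, smoothed into an energy map, and the map's maximum is taken as a polyp candidate location.

```python
# Hypothetical sketch: valley evidence accumulated into an energy map for localization.
import numpy as np
from scipy import ndimage

def polyp_candidate(gray_frame, valley_size=9, smooth_sigma=15):
    # Valleys: pixels darker than their morphological closing (black-hat residue).
    closed = ndimage.grey_closing(gray_frame, size=valley_size)
    valleys = closed - gray_frame
    # Accumulate valley evidence into a smooth energy map.
    energy = ndimage.gaussian_filter(valleys.astype(float), sigma=smooth_sigma)
    return np.unravel_index(np.argmax(energy), energy.shape), energy

frame = np.random.rand(256, 256)                  # stand-in for a colonoscopy frame
(peak_row, peak_col), energy = polyp_candidate(frame)
print(peak_row, peak_col)                         # candidate polyp location
```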