Author: Jorge Bernal; F. Javier Sanchez; Fernando Vilariño
Title: Impact of Image Preprocessing Methods on Polyp Localization in Colonoscopy Frames
Type: Conference Article
Year: 2013
Publication: 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society
Pages: 7350-7354
Abstract: In this paper we present our image preprocessing methods as a key part of our automatic polyp localization scheme. These methods are used to assess the impact of different endoluminal scene elements when characterizing polyps. More precisely, we tackle the influence of specular highlights, blood vessels and the black mask surrounding the scene. Experimental results show that appropriate handling of these elements leads to a great improvement in polyp localization results.
Address: Osaka, Japan; July 2013
ISSN: 1557-170X
Area: 800; Conference: EMBC
Notes: MV; 600.047; 600.060; SIAI; Approved: no
Call Number: Admin @ si @ BSV2013; Serial: 2286
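The abstract names three endoluminal scene elements to be handled before localization. As an illustration only (not the authors' implementation), the sketch below shows one common way to mask specular highlights and crop the black border of a colonoscopy frame; the threshold values and the inpainting step are placeholder assumptions.

```python
import cv2
import numpy as np

def preprocess_frame(bgr_frame, highlight_thresh=230, border_thresh=20):
    """Hypothetical preprocessing sketch: inpaint specular highlights and
    crop the black mask that surrounds the endoluminal scene."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)

    # 1) Specular highlights: very bright, low-saturation pixels.
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    highlights = ((hsv[..., 2] > highlight_thresh) &
                  (hsv[..., 1] < 40)).astype(np.uint8) * 255
    highlights = cv2.dilate(highlights, np.ones((3, 3), np.uint8))
    inpainted = cv2.inpaint(bgr_frame, highlights, 5, cv2.INPAINT_TELEA)

    # 2) Black mask: keep the bounding box of the non-black region.
    inside = (gray > border_thresh).astype(np.uint8)
    x, y, w, h = cv2.boundingRect(inside)
    return inpainted[y:y + h, x:x + w]
```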
 

 
Author: Jorge Bernal; F. Javier Sanchez; Fernando Vilariño
Title: Towards Automatic Polyp Detection with a Polyp Appearance Model
Type: Journal Article
Year: 2012
Publication: Pattern Recognition; Abbreviated Journal: PR
Volume: 45; Issue: 9; Pages: 3166-3182
Keywords: Colonoscopy; Polyp detection; Region segmentation; SA-DOVA descriptor
Abstract: This work aims at automatic polyp detection by using a model of polyp appearance in the context of the analysis of colonoscopy videos. Our method consists of three stages: region segmentation, region description and region classification. The performance of our region segmentation method guarantees that if a polyp is present in the image, it will be exclusively and totally contained in a single region. The output of the algorithm also defines which regions can be considered as non-informative. We define as our region descriptor the novel Sector Accumulation-Depth of Valleys Accumulation (SA-DOVA), which provides a necessary but not sufficient condition for polyp presence. Finally, we classify our segmented regions according to the maximal values of the SA-DOVA descriptor. Our preliminary classification results are promising, especially when classifying those parts of the image that do not contain a polyp.
Publisher: Elsevier
ISSN: 0031-3203
Area: 800; Conference: IbPRIA
Notes: MV; SIAI; Approved: no
Call Number: Admin @ si @ BSV2012; IAM @ iam; Serial: 1997
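The abstract only outlines the SA-DOVA descriptor, so the following is a minimal numpy sketch of the general idea: accumulate a valley-response map over radial sectors around a candidate point and score candidates by their weakest sector. The black-tophat valley proxy, the sector geometry and the decision rule are assumptions for illustration, not the published formulation.

```python
import numpy as np
from scipy import ndimage

def sector_valley_descriptor(valley_map, center, radius=60, n_sectors=8):
    """Accumulate valley energy in radial sectors around a candidate point."""
    h, w = valley_map.shape
    cy, cx = center
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - cy, xs - cx
    dist = np.hypot(dy, dx)
    angle = np.arctan2(dy, dx)                       # range [-pi, pi]
    sector = ((angle + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    inside = dist <= radius
    desc = np.zeros(n_sectors)
    for s in range(n_sectors):
        desc[s] = valley_map[inside & (sector == s)].sum()
    return desc

# Valley map proxy: a black-tophat filter highlights dark valleys.
image = np.random.rand(256, 256)
valley_map = ndimage.black_tophat(image, size=9)
descriptor = sector_valley_descriptor(valley_map, center=(128, 128))
# Hypothetical decision rule: a polyp candidate needs valley support in
# every sector, so score it by the weakest (minimum) sector accumulation.
score = descriptor.min()
```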
 

 
Author: Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Debora Gil; Cristina Rodriguez de Miguel; Fernando Vilariño
Title: WM-DOVA Maps for Accurate Polyp Highlighting in Colonoscopy: Validation vs. Saliency Maps from Physicians
Type: Journal Article
Year: 2015
Publication: Computerized Medical Imaging and Graphics; Abbreviated Journal: CMIG
Volume: 43; Pages: 99-111
Keywords: Polyp localization; Energy maps; Colonoscopy; Saliency; Valley detection
Abstract: We introduce in this paper a novel polyp localization method for colonoscopy videos. Our method is based on a model of appearance for polyps which defines polyp boundaries in terms of valley information. We propose the integration of valley information in a robust way, fostering the complete, concave and continuous boundaries typically associated with polyps. This integration is done by using a window of radial sectors which accumulate valley information to create WM-DOVA energy maps related to the likelihood of polyp presence. We perform a double validation of our maps, which includes the introduction of two new databases, one of them, to our knowledge, the first fully annotated database with associated clinical metadata. First, we assess that the highest value of the map corresponds to the location of the polyp in the image. Second, we show that WM-DOVA energy maps are comparable with saliency maps obtained from physicians' fixations captured with an eye-tracker. Finally, we show that our method outperforms state-of-the-art computational saliency results. Our method performs well, particularly for small polyps, which are reported to be the main source of polyp miss-rate, indicating the potential applicability of our method in clinical practice.
ISSN: 0895-6111
Notes: MV; IAM; 600.047; 600.060; 600.075; SIAI; Approved: no
Call Number: Admin @ si @ BSF2015; Serial: 2609
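Part of the validation compares energy maps against saliency maps built from physicians' fixations. A hedged sketch of that kind of comparison follows: fixations are accumulated into a map, blurred into a saliency estimate and correlated with the energy map. The Gaussian width and the use of Pearson correlation are illustrative choices, not the paper's protocol.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_saliency_map(fixations, shape, sigma=25.0):
    """Accumulate (row, col) fixations into a blurred, normalised saliency map."""
    sal = np.zeros(shape, dtype=float)
    for r, c in fixations:
        sal[int(r), int(c)] += 1.0
    sal = gaussian_filter(sal, sigma)
    return sal / (sal.max() + 1e-12)

def map_similarity(energy_map, saliency_map):
    """Pearson correlation between two maps of the same size."""
    a = energy_map.ravel() - energy_map.mean()
    b = saliency_map.ravel() - saliency_map.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# toy usage with a random energy map and three synthetic fixations
energy = np.random.rand(288, 384)
fixations = [(140, 200), (150, 210), (145, 195)]
print(map_similarity(energy, fixation_saliency_map(fixations, energy.shape)))
```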
 

 
Author: Jorge Bernal; Fernando Vilariño; F. Javier Sanchez
Title: Feature Detectors and Feature Descriptors: Where We Are Now
Type: Report
Year: 2010
Publication: CVC Technical Report
Volume: 154
Abstract: Feature detection and feature description are clearly topics of current interest. Many computer vision applications rely on several of these techniques to extract the most significant aspects of an image so that they can help in tasks such as image retrieval, image registration, object recognition, object categorization and texture classification, among others. In this report we define what feature detection and description are, and then present an extensive collection of methods to show the different techniques in use today. The aim of this report is to provide a glimpse of what is currently used in these fields and to serve as a starting point for future endeavours.
Area: 800
Notes: MV; SIAI; Approved: no
Call Number: Admin @ si @ BVS2010; IAM @ iam @ BVS2010; Serial: 1348
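As a small companion to this survey-style report, the snippet below runs two of the detector/descriptor families it covers (ORB and, where the local OpenCV build provides it, SIFT) on a single image. It is a usage illustration with a hypothetical input path, not material from the report.

```python
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input path

# ORB: a fast binary detector/descriptor.
orb = cv2.ORB_create(nfeatures=500)
kp_orb, des_orb = orb.detectAndCompute(img, None)

# SIFT: a float descriptor, available in the main OpenCV builds since 4.4.
try:
    sift = cv2.SIFT_create()
    kp_sift, des_sift = sift.detectAndCompute(img, None)
except AttributeError:
    kp_sift, des_sift = [], None

print(len(kp_orb), "ORB keypoints;", len(kp_sift), "SIFT keypoints")
```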
 

 
Author: Jorge Bernal; Fernando Vilariño; F. Javier Sanchez
Title: Towards Intelligent Systems for Colonoscopy
Type: Book Chapter
Year: 2011
Publication: Colonoscopy
Volume: 1; Pages: 257-282
Abstract: In this chapter we present tools that can be used to build intelligent systems for colonoscopy. The idea is, by using methods based on computer vision and artificial intelligence, to add significant value to the colonoscopy procedure. Intelligent systems are already being used to assist in other medical interventions.
Publisher: Intech; Editor: Paul Miskovitz
ISBN: 978-953-307-568-6
Area: 800
Notes: MV; SIAI; Approved: no
Call Number: IAM @ iam @ BVS2011; Serial: 1697
 

 
Author: Jorge Bernal; Fernando Vilariño; F. Javier Sanchez; M. Arnold; Anarta Ghosh; Gerard Lacey
Title: Experts vs Novices: Applying Eye-tracking Methodologies in Colonoscopy Video Screening for Polyp Search
Type: Conference Article
Year: 2014
Publication: 2014 Symposium on Eye Tracking Research and Applications
Pages: 223-226
Abstract: We present in this paper a novel study aiming at identifying the differences in visual search patterns between physicians of diverse levels of expertise during the screening of colonoscopy videos. Physicians were clustered into two groups, experts and novices, according to the number of procedures performed, and fixations were captured by an eye-tracker device during the task of polyp search in different video sequences. These fixations were integrated into heat maps, one for each cluster. The obtained maps were validated over a ground truth consisting of a mask of the polyp, and the comparison between experts and novices was performed by using metrics such as reaction time, dwelling time and energy concentration ratio. Experimental results show a statistically significant difference between experts and novices, and the obtained maps prove to be a useful tool for characterising the behaviour of each group.
Address: USA; March 2014
ISBN: 978-1-4503-2751-0
Conference: ETRA
Notes: MV; 600.047; 600.060; SIAI; Approved: no
Call Number: Admin @ si @ BVS2014; Serial: 2448
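The comparison metrics named in the abstract (reaction time, dwelling time, energy concentration ratio) can be sketched roughly as below for a single sequence, assuming frame-indexed gaze samples and a binary polyp mask; the exact definitions used in the paper may differ.

```python
import numpy as np

def gaze_metrics(fixations, polyp_mask, fps=25.0):
    """Rough sketch of eye-tracking metrics over one sequence.

    fixations: list of (frame_idx, row, col) gaze samples.
    polyp_mask: binary array (rows, cols), non-zero inside the polyp.
    """
    on_polyp = [f for f, r, c in fixations if polyp_mask[int(r), int(c)] > 0]

    # Reaction time: first moment the gaze lands on the polyp, in seconds.
    reaction_time = min(on_polyp) / fps if on_polyp else None

    # Dwelling time: fraction of samples spent on the polyp.
    dwelling = len(on_polyp) / max(len(fixations), 1)

    # Energy concentration ratio: share of the fixation "energy"
    # (here simply a sample-count map) that falls inside the mask.
    energy = np.zeros_like(polyp_mask, dtype=float)
    for _, r, c in fixations:
        energy[int(r), int(c)] += 1.0
    concentration = energy[polyp_mask > 0].sum() / (energy.sum() + 1e-12)

    return reaction_time, dwelling, concentration
```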
 

 
Author: Jorge Bernal; Joan M. Nuñez; F. Javier Sanchez; Fernando Vilariño
Title: Polyp Segmentation Method in Colonoscopy Videos by means of MSA-DOVA Energy Maps Calculation
Type: Conference Article
Year: 2014
Publication: 3rd MICCAI Workshop on Clinical Image-based Procedures: Translational Research in Medical Imaging
Volume: 8680; Pages: 41-49
Keywords: Image segmentation; Polyps; Colonoscopy; Valley information; Energy maps
Abstract: In this paper we present a novel polyp region segmentation method for colonoscopy videos. Our method uses valley information associated with polyp boundaries in order to provide an initial segmentation. This first segmentation is refined to eliminate boundary discontinuities caused by image artifacts or other elements of the scene. Experimental results over a publicly available annotated database show that our method outperforms both general and specific segmentation methods by providing more accurate regions rich in polyp content. We also show that image preprocessing is needed to improve the final polyp region segmentation.
Address: Boston; USA; September 2014
Conference: CLIP
Notes: MV; 600.060; 600.044; 600.047; SIAI; Approved: no
Call Number: Admin @ si @ BNS2014; Serial: 2502
 

 
Author: Jorge Bernal; Nima Tajbakhsh; F. Javier Sanchez; Bogdan J. Matuszewski; Hao Chen; Lequan Yu; Quentin Angermann; Olivier Romain; Bjorn Rustad; Ilangko Balasingham; Konstantin Pogorelov; Sungbin Choi; Quentin Debard; Lena Maier Hein; Stefanie Speidel; Danail Stoyanov; Patrick Brandao; Henry Cordova; Cristina Sanchez Montes; Suryakanth R. Gurudu; Gloria Fernandez Esparrach; Xavier Dray; Jianming Liang; Aymeric Histace
Title: Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results from the MICCAI 2015 Endoscopic Vision Challenge
Type: Journal Article
Year: 2017
Publication: IEEE Transactions on Medical Imaging; Abbreviated Journal: TMI
Volume: 36; Issue: 6; Pages: 1231-1249
Keywords: Endoscopic vision; Polyp detection; Handcrafted features; Machine learning; Validation framework
Abstract: Colonoscopy is the gold standard for colon cancer screening, though some polyps are still missed, thus preventing early disease detection and treatment. Several computational systems have been proposed to assist polyp detection during colonoscopy, but so far without consistent evaluation. The lack of publicly available annotated databases has made it difficult to compare methods and to assess whether they achieve performance levels acceptable for clinical use. The Automatic Polyp Detection sub-challenge, conducted as part of the Endoscopic Vision Challenge (http://endovis.grand-challenge.org) at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2015, was an effort to address this need. In this paper, we report the results of this comparative evaluation of polyp detection methods and describe additional experiments to further explore differences between methods. We define performance metrics and provide evaluation databases that allow comparison of multiple methodologies. Results show that convolutional neural networks (CNNs) are the state of the art. Nevertheless, it is also demonstrated that combining different methodologies can lead to an improved overall performance.
Notes: MV; 600.096; 600.075; Approved: no
Call Number: Admin @ si @ BTS2017; Serial: 2949
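The challenge evaluation is built around standard detection metrics. The sketch below shows one plausible per-frame scoring rule (a detection counts as a true positive if it falls inside the annotated polyp mask) and the derived precision/recall/F1; the actual challenge protocol includes additional metrics, such as detection latency, that are not reproduced here.

```python
import numpy as np

def score_frames(detections, masks):
    """detections: per-frame lists of (row, col) detection points.
    masks: per-frame binary ground-truth masks (None if no polyp)."""
    tp = fp = fn = 0
    for dets, mask in zip(detections, masks):
        if mask is None or mask.sum() == 0:
            fp += len(dets)          # any alarm on a polyp-free frame is a FP
            continue
        hits = [d for d in dets if mask[int(d[0]), int(d[1])] > 0]
        tp += 1 if hits else 0       # the polyp is found in this frame
        fn += 0 if hits else 1
        fp += len(dets) - len(hits)  # alarms outside the polyp
    precision = tp / (tp + fp + 1e-12)
    recall = tp / (tp + fn + 1e-12)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    return precision, recall, f1

# toy usage with one 100x100 frame containing a polyp region
mask = np.zeros((100, 100)); mask[40:60, 40:60] = 1
print(score_frames([[(50, 50), (10, 10)]], [mask]))
```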
 

 
Author: Jorge Charco; Angel Sappa; Boris X. Vintimilla
Title: Human Pose Estimation through a Novel Multi-view Scheme
Type: Conference Article
Year: 2022
Publication: 17th International Conference on Computer Vision Theory and Applications (VISAPP 2022)
Volume: 5; Pages: 855-862
Keywords: Multi-view scheme; Human pose estimation; Relative camera pose; Monocular approach
Abstract: This paper presents a multi-view scheme to tackle the challenging problem of self-occlusion in human pose estimation. The proposed approach first obtains the human body joints of a set of images captured from different views at the same time. Then, it enhances the obtained joints by using a multi-view scheme: the joints from a given view are used to enhance poorly estimated joints from another view, especially to tackle self-occlusion cases. A network architecture initially proposed for the monocular case is adapted to be used in the proposed multi-view scheme. Experimental results and comparisons with state-of-the-art approaches on the Human3.6M dataset are presented, showing improvements in the accuracy of body joint estimation.
Address: Online; February 6-8, 2022
ISSN: 2184-4321; ISBN: 978-989-758-555-5
Conference: VISAPP
Notes: MSIAU; 600.160; Approved: no
Call Number: Admin @ si @ CSV2022; Serial: 3689
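The key multi-view step, using joints seen in other views to correct a poorly estimated joint, can be illustrated with plain epipolar geometry: triangulate the joint from two views and reproject it into the view where it was badly estimated. This is a generic sketch with assumed projection matrices, not the network-based scheme of the paper.

```python
import numpy as np
import cv2

def reproject_joint(P1, P2, P_target, joint_view1, joint_view2):
    """Triangulate a joint from two views and reproject it into a target view.

    P1, P2, P_target: 3x4 camera projection matrices.
    joint_view1, joint_view2: (x, y) pixel positions of the same joint.
    """
    pts1 = np.array(joint_view1, dtype=float).reshape(2, 1)
    pts2 = np.array(joint_view2, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4x1 homogeneous point
    X = X_h[:3] / X_h[3]
    x_h = P_target @ np.vstack([X, [[1.0]]])
    return (x_h[:2] / x_h[2]).ravel()                 # (x, y) in the target view
```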
 

 
Author: Jorge Charco; Angel Sappa; Boris X. Vintimilla; Henry Velesaca
Title: Camera pose estimation in multi-view environments: From virtual scenarios to the real world
Type: Journal Article
Year: 2021
Publication: Image and Vision Computing; Abbreviated Journal: IVC
Volume: 110; Pages: 104182
Abstract: This paper presents a domain adaptation strategy to efficiently train network architectures for estimating the relative camera pose in multi-view scenarios. The network architectures are fed by a pair of simultaneously acquired images; hence, in order to improve the accuracy of the solutions, and due to the lack of large datasets with pairs of overlapped images, a domain adaptation strategy is proposed. The domain adaptation strategy consists of transferring the knowledge learned from synthetic images to real-world scenarios. For this, the networks are first trained using pairs of synthetic images, which are captured at the same time by a pair of cameras in a virtual environment; then, the learned weights of the networks are transferred to the real-world case, where the networks are retrained with a few real images. Different virtual 3D scenarios are generated to evaluate the relationship between the accuracy of the results and the similarity between virtual and real scenarios, both in the geometry of the objects contained in the scene and in the relative pose between camera and objects. Experimental results and comparisons are provided, showing that the accuracy of all the evaluated networks for estimating the camera pose improves when the proposed domain adaptation strategy is used, highlighting the importance of the similarity between virtual and real scenarios.
Notes: MSIAU; 600.130; 600.122; Approved: no
Call Number: Admin @ si @ CSV2021; Serial: 3577
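The described strategy (train on synthetic image pairs, then retrain on a few real pairs) corresponds to a standard fine-tuning loop. A minimal PyTorch sketch is shown below, assuming a model that maps an image pair to a relative pose vector and hypothetical synthetic_loader / real_loader datasets; it is not the authors' training code.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs, lr, device="cpu"):
    """Generic regression loop: the loader yields (img_a, img_b, rel_pose)."""
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for img_a, img_b, pose in loader:
            opt.zero_grad()
            pred = model(img_a.to(device), img_b.to(device))
            loss = loss_fn(pred, pose.to(device))
            loss.backward()
            opt.step()

# Stage 1: learn from the large synthetic set; Stage 2: adapt with few real pairs
# (hypothetical loaders; a smaller learning rate is a common choice for stage 2).
# train(model, synthetic_loader, epochs=30, lr=1e-4)
# train(model, real_loader, epochs=5, lr=1e-5)
```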
 

 
Author: Jorge Charco; Angel Sappa; Boris X. Vintimilla; Henry Velesaca
Title: Transfer Learning from Synthetic Data in the Camera Pose Estimation Problem
Type: Conference Article
Year: 2020
Publication: 15th International Conference on Computer Vision Theory and Applications
Abstract: This paper presents a novel Siamese network architecture, a variant of ResNet-50, to estimate the relative camera pose in multi-view environments. In order to improve the performance of the proposed model, a transfer learning strategy based on synthetic images obtained from a virtual world is considered. The transfer learning consists of first training the network using pairs of images from the virtual-world scenario under different conditions (i.e., weather, illumination, objects, buildings, etc.); then, the learned weights of the network are transferred to the real case, where images from real-world scenarios are considered. Experimental results and comparisons with the state of the art show both improvements in the relative pose estimation accuracy using the proposed model and further improvements when the transfer learning strategy (from synthetic-world data to real-world data) is considered, tackling the limitation on training due to the reduced number of pairs of real images in most of the public datasets.
Address: Valletta; Malta; February 2020
Conference: VISAPP
Notes: MSIAU; 600.130; 601.349; 600.122; Approved: no
Call Number: Admin @ si @ CSV2020; Serial: 3433
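A Siamese ResNet-50 regressor for relative camera pose can be sketched as below: both images go through a shared ResNet-50 trunk, the two embeddings are concatenated, and a small head regresses the translation (3 values) plus a rotation quaternion (4 values). This is a generic reconstruction of that type of architecture under the stated assumptions, not the exact variant used in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class SiameseRelPose(nn.Module):
    """Shared ResNet-50 trunk + MLP head regressing [t(3) | q(4)]."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()          # 2048-d embedding per image
        self.trunk = backbone
        self.head = nn.Sequential(
            nn.Linear(2 * 2048, 512), nn.ReLU(),
            nn.Linear(512, 7),               # 3 translation + 4 quaternion
        )

    def forward(self, img_a, img_b):
        feats = torch.cat([self.trunk(img_a), self.trunk(img_b)], dim=1)
        out = self.head(feats)
        t, q = out[:, :3], out[:, 3:]
        q = q / (q.norm(dim=1, keepdim=True) + 1e-8)  # unit quaternion
        return torch.cat([t, q], dim=1)

# smoke test with random tensors
model = SiameseRelPose()
pose = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(pose.shape)  # torch.Size([2, 7])
```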
 

 
Author: Jorge Charco; Angel Sappa; Boris X. Vintimilla; Henry Velesaca
Title: Human Body Pose Estimation in Multi-view Environments
Type: Book Chapter
Year: 2022
Publication: ICT Applications for Smart Cities. Intelligent Systems Reference Library
Volume: 224; Pages: 79-99
Abstract: This chapter tackles the challenging problem of human pose estimation in multi-view environments to handle scenes with self-occlusions. The proposed approach starts by first estimating the camera pose (extrinsic parameters) in multi-view scenarios; due to the scarcity of real image datasets, different virtual scenes are generated by using a special simulator for training and testing the proposed convolutional neural network based approaches. Then, these extrinsic parameters are used to establish the relation between the different cameras in the multi-view scheme, which captures the pose of the person from different points of view at the same time. The proposed multi-view scheme allows human body joint positions to be robustly estimated even in situations where they are occluded. This would help to avoid possible false alarms in behavioural analysis systems of smart cities, as well as in applications for physical therapy and safe moving assistance for the elderly, among others. The chapter concludes by presenting experimental results in real scenes using state-of-the-art and the proposed multi-view approaches.
Address: September 2022
Publisher: Springer; Abbreviated Series Title: ISRL
ISBN: 978-3-031-06306-0
Notes: MSIAU; MACO; Approved: no
Call Number: Admin @ si @ CSV2022b; Serial: 3810
 

 
Author: Jorge Charco; Boris X. Vintimilla; Angel Sappa
Title: Deep learning based camera pose estimation in multi-view environment
Type: Conference Article
Year: 2018
Publication: 14th IEEE International Conference on Signal Image Technology & Internet Based System
Keywords: Deep learning; Camera pose estimation; Multi-view environment; Siamese architecture
Abstract: This paper proposes the use of a deep learning network architecture for relative camera pose estimation in a multi-view environment. The proposed network is a variant of the AlexNet architecture, used as a regressor to predict the relative translation and rotation. The proposed approach is trained from scratch on a large dataset, taking as input a pair of images from the same scene. This new architecture is compared with a previous approach using standard metrics, obtaining better results on the relative camera pose.
Address: Las Palmas de Gran Canaria; November 2018
Conference: SITIS
Notes: MSIAU; 600.086; 600.130; 600.122; Approved: no
Call Number: Admin @ si @ CVS2018; Serial: 3194
 

 
Author: Jose A. Garcia; David Masip; Valerio Sbragaglia; Jacopo Aguzzi
Title: Automated Identification and Tracking of Nephrops norvegicus (L.) Using Infrared and Monochromatic Blue Light
Type: Conference Article
Year: 2016
Publication: 19th International Conference of the Catalan Association for Artificial Intelligence
Keywords: Computer vision; Video analysis; Object recognition; Tracking; Behaviour; Social; Decapod; Nephrops norvegicus
Abstract: Automated video and image analysis can be a very efficient tool to analyse animal behaviour based on sociality, especially in environments that are hard for researchers to access. The understanding of this social behaviour can play a key role in the sustainable design of capture policies for many species. This paper proposes the use of computer vision algorithms to identify and track a specific species, the Norway lobster, Nephrops norvegicus, a burrowing decapod with relevant commercial value which is captured by trawling. These animals can only be captured when engaged in seabed excursions, which are strongly related to their social behaviour. This emergent behaviour is modulated by the day-night cycle, but their social interactions remain unknown to the scientific community. The paper introduces an identification scheme made of four distinguishable black and white tags (geometric shapes). The project has recorded 15-day experiments in laboratory pools, under monochromatic blue light (472 nm) and darkness conditions (recorded using infrared light). Using this massive image set, we propose a comparison of state-of-the-art computer vision algorithms to distinguish and track the different animals' movements. We evaluate the robustness to the high level of noise in the infrared video signals and to free out-of-plane rotations due to animal movement. The experiments show promising accuracies under a cross-validation protocol, being adaptable to the automation and analysis of large-scale data. In a second contribution, we created an extensive dataset of shapes (46027 different shapes) from four daily experimental video recordings, which will be available to the community.
Address: Barcelona; Spain; October 2016
Conference: CCIA
Notes: OR; MV; Approved: no
Call Number: Admin @ si @ GMS2016; Serial: 2816
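The identification scheme relies on four black-and-white tags (holed/filled triangle and circle). A rough OpenCV sketch of how such tags could be told apart is given below, using contour approximation for the shape and the contour hierarchy for holed vs. filled; the binarisation step and the thresholds are placeholders rather than the method evaluated in the paper.

```python
import cv2

def classify_tag(gray_roi):
    """Classify a tag crop as (shape, 'holed' | 'filled').

    Placeholder pipeline: Otsu binarisation, largest outer contour,
    polygon approximation for the shape, contour hierarchy for the hole.
    """
    _, binary = cv2.threshold(gray_roi, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                           cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    outer = [i for i in range(len(contours)) if hierarchy[0][i][3] == -1]
    i = max(outer, key=lambda k: cv2.contourArea(contours[k]))
    approx = cv2.approxPolyDP(contours[i],
                              0.04 * cv2.arcLength(contours[i], True), True)
    shape = "triangle" if len(approx) == 3 else "circle"
    holed = hierarchy[0][i][2] != -1          # an inner contour means a hole
    return shape, ("holed" if holed else "filled")
```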
 

 
Author: Jose A. Garcia; David Masip; Valerio Sbragaglia; Jacopo Aguzzi
Title: Using ORB, BoW and SVM to identificate and track tagged Norway lobster Nephrops norvegicus (L.)
Type: Conference Article
Year: 2016
Publication: 3rd International Conference on Maritime Technology and Engineering
Abstract: Sustainable capture policies for many species strongly depend on the understanding of their social behaviour. Nevertheless, the analysis of emergent behaviour in marine species poses several challenges. Usually animals are captured and observed in tanks, and their behaviour is inferred from their dynamics and interactions. Therefore, researchers must deal with thousands of hours of video data. Without loss of generality, this paper proposes a computer vision approach to identify and track a specific species, the Norway lobster, Nephrops norvegicus. We propose an identification scheme where animals are marked using black and white tags with a geometric shape in the centre (holed triangle, filled triangle, holed circle and filled circle). Using a massive labelled dataset, we extract local features based on the ORB descriptor. These features are then clustered, and we construct a Bag of Visual Words feature vector per animal. This approximation yields invariance to rotation and translation. An SVM classifier achieves generalization results above 99%. In a second contribution, we will make the code and training data publicly available.
Address: Lisboa; Portugal; July 2016
Conference: MARTECH
Notes: OR; MV; Approved: no
Call Number: Admin @ si @ GMS2016b; Serial: 2817
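The pipeline described in the abstract (ORB local features, a visual vocabulary, per-tag Bag-of-Visual-Words histograms, SVM classification) maps onto standard OpenCV and scikit-learn components. The sketch below is a generic reimplementation of that kind of pipeline under assumed data shapes, not the authors' released code.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC

orb = cv2.ORB_create(nfeatures=300)

def orb_descriptors(gray_img):
    """Extract ORB descriptors (Nx32 uint8) from one grayscale crop."""
    _, des = orb.detectAndCompute(gray_img, None)
    return des if des is not None else np.empty((0, 32), np.uint8)

def bow_histogram(des, kmeans):
    """Normalised histogram of visual-word assignments for one crop."""
    hist = np.zeros(kmeans.n_clusters, dtype=float)
    if len(des):
        for w in kmeans.predict(des.astype(np.float32)):
            hist[w] += 1.0
        hist /= hist.sum()
    return hist

def train_bow_svm(crops, labels, n_words=64):
    """crops: list of grayscale tag crops; labels: per-crop animal/tag ids."""
    all_des = [orb_descriptors(c) for c in crops]
    stacked = np.vstack([d for d in all_des if len(d)]).astype(np.float32)
    kmeans = KMeans(n_clusters=n_words, n_init=10).fit(stacked)  # vocabulary
    X = np.array([bow_histogram(d, kmeans) for d in all_des])
    clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
    return kmeans, clf
```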