Records | |||||
---|---|---|---|---|---|
Author | Frederic Sampedro; Anna Domenech; Sergio Escalera | ||||
Title | Static and dynamic computational cancer spread quantification in whole body FDG-PET/CT scans | Type | Journal Article | ||
Year | 2014 | Publication | Journal of Medical Imaging and Health Informatics | Abbreviated Journal | JMIHI |
Volume | 4 | Issue | 6 | Pages | 825-831 |
Keywords | CANCER SPREAD; COMPUTER AIDED DIAGNOSIS; MEDICAL IMAGING; TUMOR QUANTIFICATION | ||||
Abstract | In this work we address computational cancer spread quantification in whole body FDG-PET/CT scans. At the static level, this setting can be modeled as a clustering problem on the set of 3D connected components of the whole body PET tumoral segmentation mask carried out by nuclear medicine physicians. At the dynamic level, an ad-hoc algorithm is proposed to quantify the time evolution of cancer spread which, when combined with other existing indicators, gives rise to the metabolic tumor volume-aggressiveness-spread time evolution chart, a novel tool that we claim would prove useful in nuclear medicine and oncological clinical or research scenarios. Good performance of the proposed methodologies, at both the clinical and technological levels, is shown using a dataset of 48 segmented whole body FDG-PET/CT scans. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ SDE2014b | Serial | 2548 | ||
Permanent link to this record | |||||
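The static model in the record above clusters the 3D connected components of the PET tumoral segmentation mask. As an illustration only (not the paper's implementation), a minimal pure-Python sketch of 6-connected component labeling on a 3D binary mask, which is the step that produces the objects to be clustered:

```python
from collections import deque

def label_components_3d(mask):
    """Label 6-connected components in a 3D binary mask.

    mask: nested list [z][y][x] of 0/1 values. Returns (labels, count),
    where labels has the same shape and 0 means background.
    """
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])
    labels = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    count = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if mask[z][y][x] and not labels[z][y][x]:
                    count += 1                      # new component found
                    q = deque([(z, y, x)])
                    labels[z][y][x] = count
                    while q:                        # flood-fill via BFS
                        cz, cy, cx = q.popleft()
                        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                                           (0,-1,0), (0,0,1), (0,0,-1)):
                            wz, wy, wx = cz + dz, cy + dy, cx + dx
                            if (0 <= wz < nz and 0 <= wy < ny and 0 <= wx < nx
                                    and mask[wz][wy][wx] and not labels[wz][wy][wx]):
                                labels[wz][wy][wx] = count
                                q.append((wz, wy, wx))
    return labels, count
```

In practice a library routine such as `scipy.ndimage.label` would be used on real scan volumes; the sketch only shows the connectivity logic.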
Author | Q. Bao; Marçal Rusiñol; M.Coustaty; Muhammad Muzzamil Luqman; C.D. Tran; Jean-Marc Ogier | ||||
Title | Delaunay triangulation-based features for Camera-based document image retrieval system | Type | Conference Article | ||
Year | 2016 | Publication | 12th IAPR Workshop on Document Analysis Systems | Abbreviated Journal | |
Volume | Issue | Pages | 1-6 | ||
Keywords | Camera-based Document Image Retrieval; Delaunay Triangulation; Feature descriptors; Indexing | ||||
Abstract | In this paper, we propose a new feature vector, named DElaunay TRIangulation-based Features (DETRIF), for real-time camera-based document image retrieval. DETRIF is computed from the geometrical constraints of each pair of adjacent triangles in the Delaunay triangulation constructed from the centroids of connected components. In addition, we employ a hashing-based indexing system to evaluate the performance of DETRIF and to compare it with other systems such as LLAH and SRIF. The experiments are carried out on two datasets comprising 400 complex linguistic map images with heterogeneous content (very large, 9800 × 11768 pixel resolution) and 700 textual document images. | ||||
Address | Santorini; Greece; April 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.061; 600.084; 600.077 | Approved | no | ||
Call Number | Admin @ si @ BRC2016 | Serial | 2757 | ||
Permanent link to this record | |||||
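The record above builds features from geometric constraints on adjacent triangle pairs in a Delaunay triangulation, but the exact descriptor is not spelled out here. One plausible, hedged sketch (an assumption for illustration, not DETRIF itself) is a similarity-invariant area ratio for two triangles sharing an edge:

```python
def triangle_area(p, q, r):
    """Signed area of triangle (p, q, r) via the shoelace formula."""
    return 0.5 * ((q[0] - p[0]) * (r[1] - p[1])
                  - (r[0] - p[0]) * (q[1] - p[1]))

def pair_feature(shared_a, shared_b, apex1, apex2):
    """Area ratio of two triangles sharing the edge (shared_a, shared_b).

    The smaller-to-larger area ratio is invariant to rotation,
    translation, and uniform scale, the kind of robustness a
    camera-based retrieval descriptor needs.
    """
    a1 = abs(triangle_area(shared_a, shared_b, apex1))
    a2 = abs(triangle_area(shared_a, shared_b, apex2))
    if max(a1, a2) == 0:
        return 0.0  # degenerate pair
    return min(a1, a2) / max(a1, a2)
```

The triangulation itself would typically come from a library such as `scipy.spatial.Delaunay` applied to the connected-component centroids.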
Author | Angel Sappa; Fadi Dornaika; Daniel Ponsa; David Geronimo; Antonio Lopez | ||||
Title | An Efficient Approach to Onboard Stereo Vision System Pose Estimation | Type | Journal Article | ||
Year | 2008 | Publication | IEEE Transactions on Intelligent Transportation Systems | Abbreviated Journal | TITS |
Volume | 9 | Issue | 3 | Pages | 476–490 |
Keywords | Camera extrinsic parameter estimation, ground plane estimation, onboard stereo vision system | ||||
Abstract | This paper presents an efficient technique for estimating the pose of an onboard stereo vision system relative to the environment's dominant surface, which is assumed to be the road surface. Unlike previous approaches, it can be used in both urban and highway scenarios since it relies on raw 3-D data points rather than the extraction of specific visual traffic features. The whole process is performed in Euclidean space and consists of two stages. Initially, a compact 2-D representation of the original 3-D data points is computed. Then, a RANdom SAmple Consensus (RANSAC) based least-squares approach is used to fit a plane to the road. Fast RANSAC fitting is obtained by selecting points according to a probability function that takes into account the density of points at a given depth. Finally, the stereo camera height and pitch angle are computed relative to the fitted road plane. The proposed technique is intended to be used in driver-assistance systems for applications such as vehicle or pedestrian detection. Experimental results on urban environments, which are the most challenging scenarios (i.e., flat/uphill/downhill driving, speed bumps, and car accelerations), are presented. These results are validated with manually annotated ground truth. Additionally, comparisons with previous works are presented to show improvements in central processing unit time as well as in the accuracy of the obtained results. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | IEEE | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ SDP2008 | Serial | 1000 | ||
Permanent link to this record | |||||
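The RANSAC plane fitting described in the record above can be sketched as follows. This is a generic stdlib-Python illustration, not the authors' code: it uses uniform sampling and omits the depth-dependent sampling weights the paper proposes. Once the plane n·x + d = 0 is fitted with a unit normal, the camera height follows from |d| and the pitch from the normal's tilt.

```python
import random

def fit_plane_ransac(points, iters=200, thresh=0.05, seed=0):
    """Fit a plane to 3D points with RANSAC.

    Each iteration samples 3 points, builds the plane normal from the
    cross product of two edge vectors, and counts inliers closer than
    `thresh` to the plane. Returns (normal, d) for the model n.x + d = 0
    with a unit-length normal.
    """
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        p, q, r = rng.sample(points, 3)
        u = [q[i] - p[i] for i in range(3)]
        v = [r[i] - p[i] for i in range(3)]
        n = [u[1]*v[2] - u[2]*v[1],
             u[2]*v[0] - u[0]*v[2],
             u[0]*v[1] - u[1]*v[0]]
        norm = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n = [c / norm for c in n]
        d = -sum(n[i] * p[i] for i in range(3))
        inliers = sum(1 for pt in points
                      if abs(sum(n[i] * pt[i] for i in range(3)) + d) < thresh)
        if inliers > best_inliers:
            best, best_inliers = (n, d), inliers
    return best
```

A least-squares refit over the inliers of the winning model would normally follow, as the abstract's "RANSAC based least-squares approach" suggests.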
Author | Daniela Rato; Miguel Oliveira; Vitor Santos; Manuel Gomes; Angel Sappa | ||||
Title | A sensor-to-pattern calibration framework for multi-modal industrial collaborative cells | Type | Journal Article | ||
Year | 2022 | Publication | Journal of Manufacturing Systems | Abbreviated Journal | JMANUFSYST |
Volume | 64 | Issue | Pages | 497-507 | |
Keywords | Calibration; Collaborative cell; Multi-modal; Multi-sensor | ||||
Abstract | Collaborative robotic industrial cells are workspaces where robots collaborate with human operators. In this context, safety is paramount, and it requires a complete perception of the space in which the collaborative robot operates. To ensure this, collaborative cells are equipped with a large set of sensors of multiple modalities, covering the entire work volume. However, fusing the information from all these sensors requires an accurate extrinsic calibration. The calibration of such complex systems is challenging, due to the number of sensors and modalities, and also due to the small overlapping fields of view between the sensors, which are positioned to capture different viewpoints of the cell. This paper proposes a sensor-to-pattern methodology that can calibrate a complex system such as a collaborative cell in a single optimization procedure. Our methodology can handle RGB and depth cameras as well as LiDARs. Results show that our methodology accurately calibrates a collaborative cell containing three RGB cameras, a depth camera, and three 3D LiDARs. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Science Direct | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MSIAU; MACO | Approved | no | ||
Call Number | Admin @ si @ ROS2022 | Serial | 3750 | ||
Permanent link to this record | |||||
Author | Baiyu Chen; Sergio Escalera; Isabelle Guyon; Victor Ponce; N. Shah; Marc Oliu | ||||
Title | Overcoming Calibration Problems in Pattern Labeling with Pairwise Ratings: Application to Personality Traits | Type | Conference Article | ||
Year | 2016 | Publication | 14th European Conference on Computer Vision Workshops | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Calibration of labels; Label bias; Ordinal labeling; Variance Models; Bradley-Terry-Luce model; Continuous labels; Regression; Personality traits; Crowd-sourced labels | ||||
Abstract | We address the problem of calibrating workers whose task is to label patterns with continuous variables, which arises, for instance, when labeling images or videos of humans with continuous traits. Worker bias is particularly difficult to evaluate and correct when many workers contribute just a few labels each, a situation that typically arises when labeling is crowd-sourced. In the scenario of labeling short videos of people facing a camera with personality traits, we evaluate the feasibility of the pairwise ranking method to alleviate bias problems. Workers are exposed to pairs of videos at a time and must order them by preference. The variable levels are reconstructed by fitting a Bradley-Terry-Luce model with maximum likelihood. This method may, at first sight, seem prohibitively expensive because for N videos, p = N(N-1)/2 pairs must potentially be processed by workers rather than N videos. However, by performing extensive simulations, we determine an empirical law for how the number of pairs needed scales with the number of videos to achieve a given accuracy of score reconstruction, and show that the pairwise method is affordable. We apply the method to the labeling of a large-scale dataset of 10,000 videos used in the ChaLearn Apparent Personality Trait challenge. | ||||
Address | Amsterdam; The Netherlands; October 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCVW | ||
Notes | HuPBA;MILAB; | Approved | no | ||
Call Number | Admin @ si @ CEG2016 | Serial | 2829 | ||
Permanent link to this record | |||||
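Reconstructing scores from pairwise preferences with a Bradley-Terry-Luce model, as in the record above, is commonly done with the MM (minorization-maximization) algorithm. A hedged stdlib sketch (a textbook formulation, not the authors' implementation):

```python
def bradley_terry(n_items, wins, iters=100):
    """Maximum-likelihood Bradley-Terry scores via the MM algorithm.

    wins: list of (winner, loser) index pairs from pairwise ratings.
    Returns positive scores normalised to sum to 1; a higher score
    means the item tends to be preferred.
    """
    w = [0] * n_items               # total wins per item
    n = {}                          # comparison counts per unordered pair
    for a, b in wins:
        w[a] += 1
        key = (min(a, b), max(a, b))
        n[key] = n.get(key, 0) + 1
    p = [1.0] * n_items
    for _ in range(iters):
        denom = [0.0] * n_items
        for (a, b), cnt in n.items():
            denom[a] += cnt / (p[a] + p[b])
            denom[b] += cnt / (p[a] + p[b])
        # MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j)
        p = [w[i] / denom[i] if denom[i] > 0 else p[i]
             for i in range(n_items)]
        s = sum(p)
        p = [x / s for x in p]
    return p
```

Note that an item with zero wins collapses to score 0 (the MLE does not exist for it); practical systems regularize or add virtual comparisons.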
Author | Debora Gil; Rosa Maria Ortiz; Carles Sanchez; Antoni Rosell | ||||
Title | Objective endoscopic measurements of central airway stenosis. A pilot study | Type | Journal Article | ||
Year | 2018 | Publication | Respiration | Abbreviated Journal | RES |
Volume | 95 | Issue | Pages | 63–69 | |
Keywords | Bronchoscopy; Tracheal stenosis; Airway stenosis; Computer-assisted analysis | ||||
Abstract | Endoscopic estimation of the degree of stenosis in central airway obstruction is subjective and highly variable. Objective: To determine the benefits of using SENSA (System for Endoscopic Stenosis Assessment), an image-based computational software, for obtaining objective stenosis index (SI) measurements among a group of expert bronchoscopists and general pulmonologists. Methods: A total of 7 expert bronchoscopists and 7 general pulmonologists were enrolled to validate SENSA usage. The SIs obtained by the physicians and by SENSA were compared with a reference SI to establish their precision in SI computation. We used SENSA to efficiently obtain this reference SI in 11 selected cases of benign stenosis. A Web platform with three user-friendly microtasks was designed to gather the data. The users had to visually estimate the SI from videos with and without the contours of the normal and obstructed areas provided by SENSA. The users were able to modify the SENSA contours to define the reference SI using morphometric bronchoscopy. Results: Visual SI estimation accuracy was associated with neither bronchoscopic experience (p = 0.71) nor the contours of the normal and obstructed areas provided by the system (p = 0.13). The precision of the SI by SENSA was 97.7% (95% CI: 92.4-103.7), significantly better than the precision of the SI by visual estimation (p < 0.001), an improvement of at least 15%. Conclusion: SENSA provides objective SI measurements with a precision of up to 99.5%, which can be calculated from any bronchoscope using an affordable, scalable interface. Providing normal and obstructed contours on bronchoscopic videos does not improve physicians' visual estimation of the SI. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | IAM; 600.075; 600.096; 600.145 | Approved | no | ||
Call Number | Admin @ si @ GOS2018 | Serial | 3043 | ||
Permanent link to this record | |||||
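In morphometric bronchoscopy the stenosis index discussed above is typically the percentage reduction of the normal lumen area. Assuming that definition (SENSA's exact formulation is not given in this record), a minimal sketch computing the SI from delineated contours:

```python
def polygon_area(contour):
    """Absolute area of a closed polygon (shoelace formula).

    contour: list of (x, y) vertices in order; the polygon is closed
    implicitly from the last vertex back to the first.
    """
    s = 0.0
    for i in range(len(contour)):
        x0, y0 = contour[i]
        x1, y1 = contour[(i + 1) % len(contour)]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def stenosis_index(normal_contour, obstructed_contour):
    """Stenosis index as percentage reduction of the normal lumen area."""
    a_normal = polygon_area(normal_contour)
    a_obstructed = polygon_area(obstructed_contour)
    return 100.0 * (1.0 - a_obstructed / a_normal)
```

For example, an obstructed lumen with one quarter of the normal area yields an SI of 75%.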
Author | Carles Sanchez;F. Javier Sanchez; Antoni Rosell; Debora Gil | ||||
Title | An illumination model of the trachea appearance in videobronchoscopy images | Type | Book Chapter | ||
Year | 2012 | Publication | Image Analysis and Recognition | Abbreviated Journal | LNCS |
Volume | 7325 | Issue | Pages | 313-320 | |
Keywords | Bronchoscopy, tracheal ring, stenosis assesment, trachea appearance model, segmentation | ||||
Abstract | Videobronchoscopy is a medical imaging technique that allows interactive navigation inside the respiratory pathways. This imaging modality provides realistic images and allows non-invasive, minimal-intervention procedures. Tracheal procedures are routine interventions that require assessment of the percentage of obstructed pathway for injury (stenosis) detection. Visual assessment of videobronchoscopic sequences requires high expertise in tracheal anatomy and is prone to human error.
This paper introduces an automatic method for estimating the percentage of tracheal stenosis in videobronchoscopic images. We look for tracheal rings, whose deformation determines the degree of obstruction. For ring extraction, we present a ring detector based on an illumination and appearance model. This model allows us to parametrise the ring detection. Finally, we can infer optimal estimation parameters for any video resolution. | ||||
Address | Aveiro, Portugal | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Lecture Notes in Computer Science | Abbreviated Series Title | LNCS | |
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-31297-7 | Medium | |
Area | 800 | Expedition | Conference | ICIAR | |
Notes | MV;IAM | Approved | no | ||
Call Number | IAM @ iam @ SSR2012 | Serial | 1898 | ||
Permanent link to this record | |||||
Author | Carles Sanchez | ||||
Title | Tracheal ring detection in bronchoscopy | Type | Report | ||
Year | 2011 | Publication | CVC Technical Report | Abbreviated Journal | |
Volume | 168 | Issue | Pages | ||
Keywords | Bronchoscopy, tracheal ring, segmentation | ||||
Abstract | Endoscopy is a procedure in which a camera is introduced inside the human body.
Given that endoscopy provides realistic images (in contrast to other modalities) and allows non-invasive, minimal-intervention procedures (which can aid diagnosis and surgical interventions), its use has spread over recent decades. In this project we focus on bronchoscopic procedures, during which the camera is introduced through the trachea in order to diagnose the patient. The diagnostic interventions focus on the degree of stenosis (reduction in tracheal area), prostheses, or early diagnosis of tumors. In the first case, assessment of the luminal area and calculation of the diameters of the tracheal rings are required. A main limitation is that the whole process is done by hand: the doctor takes all the measurements and decisions just by looking at the screen. As far as we know, there is no computational framework for helping doctors in this diagnosis. This project consists of analysing bronchoscopic videos in order to extract information useful for diagnosing the degree of stenosis. In particular, we focus on segmentation of the tracheal rings. As a result of this project, several strategies for detecting tracheal rings have been implemented and their performance compared. | ||||
Address | |||||
Corporate Author | Thesis | Master's thesis | |||
Publisher | Place of Publication | Editor | Debora Gil, F.Javier Sanchez | ||
Language | english | Summary Language | english | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | IAM;MV | Approved | no | ||
Call Number | IAM @ iam @ San2011 | Serial | 1841 | ||
Permanent link to this record | |||||
Author | Carles Sanchez; Debora Gil; Jorge Bernal; F. Javier Sanchez; Marta Diez-Ferrer; Antoni Rosell | ||||
Title | Navigation Path Retrieval from Videobronchoscopy using Bronchial Branches | Type | Conference Article | ||
Year | 2016 | Publication | 19th International Conference on Medical Image Computing and Computer Assisted Intervention Workshops | Abbreviated Journal | |
Volume | 9401 | Issue | Pages | 62-70 | |
Keywords | Bronchoscopy navigation; Lumen center; Brochial branches; Navigation path; Videobronchoscopy | ||||
Abstract | Bronchoscopy biopsy can be used to diagnose lung cancer without risking the complications of other interventions, such as transthoracic needle aspiration. During bronchoscopy, the clinician has to navigate through the bronchial tree to the target lesion. A main drawback is the difficulty of checking whether the exploration is following the correct path. The usual guidance using fluoroscopy implies repeated radiation of the clinician, while alternative systems (like electromagnetic navigation) require specific equipment that increases intervention costs. We propose to compute the navigated path using anatomical landmarks extracted from the sole analysis of videobronchoscopy images. Such landmarks allow matching the current exploration to the path previously planned on a CT, to indicate to the clinician whether the planned path is being followed correctly. We present a feasibility study of our landmark-based CT-video matching using bronchoscopic videos simulated on a virtual bronchoscopy interactive interface. | ||||
Address | Quebec; Canada; September 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MICCAIW | ||
Notes | IAM; MV; 600.060; 600.075 | Approved | no | ||
Call Number | Admin @ si @ SGB2016 | Serial | 2885 | ||
Permanent link to this record | |||||
Author | Juan Jose Rubio; Takahiro Kashiwa; Teera Laiteerapong; Wenlong Deng; Kohei Nagai; Sergio Escalera; Kotaro Nakayama; Yutaka Matsuo; Helmut Prendinger | ||||
Title | Multi-class structural damage segmentation using fully convolutional networks | Type | Journal Article | ||
Year | 2019 | Publication | Computers in Industry | Abbreviated Journal | COMPUTIND |
Volume | 112 | Issue | Pages | 103121 | |
Keywords | Bridge damage detection; Deep learning; Semantic segmentation | ||||
Abstract | Structural Health Monitoring (SHM) has benefited from computer vision and more recently, Deep Learning approaches, to accurately estimate the state of deterioration of infrastructure. In our work, we test Fully Convolutional Networks (FCNs) with a dataset of deck areas of bridges for damage segmentation. We create a dataset for delamination and rebar exposure that has been collected from inspection records of bridges in Niigata Prefecture, Japan. The dataset consists of 734 images with three labels per image, which makes it the largest dataset of images of bridge deck damage. This data allows us to estimate the performance of our method based on regions of agreement, which emulates the uncertainty of in-field inspections. We demonstrate the practicality of FCNs to perform automated semantic segmentation of surface damages. Our model achieves a mean accuracy of 89.7% for delamination and 78.4% for rebar exposure, and a weighted F1 score of 81.9%. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HuPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ RKL2019 | Serial | 3315 | ||
Permanent link to this record | |||||
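The metrics reported in the record above (per-class accuracy and support-weighted F1) can be computed from flat label arrays. A generic sketch, not tied to the paper's regions-of-agreement protocol:

```python
def segmentation_scores(y_true, y_pred, classes):
    """Per-class recall ("accuracy" per class) and support-weighted F1.

    y_true, y_pred: flat lists of class labels, one per pixel.
    Returns (per_class_recall, weighted_f1).
    """
    recall, f1, support = {}, {}, {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        support[c] = tp + fn
        recall[c] = tp / (tp + fn) if tp + fn else 0.0
        prec = tp / (tp + fp) if tp + fp else 0.0
        f1[c] = (2 * prec * recall[c] / (prec + recall[c])
                 if prec + recall[c] else 0.0)
    total = sum(support.values())
    # weight each class's F1 by how many true pixels it has
    weighted_f1 = sum(f1[c] * support[c] for c in classes) / total
    return recall, weighted_f1
```

This mirrors what `sklearn.metrics.f1_score(..., average='weighted')` computes, but spelled out for clarity.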
Author | Spyridon Bakas; Mauricio Reyes; Andras Jakab; Stefan Bauer; Markus Rempfler; Alessandro Crimi; Russell Takeshi Shinohara; Christoph Berger; Sung Min Ha; Martin Rozycki; Marcel Prastawa; Esther Alberts; Jana Lipkova; John Freymann; Justin Kirby; Michel Bilello; Hassan Fathallah-Shaykh; Roland Wiest; Jan Kirschke; Benedikt Wiestler; Rivka Colen; Aikaterini Kotrotsou; Pamela Lamontagne; Daniel Marcus; Mikhail Milchenko; Arash Nazeri; Marc-Andre Weber; Abhishek Mahajan; Ujjwal Baid; Dongjin Kwon; Manu Agarwal; Mahbubul Alam; Alberto Albiol; Antonio Albiol; Varghese Alex; Tuan Anh Tran; Tal Arbel; Aaron Avery; Subhashis Banerjee; Thomas Batchelder; Kayhan Batmanghelich; Enzo Battistella; Martin Bendszus; Eze Benson; Jose Bernal; George Biros; Mariano Cabezas; Siddhartha Chandra; Yi-Ju Chang; Joseph Chazalon; Shengcong Chen; Wei Chen; Jefferson Chen; Kun Cheng; Meinel Christoph; Roger Chylla; Albert Clérigues; Anthony Costa; Xiaomeng Cui; Zhenzhen Dai; Lutao Dai; Eric Deutsch; Changxing Ding; Chao Dong; Wojciech Dudzik; Theo Estienne; Hyung Eun Shin; Richard Everson; Jonathan Fabrizio; Longwei Fang; Xue Feng; Lucas Fidon; Naomi Fridman; Huan Fu; David Fuentes; David G Gering; Yaozong Gao; Evan Gates; Amir Gholami; Mingming Gong; Sandra Gonzalez-Villa; J Gregory Pauloski; Yuanfang Guan; Sheng Guo; Sudeep Gupta; Meenakshi H Thakur; Klaus H Maier-Hein; Woo-Sup Han; Huiguang He; Aura Hernandez-Sabate; Evelyn Herrmann; Naveen Himthani; Winston Hsu; Cheyu Hsu; Xiaojun Hu; Xiaobin Hu; Yan Hu; Yifan Hu; Rui Hua | ||||
Title | Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge | Type | Miscellaneous | ||
Year | 2018 | Publication | Arxiv | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | BraTS; challenge; brain; tumor; segmentation; machine learning; glioma; glioblastoma; radiomics; survival; progression; RECIST | ||||
Abstract | Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multiparametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e. 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in preoperative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients who underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that, apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.118 | Approved | no | ||
Call Number | Admin @ si @ BRJ2018 | Serial | 3252 | ||
Permanent link to this record | |||||
Author | Santi Puch; Irina Sanchez; Aura Hernandez-Sabate; Gemma Piella; Vesna Prckovska | ||||
Title | Global Planar Convolutions for Improved Context Aggregation in Brain Tumor Segmentation | Type | Conference Article | ||
Year | 2018 | Publication | International MICCAI Brainlesion Workshop | Abbreviated Journal | |
Volume | 11384 | Issue | Pages | 393-405 | |
Keywords | Brain tumors; 3D fully-convolutional CNN; Magnetic resonance imaging; Global planar convolution | ||||
Abstract | In this work, we introduce the Global Planar Convolution module as a building block for fully-convolutional networks that aggregates global information and, therefore, enhances the context perception capabilities of segmentation networks in the context of brain tumor segmentation. We implement two baseline architectures (3D UNet and a residual version of 3D UNet, ResUNet) and present a novel architecture based on these two, ContextNet, which includes the proposed Global Planar Convolution module. We show that the addition of such a module eliminates the need to build networks with several representation levels, which tend to be over-parametrized and to converge slowly. Furthermore, we provide a visual demonstration of the behavior of GPC modules via visualization of intermediate representations. Finally, we participate in the 2018 edition of the BraTS challenge with our best-performing models, which are based on ContextNet, and report the evaluation scores on the validation and test sets of the challenge. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MICCAIW | ||
Notes | ADAS; 600.118 | Approved | no | ||
Call Number | Admin @ si @ PSH2018 | Serial | 3251 | ||
Permanent link to this record | |||||
Author | Laura Igual; Joan Carles Soliva; Antonio Hernandez; Sergio Escalera; Xavier Jimenez ; Oscar Vilarroya; Petia Radeva | ||||
Title | A fully-automatic caudate nucleus segmentation of brain MRI: Application in volumetric analysis of pediatric attention-deficit/hyperactivity disorder | Type | Journal Article | ||
Year | 2011 | Publication | BioMedical Engineering Online | Abbreviated Journal | BEO |
Volume | 10 | Issue | 105 | Pages | 1-23 |
Keywords | Brain caudate nucleus; segmentation; MRI; atlas-based strategy; Graph Cut framework | ||||
Abstract | Background: Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to suitably localize the caudate structure. However, the atlas prior information may not represent the structure of interest correctly. It may therefore be useful to introduce a more flexible technique for accurate segmentation. Method: We present CaudateCut, a new fully-automatic method for segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new energy-function data and boundary potentials. In particular, we exploit information concerning intensity and geometry, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure. Results: We apply the novel CaudateCut method to segment the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as in a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis. Conclusion: CaudateCut generates segmentation results that are comparable to gold-standard segmentations and that are reliable in the analysis of neuroanatomical abnormalities differentiating healthy controls from children with ADHD. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1475-925X | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | MILAB;HuPBA | Approved | no | ||
Call Number | Admin @ si @ ISH2011 | Serial | 1882 | ||
Permanent link to this record | |||||
Author | Arash Akbarinia; C. Alejandro Parraga | ||||
Title | Feedback and Surround Modulated Boundary Detection | Type | Journal Article | ||
Year | 2018 | Publication | International Journal of Computer Vision | Abbreviated Journal | IJCV |
Volume | 126 | Issue | 12 | Pages | 1367–1380 |
Keywords | Boundary detection; Surround modulation; Biologically-inspired vision | ||||
Abstract | Edges are key components of any visual scene, to the extent that we can recognise objects merely by their silhouettes. The human visual system captures edge information through neurons in the visual cortex that are sensitive to both intensity discontinuities and particular orientations. The "classical approach" assumes that these cells are only responsive to the stimulus present within their receptive fields; however, recent studies demonstrate that surrounding regions and inter-areal feedback connections influence their responses significantly. In this work we propose a biologically-inspired edge detection model in which orientation-selective neurons are represented through the first derivative of a Gaussian function, resembling double-opponent cells in the primary visual cortex (V1). In our model we account for four kinds of receptive field surround, i.e. full, far, iso- and orthogonal-orientation, whose contributions are contrast-dependent. The output signal from V1 is pooled in its perpendicular direction by larger V2 neurons employing a contrast-variant centre-surround kernel. We further introduce a feedback connection from higher-level visual areas to the lower ones. The results of our model on three benchmark datasets show a substantial improvement over current non-learning, biologically-inspired state-of-the-art algorithms, while remaining competitive with learning-based methods. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | NEUROBIT; 600.068; 600.072 | Approved | no | ||
Call Number | Admin @ si @ AkP2018b | Serial | 2991 | ||
Permanent link to this record | |||||
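The orientation-selective units in the record above are modeled as first derivatives of a Gaussian. A minimal 1-D sketch of such a filter and its response to an intensity step (the surround modulation and feedback terms of the full model are omitted):

```python
import math

def gaussian_derivative_kernel(sigma, radius):
    """1-D first-derivative-of-Gaussian kernel, the classic model of an
    orientation-selective edge filter: g'(x) = -x/sigma^2 * exp(-x^2/2sigma^2)."""
    return [-x / (sigma ** 2) * math.exp(-x * x / (2 * sigma ** 2))
            for x in range(-radius, radius + 1)]

def filter_row(row, kernel):
    """Valid-mode 1-D correlation of a signal with the kernel."""
    r = (len(kernel) - 1) // 2
    return [sum(kernel[k + r] * row[i + k] for k in range(-r, r + 1))
            for i in range(r, len(row) - r)]
```

Applied to a step signal, the output is near zero in flat regions and peaks in magnitude at the discontinuity, which is the edge-detection behavior the model builds on.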
Author | Esmitt Ramirez; Carles Sanchez; Agnes Borras; Marta Diez-Ferrer; Antoni Rosell; Debora Gil | ||||
Title | Image-Based Bronchial Anatomy Codification for Biopsy Guiding in Video Bronchoscopy | Type | Conference Article | ||
Year | 2018 | Publication | OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis | Abbreviated Journal | |
Volume | 11041 | Issue | Pages | ||
Keywords | Biopsy guiding; Bronchoscopy; Lung biopsy; Intervention guiding; Airway codification | ||||
Abstract | Bronchoscopy examinations allow biopsy of pulmonary nodules with minimum risk for the patient. Even for experienced bronchoscopists, it is difficult to guide the bronchoscope to the most distal lesions and obtain an accurate diagnosis. This paper presents an image-based codification of the bronchial anatomy for bronchoscopy biopsy guiding. The 3D anatomy of each patient is codified as a binary tree, with nodes representing bronchial levels and edges labeled according to their position in images projecting the 3D anatomy from a set of branching points. The paths from the root to the leaves provide a codification of navigation routes with spatially consistent labels, matching the anatomy observed in videobronchoscopy explorations. We evaluate our labeling approach as a guiding system in terms of the number of bronchial levels correctly codified, as well as the number of label-based instructions correctly supplied, using generalized mixed models and computer-generated data. Results obtained for three independent observers prove the consistency and reproducibility of our guiding system. We trust that our codification based on the viewer's projection might be used as a foundation for the navigation process in Virtual Bronchoscopy systems. | ||||
Address | Granada; September 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MICCAIW | ||
Notes | IAM; 600.096; 600.075; 601.323; 600.145 | Approved | no | ||
Call Number | Admin @ si @ RSB2018b | Serial | 3137 | ||
Permanent link to this record |
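The codification in the record above stores navigation routes as label sequences along root-to-leaf paths of a binary airway tree. A hedged sketch of that idea, with hypothetical node names and 'L'/'R' edge labels standing in for the paper's projection-based labels:

```python
def navigation_routes(children, labels, root):
    """Enumerate root-to-leaf routes in a binary airway tree.

    children: node -> (left, right), with None for a missing child.
    labels: (parent, child) -> edge label as seen in the projected view.
    Each route is the sequence of edge labels from the root, i.e. a
    codified list of guiding instructions.
    """
    routes = []

    def walk(node, path):
        kids = [c for c in children.get(node, (None, None)) if c is not None]
        if not kids:
            routes.append(path)   # reached a leaf: record the route
            return
        for c in kids:
            walk(c, path + [labels[(node, c)]])

    walk(root, [])
    return routes
```

For instance, a toy tree with the trachea branching into left/right main bronchi, and the left one branching again, yields three routes: ['L','L'], ['L','R'] and ['R'].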