2018 |
|
Esmitt Ramirez, Carles Sanchez, Agnes Borras, Marta Diez-Ferrer, Antoni Rosell, & Debora Gil. (2018). "Image-Based Bronchial Anatomy Codification for Biopsy Guiding in Video Bronchoscopy" In OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis (Vol. 11041).
Abstract: Bronchoscopy examinations allow biopsy of pulmonary nodules with minimum risk for the patient. Even for experienced bronchoscopists, it is difficult to guide the bronchoscope to most distal lesions and obtain an accurate diagnosis. This paper presents an image-based codification of the bronchial anatomy for bronchoscopy biopsy guiding. The 3D anatomy of each patient is codified as a binary tree with nodes representing bronchial levels and edges labeled using their position on images projecting the 3D anatomy from a set of branching points. The paths from the root to the leaves provide a codification of navigation routes with spatially consistent labels according to the anatomy observed in video bronchoscopy explorations. We evaluate our labeling approach as a guiding system in terms of the number of bronchial levels correctly codified, as well as the number of label-based instructions correctly supplied, using generalized mixed models and computer-generated data. Results obtained for three independent observers prove the consistency and reproducibility of our guiding system. We trust that our codification based on the viewer's projection might be used as a foundation for the navigation process in Virtual Bronchoscopy systems.
Keywords: Biopsy guiding; Bronchoscopy; Lung biopsy; Intervention guiding; Airway codification
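The codification described in the abstract lends itself to a simple tree data structure. Below is a minimal sketch, not the authors' implementation, of how a bronchial anatomy could be encoded as a binary tree whose edges carry image-based labels, with root-to-leaf paths read out as navigation routes; the class names and the left/right labelling convention are assumptions for illustration only.

```python
# Minimal sketch: a bronchial tree codified as a binary tree whose edges are
# labelled by the side of the branching observed in the projected image.
# Node names and the "left"/"right" labelling convention are illustrative.

class BronchialNode:
    def __init__(self, level, left=None, right=None):
        self.level = level    # bronchial level represented by this node
        self.left = left      # child whose lumen appears on the left of the projection
        self.right = right    # child whose lumen appears on the right of the projection

def navigation_routes(node, path=()):
    """Enumerate root-to-leaf label sequences, i.e. candidate navigation routes."""
    if node is None:
        return
    if node.left is None and node.right is None:
        yield path
        return
    yield from navigation_routes(node.left, path + ("left",))
    yield from navigation_routes(node.right, path + ("right",))

# Toy example: trachea with two bronchial levels on one side and one on the other.
tree = BronchialNode(0,
                     left=BronchialNode(1, left=BronchialNode(2), right=BronchialNode(2)),
                     right=BronchialNode(1))
for route in navigation_routes(tree):
    print(route)   # ('left', 'left'), ('left', 'right'), ('right',)
```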
|
|
|
Katerine Diaz, Francesc J. Ferri, & Aura Hernandez-Sabate. (2018). "An overview of incremental feature extraction methods based on linear subspaces". Knowledge-Based Systems, 145, 219–235.
Abstract: With the massive explosion of machine learning in our day-to-day life, incremental and adaptive learning has become a major topic, crucial to keep up-to-date and improve classification models and their corresponding feature extraction processes. This paper presents a categorized overview of incremental feature extraction based on linear subspace methods which aim at incorporating new information into the already acquired knowledge without accessing previous data. Specifically, this paper focuses on those linear dimensionality reduction methods with orthogonal matrix constraints based on a global loss function, due to the extensive use of their batch approaches versus other linear alternatives. Thus, we cover the approaches derived from Principal Component Analysis, Linear Discriminant Analysis and Discriminative Common Vector methods. For each basic method, its incremental approaches are differentiated according to the subspace model and matrix decomposition involved in the updating process. Besides this categorization, several updating strategies are distinguished according to the amount of data used to update and to whether a static or dynamic number of classes is considered. Moreover, the specific role of the size/dimension ratio in each method is considered. Finally, computational complexity, experimental setup and the accuracy rates according to published results are compiled and analyzed, and an empirical evaluation is performed to compare the best approach of each kind.
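As a concrete illustration of the incremental-subspace idea the survey covers, the sketch below updates a PCA subspace batch by batch without revisiting earlier data, using scikit-learn's IncrementalPCA. It is just one of the many updating strategies compared in the paper, and the data and parameters are placeholders.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
ipca = IncrementalPCA(n_components=5)

# Data arrives in chunks; each partial_fit updates the subspace without
# accessing previously seen samples, which is the core idea of incremental
# feature extraction.
for _ in range(10):
    X_batch = rng.normal(size=(200, 50))   # placeholder batch: 200 samples, 50 features
    ipca.partial_fit(X_batch)

X_new = rng.normal(size=(3, 50))
Z = ipca.transform(X_new)                  # project new samples onto the learned subspace
print(Z.shape)                             # (3, 5)
```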
|
|
|
Katerine Diaz, Jesus Martinez del Rincon, Aura Hernandez-Sabate, & Debora Gil. (2018). "Continuous head pose estimation using manifold subspace embedding and multivariate regression". IEEE Access, 6, 18325–18334.
Abstract: In this paper, a continuous head pose estimation system is proposed to estimate yaw and pitch head angles from raw facial images. Our approach is based on manifold learning-based methods, due to their promising generalization properties shown for face modelling from images. The method combines histograms of oriented gradients, generalized discriminative common vectors and continuous local regression to achieve successful performance. Our proposal was tested on multiple standard face datasets, as well as in a realistic scenario. Results show a considerable performance improvement and a higher consistency of our model in comparison with other state-of-the-art methods, with angular errors varying between 9 and 17 degrees.
Keywords: Head Pose estimation; HOG features; Generalized Discriminative Common Vectors; B-splines; Multiple linear regression
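To make the pipeline concrete, here is a hedged sketch of the overall idea: HOG descriptors fed to a multivariate regressor predicting yaw and pitch. The authors use generalized discriminative common vectors and continuous local regression in between, which are not reproduced here; plain linear regression stands in purely for illustration, and all data and parameters are placeholders.

```python
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LinearRegression

def face_descriptor(gray_face):
    """HOG descriptor of a grayscale face crop (parameters are illustrative)."""
    return hog(gray_face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Placeholder training data: 64x64 face crops with known (yaw, pitch) in degrees.
rng = np.random.default_rng(0)
faces = rng.random((100, 64, 64))
angles = rng.uniform(-60, 60, size=(100, 2))     # columns: yaw, pitch

X = np.array([face_descriptor(f) for f in faces])
reg = LinearRegression().fit(X, angles)          # multivariate regression stand-in

yaw_pitch = reg.predict(face_descriptor(faces[0])[None, :])
print(yaw_pitch)                                 # predicted (yaw, pitch) for one face
```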
|
|
|
Katerine Diaz, Jesus Martinez del Rincon, Aura Hernandez-Sabate, Marçal Rusiñol, & Francesc J. Ferri. (2018). "Fast Kernel Generalized Discriminative Common Vectors for Feature Extraction". Journal of Mathematical Imaging and Vision, 60(4), 512–524.
Abstract: This paper presents a supervised subspace learning method called Kernel Generalized Discriminative Common Vectors (KGDCV), as a novel extension of the known Discriminative Common Vectors method with kernels. Our method combines the advantages of kernel methods to model complex data and solve nonlinear problems with moderate computational complexity, with the better generalization properties of generalized approaches for large-dimensional data. This attractive combination makes KGDCV especially suited for feature extraction and classification in computer vision, image processing and pattern recognition applications. Two different approaches to this generalization are proposed, a first one based on the kernel trick (KT) and a second one based on the nonlinear projection trick (NPT) for even higher efficiency. Both methodologies have been validated on four different image datasets containing faces, objects and handwritten digits, and compared against well-known nonlinear state-of-the-art methods. Results show better discriminant properties than other generalized approaches, both linear and kernel. In addition, the KGDCV-NPT approach presents a considerable computational gain without compromising the accuracy of the model.
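The nonlinear projection trick mentioned in the abstract can be pictured in a few lines: instead of keeping computations implicit through the kernel, an explicit low-dimensional embedding is obtained from the eigendecomposition of the centered kernel matrix, after which any linear subspace method (such as DCV) can be applied directly. The sketch below shows only this embedding step on placeholder data; it is not the authors' KGDCV implementation.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))                    # placeholder training data

K = rbf_kernel(X)
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n              # centering matrix
Kc = H @ K @ H                                   # centered kernel matrix

# Nonlinear projection trick: eigendecompose the centered kernel and build an
# explicit embedding Phi such that Phi @ Phi.T reproduces Kc (up to the
# discarded, numerically negligible eigenvalues).
w, U = np.linalg.eigh(Kc)
keep = w > 1e-10
Phi = U[:, keep] * np.sqrt(w[keep])              # explicit features, one row per sample

# A linear subspace method (e.g. DCV or LDA) would now be trained on Phi.
print(Phi.shape, np.allclose(Phi @ Phi.T, Kc, atol=1e-6))
```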
|
|
|
Marta Diez-Ferrer, Debora Gil, Cristian Tebe, & Carles Sanchez. (2018). "Positive Airway Pressure to Enhance Computed Tomography Imaging for Airway Segmentation for Virtual Bronchoscopic Navigation". Respiration, 96(6), 525–534.
Abstract:
RATIONALE:
Virtual bronchoscopic navigation (VBN) guidance to peripheral pulmonary lesions is often limited by insufficient segmentation of the peripheral airways.
OBJECTIVES:
To test the effect of applying positive airway pressure (PAP) during CT acquisition to improve segmentation, particularly at end-expiration.
METHODS:
CT acquisitions in inspiration and expiration with 4 PAP protocols were recorded prospectively and compared to baseline inspiratory acquisitions in 20 patients. The 4 protocols explored differences between devices (flow vs. turbine), exposures (within seconds vs. 15 min), and pressure levels (10 vs. 14 cmH2O). Segmentation quality was evaluated with the number of airways and the number of endpoints reached. A generalized mixed-effects model explored the estimated effect of each protocol.
MEASUREMENTS AND MAIN RESULTS:
Patient characteristics and lung function did not significantly differ between protocols. Compared to baseline inspiratory acquisitions, expiratory acquisitions after 15 min of 14 cmH2O PAP segmented 1.63-fold more airways (95% CI 1.07-2.48; p = 0.018) and reached 1.34-fold more endpoints (95% CI 1.08-1.66; p = 0.004). Inspiratory acquisitions performed immediately under 10 cmH2O PAP reached 1.20-fold (95% CI 1.09-1.33; p < 0.001) more endpoints; after 15 min the increase was 1.14-fold (95% CI 1.05-1.24; p < 0.001).
CONCLUSIONS:
CT acquisitions with PAP segment more airways and reach more endpoints than baseline inspiratory acquisitions. The improvement is particularly evident at end-expiration after 15 min of 14 cmH2O PAP. Further studies must confirm that the improvement increases diagnostic yield when using VBN to evaluate peripheral pulmonary lesions.
Keywords: Multidetector computed tomography; Bronchoscopy; Continuous positive airway pressure; Image enhancement; Virtual bronchoscopic navigation
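The protocol comparison above relies on a generalized mixed-effects model over repeated acquisitions per patient. As a rough, simplified illustration of that kind of analysis, and not the authors' exact model (which reports fold-changes of counts), the sketch below fits a linear mixed model to log-transformed airway counts with a random intercept per patient using statsmodels; the column names and data are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder long-format data: one row per CT acquisition, several per patient.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "patient": np.repeat(np.arange(20), 2),
    "protocol": ["baseline", "pap_14_15min"] * 20,
    "airways": rng.poisson(120, size=40),
})
df["log_airways"] = np.log(df["airways"])

# Random intercept per patient accounts for repeated measurements; the fixed
# effect of protocol approximates the fold-change on the log scale.
model = smf.mixedlm("log_airways ~ protocol", df, groups=df["patient"])
result = model.fit()
print(result.summary())
```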
|
|
|
Santi Puch, Irina Sanchez, Aura Hernandez-Sabate, Gemma Piella, & Vesna Prckovska. (2018). "Global Planar Convolutions for Improved Context Aggregation in Brain Tumor Segmentation" In International MICCAI Brainlesion Workshop (Vol. 11384, pp. 393–405).
Abstract: In this work, we introduce the Global Planar Convolution (GPC) module as a building block for fully-convolutional networks that aggregates global information and, therefore, enhances the context perception capabilities of segmentation networks in the context of brain tumor segmentation. We implement two baseline architectures (3D UNet and a residual version of 3D UNet, ResUNet) and present a novel architecture based on these two, ContextNet, that includes the proposed Global Planar Convolution module. We show that the addition of such a module eliminates the need to build networks with several representation levels, which tend to be over-parametrized and to exhibit slow rates of convergence. Furthermore, we provide a visual demonstration of the behavior of GPC modules via visualization of intermediate representations. We finally participate in the 2018 edition of the BraTS challenge with our best-performing models, which are based on ContextNet, and report the evaluation scores on the validation and test sets of the challenge.
Keywords: Brain tumors; 3D fully-convolutional CNN; Magnetic resonance imaging; Global planar convolution
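As a rough sketch of what a planar-convolution building block could look like inside a 3D fully-convolutional network, the module below applies large in-plane (1 x k x k) kernels to aggregate long-range context within each axial slice. The kernel size, channel counts, normalization and placement inside ContextNet are assumptions for illustration, not the published GPC design.

```python
import torch
import torch.nn as nn

class PlanarConvBlock(nn.Module):
    """Illustrative planar-convolution block: large kernels confined to the
    axial plane aggregate long-range 2D context inside a 3D network.
    Kernel size and channels are placeholders, not the published GPC design."""

    def __init__(self, channels, k=15):
        super().__init__()
        self.planar = nn.Conv3d(channels, channels,
                                kernel_size=(1, k, k),
                                padding=(0, k // 2, k // 2))
        self.norm = nn.InstanceNorm3d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.planar(x)))

# Toy forward pass: batch of 1, 8 channels, a 16x64x64 MRI sub-volume.
x = torch.randn(1, 8, 16, 64, 64)
y = PlanarConvBlock(8)(x)
print(y.shape)   # torch.Size([1, 8, 16, 64, 64])
```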
|
|
|
Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, et al. (2018). "Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge".
Abstract: Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multiparametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e. 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in preoperative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients who underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
Keywords: BraTS; challenge; brain; tumor; segmentation; machine learning; glioma; glioblastoma; radiomics; survival; progression; RECIST
|
|
2017 |
|
Carles Sanchez, Antonio Esteban Lansaque, Agnes Borras, Marta Diez-Ferrer, Antoni Rosell, & Debora Gil. (2017). "Towards a Videobronchoscopy Localization System from Airway Centre Tracking" In 12th International Conference on Computer Vision Theory and Applications (pp. 352–359).
Abstract: Bronchoscopists use fluoroscopy to guide flexible bronchoscopy to the lesion to be biopsied without any kind of incision. Since fluoroscopy is an X-ray-based imaging technique, exposure increases the risk of developmental problems and cancer, so minimizing radiation is crucial. Alternative guiding systems such as electromagnetic navigation require specific equipment, increase the cost of the clinical procedure and still require fluoroscopy. In this paper we propose an image-based guiding system built on the extraction of airway centres from intra-operative videos. Such anatomical landmarks are matched to the airway centreline extracted from a pre-planned CT to indicate the best path to the nodule. We present a feasibility study of our navigation system using simulated bronchoscopic videos and a multi-expert validation of landmark extraction in 3 intra-operative ultrathin explorations.
Keywords: Video-bronchoscopy; Lung cancer diagnosis; Airway lumen detection; Region tracking; Guided bronchoscopy navigation
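The matching of video-extracted airway centres to a pre-computed CT centreline can be pictured as a nearest-neighbour lookup once both are expressed in a common coordinate frame. The sketch below uses a k-d tree for that lookup on placeholder points; the registration between the video and CT frames, which the actual system requires, is deliberately left out, and none of the names reflect the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Placeholder CT centreline: ordered 3D points along the planned path to the nodule.
centreline = np.cumsum(rng.normal(scale=0.5, size=(300, 3)), axis=0)

# Placeholder airway centres detected in the intra-operative video, assumed to be
# already mapped into the CT coordinate frame (registration is omitted here).
detected_centres = centreline[::25] + rng.normal(scale=0.2, size=(12, 3))

tree = cKDTree(centreline)
dist, idx = tree.query(detected_centres)   # closest centreline point per detection

# The index along the centreline acts as a progress estimate towards the lesion.
print("progress (centreline indices):", idx)
print("mean matching error:", dist.mean())
```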
|
|
|
Debora Gil, Aura Hernandez-Sabate, David Castells, & Jordi Carrabina. (2017). "CYBERH: Cyber-Physical Systems in Health for Personalized Assistance" In International Symposium on Symbolic and Numeric Algorithms for Scientific Computing.
Abstract: Assistance systems for e-Health applications have specific requirements that demand new methods for data gathering, analysis and modeling able to deal with SmallData:
1) systems should dynamically collect data from both the environment and the user to issue personalized recommendations; 2) data analysis should be able to tackle a limited number of samples, prone to include non-informative data and possibly evolving in time due to changes in the patient's condition; 3) algorithms should run in real time with possibly limited computational resources and fluctuating internet access.
Electronic medical devices (and Cyber-Physical devices in general) can enhance the process of data gathering and analysis in several ways: (i) acquiring data simultaneously from multiple sensors instead of single magnitudes; (ii) filtering data; (iii) providing real-time implementation conditions by isolating tasks in individual processors of multiprocessor Systems-on-Chip (MPSoC) platforms; and (iv) combining information through sensor fusion techniques.
Our approach focuses on both aspects of the complementary role of Cyber-Physical devices and SmallData analysis in building personalized models for e-Health applications. In particular, we will address the design of Cyber-Physical Systems in Health for Personalized Assistance (CyberHealth) in two specific application cases: 1) a Smart Assisted Driving System (SADs) for dynamical assessment of the driving capabilities of Mild Cognitive Impaired (MCI) people; 2) an Intelligent Operating Room (iOR) for improving the yield of bronchoscopic interventions for in-vivo lung cancer diagnosis.
|
|
|
Debora Gil, Oriol Ramos Terrades, Elisa Minchole, Carles Sanchez, Noelia Cubero de Frutos, Marta Diez-Ferrer, et al. (2017). "Classification of Confocal Endomicroscopy Patterns for Diagnosis of Lung Cancer" In 6th Workshop on Clinical Image-based Procedures: Translational Research in Medical Imaging (Vol. 10550, pp. 151–159).
Abstract: Confocal Laser Endomicroscopy (CLE) is an emerging imaging technique that allows the in-vivo acquisition of cell patterns of potentially malignant lesions. Such patterns could discriminate between inflammatory and neoplastic lesions and, thus, serve as a first in-vivo biopsy to discard cases that do not actually require a cell biopsy.
The goal of this work is to explore whether CLE images obtained during videobronchoscopy contain enough visual information to discriminate between benign and malignant peripheral lesions for lung cancer diagnosis. To do so, we have performed a pilot comparative study with 12 patients (6 adenocarcinoma and 6 benign-inflammatory) using 2 different methods for CLE pattern analysis: visual analysis by 3 experts and a novel methodology that uses graph methods to find patterns in pre-trained feature spaces. Our preliminary results indicate that, although visual analysis only achieves 60.2% accuracy, the accuracy of the proposed unsupervised image pattern classification rises to 84.6%.
We conclude that the visual information in CLE images allows in-vivo detection of neoplastic lesions, and that graph structural analysis applied to deep-learning feature spaces can achieve competitive results.
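The "graph methods in pre-trained feature spaces" idea can be pictured as: embed each CLE frame with a pre-trained CNN, connect nearby frames in a k-nearest-neighbour graph, and read the grouping off the resulting graph structure. The sketch below shows only the graph-building and clustering step on random placeholder features; the feature extractor, graph construction and decision rule actually used in the paper are not reproduced, and the two-cluster partition is a stand-in for the benign/malignant grouping.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

# Placeholder deep features: one 512-dimensional descriptor per CLE frame,
# e.g. taken from a pre-trained CNN (the extractor itself is omitted here).
features = np.vstack([rng.normal(0.0, 1.0, size=(40, 512)),
                      rng.normal(1.5, 1.0, size=(40, 512))])

# k-NN graph over the frames; its connectivity encodes the pattern structure.
graph = kneighbors_graph(features, n_neighbors=10, mode="connectivity")
affinity = 0.5 * (graph + graph.T).toarray()     # symmetrised affinity matrix

# Two-way graph partition as a stand-in for the benign/malignant grouping.
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)

print(labels[:40], labels[40:])                  # cluster assignment per group (illustrative)
```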
|
|