|
Antonio Hernandez, Carlo Gatta, Sergio Escalera, Laura Igual, Victoria Martin Yuste, & Petia Radeva. (2011). Accurate and Robust Fully-Automatic QCA: Method and Numerical Validation. In 14th International Conference on Medical Image Computing and Computer Assisted Intervention (Vol. 14, pp. 496–503). Springer.
Abstract: Quantitative Coronary Angiography (QCA) is a methodology used to evaluate arterial diseases and, in particular, the degree of stenosis. In this paper we propose AQCA, a fully automatic method for vessel segmentation based on graph cut theory. Vesselness, geodesic paths and a new multi-scale edgeness map are used to compute a globally optimal artery segmentation. We evaluate the method's performance in a rigorous numerical way on two datasets. The method can detect an artery with precision 92.9 +/- 5% and sensitivity 94.2 +/- 6%. The average absolute distance error between the detected and ground-truth centerlines is 1.13 +/- 0.11 pixels (about 0.27 +/- 0.025 mm), and the absolute relative error in the vessel caliber estimation is 2.93%, with almost no bias. Moreover, the method can discriminate between arteries and catheter with an accuracy of 96.4%.
|
|
|
Sergio Vera, Miguel Angel Gonzalez Ballester, & Debora Gil. (2013). Volumetric Anatomical Parameterization and Meshing for Inter-patient Liver Coordinate System Definition. In 16th International Conference on Medical Image Computing and Computer Assisted Intervention.
|
|
|
Francesco Ciompi, Simone Balocco, Carles Caus, J. Mauri, & Petia Radeva. (2013). Stent shape estimation through a comprehensive interpretation of intravascular ultrasound images. In 16th International Conference on Medical Image Computing and Computer Assisted Intervention (Vol. 8150, pp. 345–352). LNCS. Springer Berlin Heidelberg.
Abstract: We present a method for automatic strut detection and stent shape estimation in cross-sectional intravascular ultrasound images. A stent shape is first estimated through a comprehensive interpretation of the vessel morphology, performed using a supervised context-aware multi-class classification scheme. Then, the successive strut identification exploits both local appearance and the defined stent shape. The method is tested on 589 images obtained from 80 patients, achieving an F-measure of 74.1% and an average distance between manual and automatic struts of 0.10 mm.
|
|
|
Marina Alberti, Simone Balocco, Xavier Carrillo, J. Mauri, & Petia Radeva. (2012). Automatic Non-Rigid Temporal Alignment of IVUS Sequences. In 15th International Conference on Medical Image Computing and Computer Assisted Intervention (Vol. 1, pp. 642–650). Springer-Verlag Berlin, Heidelberg.
Abstract: Clinical studies on atherosclerosis regression/progression performed by Intravascular Ultrasound (IVUS) analysis require the alignment of pullbacks of the same patient before and after clinical interventions. In this paper, a methodology for the automatic alignment of IVUS sequences based on the Dynamic Time Warping technique is proposed. The method is adapted to the specific IVUS alignment task by applying the non-rigid alignment technique to multidimensional morphological signals, and by introducing a sliding-window approach together with a regularization term. To show the effectiveness of our method, an extensive validation is performed on both synthetic data and in-vivo IVUS sequences. The proposed method is robust to stent deployment and post-dilation surgery and reaches an alignment error of approximately 0.7 mm for in-vivo data, which is comparable to the inter-observer variability.
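The core of the alignment above is Dynamic Time Warping. A minimal sketch of textbook DTW on 1-D signals follows; the function name and absolute-difference local distance are illustrative choices, and the paper's multidimensional signals, sliding window, and regularization term are not reproduced here.

```python
import numpy as np

def dtw_align(a, b):
    """Classic dynamic time warping between two 1-D signals.

    Returns the optimal alignment cost and the warping path as a list
    of (i, j) index pairs mapping samples of `a` to samples of `b`.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    # Backtrack from (n, m) to recover the warping path.
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        steps = []
        if i > 1 and j > 1:
            steps.append((cost[i - 1, j - 1], i - 1, j - 1))
        if i > 1:
            steps.append((cost[i - 1, j], i - 1, j))
        if j > 1:
            steps.append((cost[i, j - 1], i, j - 1))
        _, i, j = min(steps)
    path.append((0, 0))
    path.reverse()
    return cost[n, m], path
```

Because DTW allows one sample of a sequence to match several samples of the other, a pullback frame that repeats (e.g. during slow catheter motion) aligns at zero extra cost, which is what makes the warping non-rigid.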
|
|
|
Md. Mostafa Kamal Sarker, Hatem A. Rashwan, Farhan Akram, Syeda Furruka Banu, Adel Saleh, Vivek Kumar Singh, et al. (2018). SLSDeep: Skin Lesion Segmentation Based on Dilated Residual and Pyramid Pooling Networks. In 21st International Conference on Medical Image Computing and Computer Assisted Intervention (Vol. 2, pp. 21–29).
Abstract: Skin lesion segmentation (SLS) in dermoscopic images is a crucial task for automated diagnosis of melanoma. In this paper, we present a robust deep learning SLS model, called SLSDeep, structured as an encoder-decoder network. The encoder network is constructed from dilated residual layers, while the decoder uses a pyramid pooling network followed by three convolution layers. Unlike the traditional methods employing a cross-entropy loss, we investigate a loss function combining Negative Log Likelihood (NLL) and End Point Error (EPE) to accurately segment the melanoma regions with sharp boundaries. The robustness of the proposed model was evaluated on two public databases, ISBI 2016 and 2017, for the skin lesion analysis towards melanoma detection challenge. The proposed model outperforms the state-of-the-art methods in terms of segmentation accuracy. Moreover, it can segment more than 100 images of size 384x384 per second on a recent GPU.
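The idea of mixing a pixel-wise likelihood term with a boundary-sensitive end-point-error term can be illustrated with a toy numpy version. This is a sketch under assumptions: the equal weighting and the formulation of EPE as a gradient-map mismatch are illustrative, not the paper's exact loss.

```python
import numpy as np

def combined_loss(pred, target, eps=1e-7, w_nll=1.0, w_epe=1.0):
    """Illustrative NLL + EPE loss on per-pixel foreground probabilities.

    `pred` and `target` are 2-D arrays in [0, 1]; the weights w_nll and
    w_epe are hypothetical hyper-parameters.
    """
    p = np.clip(pred, eps, 1 - eps)
    # Negative log-likelihood (binary cross-entropy) over all pixels.
    nll = -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))
    # EPE term: penalize mismatch of spatial gradients, which is large
    # exactly where predicted and true boundaries disagree.
    gy_p, gx_p = np.gradient(p)
    gy_t, gx_t = np.gradient(target.astype(float))
    epe = np.mean(np.sqrt((gx_p - gx_t) ** 2 + (gy_p - gy_t) ** 2))
    return w_nll * nll + w_epe * epe
```

The NLL term alone is minimized by any probability map that is pointwise correct on average; adding the gradient mismatch pushes the optimizer toward maps whose transitions coincide with the lesion boundary, which is the sharpening effect the abstract describes.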
|
|
|
Panagiota Spyridonos, Fernando Vilariño, Jordi Vitria, Fernando Azpiroz, & Petia Radeva. (2006). Anisotropic Feature Extraction from Endoluminal Images for Detection of Intestinal Contractions. In R. Larsen, M. Nielsen, & J. Sporring (Eds.), 9th International Conference on Medical Image Computing and Computer-Assisted Intervention (Vol. 4191, pp. 161–168). LNCS. Berlin Heidelberg: Springer Verlag.
Abstract: Wireless endoscopy is a very recent and at the same time unique technique that allows visualizing and studying the occurrence of contractions and analyzing intestine motility. Feature extraction is essential for obtaining efficient patterns to detect contractions in wireless video endoscopy of the small intestine. We propose a novel method based on anisotropic image filtering and efficient statistical classification of contraction features. In particular, we apply the image gradient tensor for mining informative skeletons from the original image and a sequence of descriptors for capturing the characteristic pattern of contractions. Features extracted from the endoluminal images were evaluated in terms of their discriminatory ability in correctly classifying images as either belonging to contractions or not. Classification was performed by means of a support vector machine classifier with a radial basis function kernel. Our classification rates gave a sensitivity of 90.84% and a specificity of 94.43%. These preliminary results highlight the high efficiency of the selected descriptors and support the feasibility of the proposed method in assisting the automatic detection and analysis of contractions.
|
|
|
Jose Marone, Simone Balocco, Marc Bolaños, Jose Massa, & Petia Radeva. (2016). Learning the Lumen Border using a Convolutional Neural Networks classifier. In 19th International Conference on Medical Image Computing and Computer Assisted Intervention Workshop.
Abstract: IntraVascular UltraSound (IVUS) is a technique allowing the diagnosis of coronary plaque. An accurate (semi-)automatic assessment of the luminal contours could speed up the diagnosis. In most of the approaches, the information on the vessel shape is obtained by combining a supervised learning step with a local refinement algorithm. In this paper, we explore for the first time the use of a Convolutional Neural Network (CNN) architecture that is able to extract the optimal image features and at the same time serve as a supervised classifier to detect the lumen border in IVUS images. The main limitation of CNNs lies in the fact that they require a large amount of training data due to their huge number of parameters. To solve this issue, we introduce a patch classification approach to generate an extended training set from a few annotated images. An accuracy of 93% and an F-score of 71% were obtained with this technique, even when it was applied to challenging frames containing calcified plaques, stents and catheter shadows.
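The patch-based training-set expansion can be sketched as follows. The patch size, zero padding, and function name are illustrative assumptions, not values from the paper; the point is that each annotated pixel of a frame can yield one labelled training sample.

```python
import numpy as np

def extract_patches(image, centers, size=32):
    """Cut size x size patches centred on the given (row, col)
    coordinates, zero-padding at the image border so every centre is
    valid. A few annotated frames thus expand into many training
    patches, one per sampled contour/background point.
    """
    half = size // 2
    padded = np.pad(image, half, mode="constant")
    # Centre (r, c) maps to (r + half, c + half) in padded coordinates,
    # so the centred patch spans padded[r:r + size, c:c + size].
    return np.stack([padded[r:r + size, c:c + size] for r, c in centers])
```

At training time, patches centred on lumen-border pixels would be labelled positive and patches sampled elsewhere negative; at test time, classifying a patch at every pixel yields a border-probability map.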
|
|
|
Antonio Esteban Lansaque, Carles Sanchez, Agnes Borras, Marta Diez-Ferrer, Antoni Rosell, & Debora Gil. (2016). Stable Anatomical Structure Tracking for video-bronchoscopy Navigation. In 19th International Conference on Medical Image Computing and Computer Assisted Intervention Workshops.
Abstract: Bronchoscopy allows examining the patient's airways for detection of lesions and sampling of tissues without surgery. A main drawback in lung cancer diagnosis is the difficulty of checking whether the exploration is following the correct path to the nodule that has to be biopsied. The most extended guidance uses fluoroscopy, which implies repeated radiation of clinical staff and patients. Alternatives such as virtual bronchoscopy or electromagnetic navigation are very expensive and not robust enough to blood, mucus or deformations to be extensively used. We propose a method that extracts and tracks stable lumen regions at different levels of the bronchial tree. The tracked regions are stored in a tree that encodes the anatomical structure of the scene, which can be useful to retrieve the path to the lesion that the clinician should follow to do the biopsy. We present a multi-expert validation of our anatomical landmark extraction in 3 intra-operative ultrathin explorations.
Keywords: Lung cancer diagnosis; video-bronchoscopy; airway lumen detection; region tracking
|
|
|
Carles Sanchez, Debora Gil, Jorge Bernal, F. Javier Sanchez, Marta Diez-Ferrer, & Antoni Rosell. (2016). Navigation Path Retrieval from Videobronchoscopy using Bronchial Branches. In 19th International Conference on Medical Image Computing and Computer Assisted Intervention Workshops (Vol. 9401, pp. 62–70). LNCS.
Abstract: Bronchoscopy biopsy can be used to diagnose lung cancer without risking the complications of other interventions like transthoracic needle aspiration. During bronchoscopy, the clinician has to navigate through the bronchial tree to the target lesion. A main drawback is the difficulty of checking whether the exploration is following the correct path. The usual guidance using fluoroscopy implies repeated radiation of the clinician, while alternative systems (like electromagnetic navigation) require specific equipment that increases intervention costs. We propose to compute the navigated path using anatomical landmarks extracted from the sole analysis of videobronchoscopy images. Such landmarks allow matching the current exploration to the path previously planned on a CT scan, to indicate to the clinician whether the planning is being correctly followed or not. We present a feasibility study of our landmark-based CT-video matching using bronchoscopic videos simulated on a virtual bronchoscopy interactive interface.
Keywords: Bronchoscopy navigation; Lumen center; Bronchial branches; Navigation path; Videobronchoscopy
|
|
|
Simone Balocco, Francesco Ciompi, Juan Rigla, Xavier Carrillo, J. Mauri, & Petia Radeva. (2017). Intra-Coronary Stent Localization in Intravascular Ultrasound Sequences: A Preliminary Study. In International Workshop on Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting (CVII-STENT). LNCS.
Abstract: An intraluminal coronary stent is a metal scaffold deployed in a stenotic artery during Percutaneous Coronary Intervention (PCI). Intravascular Ultrasound (IVUS) is a catheter-based imaging technique generally used for assessing the correct placement of the stent. All the approaches proposed so far for stent analysis focused only on strut detection, while this paper proposes a novel approach to detect the boundaries and the position of the stent along the pullback. The pipeline of the method requires the identification of the stable frames of the sequence and the reliable detection of stent struts. Using these data, a measure of the likelihood of a frame containing a stent is computed. Then, a robust binary representation of the presence of the stent in the pullback is obtained by applying an iterative and multi-scale approximation of the signal to symbols using the SAX algorithm. Results obtained comparing the automatic results against the manual annotations of two observers on 80 in-vivo IVUS sequences show that the method approaches the inter-observer variability scores.
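The SAX step mentioned above can be sketched in a minimal single-scale form. The iterative multi-scale refinement of the paper is omitted; the segment count, alphabet size, and function name are illustrative.

```python
import numpy as np

# Standard SAX breakpoints that split N(0, 1) into equiprobable
# regions, indexed by alphabet size.
BREAKPOINTS = {3: [-0.43, 0.43], 4: [-0.67, 0.0, 0.67]}

def sax_symbols(signal, n_segments, alphabet="abcd"):
    """Symbolic Aggregate approXimation (SAX): z-normalize the signal,
    reduce it with piecewise aggregate approximation (PAA), then map
    each segment mean to a symbol via Gaussian breakpoints.
    """
    x = np.asarray(signal, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)                 # z-normalization
    paa = [seg.mean() for seg in np.array_split(x, n_segments)]
    cuts = BREAKPOINTS[len(alphabet)]
    return "".join(alphabet[np.searchsorted(cuts, m)] for m in paa)
```

Applied to a per-frame stent-likelihood signal, such a symbolic string makes it straightforward to locate the contiguous run of high symbols that marks the stented segment of the pullback.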
|
|
|
Esmitt Ramirez, Carles Sanchez, Agnes Borras, Marta Diez-Ferrer, Antoni Rosell, & Debora Gil. (2018). Image-Based Bronchial Anatomy Codification for Biopsy Guiding in Video Bronchoscopy. In OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis (Vol. 11041). LNCS.
Abstract: Bronchoscopy examinations allow biopsy of pulmonary nodules with minimum risk for the patient. Even for experienced bronchoscopists, it is difficult to guide the bronchoscope to the most distal lesions and obtain an accurate diagnosis. This paper presents an image-based codification of the bronchial anatomy for bronchoscopy biopsy guiding. The 3D anatomy of each patient is codified as a binary tree with nodes representing bronchial levels and edges labeled using their position on images projecting the 3D anatomy from a set of branching points. The paths from the root to the leaves provide a codification of navigation routes with spatially consistent labels according to the anatomy observed in video bronchoscopy explorations. We evaluate our labeling approach as a guiding system in terms of the number of bronchial levels correctly codified, as well as the number of label-based instructions correctly supplied, using generalized mixed models and computer-generated data. Results obtained for three independent observers prove the consistency and reproducibility of our guiding system. We trust that our codification based on the viewer's projection might be used as a foundation for the navigation process in Virtual Bronchoscopy systems.
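The route codification can be illustrated with a toy binary tree whose edges carry on-image position labels; the node names, the "L"/"R" labels, and the function are hypothetical examples, not the paper's actual data structure.

```python
# Toy codification: each bronchial level is a node and each branching
# edge is labelled by the child's position in the image projection
# ("L" = left, "R" = right). A navigation route is the label string
# read from the trachea (root) down to the target leaf.
tree = {"trachea": {"L": "LMB", "R": "RMB"},
        "RMB": {"L": "RUL", "R": "BronchusIntermedius"}}

def route(tree, root, target, prefix=""):
    """Depth-first search returning the edge-label path root -> target,
    or None if the target is not in the subtree."""
    if root == target:
        return prefix
    for label, child in tree.get(root, {}).items():
        p = route(tree, child, target, prefix + label)
        if p is not None:
            return p
    return None
```

A string such as "RL" then reads as a sequence of turn instructions ("take the right branch, then the left branch"), which is the kind of label-based guidance the evaluation in the abstract counts.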
Keywords: Biopsy guiding; Bronchoscopy; Lung biopsy; Intervention guiding; Airway codification
|
|
|
Santi Puch, Irina Sanchez, Aura Hernandez-Sabate, Gemma Piella, & Vesna Prckovska. (2018). Global Planar Convolutions for Improved Context Aggregation in Brain Tumor Segmentation. In International MICCAI Brainlesion Workshop (Vol. 11384, pp. 393–405). LNCS.
Abstract: In this work, we introduce the Global Planar Convolution (GPC) module as a building block for fully-convolutional networks that aggregates global information and, therefore, enhances the context perception capabilities of segmentation networks in the context of brain tumor segmentation. We implement two baseline architectures (3D UNet and a residual version of 3D UNet, ResUNet) and present a novel architecture based on these two, ContextNet, that includes the proposed Global Planar Convolution module. We show that the addition of such a module eliminates the need to build networks with several representation levels, which tend to be over-parametrized and to showcase slow rates of convergence. Furthermore, we provide a visual demonstration of the behavior of GPC modules via visualization of intermediate representations. We finally participate in the 2018 edition of the BraTS challenge with our best performing models, which are based on ContextNet, and report the evaluation scores on the validation and test sets of the challenge.
Keywords: Brain tumors; 3D fully-convolutional CNN; Magnetic resonance imaging; Global planar convolution
|
|
|
Carlos Martin-Isla, Maryam Asadi-Aghbolaghi, Polyxeni Gkontra, Victor M. Campello, Sergio Escalera, & Karim Lekadir. (2020). Stacked BCDU-net with semantic CMR synthesis: application to Myocardial Pathology Segmentation challenge. In MYOPS challenge and workshop.
|
|
|
Smriti Joshi, Richard Osuala, Carlos Martin-Isla, Victor M. Campello, Carla Sendra-Balcells, Karim Lekadir, et al. (2022). nn-UNet Training on CycleGAN-Translated Images for Cross-modal Domain Adaptation in Biomedical Imaging. In International MICCAI Brainlesion Workshop (Vol. 12963, pp. 540–551). LNCS.
Abstract: In recent years, deep learning models have considerably advanced the performance of segmentation tasks on Brain Magnetic Resonance Imaging (MRI). However, these models show a considerable performance drop when they are evaluated on unseen data from a different distribution. Since annotation is often a hard and costly task requiring expert supervision, it is necessary to develop ways in which existing models can be adapted to unseen domains without any additional labelled information. In this work, we explore one such technique which extends the CycleGAN [2] architecture to generate label-preserving data in the target domain. The synthetic target-domain data is used to train the nn-UNet [3] framework for the task of multi-label segmentation. The experiments are conducted and evaluated on the dataset [1] provided in the ‘Cross-Modality Domain Adaptation for Medical Image Segmentation’ challenge [23] for segmentation of vestibular schwannoma (VS) tumour and cochlea on contrast-enhanced (ceT1) and high-resolution (hrT2) MRI scans. In the proposed approach, our model obtains Dice scores (DSC) of 0.73 and 0.49 for tumour and cochlea, respectively, on the validation set of the dataset. This indicates the applicability of the proposed technique to real-world problems where data may be obtained by different acquisition protocols, as in [1], where hrT2 images are a more reliable, safer, and lower-cost alternative to ceT1.
Keywords: Domain adaptation; Vestibular schwannoma (VS); Deep learning; nn-UNet; CycleGAN
|
|
|
Yael Tudela, Ana Garcia Rodriguez, Gloria Fernandez Esparrach, & Jorge Bernal. (2023). Towards Fine-Grained Polyp Segmentation and Classification. In Workshop on Clinical Image-Based Procedures (Vol. 14242, pp. 32–42). LNCS.
Abstract: Colorectal cancer is one of the main causes of cancer death worldwide. Colonoscopy is the gold-standard screening tool, as it allows lesion detection and removal during the same procedure. During the last decades, several efforts have been made to develop CAD systems to assist clinicians in lesion detection and classification. Regarding the latter, and in order to be used in the exploration room as part of resect-and-discard or leave-in-situ strategies, these systems must correctly identify all the different lesion types. This is a challenging task, as the data used to train these systems presents high inter-class similarity, strong class imbalance, and low representation of clinically relevant histology classes such as sessile serrated adenomas.
In this paper, a new polyp segmentation and classification method, Swin-Expand, is introduced. Based on Swin-Transformer, it uses a simple and lightweight decoder. The performance of this method has been assessed on a novel dataset comprising 1126 high-definition images representing the three main histological classes. Results show a clear improvement in both segmentation and classification performance, also achieving competitive results when tested on public datasets. These results confirm that both the method and the data are important to obtain more accurate polyp representations.
Keywords: Medical image segmentation; Colorectal Cancer; Vision Transformer; Classification
|
|