|
Frederic Sampedro, Anna Domenech, & Sergio Escalera. (2014). Static and dynamic computational cancer spread quantification in whole body FDG-PET/CT scans. JMIHI - Journal of Medical Imaging and Health Informatics, 4(6), 825–831.
Abstract: In this work we address the computational cancer spread quantification scenario in whole body FDG-PET/CT scans. At the static level, this setting can be modeled as a clustering problem on the set of 3D connected components of the whole body PET tumoral segmentation mask carried out by nuclear medicine physicians. At the dynamic level, an ad-hoc algorithm is proposed in order to quantify the cancer spread time evolution which, when combined with other existing indicators, gives rise to the metabolic tumor volume-aggressiveness-spread time evolution chart, a novel tool that we claim would prove useful in nuclear medicine and oncological clinical or research scenarios. Good performance results of the proposed methodologies, both at the clinical and technological level, are shown using a dataset of 48 segmented whole body FDG-PET/CT scans.
Keywords: CANCER SPREAD; COMPUTER AIDED DIAGNOSIS; MEDICAL IMAGING; TUMOR QUANTIFICATION
|
|
|
Q. Bao, Marçal Rusiñol, M.Coustaty, Muhammad Muzzamil Luqman, C.D. Tran, & Jean-Marc Ogier. (2016). Delaunay triangulation-based features for Camera-based document image retrieval system. In 12th IAPR Workshop on Document Analysis Systems (pp. 1–6).
Abstract: In this paper, we propose a new feature vector, named DElaunay TRIangulation-based Features (DETRIF), for real-time camera-based document image retrieval. DETRIF is computed from the geometrical constraints of each pair of adjacent triangles in the Delaunay triangulation constructed from the centroids of connected components. In addition, we employ a hashing-based indexing system in order to evaluate the performance of DETRIF and to compare it with other systems such as LLAH and SRIF. The experimentation is carried out on two datasets comprising 400 heterogeneous-content complex linguistic map images (large size, 9800 × 11768 pixels) and 700 textual document images.
Keywords: Camera-based Document Image Retrieval; Delaunay Triangulation; Feature descriptors; Indexing
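The core construction named in the DETRIF abstract can be illustrated in a few lines. The sketch below is my own illustration of the general idea, not the authors' code: it triangulates a set of stand-in component centroids and enumerates the adjacent-triangle pairs from which geometric features would be derived.

```python
# Hypothetical sketch of the DETRIF building block: Delaunay triangulation
# over connected-component centroids, then pairs of adjacent triangles.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
centroids = rng.random((20, 2))  # stand-in for connected-component centroids

tri = Delaunay(centroids)

# tri.neighbors[i][k] is the triangle opposite vertex k of triangle i
# (-1 where the edge lies on the convex hull), so adjacent pairs are:
adjacent_pairs = {
    tuple(sorted((i, n)))
    for i, nbrs in enumerate(tri.neighbors)
    for n in nbrs if n != -1
}
print(len(tri.simplices), "triangles,", len(adjacent_pairs), "adjacent pairs")
```

Each adjacent pair shares an edge; angle and length ratios computed on such pairs are the kind of geometric constraints a descriptor like DETRIF can build on.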
|
|
|
Angel Sappa, Fadi Dornaika, Daniel Ponsa, David Geronimo, & Antonio Lopez. (2008). An Efficient Approach to Onboard Stereo Vision System Pose Estimation. TITS - IEEE Transactions on Intelligent Transportation Systems, 9(3), 476–490.
Abstract: This paper presents an efficient technique for estimating the pose of an onboard stereo vision system relative to the environment’s dominant surface area, which is supposed to be the road surface. Unlike previous approaches, it can be used either for urban or highway scenarios since it is not based on a specific visual traffic feature extraction but on 3-D raw data points. The whole process is performed in the Euclidean space and consists of two stages. Initially, a compact 2-D representation of the original 3-D data points is computed. Then, a RANdom SAmple Consensus (RANSAC) based least-squares approach is used to fit a plane to the road. Fast RANSAC fitting is obtained by selecting points according to a probability function that takes into account the density of points at a given depth. Finally, stereo camera height and pitch angle are computed relative to the fitted road plane. The proposed technique is intended to be used in driver-assistance systems for applications such as vehicle or pedestrian detection. Experimental results on urban environments, which are the most challenging scenarios (i.e., flat/uphill/downhill driving, speed bumps, and car’s accelerations), are presented. These results are validated with manually annotated ground truth. Additionally, comparisons with previous works are presented to show the improvements in the central processing unit processing time, as well as in the accuracy of the obtained results.
Keywords: Camera extrinsic parameter estimation, ground plane estimation, onboard stereo vision system
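The RANSAC-plus-least-squares plane fit named in this abstract is a standard pattern; the sketch below is a minimal illustration under synthetic data, not the authors' implementation (the variable names and the 1.2 m camera height are assumptions for the example).

```python
# Minimal RANSAC plane fit to 3D points, followed by a least-squares refit
# on the inlier set — an illustration of the generic technique, not the
# density-weighted sampling variant described in the paper.
import numpy as np

def ransac_plane(points, n_iters=200, thresh=0.05, seed=42):
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        dist = np.abs(points @ n + d)   # point-to-plane distances
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refit z = a*x + b*y + c on the consensus set.
    P = points[best_inliers]
    A = np.c_[P[:, :2], np.ones(len(P))]
    a, b, c = np.linalg.lstsq(A, P[:, 2], rcond=None)[0]
    return a, b, c, best_inliers

# Synthetic road plane 1.2 units below the camera, plus gross outliers.
rng = np.random.default_rng(0)
road = np.c_[rng.uniform(-5, 5, (500, 2)), 1.2 + rng.normal(0, 0.01, 500)]
outliers = rng.uniform(-5, 5, (50, 3))
a, b, c, inl = ransac_plane(np.vstack([road, outliers]))
```

With the fitted plane in hand, the camera height is the plane offset and the pitch angle follows from the plane normal, which is how the paper derives its two pose parameters.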
|
|
|
Daniela Rato, Miguel Oliveira, Vitor Santos, Manuel Gomes, & Angel Sappa. (2022). A sensor-to-pattern calibration framework for multi-modal industrial collaborative cells. JMANUFSYST - Journal of Manufacturing Systems, 64, 497–507.
Abstract: Collaborative robotic industrial cells are workspaces where robots collaborate with human operators. In this context, safety is paramount, and for that a complete perception of the space where the collaborative robot operates is necessary. To ensure this, collaborative cells are equipped with a large set of sensors of multiple modalities, covering the entire work volume. However, the fusion of information from all these sensors requires an accurate extrinsic calibration. The calibration of such complex systems is challenging, due to the number of sensors and modalities, and also due to the small overlapping fields of view between the sensors, which are positioned to capture different viewpoints of the cell. This paper proposes a sensor-to-pattern methodology that can calibrate a complex system such as a collaborative cell in a single optimization procedure. Our methodology can tackle RGB and depth cameras, as well as LiDARs. Results show that our methodology is able to accurately calibrate a collaborative cell containing three RGB cameras, a depth camera and three 3D LiDARs.
Keywords: Calibration; Collaborative cell; Multi-modal; Multi-sensor
|
|
|
Baiyu Chen, Sergio Escalera, Isabelle Guyon, Victor Ponce, N. Shah, & Marc Oliu. (2016). Overcoming Calibration Problems in Pattern Labeling with Pairwise Ratings: Application to Personality Traits. In 14th European Conference on Computer Vision Workshops.
Abstract: We address the problem of calibration of workers whose task is to label patterns with continuous variables, which arises for instance in labeling images or videos of humans with continuous traits. Worker bias is particularly difficult to evaluate and correct when many workers contribute just a few labels, a situation arising typically when labeling is crowd-sourced. In the scenario of labeling short videos of people facing a camera with personality traits, we evaluate the feasibility of the pairwise ranking method to alleviate bias problems. Workers are exposed to pairs of videos at a time and must order them by preference. The variable levels are reconstructed by fitting a Bradley-Terry-Luce model with maximum likelihood. This method may, at first sight, seem prohibitively expensive because for N videos, p = N(N-1)/2 pairs must potentially be processed by workers rather than N videos. However, by performing extensive simulations, we determine an empirical law for the scaling of the number of pairs needed as a function of the number of videos in order to achieve a given accuracy of score reconstruction, and show that the pairwise method is affordable. We apply the method to the labeling of a large scale dataset of 10,000 videos used in the ChaLearn Apparent Personality Trait challenge.
Keywords: Calibration of labels; Label bias; Ordinal labeling; Variance Models; Bradley-Terry-Luce model; Continuous labels; Regression; Personality traits; Crowd-sourced labels
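The Bradley-Terry-Luce reconstruction named in this abstract has a classic maximum-likelihood fitting procedure (the minorization-maximization update). The sketch below illustrates that generic method on simulated comparisons; it is my own example, not the paper's code, and the score vector and comparison counts are assumptions.

```python
# BTL model: P(i preferred over j) = s_i / (s_i + s_j).
# Classic MM update for the maximum-likelihood scores.
import numpy as np

def fit_btl(n_items, wins, n_iters=200):
    """wins[i, j] = number of times item i was preferred over item j."""
    s = np.ones(n_items)
    for _ in range(n_iters):
        for i in range(n_items):
            num = wins[i].sum()  # total wins of item i
            den = sum((wins[i, j] + wins[j, i]) / (s[i] + s[j])
                      for j in range(n_items) if j != i)
            if den > 0:
                s[i] = num / den
        s /= s.sum()  # BTL scores are defined only up to a scale factor
    return s

# Simulate pairwise preferences from known scores, then recover the ranking.
rng = np.random.default_rng(1)
true = np.array([0.5, 0.3, 0.15, 0.05])
wins = np.zeros((4, 4))
for _ in range(2000):
    i, j = rng.choice(4, 2, replace=False)
    if rng.random() < true[i] / (true[i] + true[j]):
        wins[i, j] += 1
    else:
        wins[j, i] += 1
est = fit_btl(4, wins)
```

Because every worker judges the same pair on the same scale ("which of the two is higher?"), per-worker offset biases cancel, which is the calibration advantage the paper evaluates.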
|
|
|
Debora Gil, Rosa Maria Ortiz, Carles Sanchez, & Antoni Rosell. (2018). Objective endoscopic measurements of central airway stenosis. A pilot study. RES - Respiration, 95, 63–69.
Abstract: Endoscopic estimation of the degree of stenosis in central airway obstruction is subjective and highly variable. Objective: To determine the benefits of using SENSA (System for Endoscopic Stenosis Assessment), an image-based computational software, for obtaining objective stenosis index (SI) measurements among a group of expert bronchoscopists and general pulmonologists. Methods: A total of 7 expert bronchoscopists and 7 general pulmonologists were enrolled to validate SENSA usage. The SI obtained by the physicians and by SENSA were compared with a reference SI to set their precision in SI computation. We used SENSA to efficiently obtain this reference SI in 11 selected cases of benign stenosis. A Web platform with three user-friendly microtasks was designed to gather the data. The users had to visually estimate the SI from videos with and without contours of the normal and the obstructed area provided by SENSA. The users were able to modify the SENSA contours to define the reference SI using morphometric bronchoscopy. Results: Visual SI estimation accuracy was associated with neither bronchoscopic experience (p = 0.71) nor the contours of the normal and the obstructed area provided by the system (p = 0.13). The precision of the SI by SENSA was 97.7% (95% CI: 92.4-103.7), which is significantly better than the precision of the SI by visual estimation (p < 0.001), with an improvement by at least 15%. Conclusion: SENSA provides objective SI measurements with a precision of up to 99.5%, which can be calculated from any bronchoscope using an affordable scalable interface. Providing normal and obstructed contours on bronchoscopic videos does not improve physicians' visual estimation of the SI.
Keywords: Bronchoscopy; Tracheal stenosis; Airway stenosis; Computer-assisted analysis
|
|
|
Carles Sanchez, F. Javier Sanchez, Antoni Rosell, & Debora Gil. (2012). An illumination model of the trachea appearance in videobronchoscopy images. In Image Analysis and Recognition (Vol. 7325, pp. 313–320). LNCS. Springer Berlin Heidelberg.
Abstract: Videobronchoscopy is a medical imaging technique that allows interactive navigation inside the respiratory pathways. This imaging modality provides realistic images and allows non-invasive, minimal intervention procedures. Tracheal procedures are routine interventions that require assessment of the percentage of obstructed pathway for injury (stenosis) detection. Visual assessment in videobronchoscopic sequences requires high expertise in tracheal anatomy and is prone to human error.
This paper introduces an automatic method for estimating the percentage of tracheal stenosis in videobronchoscopic images. We look for tracheal rings, whose deformation determines the degree of obstruction. For ring extraction, we present a ring detector based on an illumination and appearance model. This model allows us to parametrise the ring detection. Finally, we can infer optimal estimation parameters for any video resolution.
Keywords: Bronchoscopy, tracheal ring, stenosis assessment, trachea appearance model, segmentation
|
|
|
Carles Sanchez. (2011). Tracheal ring detection in bronchoscopy (Debora Gil & F. Javier Sanchez, Eds.) (Vol. 168). Master's thesis.
Abstract: Endoscopy is the process in which a camera is introduced inside the human body. Given that endoscopy provides realistic images (in contrast to other modalities) and allows non-invasive, minimal intervention procedures (which can aid in diagnosis and surgical interventions), its use has spread during the last decades.
In this project we focus on bronchoscopic procedures, during which the camera is introduced through the trachea in order to diagnose the patient. The diagnostic interventions are focused on the degree of stenosis (reduction in tracheal area), prosthesis, or early diagnosis of tumors. In the first case, assessment of the luminal area and calculation of the diameters of the tracheal rings are required. A main limitation is that the whole process is done by hand, which means that the doctor takes all the measurements and decisions just by looking at the screen. As far as we know, there is no computational framework for helping doctors in the diagnosis.
This project consists of analysing bronchoscopic videos in order to extract useful information for diagnosing the degree of stenosis. In particular, we focus on the segmentation of tracheal rings. As a result of this project, several strategies for detecting tracheal rings have been implemented in order to compare their performance.
Keywords: Bronchoscopy, tracheal ring, segmentation
|
|
|
Carles Sanchez, Debora Gil, Jorge Bernal, F. Javier Sanchez, Marta Diez-Ferrer, & Antoni Rosell. (2016). Navigation Path Retrieval from Videobronchoscopy using Bronchial Branches. In 19th International Conference on Medical Image Computing and Computer Assisted Intervention Workshops (Vol. 9401, pp. 62–70). LNCS.
Abstract: Bronchoscopy biopsy can be used to diagnose lung cancer without risking complications of other interventions like transthoracic needle aspiration. During bronchoscopy, the clinician has to navigate through the bronchial tree to the target lesion. A main drawback is the difficulty to check whether the exploration is following the correct path. The usual guidance using fluoroscopy implies repeated radiation of the clinician, while alternative systems (like electromagnetic navigation) require specific equipment that increases intervention costs. We propose to compute the navigated path using anatomical landmarks extracted from the sole analysis of videobronchoscopy images. Such landmarks allow matching the current exploration to the path previously planned on a CT, indicating to the clinician whether the planning is being correctly followed or not. We present a feasibility study of our landmark-based CT-video matching using bronchoscopic videos simulated on a virtual bronchoscopy interactive interface.
Keywords: Bronchoscopy navigation; Lumen center; Bronchial branches; Navigation path; Videobronchoscopy
|
|
|
Juan Jose Rubio, Takahiro Kashiwa, Teera Laiteerapong, Wenlong Deng, Kohei Nagai, Sergio Escalera, et al. (2019). Multi-class structural damage segmentation using fully convolutional networks. COMPUTIND - Computers in Industry, 112, 103121.
Abstract: Structural Health Monitoring (SHM) has benefited from computer vision and, more recently, Deep Learning approaches to accurately estimate the state of deterioration of infrastructure. In our work, we test Fully Convolutional Networks (FCNs) with a dataset of deck areas of bridges for damage segmentation. We create a dataset for delamination and rebar exposure that has been collected from inspection records of bridges in Niigata Prefecture, Japan. The dataset consists of 734 images with three labels per image, which makes it the largest dataset of images of bridge deck damage. This data allows us to estimate the performance of our method based on regions of agreement, which emulates the uncertainty of in-field inspections. We demonstrate the practicality of FCNs to perform automated semantic segmentation of surface damage. Our model achieves a mean accuracy of 89.7% for delamination and 78.4% for rebar exposure, and a weighted F1 score of 81.9%.
Keywords: Bridge damage detection; Deep learning; Semantic segmentation
|
|
|
Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, et al. (2018). Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge.
Abstract: Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multiparametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e. 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in preoperative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients who underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
Keywords: BraTS; challenge; brain; tumor; segmentation; machine learning; glioma; glioblastoma; radiomics; survival; progression; RECIST
|
|
|
Santi Puch, Irina Sanchez, Aura Hernandez-Sabate, Gemma Piella, & Vesna Prckovska. (2018). Global Planar Convolutions for Improved Context Aggregation in Brain Tumor Segmentation. In International MICCAI Brainlesion Workshop (Vol. 11384, pp. 393–405). LNCS.
Abstract: In this work, we introduce the Global Planar Convolution module as a building block for fully-convolutional networks that aggregates global information and, therefore, enhances the context perception capabilities of segmentation networks in the context of brain tumor segmentation. We implement two baseline architectures (3D UNet and a residual version of 3D UNet, ResUNet) and present a novel architecture based on these two architectures, ContextNet, that includes the proposed Global Planar Convolution module. We show that the addition of such a module eliminates the need to build networks with several representation levels, which tend to be over-parameterized and to exhibit slow convergence rates. Furthermore, we provide a visual demonstration of the behavior of GPC modules via visualization of intermediate representations. We finally participate in the 2018 edition of the BraTS challenge with our best performing models, which are based on ContextNet, and report the evaluation scores on the validation and the test sets of the challenge.
Keywords: Brain tumors; 3D fully-convolutional CNN; Magnetic resonance imaging; Global planar convolution
|
|
|
Laura Igual, Joan Carles Soliva, Antonio Hernandez, Sergio Escalera, Xavier Jimenez, Oscar Vilarroya, et al. (2011). A fully-automatic caudate nucleus segmentation of brain MRI: Application in volumetric analysis of pediatric attention-deficit/hyperactivity disorder. BEO - BioMedical Engineering Online, 10(105), 1–23.
Abstract: Background
Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to suitably localize caudate structure. However, the atlas prior information may not represent the structure of interest correctly. It may therefore be useful to introduce a more flexible technique for accurate segmentations.
Method
We present CaudateCut: a new fully-automatic method of segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new energy function data and boundary potentials. In particular, we exploit information concerning the intensity and geometry, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure.
Results
We apply the novel CaudateCut method to the segmentation of the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as to a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared with state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion
CaudateCut generates segmentation results that are comparable to gold-standard segmentations and which are reliable in the analysis of differentiating neuroanatomical abnormalities between healthy controls and pediatric ADHD.
Keywords: Brain caudate nucleus; segmentation; MRI; atlas-based strategy; Graph Cut framework
|
|
|
Arash Akbarinia, & C. Alejandro Parraga. (2018). Feedback and Surround Modulated Boundary Detection. IJCV - International Journal of Computer Vision, 126(12), 1367–1380.
Abstract: Edges are key components of any visual scene to the extent that we can recognise objects merely by their silhouettes. The human visual system captures edge information through neurons in the visual cortex that are sensitive to both intensity discontinuities and particular orientations. The “classical approach” assumes that these cells are only responsive to the stimulus present within their receptive fields; however, recent studies demonstrate that surrounding regions and inter-areal feedback connections influence their responses significantly. In this work we propose a biologically-inspired edge detection model in which orientation selective neurons are represented through the first derivative of a Gaussian function, resembling double-opponent cells in the primary visual cortex (V1). In our model we account for four kinds of receptive field surround, i.e. full, far, iso- and orthogonal-orientation, whose contributions are contrast-dependent. The output signal from V1 is pooled in its perpendicular direction by larger V2 neurons employing a contrast-variant centre-surround kernel. We further introduce a feedback connection from higher-level visual areas to the lower ones. The results of our model on three benchmark datasets show a substantial improvement over the current non-learning and biologically-inspired state-of-the-art algorithms while being competitive with learning-based methods.
Keywords: Boundary detection; Surround modulation; Biologically-inspired vision
|
|
|
Esmitt Ramirez, Carles Sanchez, Agnes Borras, Marta Diez-Ferrer, Antoni Rosell, & Debora Gil. (2018). Image-Based Bronchial Anatomy Codification for Biopsy Guiding in Video Bronchoscopy. In OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis (Vol. 11041). LNCS.
Abstract: Bronchoscopy examinations allow biopsy of pulmonary nodules with minimum risk for the patient. Even for experienced bronchoscopists, it is difficult to guide the bronchoscope to the most distal lesions and obtain an accurate diagnosis. This paper presents an image-based codification of the bronchial anatomy for bronchoscopy biopsy guiding. The 3D anatomy of each patient is codified as a binary tree with nodes representing bronchial levels and edges labeled using their position on images projecting the 3D anatomy from a set of branching points. The paths from the root to the leaves provide a codification of navigation routes with spatially consistent labels according to the anatomy observed in video bronchoscopy explorations. We evaluate our labeling approach as a guiding system in terms of the number of bronchial levels correctly codified, as well as the number of label-based instructions correctly supplied, using generalized mixed models and computer-generated data. Results obtained for three independent observers prove the consistency and reproducibility of our guiding system. We trust that our codification based on the viewer’s projection might be used as a foundation for the navigation process in Virtual Bronchoscopy systems.
Keywords: Biopsy guiding; Bronchoscopy; Lung biopsy; Intervention guiding; Airway codification
|
|