Fosca De Iorio, Carolina Malagelada, Fernando Azpiroz, M. Maluenda, C. Violanti, Laura Igual, et al. (2009). Intestinal motor activity, endoluminal motion and transit. NEUMOT - Neurogastroenterology & Motility, 21(12), 1264–e119.
Abstract: A programme for the evaluation of intestinal motility has recently been developed based on endoluminal image analysis using computer vision methodology and machine learning techniques. Our aim was to determine the effect of intestinal muscle inhibition on wall motion, dynamics of luminal content and transit in the small bowel. Fourteen healthy subjects ingested the endoscopic capsule (PillCam, Given Imaging) in fasting conditions. Seven of them received glucagon (4.8 µg/kg bolus followed by a 9.6 µg/kg/h infusion for 1 h) and in the other seven, fasting activity was recorded, as controls. This dose of glucagon has previously been shown to inhibit both tonic and phasic intestinal motor activity. Endoluminal image and displacement were analyzed by means of a computer vision programme specifically developed for the evaluation of muscular activity (contractile and non-contractile patterns), intestinal contents, endoluminal motion and transit. Thirty-minute periods before, during and after glucagon infusion were analyzed and compared with equivalent periods in controls. No differences were found in the parameters measured during the baseline (pretest) periods when comparing glucagon and control experiments. During glucagon infusion, there was a significant reduction in contractile activity (0.2 ± 0.1 vs 4.2 ± 0.9 luminal closures per min, P < 0.05; 0.4 ± 0.1 vs 3.4 ± 1.2% of images with radial wrinkles, P < 0.05) and a significant reduction of endoluminal motion (82 ± 9 vs 21 ± 10% of static images, P < 0.05). Endoluminal image analysis, by means of computer vision and machine learning techniques, can reliably detect reduced intestinal muscle activity and motion.
|
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2010). Re-coding ECOCs without retraining. PRL - Pattern Recognition Letters, 31(7), 555–562.
Abstract: A standard way to deal with multi-class categorization problems is the combination of binary classifiers in a pairwise voting procedure. Recently, this classical approach has been formalized in the Error-Correcting Output Codes (ECOC) framework. Within the ECOC framework, the one-versus-one coding has been shown to achieve higher performance than the other coding designs. The binary problems trained in the one-versus-one strategy are significantly smaller than in the other designs, and usually easier to learn given the smaller overlap between classes. However, a high percentage of the coding-matrix positions are coded as zero, which implies a high degree of sparseness and means that those positions do not encode meta-class membership information. In this paper, we show that, using the training data, the one-versus-one coding matrix can be redefined in a problem-dependent way and without re-training, so that the newly coded information helps the system increase its generalization capability. Moreover, the new re-coding strategy is generalized so that it can be applied over any binary code. The results over several UCI Machine Learning repository data sets and two real multi-class problems show that performance improvements can be obtained by re-coding the classical one-versus-one and sparse random designs compared to different state-of-the-art ECOC configurations.
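As a concrete illustration of the one-versus-one ECOC coding discussed in this abstract, the sketch below builds the ternary coding matrix for an N-class problem and performs a Hamming-style decoding that ignores zero positions. It is a generic ECOC illustration, not the paper's re-coding strategy; the class count and the decoding rule are assumptions.

```python
# Minimal one-versus-one ECOC coding/decoding sketch (generic illustration,
# not the paper's re-coding strategy).  Zero entries mean "class not used by
# this dichotomizer" and carry no meta-class information, which is exactly
# the sparsity the paper proposes to re-code.
import itertools
import numpy as np

def one_vs_one_matrix(n_classes):
    """Ternary coding matrix M of shape (n_classes, n_dichotomizers) with
    entries in {-1, 0, +1}; one column per class pair (i, j)."""
    pairs = list(itertools.combinations(range(n_classes), 2))
    M = np.zeros((n_classes, len(pairs)), dtype=int)
    for col, (i, j) in enumerate(pairs):
        M[i, col], M[j, col] = +1, -1
    return M

def decode(outputs, M):
    """Assign the class whose codeword is closest to the vector of binary
    outputs, counting only non-zero (coded) positions."""
    distances = [np.sum((row != 0) & (row != outputs)) for row in M]
    return int(np.argmin(distances))

M = one_vs_one_matrix(4)                              # 4 classes -> 6 pairwise dichotomizers
print(M)
print(decode(np.array([+1, +1, +1, -1, 0, 0]), M))    # -> 0
```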
|
Jaume Garcia, Debora Gil, Luis Badiella, Aura Hernandez-Sabate, Francesc Carreras, Sandra Pujades, et al. (2010). A Normalized Framework for the Design of Feature Spaces Assessing the Left Ventricular Function. TMI - IEEE Transactions on Medical Imaging, 29(3), 733–745.
Abstract: A thorough description of left ventricular function requires combining complementary regional scores. A main limitation is the lack of multiparametric normality models oriented to the assessment of regional wall motion abnormalities (RWMA). This paper covers two main topics involved in RWMA assessment. We propose a general framework allowing the fusion and comparison across subjects of different regional scores. Our framework is used to explore which combination of regional scores (including 2-D motion and strains) is best suited for RWMA detection. Our statistical analysis indicates that, for a proper (within interobserver variability) identification of RWMA, models should consider motion and extreme strains.
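The abstract describes fusing heterogeneous regional scores into a common normality model. The sketch below illustrates one standard realization of that idea (a multivariate Gaussian normality model with a Mahalanobis-distance abnormality score per region); it is a hedged stand-in, not the paper's actual framework, and the scores and values are invented.

```python
# Hedged sketch of a multiparametric "normality model" for regional scores
# (e.g. 2-D motion and strain per myocardial segment): fit a Gaussian on
# healthy subjects and flag abnormal regions by Mahalanobis distance.
# Illustrative only -- not the paper's statistical framework.
import numpy as np

rng = np.random.default_rng(0)
healthy = rng.normal(size=(200, 2))          # rows: healthy regions, cols: [motion, strain]
mu = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def abnormality(scores):
    """Mahalanobis distance of a region's score vector to the normality model."""
    d = scores - mu
    return float(np.sqrt(d @ cov_inv @ d))

print(abnormality(np.array([0.1, -0.2])))    # near-normal region -> small distance
print(abnormality(np.array([4.0, -3.5])))    # hypokinetic-like region -> large distance
```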
|
Luca Ginanni Corradini, Simone Balocco, Luciano Maresca, Silvio Vitale, & Matteo Stefanini. (2023). Anatomical Modifications After Stent Implantation: A Comparative Analysis Between CGuard, Wallstent, and Roadsaver Carotid Stents. Journal of Endovascular Therapy, 30(1), 18–24.
Abstract:
Purpose:
Carotid revascularization can be associated with modifications of the vascular geometry, which may lead to complications. The changes in vessel angulation before and after carotid WallStent (WS) implantation are compared against those induced by 2 new dual-layer devices, CGuard (CG) and RoadSaver (RS).
Materials and Methods:
The study prospectively recruited 217 consecutive patients (112 CG, 73 WS, and 32 RS, respectively). Angiography projections were explored, and the one showing the greatest arterial angle was selected as the basal view. After stent implantation, a stent control angiography was performed, again selecting the projection with the maximal angle. The same procedure was followed for all 3 stent types to guarantee comparable conditions. The angulation changes in the stented segments were quantified from both angiographies. The statistical analysis quantitatively compared the pre- and post-angles for the 3 stent types. The results are qualitatively illustrated using boxplots. Finally, the relation between pre- and post-angle measurements was analyzed using linear regression.
Results:
For CG, no statistically significant difference in the axial vessel geometry between the basal and postprocedural angles was found. For WS and RS, a statistically significant difference was found between pre- and post-angles. The regression analysis shows that CG induces smaller changes from the original curvature than WS and RS.
Conclusion:
Based on our results, CG causes smaller changes to the basal morphology than the WS and RS stents. Hence, CG better respects the native vessel anatomy than the other stents.
Level of Evidence: Level 4, Case Series.
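The statistical workflow described above (comparing pre- and post-implantation angles per stent group and regressing post-angle on pre-angle) can be sketched as follows on synthetic data; the choice of a paired Wilcoxon test is an assumption, since the abstract does not name the exact test.

```python
# Sketch of the pre-/post-angle comparison and regression described above,
# on synthetic data.  The paired Wilcoxon test is an assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(35, 10, size=73)                 # basal angles (degrees), e.g. one stent group
post = pre - rng.normal(8, 4, size=pre.size)      # post-stent angles: straightened vessel

w, p = stats.wilcoxon(pre, post)                  # paired comparison, pre vs. post
slope, intercept, r, p_reg, se = stats.linregress(pre, post)

print(f"Wilcoxon p = {p:.3g}")
print(f"post ~ {intercept:.1f} + {slope:.2f} * pre  (r = {r:.2f})")
```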
|
Marta Diez-Ferrer, Debora Gil, Cristian Tebe, & Carles Sanchez. (2018). Positive Airway Pressure to Enhance Computed Tomography Imaging for Airway Segmentation for Virtual Bronchoscopic Navigation. RES - Respiration, 96(6), 525–534.
Abstract:
RATIONALE:
Virtual bronchoscopic navigation (VBN) guidance to peripheral pulmonary lesions is often limited by insufficient segmentation of the peripheral airways.
OBJECTIVES:
To test the effect of applying positive airway pressure (PAP) during CT acquisition to improve segmentation, particularly at end-expiration.
METHODS:
CT acquisitions in inspiration and expiration with 4 PAP protocols were recorded prospectively and compared to baseline inspiratory acquisitions in 20 patients. The 4 protocols explored differences between devices (flow vs. turbine), exposures (within seconds vs. 15 min) and pressure levels (10 vs. 14 cmH2O). Segmentation quality was evaluated with the number of airways and the number of endpoints reached. A generalized mixed-effects model explored the estimated effect of each protocol.
MEASUREMENTS AND MAIN RESULTS:
Patient characteristics and lung function did not significantly differ between protocols. Compared to baseline inspiratory acquisitions, expiratory acquisitions after 15 min of 14 cmH2O PAP segmented 1.63-fold more airways (95% CI 1.07-2.48; p = 0.018) and reached 1.34-fold more endpoints (95% CI 1.08-1.66; p = 0.004). Inspiratory acquisitions performed immediately under 10 cmH2O PAP reached 1.20-fold (95% CI 1.09-1.33; p < 0.001) more endpoints; after 15 min the increase was 1.14-fold (95% CI 1.05-1.24; p < 0.001).
CONCLUSIONS:
CT acquisitions with PAP segment more airways and reach more endpoints than baseline inspiratory acquisitions. The improvement is particularly evident at end-expiration after 15 min of 14 cmH2O PAP. Further studies must confirm that the improvement increases diagnostic yield when using VBN to evaluate peripheral pulmonary lesions.
Keywords: Multidetector computed tomography; Bronchoscopy; Continuous positive airway pressure; Image enhancement; Virtual bronchoscopic navigation
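A hedged sketch of the mixed-effects analysis described above: each patient contributes paired acquisitions under several protocols, and a random intercept per patient accounts for the repeated measures. Modeling log airway counts with a linear mixed model (and exponentiating coefficients to read them as fold changes) is an assumption standing in for the study's exact model; the column and protocol names are invented.

```python
# Hedged sketch of the mixed-effects analysis of airway counts: random
# intercept per patient, log counts so that exponentiated coefficients read
# as fold changes vs. the baseline inspiratory acquisition.  Column and
# protocol names are invented; this is not the study's actual model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
patients = np.repeat(np.arange(20), 2)
protocol = np.tile(["baseline", "pap14_exp_15min"], 20)
counts = np.where(protocol == "baseline",
                  rng.poisson(60, size=40),
                  rng.poisson(95, size=40))            # ~1.6-fold more airways
df = pd.DataFrame({"patient": patients, "protocol": protocol,
                   "log_count": np.log(counts)})

model = smf.mixedlm("log_count ~ C(protocol, Treatment('baseline'))",
                    df, groups=df["patient"]).fit()
fold = np.exp(model.params.filter(like="pap14"))       # back-transform to a fold change
print(model.summary())
print("estimated fold change:", float(fold.iloc[0]))
```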
|
Jorge Bernal, Aymeric Histace, Marc Masana, Quentin Angermann, Cristina Sanchez Montes, Cristina Rodriguez de Miguel, et al. (2019). GTCreator: a flexible annotation tool for image-based datasets. IJCAR - International Journal of Computer Assisted Radiology and Surgery, 14(2), 191–201.
Abstract: Purpose: Methodology evaluation for decision support systems for health is a time-consuming task. To assess the performance of polyp detection methods in colonoscopy videos, clinicians have to deal with the annotation of thousands of images. Currently existing tools could be improved in terms of flexibility and ease of use. Methods: We introduce GTCreator, a flexible annotation tool for providing image and text annotations to image-based datasets. It keeps the main basic functionalities of other similar tools while extending other capabilities, such as allowing multiple annotators to work simultaneously on the same task, enhanced dataset browsing, and easy annotation transfer, aiming to speed up annotation processes in large datasets. Results: The comparison with other similar tools shows that GTCreator allows fast and precise annotation of image datasets, being the only one which offers full annotation editing and browsing capabilities. Conclusions: Our proposed annotation tool has proven to be efficient for large image dataset annotation, as well as showing potential for use in other stages of method evaluation such as experimental setup or results analysis.
Keywords: Annotation tool; Validation Framework; Benchmark; Colonoscopy; Evaluation
|
O. Fors, J. Nuñez, Xavier Otazu, A. Prades, & Robert D. Cardinal. (2010). Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques. SENS - Sensors, 10(3), 1743–1752.
Abstract: In this paper we show how image deconvolution techniques can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.
Keywords: image processing; image deconvolution; faint stars; space debris; wavelet transform
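As a stand-in for the deconvolution technique discussed in this abstract (the paper's own algorithm is wavelet-based, per the keywords), the sketch below applies a generic Richardson–Lucy deconvolution with an assumed Gaussian point-spread function to a synthetic star field; the PSF shape and iteration count are assumptions.

```python
# Generic Richardson-Lucy deconvolution sketch on a synthetic star field.
# Stand-in for the paper's wavelet-based method; the Gaussian PSF and the
# iteration count are assumptions.
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

rng = np.random.default_rng(3)
sky = np.zeros((128, 128))
stars = rng.integers(10, 118, size=(30, 2))
sky[stars[:, 0], stars[:, 1]] = rng.uniform(0.2, 1.0, size=30)   # faint point sources

psf = gaussian_psf()
blurred = fftconvolve(sky, psf, mode="same")
observed = blurred + rng.normal(scale=0.01, size=sky.shape)      # detector noise

restored = richardson_lucy(np.clip(observed, 0, None), psf, 30)  # 30 RL iterations
print("peak before:", observed.max(), "peak after:", restored.max())
```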
|
Jose Elias Yauri, M. Lagos, H. Vega-Huerta, P. de-la-Cruz, G.L.E Maquen-Niño, & E. Condor-Tinoco. (2023). Detection of Epileptic Seizures Based-on Channel Fusion and Transformer Network in EEG Recordings. IJACSA - International Journal of Advanced Computer Science and Applications, 14(5), 1067–1074.
Abstract: According to the World Health Organization, epilepsy affects more than 50 million people in the world and, specifically, 80% of them live in developing countries. Epilepsy has therefore become a major public health issue for many governments and deserves attention. Epilepsy is characterized by uncontrollable seizures due to sudden abnormal functioning of the brain. Recurrent epileptic seizures change people's lives and interfere with their daily activities. Although epilepsy has no cure, it can be mitigated with appropriate diagnosis and medication. Usually, epilepsy diagnosis is based on the analysis of an electroencephalogram (EEG) of the patient. However, the process of searching for seizure patterns in a multichannel EEG recording is a visually demanding and time-consuming task, even for experienced neurologists. Despite recent progress in the automatic recognition of epilepsy, the multichannel nature of EEG recordings still challenges current methods. In this work, a new method to detect epilepsy in multichannel EEG recordings is proposed. First, the method uses convolutions to perform channel fusion, and next, a self-attention network extracts temporal features to classify between interictal and ictal epilepsy states. The method was validated on the public CHB-MIT dataset using k-fold cross-validation and achieved a specificity of 99.74% and a sensitivity of 99.15%, surpassing current approaches.
Keywords: Epilepsy; epilepsy detection; EEG; EEG channel fusion; convolutional neural network; self-attention
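The two-stage architecture described above (convolutional channel fusion followed by a self-attention network over time) can be sketched in PyTorch as below; the channel count (23, as in CHB-MIT), embedding size, window length and head count are assumptions, and this is not the authors' exact network.

```python
# Hedged PyTorch sketch of the described pipeline: 1-D convolutions fuse the
# EEG channels, a transformer encoder models temporal context, and a linear
# head separates interictal vs. ictal windows.  Sizes are assumptions.
import torch
import torch.nn as nn

class SeizureNet(nn.Module):
    def __init__(self, n_channels=23, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.fuse = nn.Sequential(                 # channel fusion + temporal downsampling
            nn.Conv1d(n_channels, d_model, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 2)          # interictal / ictal

    def forward(self, x):                          # x: (batch, channels, time)
        z = self.fuse(x).transpose(1, 2)           # -> (batch, time', d_model)
        z = self.encoder(z).mean(dim=1)            # temporal self-attention + pooling
        return self.head(z)

window = torch.randn(8, 23, 512)                   # 8 two-second windows at 256 Hz
print(SeizureNet()(window).shape)                  # -> torch.Size([8, 2])
```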
|
Francesco Ciompi, Oriol Pujol, Carlo Gatta, Oriol Rodriguez-Leor, J. Mauri, & Petia Radeva. (2010). Fusing in-vitro and in-vivo intravascular ultrasound data for plaque characterization. IJCI - International Journal of Cardiovascular Imaging, 26(7), 763–779.
Abstract: Accurate detection of vulnerable plaque in vivo in coronary arteries is still an open problem. Recent studies show that it is highly related to tissue structure and composition. Intravascular Ultrasound (IVUS) is a powerful imaging technique that gives a detailed cross-sectional image of the vessel, allowing exploration of artery morphology. IVUS data validation is usually performed by comparing post-mortem (in-vitro) IVUS data and the corresponding histological analysis of the tissue. The main drawback of this approach is the small number of available case studies and validated data, due to the complex procedure of histological analysis of the tissue. On the other hand, IVUS data from in-vivo cases are easy to obtain but cannot be histologically validated. In this work, we propose to enhance the in-vitro training data set by selectively including examples from in-vivo plaques. For this purpose, a Sequential Floating Forward Selection method is reformulated in the context of plaque characterization. The enhanced classifier performance is validated on the in-vitro data set, yielding an overall accuracy of 91.59% in discriminating among fibrotic, lipidic and calcified plaques, while reducing the gap between in-vivo and in-vitro data analysis. Experimental results suggest that the obtained classifier could properly be applied to in-vivo plaque characterization and also demonstrate that the common hypothesis that the difference between in-vivo and in-vitro data is negligible is incorrect.
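An illustrative re-implementation of the selection idea described above: Sequential Floating Forward Selection (SFFS) over candidate in-vivo examples to enlarge an in-vitro training set, keeping each example only if it improves accuracy on held-out in-vitro data. The classifier, the criterion and the synthetic data are assumptions, not the paper's implementation.

```python
# Illustrative SFFS over candidate in-vivo plaque examples to enlarge an
# in-vitro training set.  Classifier, criterion and data are assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def val_accuracy(selected, Xtr, ytr, Xval, yval, Xvivo, yvivo):
    """Train on in-vitro training data plus the selected in-vivo examples and
    score on held-out in-vitro data (the selection criterion)."""
    idx = sorted(selected)
    X = np.vstack([Xtr, Xvivo[idx]]) if idx else Xtr
    y = np.concatenate([ytr, yvivo[idx]]) if idx else ytr
    return AdaBoostClassifier(n_estimators=50).fit(X, y).score(Xval, yval)

def sffs(Xtr, ytr, Xval, yval, Xvivo, yvivo, max_added=20):
    selected = set()
    best = val_accuracy(selected, Xtr, ytr, Xval, yval, Xvivo, yvivo)
    while len(selected) < max_added:
        # forward step: add the in-vivo example that helps the criterion most
        gains = [(val_accuracy(selected | {i}, Xtr, ytr, Xval, yval, Xvivo, yvivo), i)
                 for i in range(len(yvivo)) if i not in selected]
        score, i = max(gains)
        if score < best:
            break
        selected, best = selected | {i}, score
        # floating (conditional backward) step: drop examples whose removal helps
        improved = True
        while improved and len(selected) > 1:
            drops = [(val_accuracy(selected - {j}, Xtr, ytr, Xval, yval, Xvivo, yvivo), j)
                     for j in selected]
            score, j = max(drops)
            improved = score > best
            if improved:
                selected, best = selected - {j}, score
    return sorted(selected), best

rng = np.random.default_rng(4)
Xtr, ytr = rng.normal(size=(120, 5)), rng.integers(0, 3, 120)     # in-vitro train
Xval, yval = rng.normal(size=(60, 5)), rng.integers(0, 3, 60)     # in-vitro validation
Xvivo, yvivo = rng.normal(size=(40, 5)), rng.integers(0, 3, 40)   # in-vivo candidates
print(sffs(Xtr, ytr, Xval, yval, Xvivo, yvivo, max_added=5))
```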
|
Danna Xue, Javier Vazquez, Luis Herranz, Yang Zhang, & Michael S Brown. (2023). Integrating High-Level Features for Consistent Palette-based Multi-image Recoloring. CGF - Computer Graphics Forum.
Abstract: Achieving visually consistent colors across multiple images is important when images are used in photo albums, websites, and brochures. Unfortunately, only a handful of methods address multi-image color consistency compared to one-to-one color transfer techniques. Furthermore, existing methods do not incorporate high-level features that can assist graphic designers in their work. To address these limitations, we introduce a framework that builds upon a previous palette-based color consistency method and incorporates three high-level features: white balance, saliency, and color naming. We show how these features overcome the limitations of the prior multi-consistency workflow and showcase the user-friendly nature of our framework.
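A hedged sketch of the palette representation that the framework above builds on: extracting a small per-image color palette with k-means clustering. The palette size and RGB color space are assumptions; the paper's actual consistency optimization and the added white-balance, saliency and color-naming features are not reproduced here.

```python
# Sketch of per-image palette extraction with k-means, the kind of palette
# representation the multi-image recoloring framework above builds on.
# Palette size and RGB color space are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def extract_palette(image, n_colors=5):
    """image: float array (H, W, 3) in [0, 1]; returns an (n_colors, 3) palette
    sorted by how many pixels each color covers."""
    pixels = image.reshape(-1, 3)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_colors)
    return km.cluster_centers_[np.argsort(-counts)]

rng = np.random.default_rng(5)
demo = rng.uniform(size=(64, 64, 3))           # stand-in for a photo
print(extract_palette(demo))
```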
|
David Rotger, Misael Rosales, Jaume Garcia, Oriol Pujol, J. Mauri, & Petia Radeva. (2003). Active Vessel: A New Multimedia Workstation for Intravascular Ultrasound and Angiography Fusion. Computers in Cardiology, 30, 65–68.
Abstract: ActiveVessel is a new multimedia workstation which enables the visualization, acquisition and handling of both image modalities, online and offline. It enables DICOM v3.0 decompression and browsing; video acquisition, reproduction and storage for IntraVascular UltraSound (IVUS) and angiograms with their corresponding ECG; automatic catheter segmentation in angiography images (using the fast marching algorithm); B-spline model definition for vessel layers in IVUS image sequences; and an extensively validated tool to fuse information. This approach defines the correspondence of every IVUS image with its corresponding point in the angiogram and vice versa. The 3D reconstruction of the IVUS catheter/vessel enables real distance measurements as well as three-dimensional visualization showing vessel tortuosity in space.
|
Fahad Shahbaz Khan, Jiaolong Xu, Muhammad Anwer Rao, Joost Van de Weijer, Andrew Bagdanov, & Antonio Lopez. (2015). Recognizing Actions through Action-specific Person Detection. TIP - IEEE Transactions on Image Processing, 24(11), 4422–4432.
Abstract: Action recognition in still images is a challenging problem in computer vision. To facilitate comparative evaluation independently of person detection, the standard evaluation protocol for action recognition uses an oracle person detector to obtain perfect bounding box information at both training and test time. The assumption is that, in practice, a general person detector will provide candidate bounding boxes for action recognition. In this paper, we argue that this paradigm is suboptimal and that action class labels should already be considered during the detection stage. Motivated by the observation that body pose is strongly conditioned on action class, we show that: 1) the existing state-of-the-art generic person detectors are not adequate for proposing candidate bounding boxes for action classification; 2) due to limited training examples, the direct training of action-specific person detectors is also inadequate; and 3) using only a small number of labeled action examples, transfer learning is able to adapt an existing detector to propose higher quality bounding boxes for subsequent action classification. To the best of our knowledge, we are the first to investigate transfer learning for the task of action-specific person detection in still images. We perform extensive experiments on two benchmark data sets: 1) Stanford-40 and 2) PASCAL VOC 2012. For the action detection task (i.e., both person localization and classification of the action performed), our approach outperforms methods based on general person detection by 5.7% mean average precision (MAP) on Stanford-40 and 2.1% MAP on PASCAL VOC 2012. Our approach also significantly outperforms the state of the art with a MAP of 45.4% on Stanford-40 and 31.4% on PASCAL VOC 2012. We also evaluate our action detection approach for the task of action classification (i.e., recognizing actions without localizing them). For this task, our approach, without using any ground-truth person localization at test time, outperforms state-of-the-art methods on both data sets, even though those methods do use person locations.
|
Ikechukwu Ofodile, Ahmed Helmi, Albert Clapes, Egils Avots, Kerttu Maria Peensoo, Sandhra Mirella Valdma, et al. (2019). Action recognition using single-pixel time-of-flight detection. ENTROPY - Entropy, 21(4), 414.
Abstract: Action recognition is a challenging task that plays an important role in many robotic systems, which highly depend on visual input feeds. However, due to privacy concerns, it is important to find a method which can recognise actions without using visual feed. In this paper, we propose a concept for detecting actions while preserving the test subject's privacy. Our proposed method relies only on recording the temporal evolution of light pulses scattered back from the scene. Such a data trace recording one action contains a sequence of one-dimensional arrays of voltage values acquired by a single-pixel detector at a 1 GHz repetition rate. Information about both the distance to the object and its shape is embedded in the traces. We apply machine learning in the form of recurrent neural networks for data analysis and demonstrate successful action recognition. The experimental results show that our proposed method achieves an average accuracy of 96.47% on the actions walking forward, walking backwards, sitting down, standing up and waving hand, using a recurrent neural network.
Keywords: single pixel single photon image acquisition; time-of-flight; action recognition
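The recurrent classifier described above (sequences of one-dimensional voltage traces from a single-pixel detector, classified into five actions) can be sketched in PyTorch as below; the trace length, sequence length and hidden size are assumptions, and this is not the authors' trained model.

```python
# Hedged PyTorch sketch of the recurrent classifier described above: each
# action sample is a sequence of 1-D voltage traces from the single-pixel
# detector, fed to an LSTM and classified into one of five actions.
# Trace length, sequence length and hidden size are assumptions.
import torch
import torch.nn as nn

class TraceRNN(nn.Module):
    def __init__(self, trace_len=256, hidden=128, n_actions=5):
        super().__init__()
        self.rnn = nn.LSTM(input_size=trace_len, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):             # x: (batch, n_traces, trace_len)
        _, (h, _) = self.rnn(x)       # h: (1, batch, hidden), state after the last trace
        return self.head(h[-1])       # logits over the five actions

batch = torch.randn(4, 100, 256)      # 4 samples, 100 consecutive traces each
print(TraceRNN()(batch).shape)        # -> torch.Size([4, 5])
```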
|
Alejandro Cartas, Petia Radeva, & Mariella Dimiccoli. (2020). Activities of Daily Living Monitoring via a Wearable Camera: Toward Real-World Applications. ACCESS - IEEE Access, 8, 77344–77363.
Abstract: Activity recognition from wearable photo-cameras is crucial for lifestyle characterization and health monitoring. However, to enable its widespread use in real-world applications, a high level of generalization needs to be ensured on unseen users. Currently, state-of-the-art methods have been tested only on relatively small datasets consisting of data collected by a few users that are partially seen during training. In this paper, we built a new egocentric dataset acquired by 15 people through a wearable photo-camera and used it to test the generalization capabilities of several state-of-the-art methods for egocentric activity recognition on unseen users and daily image sequences. In addition, we propose several variants to state-of-the-art deep learning architectures, and we show that it is possible to achieve 79.87% accuracy on users unseen during training. Furthermore, to show that the proposed dataset and approach can be useful in real-world applications, where data can be acquired by different wearable cameras and labeled data are scarcely available, we employed a domain adaptation strategy on two egocentric activity recognition benchmark datasets. These experiments show that the model learned with our dataset can easily be transferred to other domains with a very small amount of labeled data. Taken together, these results show that activity recognition from wearable photo-cameras is mature enough to be tested in real-world applications.
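The claim above, that a model trained on the new dataset transfers to other wearable-camera domains with very little labeled data, corresponds to a standard fine-tuning setup; a minimal sketch follows, using a torchvision backbone as a stand-in for the pretrained activity model. The architecture, class count and hyperparameters are assumptions and do not reproduce the paper's adaptation strategy.

```python
# Minimal fine-tuning sketch for adapting a pretrained image backbone to a new
# egocentric-activity domain with few labeled frames.  The ResNet backbone is a
# stand-in for the paper's model; class count and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

n_activities = 21                                    # assumed number of activity classes
model = models.resnet18(weights=None)                # in practice, load pretrained weights here
for p in model.parameters():                         # freeze the backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, n_activities)   # new, trainable classifier head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# one toy adaptation step on a small labeled batch from the target domain
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, n_activities, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```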
|
David Geronimo, Antonio Lopez, Angel Sappa, & Thorsten Graf. (2010). Survey on Pedestrian Detection for Advanced Driver Assistance Systems. TPAMI - IEEE Transaction on Pattern Analysis and Machine Intelligence, 32(7), 1239–1258.
Abstract: Advanced driver assistance systems (ADASs), and particularly pedestrian protection systems (PPSs), have become an active research area aimed at improving traffic safety. The major challenge of PPSs is the development of reliable on-board pedestrian detection systems. Due to the varying appearance of pedestrians (e.g., different clothes, changing size, aspect ratio, and dynamic shape) and the unstructured environment, it is very difficult to achieve the robustness demanded of this kind of system. Two problems arising in this research area are the lack of public benchmarks and the difficulty of reproducing many of the proposed methods, which makes it difficult to compare the approaches. As a result, surveying the literature by enumerating the proposals one after another is not the most useful way to provide a comparative point of view. Accordingly, we present a more convenient strategy to survey the different approaches. We divide the problem of detecting pedestrians from images into different processing steps, each with attached responsibilities. Then, the different proposed methods are analyzed and classified with respect to each processing stage, favoring a comparative viewpoint. Finally, a discussion of the important topics is presented, putting special emphasis on future needs and challenges.
Keywords: ADAS, pedestrian detection, on-board vision, survey
|