Simone Balocco, Francesco Ciompi, Juan Rigla, Xavier Carrillo, J. Mauri, & Petia Radeva. (2019). Assessment of intracoronary stent location and extension in intravascular ultrasound sequences. MEDPHYS - Medical Physics, 46(2), 484–493.
Abstract: PURPOSE:
An intraluminal coronary stent is a metal scaffold deployed in a stenotic artery during percutaneous coronary intervention (PCI). In order to have an effective deployment, a stent should be optimally placed with regard to anatomical structures such as bifurcations and stenoses. Intravascular ultrasound (IVUS) is a catheter-based imaging technique generally used for PCI guiding and assessing the correct placement of the stent. A novel approach that automatically detects the boundaries and the position of the stent along the IVUS pullback is presented. Such a technique aims at optimizing the stent deployment.
METHODS:
The method requires the identification of the stable frames of the sequence and the reliable detection of stent struts. Using these data, a measure of likelihood for a frame to contain a stent is computed. Then, a robust binary representation of the presence of the stent in the pullback is obtained by applying an iterative and multiscale quantization of the signal to symbols using the Symbolic Aggregate approXimation algorithm.
RESULTS:
The technique was extensively validated on a set of 103 in vivo IVUS sequences of coronary arteries containing metallic and bioabsorbable stents, acquired through an international multicentric collaboration across five clinical centers. In metallic stents, the method detected the stent position with an overall F-measure of 86.4%, a Jaccard index score of 75%, and a mean distance of 2.5 mm from manually annotated stent boundaries; in bioabsorbable stents, with an overall F-measure of 88.6%, a Jaccard index score of 77.7%, and a mean distance of 1.5 mm. Additionally, a map indicating the distance between the lumen and the stent along the pullback is created in order to show the angular sectors of the sequence in which malapposition is present.
CONCLUSIONS:
Results obtained by comparing the automatic output against the manual annotations of two observers show that the method approaches the interobserver variability. Similar performances are obtained on both metallic and bioabsorbable stents, demonstrating the flexibility and robustness of the method.
Keywords: IVUS; malapposition; stent; ultrasound
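The Symbolic Aggregate approXimation (SAX) step named in the abstract above quantizes a 1-D signal into a short symbol string via z-normalization, piecewise aggregate approximation (PAA), and Gaussian-breakpoint quantization. A minimal sketch of standard SAX follows; it illustrates only the basic algorithm, not the authors' iterative multiscale variant, and the function name and parameters are illustrative:

```python
from statistics import NormalDist, mean, pstdev

def sax(signal, n_segments, alphabet_size):
    """Quantize a 1-D signal to a SAX symbol string."""
    # z-normalize the signal (guard against a constant signal)
    mu, sigma = mean(signal), pstdev(signal) or 1.0
    z = [(v - mu) / sigma for v in signal]
    # PAA: mean of each of n_segments roughly equal chunks
    n = len(z)
    paa = [mean(z[i * n // n_segments:(i + 1) * n // n_segments])
           for i in range(n_segments)]
    # Breakpoints that divide N(0,1) into equiprobable regions
    bps = [NormalDist().inv_cdf(k / alphabet_size)
           for k in range(1, alphabet_size)]
    # Symbol index = number of breakpoints below the segment mean
    return ''.join(chr(ord('a') + sum(b < p for b in bps)) for p in paa)
```

For example, a step signal whose first half sits below the mean and second half above maps to the string `'ab'` with two segments and a two-letter alphabet.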
|
Simone Balocco, Maria Zuluaga, Guillaume Zahnd, Su-Lin Lee, & Stefanie Demirci. (2016). Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting. Elsevier.
|
Simone Balocco, Mauricio Gonzalez, Ricardo Ñancule, Petia Radeva, & Gabriel Thomas. (2018). Calcified Plaque Detection in IVUS Sequences: Preliminary Results Using Convolutional Nets. In International Workshop on Artificial Intelligence and Pattern Recognition (Vol. 11047, pp. 34–42). LNCS.
Abstract: The manual inspection of intravascular ultrasound (IVUS) images to detect clinically relevant patterns is a difficult and laborious task performed routinely by physicians. In this paper, we present a framework based on convolutional nets for the quick selection of IVUS frames containing arterial calcification, a pattern whose detection plays a vital role in the diagnosis of atherosclerosis. Preliminary experiments on a dataset acquired from eighty patients show that convolutional architectures improve detections of a shallow classifier in terms of F1-measure, precision and recall.
Keywords: Intravascular ultrasound images; Convolutional nets; Deep learning; Medical image analysis
|
Simone Balocco, O. Basset, G. Courbebaisse, E. Boni, Alejandro F. Frangi, P. Tortoli, et al. (2010). Estimation Of Viscoelastic Properties Of Vessel Walls Using a Computational Model and Doppler Ultrasound. PMB - Physics in Medicine and Biology, 55(12), 3557–3575.
Abstract: Human arteries affected by atherosclerosis are characterized by altered wall viscoelastic properties. The possibility of noninvasively assessing arterial viscoelasticity in vivo would significantly contribute to the early diagnosis and prevention of this disease. This paper presents a noniterative technique to estimate the viscoelastic parameters of a vascular wall Zener model. The approach requires the simultaneous measurement of flow variations and wall displacements, which can be provided by suitable ultrasound Doppler instruments. Viscoelastic parameters are estimated by fitting the theoretical constitutive equations to the experimental measurements using an ARMA parameter approach. The accuracy and sensitivity of the proposed method are tested using reference data generated by numerical simulations of arterial pulsation in which the physiological conditions and the viscoelastic parameters of the model can be suitably varied. The estimated values quantitatively agree with the reference values, showing that the only parameter affected by changing the physiological conditions is viscosity, whose relative error remains about 27% even when a poor signal-to-noise ratio is simulated. Finally, the feasibility of the method is illustrated through three measurements made at different flow regimes on a cylindrical vessel phantom, yielding a parameter mean estimation error of 25%.
|
Simone Balocco, O. Camara, E. Vivas, T. Sola, L. Guimaraens, H. A. van Andel, et al. (2010). Feasibility of Estimating Regional Mechanical Properties of Cerebral Aneurysms In Vivo. MEDPHYS - Medical Physics, 37(4), 1689–1706.
Abstract: PURPOSE:
In this article, the authors studied the feasibility of estimating regional mechanical properties in cerebral aneurysms, integrating information extracted from imaging and physiological data with generic computational models of the arterial wall behavior.
METHODS:
A data assimilation framework was developed to incorporate patient-specific geometries into a given biomechanical model, while wall motion estimates, obtained by applying registration techniques to a pair of simulated MR images, guided the mechanical parameter estimation. A simple incompressible linear and isotropic Hookean model coupled with computational fluid dynamics was employed as a first approximation for computational purposes. Additionally, an automatic clustering technique was developed to reduce the number of parameters to assimilate at the optimization stage, which considerably accelerated the convergence of the simulations. Several in silico experiments were designed to assess the influence of aneurysm geometrical characteristics and the accuracy of wall motion estimates on the mechanical property estimates. Finally, the proposed methodology was applied to six real cerebral aneurysms and tested against a varying number of regions with different elasticity, different mesh discretizations, imaging resolutions, and registration configurations.
RESULTS:
Several in silico experiments were conducted to investigate the feasibility of the proposed workflow, with results suggesting that the estimation of the mechanical properties was mainly influenced by the image spatial resolution and the chosen registration configuration. According to the in silico experiments, the minimal spatial resolution needed to extract wall pulsation measurements with enough accuracy to guide the proposed data assimilation framework was 0.1 mm.
CONCLUSIONS:
Current routine imaging modalities do not have such a high spatial resolution, and therefore the proposed data assimilation framework cannot currently be used on in vivo data to reliably estimate regional properties in cerebral aneurysms. In addition, it was observed that the incorporation of fluid-structure interaction in a biomechanical model with linear and isotropic material properties did not have a substantial influence on the final results.
|
Simone Zini, Alex Gomez-Villa, Marco Buzzelli, Bartlomiej Twardowski, Andrew D. Bagdanov, & Joost Van de Weijer. (2023). Planckian Jitter: countering the color-crippling effects of color jitter on self-supervised training. In 11th International Conference on Learning Representations.
Abstract: Several recent works on self-supervised learning are trained by mapping different augmentations of the same image to the same feature representation. The data augmentations used are of crucial importance to the quality of learned feature representations. In this paper, we analyze how the color jitter traditionally used in data augmentation negatively impacts the quality of the color features in learned feature representations. To address this problem, we propose a more realistic, physics-based color data augmentation – which we call Planckian Jitter – that creates realistic variations in chromaticity and produces a model robust to illumination changes that can be commonly observed in real life, while maintaining the ability to discriminate image content based on color information. Experiments confirm that such a representation is complementary to the representations learned with the currently-used color jitter augmentation and that a simple concatenation leads to significant performance gains on a wide range of downstream datasets. In addition, we present a color sensitivity analysis that documents the impact of different training methods on model neurons and shows that the performance of the learned features is robust with respect to illuminant variations.
|
Siyang Song, Micol Spitale, Cheng Luo, German Barquero, Cristina Palmero, Sergio Escalera, et al. (2023). REACT2023: The First Multiple Appropriate Facial Reaction Generation Challenge. In Proceedings of the 31st ACM International Conference on Multimedia (pp. 9620–9624).
Abstract: The Multiple Appropriate Facial Reaction Generation Challenge (REACT2023) is the first competition event focused on evaluating multimedia processing and machine learning techniques for generating human-appropriate facial reactions in various dyadic interaction scenarios, with all participants competing strictly under the same conditions. The goal of the challenge is to provide the first benchmark test set for multi-modal information processing and to foster collaboration among the audio, visual, and audio-visual behaviour analysis and behaviour generation (a.k.a generative AI) communities, to compare the relative merits of the approaches to automatic appropriate facial reaction generation under different spontaneous dyadic interaction conditions. This paper presents: (i) the novelties, contributions and guidelines of the REACT2023 challenge; (ii) the dataset utilized in the challenge; and (iii) the performance of the baseline systems on the two proposed sub-challenges: Offline Multiple Appropriate Facial Reaction Generation and Online Multiple Appropriate Facial Reaction Generation, respectively. The challenge baseline code is publicly available at https://github.com/reactmultimodalchallenge/baseline_react2023.
|
Smriti Joshi, Richard Osuala, Carlos Martin-Isla, Victor M. Campello, Carla Sendra-Balcells, Karim Lekadir, et al. (2022). nn-UNet Training on CycleGAN-Translated Images for Cross-modal Domain Adaptation in Biomedical Imaging. In International MICCAI Brainlesion Workshop (Vol. 12963, pp. 540–551). LNCS.
Abstract: In recent years, deep learning models have considerably advanced the performance of segmentation tasks on Brain Magnetic Resonance Imaging (MRI). However, these models show a considerable performance drop when they are evaluated on unseen data from a different distribution. Since annotation is often a hard and costly task requiring expert supervision, it is necessary to develop ways in which existing models can be adapted to the unseen domains without any additional labelled information. In this work, we explore one such technique which extends the CycleGAN [2] architecture to generate label-preserving data in the target domain. The synthetic target domain data is used to train the nn-UNet [3] framework for the task of multi-label segmentation. The experiments are conducted and evaluated on the dataset [1] provided in the ‘Cross-Modality Domain Adaptation for Medical Image Segmentation’ challenge [23] for segmentation of vestibular schwannoma (VS) tumour and cochlea on contrast enhanced (ceT1) and high resolution (hrT2) MRI scans. In the proposed approach, our model obtains dice scores (DSC) of 0.73 and 0.49 for tumour and cochlea, respectively, on the validation set of the dataset. This indicates the applicability of the proposed technique to real-world problems where data may be obtained by different acquisition protocols, as in [1], where hrT2 images are a more reliable, safer, and lower-cost alternative to ceT1.
Keywords: Domain adaptation; Vestibular schwannoma (VS); Deep learning; nn-UNet; CycleGAN
|
Sonia Baeza, Debora Gil, Carles Sanchez, Guillermo Torres, Ignasi Garcia Olive, Ignasi Guasch, et al. (2023). Radiomic virtual biopsy for the histological diagnosis of pulmonary nodules – Intermediate results of the Radiolung project. In SEPAR.
|
Sonia Baeza, Debora Gil, I.Garcia Olive, M.Salcedo, J.Deportos, Carles Sanchez, et al. (2022). A novel intelligent radiomic analysis of perfusion SPECT/CT images to optimize pulmonary embolism diagnosis in COVID-19 patients. EJNMMI-PHYS - EJNMMI Physics, 9(1, Article 84), 1–17.
Abstract: Background: COVID-19 infection, especially in cases with pneumonia, is associated with a high rate of pulmonary embolism (PE). In patients with contraindications for CT pulmonary angiography (CTPA) or non-diagnostic CTPA, perfusion single-photon emission computed tomography/computed tomography (Q-SPECT/CT) is a diagnostic alternative. The goal of this study is to develop a radiomic diagnostic system to detect PE based only on the analysis of Q-SPECT/CT scans.
Methods: This radiomic diagnostic system is based on a local analysis of Q-SPECT/CT volumes that includes both CT and Q-SPECT values for each volume point. We present a combined approach that uses radiomic features extracted from each scan as input into a fully connected classification neural network that optimizes a weighted cross-entropy loss trained to discriminate between three different types of image patterns (pixel sample level): healthy lungs (control group), PE and pneumonia. Four types of models using different configurations of parameters were tested.
Results: The proposed radiomic diagnostic system was trained on 20 patients (4,927 sets of samples of three types of image patterns) and validated in a group of 39 patients (4,410 sets of samples of three types of image patterns). In the training group, COVID-19 infection corresponded to 45% of the cases and 51.28% in the test group. In the test group, the best model for determining different types of image patterns with PE presented a sensitivity, specificity, positive predictive value and negative predictive value of 75.1%, 98.2%, 88.9% and 95.4%, respectively. The best model for detecting pneumonia presented a sensitivity, specificity, positive predictive value and negative predictive value of 94.1%, 93.6%, 85.2% and 97.6%, respectively. The area under the curve (AUC) was 0.92 for PE and 0.91 for pneumonia. When the results obtained at the pixel sample level are aggregated into regions of interest, the sensitivity for PE increases to 85%, and all metrics improve for pneumonia.
Conclusion: This radiomic diagnostic system was able to identify the different lung imaging patterns and is a first step toward a comprehensive intelligent radiomic system to optimize the diagnosis of PE by Q-SPECT/CT.
|
Sonia Baeza, R. Domingo, M. Salcedo, G. Moragas, J. Deportos, I. Garcia Olive, et al. (2021). Artificial Intelligence to Optimize Pulmonary Embolism Diagnosis During Covid-19 Pandemic by Perfusion SPECT/CT, a Pilot Study. American Journal of Respiratory and Critical Care Medicine.
|
Sophie Wuerger, Kaida Xiao, Chenyang Fu, & Dimosthenis Karatzas. (2010). Colour-opponent mechanisms are not affected by age-related chromatic sensitivity changes. OPO - Ophthalmic and Physiological Optics, 30(5), 635–659.
Abstract: The purpose of this study was to assess whether age-related chromatic sensitivity changes are associated with corresponding changes in hue perception in a large sample of colour-normal observers over a wide age range (n = 185; age range: 18-75 years). In these observers we determined both the sensitivity along the protan, deutan and tritan line; and settings for the four unique hues, from which the characteristics of the higher-order colour mechanisms can be derived. We found a significant decrease in chromatic sensitivity due to ageing, in particular along the tritan line. From the unique hue settings we derived the cone weightings associated with the colour mechanisms that are at equilibrium for the four unique hues. We found that the relative cone weightings (w(L) /w(M) and w(L) /w(S)) associated with the unique hues were independent of age. Our results are consistent with previous findings that the unique hues are rather constant with age while chromatic sensitivity declines. They also provide evidence in favour of the hypothesis that higher-order colour mechanisms are equipped with flexible cone weightings, as opposed to fixed weights. The mechanism underlying this compensation is still poorly understood.
|
Sophie Wuerger, Kaida Xiao, Dimitris Mylonas, Q. Huang, Dimosthenis Karatzas, & Galina Paramei. (2012). Blue–green color categorization in Mandarin–English speakers. JOSA A - Journal of the Optical Society of America A, 29(2), A102–A107.
Abstract: Observers are faster to detect a target among a set of distracters if the targets and distracters come from different color categories. This cross-boundary advantage seems to be limited to the right visual field, which is consistent with the dominance of the left hemisphere for language processing [Gilbert et al., Proc. Natl. Acad. Sci. USA 103, 489 (2006)]. Here we study whether a similar visual field advantage is found in the color identification task in speakers of Mandarin, a language that uses a logographic system. Forty late Mandarin-English bilinguals performed a blue-green color categorization task, in a blocked design, in their first language (L1: Mandarin) or second language (L2: English). Eleven color singletons ranging from blue to green were presented for 160 ms, randomly in the left visual field (LVF) or right visual field (RVF). Color boundary and reaction times (RTs) at the color boundary were estimated in L1 and L2, for both visual fields. We found that the color boundary did not differ between the languages; RTs at the color boundary, however, were on average more than 100 ms shorter in the English compared to the Mandarin sessions, but only when the stimuli were presented in the RVF. The finding may be explained by the script nature of the two languages: Mandarin logographic characters are analyzed visuospatially in the right hemisphere, which conceivably facilitates identification of color presented to the LVF.
|
Souhail Bakkali, Sanket Biswas, Zuheng Ming, Mickael Coustaty, Marçal Rusiñol, Oriol Ramos Terrades, et al. (2023). TransferDoc: A Self-Supervised Transferable Document Representation Learning Model Unifying Vision and Language.
Abstract: The field of visual document understanding has witnessed a rapid growth in emerging challenges and powerful multi-modal strategies. However, they rely on an extensive amount of document data to learn their pretext objectives in a "pre-train-then-fine-tune" paradigm and thus, suffer a significant performance drop in real-world online industrial settings. One major reason is the over-reliance on OCR engines to extract local positional information within a document page. Therefore, this hinders the model's generalizability, flexibility and robustness due to the lack of capturing global information within a document image. We introduce TransferDoc, a cross-modal transformer-based architecture pre-trained in a self-supervised fashion using three novel pretext objectives. TransferDoc learns richer semantic concepts by unifying language and visual representations, which enables the production of more transferable models. Besides, two novel downstream tasks have been introduced for a "closer-to-real" industrial evaluation scenario where TransferDoc outperforms other state-of-the-art approaches.
|
Souhail Bakkali, Zuheng Ming, Mickael Coustaty, Marçal Rusiñol, & Oriol Ramos Terrades. (2023). VLCDoC: Vision-Language Contrastive Pre-Training Model for Cross-Modal Document Classification. PR - Pattern Recognition, 139, 109419.
Abstract: Multimodal learning from document data has achieved great success lately as it allows to pre-train semantically meaningful features as a prior into a learnable downstream approach. In this paper, we approach the document classification problem by learning cross-modal representations through language and vision cues, considering intra- and inter-modality relationships. Instead of merging features from different modalities into a common representation space, the proposed method exploits high-level interactions and learns relevant semantic information from effective attention flows within and across modalities. The proposed learning objective is devised between intra- and inter-modality alignment tasks, where the similarity distribution per task is computed by contracting positive sample pairs while simultaneously contrasting negative ones in the common feature representation space. Extensive experiments on public document classification datasets demonstrate the effectiveness and the generalization capacity of our model on both low-scale and large-scale datasets.
|