Records
| Field | Value |
|---|---|
| Author | Danna Xue; Javier Vazquez; Luis Herranz; Yang Zhang; Michael S Brown |
| Title | Integrating High-Level Features for Consistent Palette-based Multi-image Recoloring |
| Type | Journal Article |
| Year | 2023 |
| Publication | Computer Graphics Forum |
| Abbreviated Journal | CGF |
| Abstract | Achieving visually consistent colors across multiple images is important when images are used in photo albums, websites, and brochures. Unfortunately, only a handful of methods address multi-image color consistency compared to one-to-one color transfer techniques. Furthermore, existing methods do not incorporate high-level features that can assist graphic designers in their work. To address these limitations, we introduce a framework that builds upon a previous palette-based color consistency method and incorporates three high-level features: white balance, saliency, and color naming. We show how these features overcome the limitations of the prior multi-consistency workflow and showcase the user-friendly nature of our framework. |
| Notes | CIC; MACO |
| Approved | no |
| Call Number | Admin @ si @ XVH2023 |
| Serial | 3883 |

Permanent link to this record
| Field | Value |
|---|---|
| Author | Alicia Fornes; Josep Llados; Joana Maria Pujadas-Mora |
| Title | Browsing of the Social Network of the Past: Information Extraction from Population Manuscript Images |
| Type | Book Chapter |
| Year | 2020 |
| Publication | Handwritten Historical Document Analysis, Recognition, and Retrieval – State of the Art and Future Trends |
| Publisher | World Scientific |
| ISBN | 978-981-120-323-7 |
| Notes | DAG; 600.140; 600.121 |
| Approved | no |
| Call Number | Admin @ si @ FLP2020 |
| Serial | 3350 |

Permanent link to this record
| Field | Value |
|---|---|
| Author | Patrick Brandao; O. Zisimopoulos; E. Mazomenos; G. Ciutib; Jorge Bernal; M. Visentini-Scarzanell; A. Menciassi; P. Dario; A. Koulaouzidis; A. Arezzo; D.J. Hawkes; D. Stoyanov |
| Title | Towards a computer-aided diagnosis system in colonoscopy: Automatic polyp segmentation using convolution neural networks |
| Type | Journal |
| Year | 2018 |
| Publication | Journal of Medical Robotics Research |
| Abbreviated Journal | JMRR |
| Volume | 3 |
| Issue | 2 |
| Keywords | convolutional neural networks; colonoscopy; computer aided diagnosis |
| Abstract | Early diagnosis is essential for the successful treatment of bowel cancers including colorectal cancer (CRC) and capsule endoscopic imaging with robotic actuation can be a valuable diagnostic tool when combined with automated image analysis. We present a deep learning rooted detection and segmentation framework for recognizing lesions in colonoscopy and capsule endoscopy images. We restructure established convolution architectures, such as VGG and ResNets, by converting them into fully-connected convolution networks (FCNs), fine-tune them and study their capabilities for polyp segmentation and detection. We additionally use Shape-from-Shading (SfS) to recover depth and provide a richer representation of the tissue's structure in colonoscopy images. Depth is incorporated into our network models as an additional input channel to the RGB information and we demonstrate that the resulting network yields improved performance. Our networks are tested on publicly available datasets and the most accurate segmentation model achieved a mean segmentation IU of 47.78% and 56.95% on the ETIS-Larib and CVC-Colon datasets, respectively. For polyp detection, the top performing models we propose surpass the current state of the art with detection recalls superior to 90% for all datasets tested. To our knowledge, we present the first work to use FCNs for polyp segmentation in addition to proposing a novel combination of SfS and RGB that boosts performance. |
| Notes | MV; no menciona |
| Approved | no |
| Call Number | BZM2018 |
| Serial | 2976 |

Permanent link to this record
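The mean segmentation IU reported in the abstract above is the standard intersection-over-union metric between predicted and ground-truth masks. A minimal sketch in plain Python with hypothetical 1-D toy masks (not the authors' implementation):

```python
def iou(pred, target):
    """Intersection over union of two binary masks, given as flat lists of 0/1."""
    inter = sum(p & t for p, t in zip(pred, target))  # pixels labeled 1 in both
    union = sum(p | t for p, t in zip(pred, target))  # pixels labeled 1 in either
    return inter / union if union else 1.0            # both masks empty: agreement

# Hypothetical toy masks: 2 pixels intersect, 4 pixels in the union
pred   = [0, 1, 1, 1, 0]
target = [0, 0, 1, 1, 1]
print(iou(pred, target))  # 2 / 4 = 0.5
```

The dataset-level "mean IU" is then the average of this score over all images (or classes), which is how numbers such as 47.78% arise.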
| Field | Value |
|---|---|
| Author | Xiangyang Li; Luis Herranz; Shuqiang Jiang |
| Title | Multifaceted Analysis of Fine-Tuning in Deep Model for Visual Recognition |
| Type | Journal |
| Year | 2020 |
| Publication | ACM Transactions on Data Science |
| Abbreviated Journal | ACM |
| Abstract | In recent years, convolutional neural networks (CNNs) have achieved impressive performance for various visual recognition scenarios. CNNs trained on large labeled datasets can not only obtain significant performance on most challenging benchmarks but also provide powerful representations, which can be used for a wide range of other tasks. However, the requirement of massive amounts of data to train deep neural networks is a major drawback of these models, as the data available is usually limited or imbalanced. Fine-tuning (FT) is an effective way to transfer knowledge learned in a source dataset to a target task. In this paper, we introduce and systematically investigate several factors that influence the performance of fine-tuning for visual recognition. These factors include parameters for the retraining procedure (e.g., the initial learning rate of fine-tuning), the distribution of the source and target data (e.g., the number of categories in the source dataset, the distance between the source and target datasets) and so on. We quantitatively and qualitatively analyze these factors, evaluate their influence, and present many empirical observations. The results reveal insights into how fine-tuning changes CNN parameters and provide useful and evidence-backed intuitions about how to implement fine-tuning for computer vision tasks. |
| Notes | LAMP; 600.141; 600.120 |
| Approved | no |
| Call Number | Admin @ si @ LHJ2020 |
| Serial | 3423 |

Permanent link to this record
| Field | Value |
|---|---|
| Author | Yaxing Wang; Abel Gonzalez-Garcia; Joost Van de Weijer; Luis Herranz |
| Title | SDIT: Scalable and Diverse Cross-domain Image Translation |
| Type | Conference Article |
| Year | 2019 |
| Publication | 27th ACM International Conference on Multimedia |
| Pages | 1267–1276 |
| Abstract | Recently, image-to-image translation research has witnessed remarkable progress. Although current approaches successfully generate diverse outputs or perform scalable image transfer, these properties have not been combined into a single method. To address this limitation, we propose SDIT: Scalable and Diverse image-to-image translation. These properties are combined into a single generator. The diversity is determined by a latent variable which is randomly sampled from a normal distribution. The scalability is obtained by conditioning the network on the domain attributes. Additionally, we also exploit an attention mechanism that permits the generator to focus on the domain-specific attribute. We empirically demonstrate the performance of the proposed method on face mapping and other datasets beyond faces. |
| Address | Nice; France; October 2019 |
| Conference | ACM-MM |
| Notes | LAMP; 600.106; 600.109; 600.141; 600.120 |
| Approved | no |
| Call Number | Admin @ si @ WGW2019 |
| Serial | 3363 |

Permanent link to this record
| Field | Value |
|---|---|
| Author | Raul Gomez; Yahui Liu; Marco de Nadai; Dimosthenis Karatzas; Bruno Lepri; Nicu Sebe |
| Title | Retrieval Guided Unsupervised Multi-domain Image to Image Translation |
| Type | Conference Article |
| Year | 2020 |
| Publication | 28th ACM International Conference on Multimedia |
| Abstract | Image to image translation aims to learn a mapping that transforms an image from one visual domain to another. Recent works assume that image descriptors can be disentangled into a domain-invariant content representation and a domain-specific style representation. Thus, translation models seek to preserve the content of source images while changing the style to a target visual domain. However, synthesizing new images is extremely challenging especially in multi-domain translations, as the network has to compose content and style to generate reliable and diverse images in multiple domains. In this paper we propose the use of an image retrieval system to assist the image-to-image translation task. First, we train an image-to-image translation model to map images to multiple domains. Then, we train an image retrieval model using real and generated images to find images similar to a query one in content but in a different domain. Finally, we exploit the image retrieval system to fine-tune the image-to-image translation model and generate higher quality images. Our experiments show the effectiveness of the proposed solution and highlight the contribution of the retrieval network, which can benefit from additional unlabeled data and help image-to-image translation models in the presence of scarce data. |
| Conference | ACM |
| Notes | DAG; 600.121 |
| Approved | no |
| Call Number | Admin @ si @ GLN2020 |
| Serial | 3497 |

Permanent link to this record
| Field | Value |
|---|---|
| Author | Javier M. Olaso; Alain Vazquez; Leila Ben Letaifa; Mikel de Velasco; Aymen Mtibaa; Mohamed Amine Hmani; Dijana Petrovska-Delacretaz; Gerard Chollet; Cesar Montenegro; Asier Lopez-Zorrilla; Raquel Justo; Roberto Santana; Jofre Tenorio-Laranga; Eduardo Gonzalez-Fraile; Begoña Fernandez-Ruanova; Gennaro Cordasco; Anna Esposito; Kristin Beck Gjellesvik; Anna Torp Johansen; Maria Stylianou Kornes; Colin Pickard; Cornelius Glackin; Gary Cahalane; Pau Buch; Cristina Palmero; Sergio Escalera; Olga Gordeeva; Olivier Deroo; Anaïs Fernandez; Daria Kyslitska; Jose Antonio Lozano; Maria Ines Torres; Stephan Schlogl |
| Title | The EMPATHIC Virtual Coach: a demo |
| Type | Conference Article |
| Year | 2021 |
| Publication | 23rd ACM International Conference on Multimodal Interaction |
| Pages | 848-851 |
| Abstract | The main objective of the EMPATHIC project has been the design and development of a virtual coach to engage the healthy-senior user and to enhance well-being through awareness of personal status. The EMPATHIC approach addresses this objective through multimodal interactions supported by the GROW coaching model. The paper summarizes the main components of the EMPATHIC Virtual Coach (EMPATHIC-VC) and introduces a demonstration of the coaching sessions in selected scenarios. |
| Address | Virtual; October 2021 |
| Conference | ICMI |
| Notes | HUPBA; no proj |
| Approved | no |
| Call Number | Admin @ si @ OVB2021 |
| Serial | 3644 |

Permanent link to this record
| Field | Value |
|---|---|
| Author | Wenjuan Gong; Yue Zhang; Wei Wang; Peng Cheng; Jordi Gonzalez |
| Title | Meta-MMFNet: Meta-learning-based Multi-model Fusion Network for Micro-expression Recognition |
| Type | Journal Article |
| Year | 2023 |
| Publication | ACM Transactions on Multimedia Computing, Communications, and Applications |
| Abbreviated Journal | TMCCA |
| Volume | 20 |
| Issue | 2 |
| Pages | 1–20 |
| Abstract | Despite its wide applications in criminal investigations and clinical communications with patients suffering from autism, automatic micro-expression recognition remains a challenging problem because of the lack of training data and class imbalance. In this study, we proposed a meta-learning-based multi-model fusion network (Meta-MMFNet) to solve the existing problems. The proposed method is based on the metric-based meta-learning pipeline, which is specifically designed for few-shot learning and is suitable for model-level fusion. The frame difference and optical flow features were fused, deep features were extracted from the fused feature, and finally, in the meta-learning-based framework, a weighted-sum model fusion method was applied for micro-expression classification. Meta-MMFNet achieved better results than state-of-the-art methods on four datasets. The code is available at https://github.com/wenjgong/meta-fusion-based-method. |
| Notes | ISE |
| Approved | no |
| Call Number | Admin @ si @ GZW2023 |
| Serial | 3862 |

Permanent link to this record
| Field | Value |
|---|---|
| Author | Cristina Palmero; Oleg V Komogortsev; Sergio Escalera; Sachin S Talathi |
| Title | Multi-Rate Sensor Fusion for Unconstrained Near-Eye Gaze Estimation |
| Type | Conference Article |
| Year | 2023 |
| Publication | Proceedings of the 2023 Symposium on Eye Tracking Research and Applications |
| Pages | 1-8 |
| Abstract | The power requirements of video-oculography systems can be prohibitive for high-speed operation on portable devices. Recently, low-power alternatives such as photosensors have been evaluated, providing gaze estimates at high frequency with a trade-off in accuracy and robustness. Potentially, an approach combining slow/high-fidelity and fast/low-fidelity sensors should be able to exploit their complementarity to track fast eye motion accurately and robustly. To foster research on this topic, we introduce OpenSFEDS, a near-eye gaze estimation dataset containing approximately 2M synthetic camera-photosensor image pairs sampled at 500 Hz under varied appearance and camera position. We also formulate the task of sensor fusion for gaze estimation, proposing a deep learning framework consisting of appearance-based encoding and temporal eye-state dynamics. We evaluate several single- and multi-rate fusion baselines on OpenSFEDS, achieving 8.7% error decrease when tracking fast eye movements with a multi-rate approach vs. a gaze forecasting approach operating with a low-speed sensor alone. |
| Address | Tübingen; Germany; May 2023 |
| Conference | ETRA |
| Notes | HUPBA |
| Approved | no |
| Call Number | Admin @ si @ PKE2023 |
| Serial | 3923 |

Permanent link to this record
| Field | Value |
|---|---|
| Author | Mohamed Ali Souibgui; Pau Torras; Jialuo Chen; Alicia Fornes |
| Title | An Evaluation of Handwritten Text Recognition Methods for Historical Ciphered Manuscripts |
| Type | Conference Article |
| Year | 2023 |
| Publication | 7th International Workshop on Historical Document Imaging and Processing |
| Pages | 7-12 |
| Abstract | This paper investigates the effectiveness of different deep learning HTR families, including LSTM, Seq2Seq, and transformer-based approaches with self-supervised pretraining, in recognizing ciphered manuscripts from different historical periods and cultures. The goal is to identify the most suitable method or training techniques for recognizing ciphered manuscripts and to provide insights into the challenges and opportunities in this field of research. We evaluate the performance of these models on several datasets of ciphered manuscripts and discuss their results. This study contributes to the development of more accurate and efficient methods for recognizing historical manuscripts for the preservation and dissemination of our cultural heritage. |
| Conference | HIP |
| Notes | DAG |
| Approved | no |
| Call Number | Admin @ si @ STC2023 |
| Serial | 3849 |

Permanent link to this record
| Field | Value |
|---|---|
| Author | Christian Keilstrup Ingwersen; Artur Xarles; Albert Clapes; Meysam Madadi; Janus Nortoft Jensen; Morten Rieger Hannemose; Anders Bjorholm Dahl; Sergio Escalera |
| Title | Video-based Skill Assessment for Golf: Estimating Golf Handicap |
| Type | Conference Article |
| Year | 2023 |
| Publication | Proceedings of the 6th International Workshop on Multimedia Content Analysis in Sports |
| Pages | 31-39 |
| Abstract | Automated skill assessment in sports using video-based analysis holds great potential for revolutionizing coaching methodologies. This paper focuses on the problem of skill determination in golfers by leveraging deep learning models applied to a large database of video recordings of golf swings. We investigate different regression, ranking and classification based methods and compare to a simple baseline approach. The performance is evaluated using mean squared error (MSE) as well as computing the percentages of correctly ranked pairs based on the Kendall correlation. Our results demonstrate an improvement over the baseline, with a 35% lower mean squared error and 68% correctly ranked pairs. However, achieving fine-grained skill assessment remains challenging. This work contributes to the development of AI-driven coaching systems and advances the understanding of video-based skill determination in the context of golf. |
| Address | Ottawa; Canada; October 2023 |
| Conference | MMSports |
| Notes | HUPBA |
| Approved | no |
| Call Number | Admin @ si @ KXC2023 |
| Serial | 3929 |

Permanent link to this record
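The "percentage of correctly ranked pairs" used in the abstract above is the pairwise agreement underlying the Kendall correlation: for every pair of players, check whether the model orders them the same way as the ground truth. A small sketch with hypothetical toy handicaps and scores (not the paper's data or code):

```python
from itertools import combinations

def ranked_pair_accuracy(true, pred):
    """Fraction of item pairs whose ordering in pred matches true.
    Pairs tied in either list are skipped."""
    ok = total = 0
    for i, j in combinations(range(len(true)), 2):
        dt, dp = true[i] - true[j], pred[i] - pred[j]
        if dt == 0 or dp == 0:
            continue  # skip ties
        total += 1
        if (dt > 0) == (dp > 0):
            ok += 1
    return ok / total

# Hypothetical ground-truth handicaps and model estimates for four golfers
true = [2.0, 10.0, 18.0, 25.0]
pred = [3.1, 9.5, 26.0, 20.0]
print(ranked_pair_accuracy(true, pred))  # 5 of 6 pairs ordered correctly
```

A figure such as "68% correctly ranked pairs" is this quantity computed over all pairs in the test set.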
| Field | Value |
|---|---|
| Author | Artur Xarles; Sergio Escalera; Thomas B. Moeslund; Albert Clapes |
| Title | ASTRA: An Action Spotting TRAnsformer for Soccer Videos |
| Type | Conference Article |
| Year | 2023 |
| Publication | Proceedings of the 6th International Workshop on Multimedia Content Analysis in Sports |
| Pages | 93–102 |
| Abstract | In this paper, we introduce ASTRA, a Transformer-based model designed for the task of Action Spotting in soccer matches. ASTRA addresses several challenges inherent in the task and dataset, including the requirement for precise action localization, the presence of a long-tail data distribution, non-visibility in certain actions, and inherent label noise. To do so, ASTRA incorporates (a) a Transformer encoder-decoder architecture to achieve the desired output temporal resolution and to produce precise predictions, (b) a balanced mixup strategy to handle the long-tail distribution of the data, (c) an uncertainty-aware displacement head to capture the label variability, and (d) input audio signal to enhance detection of non-visible actions. Results demonstrate the effectiveness of ASTRA, achieving a tight Average-mAP of 66.82 on the test set. Moreover, in the SoccerNet 2023 Action Spotting challenge, we secure the 3rd position with an Average-mAP of 70.21 on the challenge set. |
| Address | Ottawa; Canada; October 2023 |
| Conference | MMSports |
| Notes | HUPBA |
| Approved | no |
| Call Number | Admin @ si @ XEM2023 |
| Serial | 3970 |

Permanent link to this record
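Standard mixup, which the abstract above adapts with class balancing, blends two training examples and their labels with the same random convex weight, so the mixed label stays consistent with the mixed input. A generic plain-Python sketch of the vanilla technique (illustrative only, not the authors' balanced variant):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Blend two (feature vector, one-hot label) pairs with a shared
    mixing weight lam drawn from Beta(alpha, alpha)."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

random.seed(0)
# Hypothetical two-class example: features and one-hot labels
x, y = mixup([1.0, 0.0], [1, 0], [0.0, 1.0], [0, 1])
# x and y use the same lam, so the soft label still sums to 1
```

A balanced variant would additionally bias which pairs of examples get mixed toward rare classes; with the small alpha used here, lam tends to sit near 0 or 1, keeping most mixed samples close to a real one.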
| Field | Value |
|---|---|
| Author | David Vazquez; Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Antonio Lopez; Adriana Romero; Michal Drozdzal; Aaron Courville |
| Title | A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images |
| Type | Journal Article |
| Year | 2017 |
| Publication | Journal of Healthcare Engineering |
| Abbreviated Journal | JHCE |
| ISSN | 2040-2295 |
| Keywords | Colonoscopy images; Deep Learning; Semantic Segmentation |
| Abstract | Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are polyp miss-rate and inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCN). We perform a comparative study to show that FCN significantly outperform, without any further post-processing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization. |
| Notes | ADAS; MV; 600.075; 600.085; 600.076; 601.281; 600.118 |
| Approved | no |
| Call Number | VBS2017b |
| Serial | 2940 |

Permanent link to this record
| Field | Value |
|---|---|
| Author | Marta Diez-Ferrer; Arturo Morales; Rosa Lopez Lisbona; Noelia Cubero; Cristian Tebe; Susana Padrones; Samantha Aso; Jordi Dorca; Debora Gil; Antoni Rosell |
| Title | Ultrathin Bronchoscopy with and without Virtual Bronchoscopic Navigation: Influence of Segmentation on Diagnostic Yield |
| Type | Journal Article |
| Year | 2019 |
| Publication | Respiration |
| Abbreviated Journal | RES |
| Volume | 97 |
| Issue | 3 |
| Pages | 252-258 |
| Keywords | Lung cancer; Peripheral lung lesion; Diagnosis; Bronchoscopy; Ultrathin bronchoscopy; Virtual bronchoscopic navigation |
| Abstract | Background: Bronchoscopy is a safe technique for diagnosing peripheral pulmonary lesions (PPLs), and virtual bronchoscopic navigation (VBN) helps guide the bronchoscope to PPLs. Objectives: We aimed to compare the diagnostic yield of VBN-guided and unguided ultrathin bronchoscopy (UTB) and explore clinical and technical factors associated with better results. We developed a diagnostic algorithm for deciding whether to use VBN to reach PPLs or choose an alternative diagnostic approach. Methods: We compared diagnostic yield between VBN-UTB (prospective cases) and unguided UTB (historical controls) and analyzed the VBN-UTB subgroup to identify clinical and technical variables that could predict the success of VBN-UTB. Results: Fifty-five cases and 110 controls were included. The overall diagnostic yield did not differ between the VBN-guided and unguided arms (47 and 40%, respectively; p = 0.354). Although the yield was slightly higher for PPLs ≤20 mm in the VBN-UTB arm, the difference was not significant (p = 0.069). No other clinical characteristics were associated with a higher yield in a subgroup analysis, but an 85% diagnostic yield was observed when segmentation was optimal and the PPL was endobronchial (vs. 30% when segmentation was suboptimal and 20% when segmentation was optimal but the PPL was extrabronchial). Conclusions: VBN-guided UTB is not superior to unguided UTB. A greater impact of VBN-guided over unguided UTB is highly dependent on both segmentation quality and an endobronchial location of the PPL. Segmentation quality should be considered before starting a procedure, when an alternative technique that may improve yield can be chosen, saving time and resources. |
| Notes | IAM; 600.145; 600.139 |
| Approved | no |
| Call Number | Admin @ si @ DML2019 |
| Serial | 3134 |

Permanent link to this record
| Field | Value |
|---|---|
| Author | Sonia Baeza; R. Domingo; M. Salcedo; G. Moragas; J. Deportos; I. Garcia Olive; Carles Sanchez; Debora Gil; Antoni Rosell |
| Title | Artificial Intelligence to Optimize Pulmonary Embolism Diagnosis During Covid-19 Pandemic by Perfusion SPECT/CT, a Pilot Study |
| Type | Journal Article |
| Year | 2021 |
| Publication | American Journal of Respiratory and Critical Care Medicine |
| Notes | IAM; 600.145 |
| Approved | no |
| Call Number | Admin @ si @ BDS2021 |
| Serial | 3591 |

Permanent link to this record