Author: Mohammad N. S. Jahromi; Morten Bojesen Bonderup; Maryam Asadi-Aghbolaghi; Egils Avots; Kamal Nasrollahi; Sergio Escalera; Shohreh Kasaei; Thomas B. Moeslund; Gholamreza Anbarjafari
Title: Automatic Access Control Based on Face and Hand Biometrics in a Non-cooperative Context
Type: Conference Article
Year: 2018
Publication: IEEE Winter Applications of Computer Vision Workshops
Pages: 28-36
Keywords: IEEE Winter Applications of Computer Vision Workshops
Abstract: Automatic access control systems (ACS) based on human biometrics or physical tokens are widely employed in public and private areas. Yet these systems, in their conventional forms, require active interaction from the users. In scenarios where users do not cooperate, these systems are challenged. Failure to cooperate with a biometric system might be intentional, or might occur because users are incapable of handling the interaction procedure or simply forget to cooperate, for example due to an illness such as dementia. This work introduces a challenging bimodal database comprising face and hand information of users as they approach a door and open it by its handle in a non-cooperative context. We define two protocols (an easy one and a challenging one) for using the database, and report results for many baseline methods on it, including both deep learning techniques and conventional methods. The obtained results show the merit of the proposed database and the challenging nature of access control with non-cooperative users.
Address: Lake Tahoe, USA; March 2018
Conference: WACVW
Notes: HUPBA; 602.133
Approved: no
Call Number: Admin @ si @ JBA2018
Serial: 3121
 

 
Author: Jianzhu Guo; Zhen Lei; Jun Wan; Egils Avots; Noushin Hajarolasvadi; Boris Knyazev; Artem Kuharenko; Julio C. S. Jacques Junior; Xavier Baro; Hasan Demirel; Sergio Escalera; Juri Allik; Gholamreza Anbarjafari
Title: Dominant and Complementary Emotion Recognition from Still Images of Faces
Type: Journal Article
Year: 2018
Publication: IEEE Access
Abbreviated Journal: ACCESS
Volume: 6
Pages: 26391-26403
Abstract: Emotion recognition has a key role in affective computing. Recently, fine-grained emotion analysis, such as compound facial expression of emotions, has attracted great interest from researchers working on affective computing. A compound facial emotion includes dominant and complementary emotions (e.g., happily-disgusted and sadly-fearful), which is more detailed than the seven classical facial emotions (e.g., happy, disgust, and so on). Current studies on compound emotions are limited to data sets with a small number of categories and unbalanced data distributions, with labels obtained automatically by machine-learning-based algorithms, which can lead to inaccuracies. To address these problems, we released the iCV-MEFED data set, which includes 50 classes of compound emotions and labels assessed by psychologists. The task is challenging due to the high similarity of compound facial emotions from different categories. In addition, we organized a challenge based on the proposed iCV-MEFED data set, held at the FG 2017 workshop. In this paper, we analyze the top three winning methods and perform further detailed experiments on the proposed data set. Experiments indicate that pairs of compound emotions (e.g., surprisingly-happy vs. happily-surprised) are more difficult to recognize than the seven basic emotions. However, we hope the proposed data set can help pave the way for further research on compound facial emotion recognition.
Notes: HUPBA; no proj
Approved: no
Call Number: Admin @ si @ GLW2018
Serial: 3122
 

 
Author: Pichao Wang; Wanqing Li; Philip Ogunbona; Jun Wan; Sergio Escalera
Title: RGB-D-based Human Motion Recognition with Deep Learning: A Survey
Type: Journal Article
Year: 2018
Publication: Computer Vision and Image Understanding
Abbreviated Journal: CVIU
Volume: 171
Pages: 118-139
Keywords: Human motion recognition; RGB-D data; Deep learning; Survey
Abstract: Human motion recognition is one of the most important branches of human-centered research. In recent years, motion recognition based on RGB-D data has attracted much attention. Along with developments in artificial intelligence, deep learning techniques have gained remarkable success in computer vision. In particular, convolutional neural networks (CNNs) have achieved great success on image-based tasks, while recurrent neural networks (RNNs) are renowned for sequence-based problems. Deep learning methods based on the CNN and RNN architectures have accordingly been adopted for motion recognition using RGB-D data. This paper presents a detailed overview of recent advances in RGB-D-based motion recognition. The reviewed methods are broadly categorized into four groups, depending on the modality adopted for recognition: RGB-based, depth-based, skeleton-based, and RGB+D-based. As a survey focused on the application of deep learning to RGB-D-based motion recognition, we explicitly discuss the advantages and limitations of existing techniques. In particular, we highlight methods for encoding the spatial-temporal-structural information inherent in video sequences, and discuss potential directions for future research.
Notes: HUPBA; no proj
Approved: no
Call Number: Admin @ si @ WLO2018
Serial: 3123
 

 
Author: Jelena Gorbova; Egils Avots; Iiris Lusi; Mark Fishel; Sergio Escalera; Gholamreza Anbarjafari
Title: Integrating Vision and Language for First Impression Personality Analysis
Type: Journal Article
Year: 2018
Publication: IEEE Multimedia
Abbreviated Journal: MULTIMEDIA
Volume: 25
Issue: 2
Pages: 24-33
Abstract: The authors present a novel methodology for analyzing integrated audiovisual signals and language to assess a person's personality. An evaluation of the proposed multimodal method using a job candidate screening system that predicts five personality traits from a short video demonstrates the method's effectiveness.
Notes: HUPBA; 602.133
Approved: no
Call Number: Admin @ si @ GAL2018
Serial: 3124
 

 
Author: Albert Clapes; Alex Pardo; Oriol Pujol; Sergio Escalera
Title: Action detection fusing multiple Kinects and a WIMU: an application to in-home assistive technology for the elderly
Type: Journal Article
Year: 2018
Publication: Machine Vision and Applications
Abbreviated Journal: MVAP
Volume: 29
Issue: 5
Pages: 765-788
Keywords: Multimodal activity detection; Computer vision; Inertial sensors; Dense trajectories; Dynamic time warping; Assistive technology
Abstract: We present a vision-inertial system that combines two RGB-depth devices with a wearable inertial movement unit in order to detect activities of daily living. From multi-view videos, we extract dense trajectories enriched with a histogram-of-normals description computed from the depth cue, and bag them into multi-view codebooks. In the subsequent classification step, a multi-class support vector machine with an RBF-χ² kernel combines the descriptions at kernel level. Action detection in the videos uses a sliding-window approach. In parallel, we extract acceleration, rotation-angle, and jerk features from the inertial data collected by the wearable placed on the user's dominant wrist. For gesture spotting, dynamic time warping is applied and the aligning costs to a set of pre-selected gesture sub-classes are thresholded to determine possible detections. The outputs of the two modules are combined in a late-fusion fashion. The system is validated in a real-case scenario with elderly people from a nursing home. Learning-based fusion results improve on those from the single modalities, demonstrating the success of such a multimodal approach.
Notes: HUPBA; no proj
Approved: no
Call Number: Admin @ si @ CPP2018
Serial: 3125
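
The gesture-spotting step described in the abstract (DTW aligning costs thresholded over a sliding window) lends itself to a compact illustration. The sketch below is a hypothetical, simplified reconstruction in Python over a 1-D signal, not the authors' implementation: window size, step, threshold, and template names are placeholders, and the feature extraction and SVM video module are omitted.

```python
import numpy as np

def dtw_cost(seq, template):
    """Length-normalized dynamic-time-warping cost between two 1-D sequences."""
    n, m = len(seq), len(template)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq[i - 1] - template[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

def spot_gestures(stream, templates, threshold, win=64, step=8):
    """Slide a window over the signal; flag windows whose best aligning
    cost to any gesture template falls below the threshold."""
    detections = []
    for s in range(0, len(stream) - win + 1, step):
        window = stream[s:s + win]
        name, cost = min(((k, dtw_cost(window, t)) for k, t in templates.items()),
                         key=lambda kv: kv[1])
        if cost < threshold:
            detections.append((s, s + win, name, cost))
    return detections

rng = np.random.default_rng(0)
stream = rng.normal(size=400)                       # stand-in inertial signal
templates = {"wave": rng.normal(size=48), "drink": rng.normal(size=56)}
print(spot_gestures(stream, templates, threshold=0.5))
```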
 

 
Author: Jun Wan; Sergio Escalera; Francisco Perales; Josef Kittler
Title: Articulated Motion and Deformable Objects
Type: Journal Article
Year: 2018
Publication: Pattern Recognition
Abbreviated Journal: PR
Volume: 79
Pages: 55-64
Abstract: This guest editorial introduces the twenty-two papers accepted for this Special Issue on Articulated Motion and Deformable Objects (AMDO). They are grouped into four main categories within the field of AMDO: human motion analysis (action/gesture), human pose estimation, deformable shape segmentation, and face analysis. For each of the four topics, a survey of recent developments in the field is presented. The accepted papers are briefly introduced in the context of this survey. They contribute novel methods, algorithms with improved performance as measured on benchmark datasets, and two new datasets for hand action detection and human posture analysis. The special issue should be of high relevance to readers interested in AMDO recognition and should promote future research directions in the field.
Notes: HUPBA; no proj
Approved: no
Call Number: Admin @ si @ WEP2018
Serial: 3126
 

 
Author: Cesar de Souza
Title: Action Recognition in Videos: Data-efficient approaches for supervised learning of human action classification models for video
Type: Book Whole
Year: 2018
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: In this dissertation, we explore different ways to perform human action recognition in video clips. We focus on data efficiency, proposing new approaches that alleviate the need for laborious and time-consuming manual data annotation. In the first part of this dissertation, we start by analyzing previous state-of-the-art models, comparing their differences and similarities in order to pinpoint where their real strengths come from. Leveraging this information, we then proceed to boost the classification accuracy of shallow models to levels that rival deep neural networks. We introduce hybrid video classification architectures based on carefully designed unsupervised representations of handcrafted spatiotemporal features classified by supervised deep networks. We show in our experiments that our hybrid models combine the best of both worlds: they are data efficient (trained on 150 to 10,000 short clips) and yet improve significantly on the state of the art, including deep models trained on millions of manually labeled images and videos. In the second part of this research, we investigate the generation of synthetic training data for action recognition, as it has recently shown promising results for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation and other computer graphics techniques of modern game engines. We generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for "Procedural Human Action Videos". It contains a total of 39,982 videos, with more than 1,000 examples for each action of 35 categories. Our approach is not limited to existing motion capture sequences, and we procedurally define 14 synthetic actions. We then introduce deep multi-task representation learning architectures to mix synthetic and real videos, even if the action categories differ. Our experiments on the UCF-101 and HMDB-51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance, outperforming fine-tuning of state-of-the-art unsupervised generative models of videos.
Address: April 2018
Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey
Editor: Antonio Lopez; Naila Murray
Notes: ADAS; 600.118
Approved: no
Call Number: Admin @ si @ Sou2018
Serial: 3127
 

 
Author: Joan Serrat; Felipe Lumbreras; Idoia Ruiz
Title: Learning to measure for preshipment garment sizing
Type: Journal Article
Year: 2018
Publication: Measurement
Abbreviated Journal: MEASURE
Volume: 130
Pages: 327-339
Keywords: Apparel; Computer vision; Structured prediction; Regression
Abstract: Clothing is still mostly manufactured by hand, resulting in discrepancies between nominal and real dimensions, and potentially in ill-fitting garments. Hence, it is common in the apparel industry to take measurements manually at preshipment time. We present an automatic method to obtain such measurements from a single image of a garment, which speeds up this task. It is generic and extensible in the sense that it does not depend explicitly on the garment shape or type. Instead, it learns, through a probabilistic graphical model, to identify the different contour parts. Subsequently, a set of Lasso regressors, one per desired measurement, predicts their actual values. We present results on a dataset of 130 images of jackets and 98 of pants, of varying sizes and styles, obtaining mean absolute errors of 1.17 and 1.22 cm, respectively.
Notes: ADAS; MSIAU; 600.122; 600.118
Approved: no
Call Number: Admin @ si @ SLR2018
Serial: 3128
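
The "one Lasso regressor per measurement" stage of the abstract can be sketched in a few lines. The snippet below is a minimal illustration with synthetic stand-in features rather than the paper's contour descriptors; the number of features, number of measures, and alpha value are arbitrary placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(130, 40))                  # stand-in contour descriptors
W = rng.normal(size=(40, 5))
Y = X @ W + 0.1 * rng.normal(size=(130, 5))     # 5 synthetic garment measures (cm)

# One sparse linear regressor per desired measure, mirroring the paper's setup.
models = [Lasso(alpha=0.05).fit(X, Y[:, k]) for k in range(Y.shape[1])]
preds = np.column_stack([m.predict(X) for m in models])
print(np.abs(preds - Y).mean(axis=0))           # per-measure mean absolute error
```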
 

 
Author: Hassan Ahmed Sial; S. Sancho; Ramon Baldrich; Robert Benavente; Maria Vanrell
Title: Color-based data augmentation for Reflectance Estimation
Type: Conference Article
Year: 2018
Publication: 26th Color Imaging Conference
Pages: 284-289
Abstract: Deep convolutional architectures have been shown to be successful frameworks for solving generic computer vision problems, but the estimation of intrinsic reflectance from a single image is not yet a solved problem. Encoder-decoder architectures are well suited to pixel-wise reflectance estimation, although they usually suffer from a lack of large datasets. Lack of data can be partially addressed by data augmentation; however, the usual techniques focus on geometric changes, which do not help reflectance estimation. In this paper we propose a color-based data augmentation technique that extends the training data by increasing the variability of chromaticity. Rotations in the red-green/blue-yellow plane of an opponent color space extend the training set in a coherent and sound way, improving the network's generalization capability for reflectance estimation. We perform experiments on the Sintel dataset showing that our color-based augmentation increases performance and outperforms one of the state-of-the-art methods.
Address: Vancouver; November 2018
Conference: CIC
Notes: CIC
Approved: no
Call Number: Admin @ si @ SSB2018a
Serial: 3129
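
The augmentation itself can be sketched compactly. The code below is a minimal interpretation, assuming one common orthonormal opponent transform (O1 red-green, O2 blue-yellow, O3 intensity): chromaticity is rotated in the (O1, O2) plane while the intensity axis is left untouched. The paper's exact opponent space and angle sampling may differ.

```python
import numpy as np

# Orthonormal RGB -> opponent basis: O1 (red-green), O2 (blue-yellow),
# O3 (intensity). Orthonormality means the inverse transform is just M^T,
# i.e. right-multiplication by M for row vectors.
M = np.array([[1/np.sqrt(2), -1/np.sqrt(2),  0.0],
              [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
              [1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)]])

def rotate_chromaticity(img, angle_deg):
    """img: HxWx3 float RGB in [0, 1]; rotate the chromaticity plane only."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    opp = img.reshape(-1, 3) @ M.T        # to opponent space
    out = (opp @ R.T) @ M                 # rotate chromaticity, back to RGB
    return np.clip(out.reshape(img.shape), 0.0, 1.0)

augmented = rotate_chromaticity(np.random.rand(4, 4, 3), angle_deg=30.0)
```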
 

 
Author: Yaxing Wang; Chenshen Wu; Luis Herranz; Joost Van de Weijer; Abel Gonzalez-Garcia; Bogdan Raducanu
Title: Transferring GANs: generating images from limited data
Type: Conference Article
Year: 2018
Publication: 15th European Conference on Computer Vision
Volume: 11210
Pages: 220-236
Keywords: Generative adversarial networks; Transfer learning; Domain adaptation; Image generation
Abstract: Transferring knowledge of pre-trained networks to new domains by means of fine-tuning is a widely used practice for applications based on discriminative models. To the best of our knowledge, this practice has not been studied within the context of generative deep networks. Therefore, we study domain adaptation applied to image generation with generative adversarial networks. We evaluate several aspects of domain adaptation, including the impact of target domain size, the relative distance between source and target domain, and the initialization of conditional GANs. Our results show that using knowledge from pre-trained networks can shorten the convergence time and significantly improve the quality of the generated images, especially when target data is limited. We show that these conclusions can also be drawn for conditional GANs, even when the pre-trained model was trained without conditioning. Our results also suggest that density is more important than diversity, and that a dataset with one or a few densely sampled classes is a better source model than more diverse datasets such as ImageNet or Places.
Address: Munich; September 2018
Abbreviated Series Title: LNCS
Conference: ECCV
Notes: LAMP; 600.109; 600.106; 600.120
Approved: no
Call Number: Admin @ si @ WWH2018a
Serial: 3130
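
The transfer recipe studied in the paper, initializing both generator and discriminator from a source-domain GAN and then fine-tuning adversarially on the limited target data, can be sketched as follows. Everything here (tiny MLP networks, latent size, random stand-in data, checkpoint paths) is a toy placeholder, not the authors' architecture; the commented load_state_dict lines mark where the pre-trained source weights would come in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in networks; in the transfer setting both would be initialized
# from a source-domain GAN checkpoint before fine-tuning on target data.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
# G.load_state_dict(torch.load("source_G.pt"))  # hypothetical paths: the
# D.load_state_dict(torch.load("source_D.pt"))  # actual transfer step

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
target = torch.rand(32, 784) * 2 - 1            # stand-in small target set
ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)

for step in range(100):                         # adversarial fine-tuning loop
    fake = G(torch.randn(32, 64))
    # Discriminator step: real target data vs. generated samples.
    loss_d = F.binary_cross_entropy_with_logits(D(target), ones) + \
             F.binary_cross_entropy_with_logits(D(fake.detach()), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: fool the updated discriminator.
    loss_g = F.binary_cross_entropy_with_logits(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```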
 

 
Author: Yaxing Wang; Joost Van de Weijer; Luis Herranz
Title: Mix and match networks: encoder-decoder alignment for zero-pair image translation
Type: Conference Article
Year: 2018
Publication: 31st IEEE Conference on Computer Vision and Pattern Recognition
Pages: 5467-5476
Abstract: We address the problem of image translation between domains or modalities for which no direct paired data is available (i.e. zero-pair translation). We propose mix and match networks, based on multiple encoders and decoders aligned in such a way that other encoder-decoder pairs can be composed at test time to perform unseen image translation tasks between domains or modalities for which explicit paired samples were not seen during training. We study the impact of autoencoders, side information and losses in improving the alignment and transferability of trained pairwise translation models to unseen translations. We show our approach is scalable and can perform colorization and style transfer between unseen combinations of domains. We evaluate our system in a challenging cross-modal setting where semantic segmentation is estimated from depth images, without explicit access to any depth-semantic segmentation training pairs. Our model outperforms baselines based on pix2pix and CycleGAN models.
Address: Salt Lake City, USA; June 2018
Conference: CVPR
Notes: LAMP; 600.109; 600.106; 600.120
Approved: no
Call Number: Admin @ si @ WWH2018b
Serial: 3131
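
Once training has aligned all encoders and decoders around a shared latent space, the test-time composition of an unseen pair reduces to routing one encoder into another decoder. A toy sketch of that wiring (untrained single-layer modules with hypothetical feature sizes, training omitted entirely):

```python
import torch
import torch.nn as nn

latent = 32
# One encoder/decoder per domain or modality, all meeting in a shared
# latent space; training (omitted) is what aligns them.
enc = {"rgb":   nn.Linear(300, latent),
       "depth": nn.Linear(100, latent)}
dec = {"rgb":   nn.Linear(latent, 300),
       "seg":   nn.Linear(latent, 50)}

def translate(x, src, dst):
    """Zero-pair translation: compose the src encoder with the dst decoder,
    even if (src, dst) was never trained together as a pair."""
    return dec[dst](enc[src](x))

depth_features = torch.randn(1, 100)
seg_prediction = translate(depth_features, "depth", "seg")  # unseen pair
```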
 

 
Author: Esmitt Ramirez; Carles Sanchez; Agnes Borras; Marta Diez-Ferrer; Antoni Rosell; Debora Gil
Title: BronchoX: bronchoscopy exploration software for biopsy intervention planning
Type: Journal Article
Year: 2018
Publication: Healthcare Technology Letters
Abbreviated Journal: HTL
Volume: 5
Issue: 5
Pages: 177-182
Abstract: Virtual bronchoscopy (VB) is a non-invasive exploration tool for intervention planning and navigation of possible pulmonary lesions (PLs). VB software involves locating a PL and calculating a route, starting from the trachea, to reach it. Selecting VB software can be a complex process, and there is no consensus in the medical software development community on which system or framework is best suited. The authors present Bronchoscopy Exploration (BronchoX), VB software for planning biopsy interventions that generates physician-readable instructions to reach the PLs. The solution is open source, multiplatform, and extensible with future functionality, designed by the authors' multidisciplinary research and development group. BronchoX combines different algorithms for segmentation, visualisation, and navigation of the respiratory tract. The reported results focus on testing the effectiveness of the proposal as exploration software and on measuring its accuracy as a guidance system for reaching PLs. To this end, 40 different virtual planning paths were created to guide physicians to distal bronchioles. The results show BronchoX to be functional exploration software and demonstrate that, by following simple instructions, it is possible to reach distal lesions from the trachea.
Notes: IAM; 600.096; 600.075; 601.323; 601.337; 600.145
Approved: no
Call Number: Admin @ si @ RSB2018a
Serial: 3132
 

 
Author: Debora Gil; Ruth Aris; Agnes Borras; Esmitt Ramirez; Rafael Sebastian; Mariano Vazquez
Title: Influence of fiber connectivity in simulations of cardiac biomechanics
Type: Journal Article
Year: 2019
Publication: International Journal of Computer Assisted Radiology and Surgery
Abbreviated Journal: IJCAR
Volume: 14
Issue: 1
Pages: 63-72
Keywords: Cardiac electromechanical simulations; Diffusion tensor imaging; Fiber connectivity
Abstract:
PURPOSE: Personalized computational simulations of the heart could open up new, improved approaches to diagnosis and surgery assistance systems. While it is fully recognized that myocardial fiber orientation is central to the construction of realistic computational models of cardiac electromechanics, the role of its overall architecture and connectivity remains unclear. Morphological studies show that the distribution of cardiac muscular fibers at the basal ring connects epicardium and endocardium. However, computational models simplify their distribution and disregard the basal loop. This work explores the influence of fiber distribution at different short-axis cuts on computational simulations.
METHODS: We used a highly parallelized computational solver to test different fiber models of ventricular muscular connectivity. We considered two rule-based mathematical models and a method of our own design that preserves basal connectivity as observed in experimental data. Simulated cardiac functional scores (rotation, torsion, and longitudinal shortening) were compared to experimental healthy ranges using generalized models (rotation) and Mahalanobis distances (shortening, torsion).
RESULTS: The probability of rotation was significantly lower for rule-based models [95% CI (0.13, 0.20)] than for experimental data [95% CI (0.23, 0.31)]. The Mahalanobis distance for experimental data was at the edge of the region enclosing 99% of the healthy population.
CONCLUSIONS: Cardiac electromechanical simulations of the heart with fibers extracted from experimental data produce functional scores closer to healthy ranges than rule-based models that disregard architecture connectivity.
Notes: IAM; 600.096; 601.323; 600.139; 600.145
Approved: no
Call Number: Admin @ si @ GAB2019a
Serial: 3133
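
The scoring of simulated torsion and shortening against healthy ranges via Mahalanobis distance can be illustrated briefly. All numbers below are fabricated placeholders, not the study's data: a hypothetical healthy sample provides the reference mean and covariance, a simulated case is scored against it, and a chi-square quantile gives the distance enclosing 99% of a Gaussian healthy population.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
# Hypothetical healthy reference: (torsion, longitudinal shortening) pairs.
healthy = rng.normal(loc=[12.0, -18.0], scale=[2.0, 3.0], size=(40, 2))
mu = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def mahalanobis(x):
    """Distance of a score vector from the healthy reference distribution."""
    d = np.asarray(x) - mu
    return float(np.sqrt(d @ cov_inv @ d))

simulated = [10.5, -14.0]                      # scores from one simulation
radius_99 = np.sqrt(chi2.ppf(0.99, df=2))      # encloses 99% of healthy pop.
print(mahalanobis(simulated), radius_99)
```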
 

 
Author: Marta Diez-Ferrer; Arturo Morales; Rosa Lopez Lisbona; Noelia Cubero; Cristian Tebe; Susana Padrones; Samantha Aso; Jordi Dorca; Debora Gil; Antoni Rosell
Title: Ultrathin Bronchoscopy with and without Virtual Bronchoscopic Navigation: Influence of Segmentation on Diagnostic Yield
Type: Journal Article
Year: 2019
Publication: Respiration
Abbreviated Journal: RES
Volume: 97
Issue: 3
Pages: 252-258
Keywords: Lung cancer; Peripheral lung lesion; Diagnosis; Bronchoscopy; Ultrathin bronchoscopy; Virtual bronchoscopic navigation
Abstract: Background: Bronchoscopy is a safe technique for diagnosing peripheral pulmonary lesions (PPLs), and virtual bronchoscopic navigation (VBN) helps guide the bronchoscope to PPLs. Objectives: We aimed to compare the diagnostic yield of VBN-guided and unguided ultrathin bronchoscopy (UTB) and explore clinical and technical factors associated with better results. We developed a diagnostic algorithm for deciding whether to use VBN to reach PPLs or choose an alternative diagnostic approach. Methods: We compared diagnostic yield between VBN-UTB (prospective cases) and unguided UTB (historical controls) and analyzed the VBN-UTB subgroup to identify clinical and technical variables that could predict the success of VBN-UTB. Results: Fifty-five cases and 110 controls were included. The overall diagnostic yield did not differ between the VBN-guided and unguided arms (47 and 40%, respectively; p = 0.354). Although the yield was slightly higher for PPLs ≤20 mm in the VBN-UTB arm, the difference was not significant (p = 0.069). No other clinical characteristics were associated with a higher yield in a subgroup analysis, but an 85% diagnostic yield was observed when segmentation was optimal and the PPL was endobronchial (vs. 30% when segmentation was suboptimal and 20% when segmentation was optimal but the PPL was extrabronchial). Conclusions: VBN-guided UTB is not superior to unguided UTB. A greater impact of VBN-guided over unguided UTB is highly dependent on both segmentation quality and an endobronchial location of the PPL. Segmentation quality should be considered before starting a procedure, when an alternative technique that may improve yield can be chosen, saving time and resources.
Notes: IAM; 600.145; 600.139
Approved: no
Call Number: Admin @ si @ DML2019
Serial: 3134
 

 
Author: Marta Diez-Ferrer; Debora Gil; Cristian Tebe; Carles Sanchez
Title: Positive Airway Pressure to Enhance Computed Tomography Imaging for Airway Segmentation for Virtual Bronchoscopic Navigation
Type: Journal Article
Year: 2018
Publication: Respiration
Abbreviated Journal: RES
Volume: 96
Issue: 6
Pages: 525-534
Keywords: Multidetector computed tomography; Bronchoscopy; Continuous positive airway pressure; Image enhancement; Virtual bronchoscopic navigation
Abstract:
RATIONALE: Virtual bronchoscopic navigation (VBN) guidance to peripheral pulmonary lesions is often limited by insufficient segmentation of the peripheral airways.
OBJECTIVES: To test the effect of applying positive airway pressure (PAP) during CT acquisition to improve segmentation, particularly at end-expiration.
METHODS: CT acquisitions in inspiration and expiration with 4 PAP protocols were recorded prospectively and compared to baseline inspiratory acquisitions in 20 patients. The 4 protocols explored differences between devices (flow vs. turbine), exposures (within seconds vs. 15 min) and pressure levels (10 vs. 14 cmH2O). Segmentation quality was evaluated with the number of airways and the number of endpoints reached. A generalized mixed-effects model explored the estimated effect of each protocol.
MEASUREMENTS AND MAIN RESULTS: Patient characteristics and lung function did not significantly differ between protocols. Compared to baseline inspiratory acquisitions, expiratory acquisitions after 15 min of 14 cmH2O PAP segmented 1.63-fold more airways (95% CI 1.07-2.48; p = 0.018) and reached 1.34-fold more endpoints (95% CI 1.08-1.66; p = 0.004). Inspiratory acquisitions performed immediately under 10 cmH2O PAP reached 1.20-fold (95% CI 1.09-1.33; p < 0.001) more endpoints; after 15 min the increase was 1.14-fold (95% CI 1.05-1.24; p < 0.001).
CONCLUSIONS: CT acquisitions with PAP segment more airways and reach more endpoints than baseline inspiratory acquisitions. The improvement is particularly evident at end-expiration after 15 min of 14 cmH2O PAP. Further studies must confirm that the improvement increases diagnostic yield when using VBN to evaluate peripheral pulmonary lesions.
Notes: IAM; 600.145
Approved: no
Call Number: Admin @ si @ DGT2018
Serial: 3135
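
The fold-change figures above come from a generalized mixed-effects model with a log link; how a log-link coefficient reads as a fold-change can be sketched with a plain Poisson regression, deliberately ignoring the paired, per-patient structure that the paper's mixed model accounts for. All counts and labels below are fabricated placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Fabricated airway counts per patient under two acquisition protocols.
df = pd.DataFrame({
    "patient": np.tile(np.arange(20), 2),
    "protocol": ["baseline"] * 20 + ["pap14_15min"] * 20,
    "airways": np.r_[rng.poisson(60, 20), rng.poisson(95, 20)],
})
# Poisson regression with a log link: exp(coefficient) is the fold-change
# in segmented airways relative to the baseline protocol.
model = smf.poisson("airways ~ C(protocol)", data=df).fit()
print(np.exp(model.params))
print(np.exp(model.conf_int()))   # 95% CI on the fold-change scale
```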