# Records
**Author:** Marta Diez-Ferrer; Debora Gil; Cristian Tebe; Carles Sanchez
**Title:** Positive Airway Pressure to Enhance Computed Tomography Imaging for Airway Segmentation for Virtual Bronchoscopic Navigation
**Type:** Journal Article
**Year:** 2018
**Publication:** Respiration (RES)
**Volume:** 96, **Issue:** 6, **Pages:** 525–534
**Keywords:** Multidetector computed tomography; Bronchoscopy; Continuous positive airway pressure; Image enhancement; Virtual bronchoscopic navigation
**Abstract:** RATIONALE: Virtual bronchoscopic navigation (VBN) guidance to peripheral pulmonary lesions is often limited by insufficient segmentation of the peripheral airways. OBJECTIVES: To test the effect of applying positive airway pressure (PAP) during CT acquisition to improve segmentation, particularly at end-expiration. METHODS: CT acquisitions in inspiration and expiration with 4 PAP protocols were recorded prospectively and compared to baseline inspiratory acquisitions in 20 patients. The 4 protocols explored differences between devices (flow vs. turbine), exposures (within seconds vs. 15 min) and pressure levels (10 vs. 14 cmH2O). Segmentation quality was evaluated with the number of airways and number of endpoints reached. A generalized mixed-effects model explored the estimated effect of each protocol. MEASUREMENTS AND MAIN RESULTS: Patient characteristics and lung function did not significantly differ between protocols. Compared to baseline inspiratory acquisitions, expiratory acquisitions after 15 min of 14 cmH2O PAP segmented 1.63-fold more airways (95% CI 1.07-2.48; p = 0.018) and reached 1.34-fold more endpoints (95% CI 1.08-1.66; p = 0.004). Inspiratory acquisitions performed immediately under 10 cmH2O PAP reached 1.20-fold (95% CI 1.09-1.33; p < 0.001) more endpoints; after 15 min the increase was 1.14-fold (95% CI 1.05-1.24; p < 0.001). CONCLUSIONS: CT acquisitions with PAP segment more airways and reach more endpoints than baseline inspiratory acquisitions. The improvement is particularly evident at end-expiration after 15 min of 14 cmH2O PAP. Further studies must confirm that the improvement increases diagnostic yield when using VBN to evaluate peripheral pulmonary lesions.
**Notes:** IAM; 600.145
**Approved:** no
**Call Number:** Admin @ si @ DGT2018
**Serial:** 3135

---
**Author:** Sanket Biswas; Pau Riba; Josep Llados; Umapada Pal
**Title:** Graph-Based Deep Generative Modelling for Document Layout Generation
**Type:** Conference Article
**Year:** 2021
**Publication:** 16th International Conference on Document Analysis and Recognition
**Volume:** 12917, **Pages:** 525–537
**Abstract:** One of the major prerequisites for any deep learning approach is the availability of large-scale training data. When dealing with scanned document images in real-world scenarios, the principal information about their content is carried by the layout itself. In this work, we propose an automated deep generative model that uses Graph Neural Networks (GNNs) to generate synthetic data with highly variable and plausible document layouts, which can be used to train document interpretation systems, especially in digital mailroom applications. It is also the first graph-based approach to the document layout generation task evaluated on administrative document images, in this case invoices.
**Address:** Lausanne, Switzerland; September 2021
**Series:** LNCS
**Notes:** DAG; 600.121; 600.140; 110.312
**Approved:** no
**Call Number:** Admin @ si @ BRL2021
**Serial:** 3676

---
**Author:** Felipe Lumbreras; Xavier Roca; Daniel Ponsa; Robert Benavente; Judit Martinez; Silvia Sanchez; Coen Antens; Juan J. Villanueva
**Title:** Visual Inspection of Safety Belts
**Type:** Conference Article
**Year:** 2001
**Publication:** International Conference on Quality Control by Artificial Vision
**Volume:** 2, **Pages:** 526–531
**Address:** France
**Conference:** QCAV
**Notes:** ADAS; ISE; CIC
**Approved:** no
**Call Number:** ADAS @ adas @ LRP2001
**Serial:** 122

---
**Author:** Olivier Penacchio
**Title:** Mixed Hodge Structures and Equivariant Sheaves on the Projective Plane
**Type:** Journal Article
**Year:** 2011
**Publication:** Mathematische Nachrichten (MN)
**Volume:** 284, **Issue:** 4, **Pages:** 526–542
**Keywords:** Mixed Hodge structures; equivariant sheaves; MSC (2010) Primary: 14C30, Secondary: 14F05, 14M25
**Abstract:** We describe an equivalence of categories between the category of mixed Hodge structures and a category of equivariant vector bundles on a toric model of the complex projective plane that satisfy a semistability condition. We then apply this correspondence to define an invariant that generalizes the notion of R-split mixed Hodge structure, and give calculations for the first cohomology group of possibly non-smooth or non-complete curves of genus 0 and 1. Finally, we describe some extension groups of mixed Hodge structures in terms of equivariant extensions of coherent sheaves.
**Publisher:** WILEY-VCH Verlag
**Editor:** R. Mennicken
**ISSN:** 1522-2616
**Notes:** CIC
**Approved:** no
**Call Number:** Admin @ si @ Pen2011
**Serial:** 1721

---
**Author:** Jialuo Chen; Pau Riba; Alicia Fornes; Juan Mas; Josep Llados; Joana Maria Pujadas-Mora
**Title:** Word-Hunter: A Gamesourcing Experience to Validate the Transcription of Historical Manuscripts
**Type:** Conference Article
**Year:** 2018
**Publication:** 16th International Conference on Frontiers in Handwriting Recognition
**Pages:** 528–533
**Keywords:** Crowdsourcing; Gamification; Handwritten documents; Performance evaluation
**Abstract:** Nowadays, there are still many handwritten historical documents in archives waiting to be transcribed and indexed. Since manual transcription is tedious and time-consuming, automatic transcription seems the path to follow. However, the performance of current handwriting recognition techniques is not perfect, so manual validation is mandatory. Crowdsourcing is a good strategy for manual validation, although it remains a tedious task. In this paper we analyze experiences based on gamification in order to propose and design a gamesourcing framework that increases the interest of users. Then, we describe and analyze our experience when validating the automatic transcription using the gamesourcing application. Moreover, thanks to the combination of clustering and handwriting recognition techniques, we can speed up the validation while maintaining the performance.
**Address:** Niagara Falls, USA; August 2018
**Conference:** ICFHR
**Notes:** DAG; 600.097; 603.057; 600.121
**Approved:** no
**Call Number:** Admin @ si @ CRF2018
**Serial:** 3169

---
**Author:** Klaus Broelemann; Anjan Dutta; Xiaoyi Jiang; Josep Llados
**Title:** Hierarchical graph representation for symbol spotting in graphical document images
**Type:** Conference Article
**Year:** 2012
**Publication:** Structural, Syntactic, and Statistical Pattern Recognition, Joint IAPR International Workshop
**Volume:** 7626, **Pages:** 529–538
**Abstract:** Symbol spotting can be defined as locating a given query symbol in a large collection of graphical documents. In this paper we present a hierarchical graph representation for symbols. This representation allows graph matching methods to deal with low-level vectorization errors and thus to perform robust symbol spotting. To show the potential of this approach, we conduct an experiment with the SESYD dataset.
**Address:** Miyajima-Itsukushima, Hiroshima
**Publisher:** Springer Berlin Heidelberg
**Series:** LNCS
**ISSN:** 0302-9743, **ISBN:** 978-3-642-34165-6
**Conference:** SSPR&SPR
**Notes:** DAG
**Approved:** no
**Call Number:** Admin @ si @ BDJ2012
**Serial:** 2126

---
**Author:** David Masip; Agata Lapedriza; Jordi Vitria
**Title:** Boosted Online Learning for Face Recognition
**Type:** Journal Article
**Year:** 2009
**Publication:** IEEE Transactions on Systems, Man and Cybernetics part B (TSMCB)
**Volume:** 39, **Issue:** 2, **Pages:** 530–538
**Abstract:** Face recognition applications commonly suffer from three main drawbacks: a reduced training set, information lying in high-dimensional subspaces, and the need to incorporate new people to recognize. In the recent literature, the extension of a face classifier to include new people in the model has been solved using online feature extraction techniques. The most successful of these approaches are extensions of principal component analysis or linear discriminant analysis. In the current paper, a new online boosting algorithm is introduced: a face recognition method that extends a boosting-based classifier by adding new classes while avoiding the need to retrain the classifier each time a new person joins the system. The classifier is learned using the multitask learning principle, where multiple verification tasks are trained together sharing the same feature space. New classes are added by taking advantage of the previously learned structure, so that adding them is not computationally demanding. The present proposal has been experimentally validated on two different facial datasets by comparing our approach with current state-of-the-art techniques. The results show that the proposed online boosting algorithm fares better in terms of final accuracy. In addition, the global performance does not decrease drastically even when the number of classes of the base problem is multiplied by eight.
**ISSN:** 1083-4419
**Notes:** OR; MV
**Approved:** no
**Call Number:** BCNPCL @ bcnpcl @ MLV2009
**Serial:** 1155

---
**Author:** Raul Gomez; Lluis Gomez; Jaume Gibert; Dimosthenis Karatzas
**Title:** Learning from #Barcelona Instagram data what Locals and Tourists post about its Neighbourhoods
**Type:** Conference Article
**Year:** 2018
**Publication:** 15th European Conference on Computer Vision Workshops
**Volume:** 11134, **Pages:** 530–544
**Abstract:** Massive tourism is becoming a big problem for some cities, such as Barcelona, due to its concentration in some neighborhoods. In this work we gather Instagram data related to Barcelona, consisting of image-caption pairs, and, using the text as a supervisory signal, we learn relations between images, words and neighborhoods. Our goal is to learn which visual elements appear in photos when people post about each neighborhood. We treat the data separately by language and show that this can be extrapolated to a separate analysis of tourists and locals, and that tourism is reflected in social media at a neighborhood level. The presented pipeline allows analyzing the differences between the images that tourists and locals associate with the different neighborhoods. The proposed method, which can be extended to other cities or subjects, proves that Instagram data can be used to train multi-modal (image and text) machine learning models that are useful for analyzing publications about a city at a neighborhood level. We publish the collected dataset, InstaBarcelona, and the code used in the analysis.
**Address:** Munich, Germany; September 2018
**Series:** LNCS
**Conference:** ECCVW
**Notes:** DAG; 600.129; 601.338; 600.121
**Approved:** no
**Call Number:** Admin @ si @ GGG2018b
**Serial:** 3176

---
**Author:** Thomas B. Moeslund; Sergio Escalera; Gholamreza Anbarjafari; Kamal Nasrollahi; Jun Wan
**Title:** Statistical Machine Learning for Human Behaviour Analysis
**Type:** Journal Article
**Year:** 2020
**Publication:** Entropy (ENTROPY)
**Volume:** 25, **Issue:** 5, **Pages:** 530
**Keywords:** action recognition; emotion recognition; privacy-aware
**Notes:** HuPBA; no proj
**Approved:** no
**Call Number:** Admin @ si @ MEA2020
**Serial:** 3441

---
**Author:** Mohamed Ilyes Lakhal; Albert Clapes; Sergio Escalera; Oswald Lanz; Andrea Cavallaro
**Title:** Residual Stacked RNNs for Action Recognition
**Type:** Conference Article
**Year:** 2018
**Publication:** 9th International Workshop on Human Behavior Understanding
**Pages:** 534–548
**Keywords:** Action recognition; Deep residual learning; Two-stream RNN
**Abstract:** Action recognition pipelines that use Recurrent Neural Networks (RNNs) are currently 5–10% less accurate than Convolutional Neural Networks (CNNs). While most works that use RNNs employ a 2D CNN on each frame to extract descriptors for action recognition, we extract spatiotemporal features from a 3D CNN and then learn the temporal relationship of these descriptors through a stacked residual recurrent neural network (Res-RNN). We introduce for the first time residual learning to counter the degradation problem in multi-layer RNNs, which have been successful for temporal aggregation in two-stream action recognition pipelines. Finally, we use a late fusion strategy to combine RGB and optical-flow data of the two-stream Res-RNN. Experimental results show that the proposed pipeline achieves competitive results on UCF-101 and state-of-the-art results for RNN-like architectures on the challenging HMDB-51 dataset.
**Address:** Munich; September 2018
**Conference:** ECCVW
**Notes:** HUPBA; no proj
**Approved:** no
**Call Number:** Admin @ si @ LCE2018b
**Serial:** 3206

---
**Author:** Fadi Dornaika; Angel Sappa
**Title:** Instantaneous 3D motion from image derivatives using the Least Trimmed Square Regression
**Type:** Journal Article
**Year:** 2009
**Publication:** Pattern Recognition Letters (PRL)
**Volume:** 30, **Issue:** 5, **Pages:** 535–543
**Abstract:** This paper presents a new technique for instantaneous 3D motion estimation. The main contributions are as follows. First, we show that the 3D camera or scene velocity can be retrieved from image derivatives only, assuming that the scene contains a dominant plane. Second, we propose a new robust algorithm that simultaneously provides the Least Trimmed Square solution and the percentage of inliers (the non-contaminated data). Experiments on both synthetic and real image sequences demonstrate the effectiveness of the developed method and show that the new robust approach can outperform classical robust schemes.
**Publisher:** Elsevier Science Inc.
**ISSN:** 0167-8655
**Notes:** ADAS
**Approved:** no
**Call Number:** ADAS @ adas @ DoS2009a
**Serial:** 1115

---
**Author:** David Rotger; Petia Radeva; N. Bruining
**Title:** Automatic Detection of Bioabsorbable Coronary Stents in IVUS Images using a Cascade of Classifiers
**Type:** Journal Article
**Year:** 2010
**Publication:** IEEE Transactions on Information Technology in Biomedicine (TITB)
**Volume:** 14, **Issue:** 2, **Pages:** 535–537
**Abstract:** Bioabsorbable drug-eluting coronary stents are a very promising improvement over common metallic ones, addressing one of the most important problems of stent implantation: late restenosis. These stents, made of poly-L-lactic acid, cast a very subtle acoustic shadow (compared to metallic ones), making automatic detection and measurement in images difficult. In this paper, we propose a novel approach based on a cascade of GentleBoost classifiers that detects the stent struts using structural features to encode the information of the different subregions of the struts. A stochastic gradient descent method is applied to optimize the overall performance of the detector. Validation results for strut detection are very encouraging, with an average F-measure of 81%.
**Notes:** MILAB
**Approved:** no
**Call Number:** BCNPCL @ bcnpcl @ RRB2010
**Serial:** 1287

---
**Author:** Antonio Hernandez; Nadezhda Zlateva; Alexander Marinov; Miguel Reyes; Petia Radeva; Dimo Dimov; Sergio Escalera
**Title:** Human Limb Segmentation in Depth Maps based on Spatio-Temporal Graph Cuts Optimization
**Type:** Journal Article
**Year:** 2012
**Publication:** Journal of Ambient Intelligence and Smart Environments (JAISE)
**Volume:** 4, **Issue:** 6, **Pages:** 535–546
**Keywords:** Multi-modal vision processing; Random Forest; Graph-cuts; multi-label segmentation; human body segmentation
**Abstract:** We present a framework for object segmentation using depth maps based on Random Forest and Graph-cuts theory, and apply it to the segmentation of human limbs. First, from a set of random depth features, a Random Forest is used to infer a set of label probabilities for each data sample. This vector of probabilities is used as the unary term in the α-β swap Graph-cuts algorithm. Moreover, depth values of spatio-temporally neighboring data points are used as boundary potentials. Results on a new multi-label human depth dataset show that the novel methodology achieves high segmentation overlap compared to classical approaches.
**ISSN:** 1876-1364
**Notes:** MILAB; HuPBA
**Approved:** no
**Call Number:** Admin @ si @ HZM2012a
**Serial:** 2006

---
**Author:** Wenwen Yu; Chengquan Zhang; Haoyu Cao; Wei Hua; Bohan Li; Huang Chen; Mingyu Liu; Mingrui Chen; Jianfeng Kuang; Mengjun Cheng; Yuning Du; Shikun Feng; Xiaoguang Hu; Pengyuan Lyu; Kun Yao; Yuechen Yu; Yuliang Liu; Wanxiang Che; Errui Ding; Cheng-Lin Liu; Jiebo Luo; Shuicheng Yan; Min Zhang; Dimosthenis Karatzas; Xing Sun; Jingdong Wang; Xiang Bai
**Title:** ICDAR 2023 Competition on Structured Text Extraction from Visually-Rich Document Images
**Type:** Conference Article
**Year:** 2023
**Publication:** 17th International Conference on Document Analysis and Recognition
**Volume:** 14188, **Pages:** 536–552
**Abstract:** Structured text extraction is one of the most valuable and challenging application directions in the field of Document AI. However, the scenarios of past benchmarks are limited, and the corresponding evaluation protocols usually focus on the submodules of the structured text extraction scheme. To eliminate these problems, we organized the ICDAR 2023 competition on Structured text extraction from Visually-Rich Document images (SVRD). We set up two tracks for SVRD: Track 1, HUST-CELL, which aims to evaluate the end-to-end performance of Complex Entity Linking and Labeling; and Track 2, Baidu-FEST, which focuses on evaluating the performance and generalization of zero-shot/few-shot structured text extraction from an end-to-end perspective. Compared to current document benchmarks, our two-track competition benchmark greatly enriches the scenarios and contains more than 50 types of visually-rich document images (mainly from actual enterprise applications). The competition opened on 30 December 2022 and closed on 24 March 2023. There were 35 participants and 91 valid submissions for Track 1, and 15 participants and 26 valid submissions for Track 2. In this report we present the motivation, competition datasets, task definition, evaluation protocol, and submission summaries. Judging from the performance of the submissions, we believe there is still a large gap relative to the expected information extraction performance in complex and zero-shot scenarios. It is hoped that this competition will attract many researchers in the fields of CV and NLP, and bring some new thoughts to the field of Document AI.
**Address:** San Jose, CA, USA; August 2023
**Series:** LNCS
**Conference:** ICDAR
**Notes:** DAG
**Approved:** no
**Call Number:** Admin @ si @ YZC2023
**Serial:** 3896

---
**Author:** Arnau Ramisa; Adriana Tapus; Ramon Lopez de Mantaras; Ricardo Toledo
**Title:** Mobile Robot Localization using Panoramic Vision and Combination of Feature Region Detectors
**Type:** Conference Article
**Year:** 2008
**Publication:** IEEE International Conference on Robotics and Automation
**Pages:** 538–543
**Address:** Pasadena, CA, USA
**Conference:** ICRA
**Notes:** RV; ADAS
**Approved:** no
**Call Number:** Admin @ si @ RTL2008
**Serial:** 1144