Author Frederic Sampedro; Sergio Escalera; Anna Domenech; Ignasi Carrio
Title Automatic Tumor Volume Segmentation in Whole-Body PET/CT Scans: A Supervised Learning Approach Type Journal Article
Year 2015 Publication Journal of Medical Imaging and Health Informatics Abbreviated Journal JMIHI
Volume 5 Issue 2 Pages 192-201
Keywords CONTEXTUAL CLASSIFICATION; PET/CT; SUPERVISED LEARNING; TUMOR SEGMENTATION; WHOLE BODY
Abstract Whole-body 3D PET/CT tumor volume segmentation provides relevant diagnostic and prognostic information in clinical oncology and nuclear medicine. Carrying out this procedure manually by a medical expert is time consuming and suffers from inter- and intra-observer variability. In this paper, a completely automatic approach to this task is presented. First, the problem is stated and described in both clinical and technological terms. Then, a novel supervised learning segmentation framework is introduced. The segmentation-by-learning approach is defined within a cascade of AdaBoost classifiers and a 3D contextual proposal of multiscale stacked sequential learning. Segmentation accuracy results on 200 breast cancer whole-body PET/CT volumes show a mean 49% sensitivity, 99.993% specificity and 39% Jaccard overlap index, which represents good performance at both the clinical and technological level.
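The reported metrics can be reproduced from voxel-wise binary masks. A minimal sketch of the evaluation (pure Python; function and variable names are illustrative, not the authors' code):

```python
def overlap_metrics(pred, truth):
    """Voxel-wise sensitivity, specificity and Jaccard overlap index
    for two flattened binary masks of equal length."""
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum((not p) and t for p, t in zip(pred, truth))    # false negatives
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))  # true negatives
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    jaccard = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return sensitivity, specificity, jaccard
```

The very high specificity quoted in the abstract is typical for whole-body scans, where tumor voxels are a tiny fraction of the volume, so true negatives dominate.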
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ SED2015 Serial 2584
 

 
Author David Berga; Xavier Otazu
Title A neurodynamic model of saliency prediction in v1 Type Journal Article
Year 2022 Publication Neural Computation Abbreviated Journal NEURALCOMPUT
Volume 34 Issue 2 Pages 378-414
Keywords
Abstract Lateral connections in the primary visual cortex (V1) have long been hypothesized to be responsible for several visual processing mechanisms such as brightness induction, chromatic induction, visual discomfort, and bottom-up visual attention (also named saliency). Many computational models have been developed to independently predict these and other visual processes, but no computational model has been able to reproduce all of them simultaneously. In this work, we show that a biologically plausible computational model of lateral interactions of V1 is able to simultaneously predict saliency and all the aforementioned visual processes. Our model's architecture (NSWAM) is based on Penacchio's neurodynamic model of lateral connections of V1. It is defined as a network of firing rate neurons, sensitive to visual features such as brightness, color, orientation, and scale. We tested NSWAM saliency predictions using images from several eye tracking data sets. We show that the accuracy of predictions obtained by our architecture, using shuffled metrics, is similar to other state-of-the-art computational methods, particularly with synthetic images (CAT2000-Pattern and SID4VAM) that mainly contain low-level features. Moreover, we outperform other biologically inspired saliency models that are specifically designed to exclusively reproduce saliency. We show that our biologically plausible model of lateral connections can simultaneously explain different visual processes present in V1 (without applying any type of training or optimization and keeping the same parameterization for all the visual processes). This can be useful for the definition of a unified architecture of the primary visual cortex.
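The firing-rate dynamics mentioned above can be illustrated with a single Euler integration step of a generic rate network; this is a textbook sketch, not the NSWAM implementation, and all parameter values are assumptions:

```python
def firing_rate_step(rates, weights, inputs, dt=0.1, tau=1.0):
    """One Euler step of a firing-rate network obeying
    tau * dr/dt = -r + relu(W @ r + input)."""
    n = len(rates)
    new_rates = []
    for i in range(n):
        # Lateral drive: weighted sum of the other units plus external input.
        drive = sum(weights[i][j] * rates[j] for j in range(n)) + inputs[i]
        # Leaky integration toward the rectified drive.
        new_rates.append(rates[i] + (dt / tau) * (-rates[i] + max(0.0, drive)))
    return new_rates
```

Iterating such a step until convergence is the usual way saliency-like activity maps are read out of lateral-interaction models.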
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes NEUROBIT; 600.128; 600.120 Approved no
Call Number Admin @ si @ BeO2022 Serial 3696
 

 
Author Marta Diez-Ferrer; Debora Gil; Cristian Tebe; Carles Sanchez
Title Positive Airway Pressure to Enhance Computed Tomography Imaging for Airway Segmentation for Virtual Bronchoscopic Navigation Type Journal Article
Year 2018 Publication Respiration Abbreviated Journal RES
Volume 96 Issue 6 Pages 525-534
Keywords Multidetector computed tomography; Bronchoscopy; Continuous positive airway pressure; Image enhancement; Virtual bronchoscopic navigation
Abstract
RATIONALE:
Virtual bronchoscopic navigation (VBN) guidance to peripheral pulmonary lesions is often limited by insufficient segmentation of the peripheral airways.

OBJECTIVES:
To test the effect of applying positive airway pressure (PAP) during CT acquisition to improve segmentation, particularly at end-expiration.

METHODS:
CT acquisitions in inspiration and expiration with 4 PAP protocols were recorded prospectively and compared to baseline inspiratory acquisitions in 20 patients. The 4 protocols explored differences between devices (flow vs. turbine), exposures (within seconds vs. 15 min) and pressure levels (10 vs. 14 cmH2O). Segmentation quality was evaluated by the number of airways and the number of endpoints reached. A generalized mixed-effects model estimated the effect of each protocol.

MEASUREMENTS AND MAIN RESULTS:
Patient characteristics and lung function did not significantly differ between protocols. Compared to baseline inspiratory acquisitions, expiratory acquisitions after 15 min of 14 cmH2O PAP segmented 1.63-fold more airways (95% CI 1.07-2.48; p = 0.018) and reached 1.34-fold more endpoints (95% CI 1.08-1.66; p = 0.004). Inspiratory acquisitions performed immediately under 10 cmH2O PAP reached 1.20-fold (95% CI 1.09-1.33; p < 0.001) more endpoints; after 15 min the increase was 1.14-fold (95% CI 1.05-1.24; p < 0.001).

CONCLUSIONS:
CT acquisitions with PAP segment more airways and reach more endpoints than baseline inspiratory acquisitions. The improvement is particularly evident at end-expiration after 15 min of 14 cmH2O PAP. Further studies must confirm that the improvement increases diagnostic yield when using VBN to evaluate peripheral pulmonary lesions.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.145 Approved no
Call Number Admin @ si @ DGT2018 Serial 3135
 

 
Author Debora Gil; Rosa Maria Ortiz; Carles Sanchez; Antoni Rosell
Title Objective endoscopic measurements of central airway stenosis. A pilot study Type Journal Article
Year 2018 Publication Respiration Abbreviated Journal RES
Volume 95 Issue Pages 63–69
Keywords Bronchoscopy; Tracheal stenosis; Airway stenosis; Computer-assisted analysis
Abstract Endoscopic estimation of the degree of stenosis in central airway obstruction is subjective and highly variable. Objective: To determine the benefits of using SENSA (System for Endoscopic Stenosis Assessment), image-based computational software, for obtaining objective stenosis index (SI) measurements among a group of expert bronchoscopists and general pulmonologists. Methods: A total of 7 expert bronchoscopists and 7 general pulmonologists were enrolled to validate SENSA usage. The SI obtained by the physicians and by SENSA were compared with a reference SI to establish their precision in SI computation. We used SENSA to efficiently obtain this reference SI in 11 selected cases of benign stenosis. A Web platform with three user-friendly microtasks was designed to gather the data. The users had to visually estimate the SI from videos with and without contours of the normal and the obstructed area provided by SENSA. The users were able to modify the SENSA contours to define the reference SI using morphometric bronchoscopy. Results: Visual SI estimation accuracy was associated with neither bronchoscopic experience (p = 0.71) nor the contours of the normal and the obstructed area provided by the system (p = 0.13). The precision of the SI by SENSA was 97.7% (95% CI: 92.4-103.7), significantly better than the precision of the SI by visual estimation (p < 0.001), an improvement of at least 15%. Conclusion: SENSA provides objective SI measurements with a precision of up to 99.5%, which can be calculated from any bronchoscope using an affordable, scalable interface. Providing normal and obstructed contours on bronchoscopic videos does not improve physicians' visual estimation of the SI.
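Once the normal and obstructed contours are delineated, the stenosis index is straightforward to compute. A sketch assuming the usual area-based definition (SI as the percentage reduction of the normal lumen area); the shoelace helper and all names are illustrative, not part of SENSA:

```python
def contour_area(points):
    """Shoelace area of a closed polygonal contour given as [(x, y), ...]."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

def stenosis_index(normal_area, obstructed_area):
    """Stenosis index as the percentage reduction of the normal lumen area."""
    if normal_area <= 0:
        raise ValueError("normal lumen area must be positive")
    return 100.0 * (1.0 - obstructed_area / normal_area)
```

For example, an obstructed lumen with a quarter of the normal cross-sectional area gives an SI of 75%.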
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.075; 600.096; 600.145 Approved no
Call Number Admin @ si @ GOS2018 Serial 3043
 

 
Author Onur Ferhat; Fernando Vilariño
Title Low Cost Eye Tracking: The Current Panorama Type Journal Article
Year 2016 Publication Computational Intelligence and Neuroscience Abbreviated Journal CIN
Volume Issue Pages Article ID 8680541
Keywords
Abstract Despite the availability of accurate commercial gaze tracker devices working with infrared (IR) technology, visible light gaze tracking constitutes an interesting alternative by allowing scalability and removing hardware requirements. In recent years, this field has seen examples of research showing performance comparable to the IR alternatives. In this work, we survey the previous work on remote, visible light gaze trackers and analyze the explored techniques from various perspectives such as calibration strategies, head pose invariance, and gaze estimation techniques. We also provide information on related aspects of research such as public datasets to test against, open source projects to build upon, and gaze tracking services to use directly in applications. With all this information, we aim to provide contemporary and future researchers with a map detailing previously explored ideas and the required tools.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MV; 605.103; 600.047; 600.097;SIAI Approved no
Call Number Admin @ si @ FeV2016 Serial 2744
 

 
Author Wenjuan Gong; W.Zhang; Jordi Gonzalez; Y.Ren; Z.Li
Title Enhanced Asymmetric Bilinear Model for Face Recognition Type Journal Article
Year 2015 Publication International Journal of Distributed Sensor Networks Abbreviated Journal IJDSN
Volume Issue Pages Article ID 218514
Keywords
Abstract Bilinear models have been successfully applied to separate two factors, for example, pose variation and identity in face recognition problems. The asymmetric model is a type of bilinear model that describes a system in the most concise way, but few works have explored the application of asymmetric bilinear models to face recognition under illumination changes. In this work, we propose an enhanced asymmetric model for illumination-robust face recognition. Instead of initializing the factor probabilities randomly, we initialize them with a nearest neighbor method and optimize them for the test data. Beyond that, we update the factor model to be identified. We validate the proposed method on a designed data sample and the extended Yale B dataset. The experimental results show that the enhanced asymmetric models give promising results and good recognition accuracies.
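As a rough illustration of the two ingredients described above (nearest-neighbor factor initialization and asymmetric bilinear synthesis), a hedged NumPy sketch; the shapes and names are assumptions for illustration, not the paper's code:

```python
import numpy as np

def nn_init_factor(test_obs, train_obs, train_factors):
    """Initialize a factor vector by copying the factor of the nearest
    training observation (Euclidean distance) instead of a random draw."""
    dists = np.linalg.norm(train_obs - test_obs, axis=1)
    return train_factors[int(np.argmin(dists))]

def asymmetric_bilinear(style_matrix, content_vector):
    """Asymmetric bilinear synthesis: the observation is a style-specific
    matrix (e.g. one per illumination condition) applied to a content
    (identity) vector."""
    return style_matrix @ content_vector
```

In an asymmetric model, one factor (here illumination) is folded into per-condition matrices while the other (identity) stays an explicit vector, which keeps the parameterization compact.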
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; 600.063; 600.078 Approved no
Call Number Admin @ si @ GZG2015 Serial 2592
 

 
Author Ariel Amato; Mikhail Mozerov; Xavier Roca; Jordi Gonzalez
Title Robust Real-Time Background Subtraction Based on Local Neighborhood Patterns Type Journal Article
Year 2010 Publication EURASIP Journal on Advances in Signal Processing Abbreviated Journal EURASIPJ
Volume Issue Pages 7
Keywords
Abstract (Article ID 901205) This paper describes an efficient background subtraction technique for detecting moving objects. The proposed approach is able to overcome difficulties such as illumination changes and moving shadows. Our method introduces two discriminative features based on angular and modular patterns, which are formed by similarity measurement between two sets of RGB color vectors: one belonging to the background image and the other to the current image. We show how these patterns are used to improve foreground detection in the presence of moving shadows and when there are strong similarities in color between background and foreground pixels. Experimental results over a collection of public and in-house datasets of real image sequences demonstrate that the proposed technique achieves superior performance compared with state-of-the-art methods. Furthermore, its low computational and space complexity makes the presented algorithm feasible for real-time applications.
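The angular and modular patterns can be illustrated by comparing co-located RGB vectors from the background model and the current frame. A minimal sketch (illustrative, not the published implementation); shadows typically shrink the modulus while leaving the angle small:

```python
import math

def angular_modular_features(bg_rgb, cur_rgb):
    """Return (angle in radians, modulus ratio) between a background-model
    RGB vector and the co-located current-frame RGB vector."""
    dot = sum(b * c for b, c in zip(bg_rgb, cur_rgb))
    norm_bg = math.sqrt(sum(b * b for b in bg_rgb))
    norm_cur = math.sqrt(sum(c * c for c in cur_rgb))
    if norm_bg == 0 or norm_cur == 0:
        return 0.0, 0.0
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_angle = max(-1.0, min(1.0, dot / (norm_bg * norm_cur)))
    return math.acos(cos_angle), norm_cur / norm_bg
```

A pixel with a small angle but a reduced modulus is a shadow candidate; a large angle suggests a genuine foreground color change.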
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1110-8657 ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number ISE @ ise @ AMR2010 Serial 1463
 

 
Author Mirko Arnold; Anarta Ghosh; Stephen Ameling; G Lacey
Title Automatic segmentation and inpainting of specular highlights for endoscopic imaging Type Journal Article
Year 2010 Publication EURASIP Journal on Image and Video Processing Abbreviated Journal EURASIP JIVP
Volume 2010 Issue 9 Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area 800 Expedition Conference
Notes MV Approved no
Call Number fernando @ fernando @ Serial 2423
 

 
Author Sergio Escalera; Oriol Pujol; Petia Radeva; Jordi Vitria; Maria Teresa Anguera
Title Automatic Detection of Dominance and Expected Interest Type Journal Article
Year 2010 Publication EURASIP Journal on Advances in Signal Processing Abbreviated Journal EURASIPJ
Volume Issue Pages 12
Keywords
Abstract (Article ID 491819) Social Signal Processing is an emergent area of research that focuses on the analysis of social constructs. Dominance and interest are two of these social constructs. Dominance refers to the level of influence a person has in a conversation. Interest, when referred to in terms of group interactions, can be defined as the degree of engagement that the members of a group collectively display during their interaction. In this paper, we argue that using only behavioral motion information, we are able to predict the interest of observers when looking at face-to-face interactions as well as the dominant people. First, we propose a simple set of movement-based features from body, face, and mouth activity in order to define a higher-level set of interaction indicators. The considered indicators are manually annotated by observers. Based on the opinions obtained, we define an automatic binary dominance detection problem and a multiclass interest quantification problem. An Error-Correcting Output Codes framework is used to learn to rank the perceived observer's interest in face-to-face interactions, while AdaBoost is used to solve the dominance detection problem. The automatic system shows good correlation between the automatic categorization results and the manual ranking made by the observers in both the dominance and interest detection problems.
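The Error-Correcting Output Codes step can be illustrated with plain Hamming decoding against a class code matrix; a generic textbook sketch, not the paper's exact setup:

```python
def ecoc_decode(binary_outputs, code_matrix):
    """Decode the outputs of a bank of binary classifiers into a class
    index by minimum Hamming distance to each class codeword."""
    def hamming(codeword):
        return sum(o != c for o, c in zip(binary_outputs, codeword))
    return min(range(len(code_matrix)), key=lambda k: hamming(code_matrix[k]))
```

Each row of the code matrix is a class codeword; redundancy among the binary problems lets the decoder correct some individual classifier errors.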
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1110-8657 ISBN Medium
Area Expedition Conference
Notes OR;MILAB;HUPBA;MV Approved no
Call Number BCNPCL @ bcnpcl @ EPR2010d Serial 1283
 

 
Author Rozenn Dahyot; Fernando Vilariño; Gerard Lacey
Title Improving the Quality of Color Colonoscopy Videos Type Journal Article
Year 2008 Publication EURASIP Journal on Image and Video Processing Abbreviated Journal EURASIP JIVP
Volume 139429 Issue 1 Pages 1-9
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area 800 Expedition Conference
Notes MV;SIAI Approved no
Call Number fernando @ fernando @ Serial 2422
 

 
Author Carolina Malagelada; Michal Drozdzal; Santiago Segui; Sara Mendez; Jordi Vitria; Petia Radeva; Javier Santos; Anna Accarino; Juan R. Malagelada; Fernando Azpiroz
Title Classification of functional bowel disorders by objective physiological criteria based on endoluminal image analysis Type Journal Article
Year 2015 Publication American Journal of Physiology-Gastrointestinal and Liver Physiology Abbreviated Journal AJPGI
Volume 309 Issue 6 Pages G413--G419
Keywords capsule endoscopy; computer vision analysis; functional bowel disorders; intestinal motility; machine learning
Abstract We have previously developed an original method to evaluate small bowel motor function based on computer vision analysis of endoluminal images obtained by capsule endoscopy. Our aim was to demonstrate intestinal motor abnormalities in patients with functional bowel disorders by endoluminal vision analysis. Patients with functional bowel disorders (n = 205) and healthy subjects (n = 136) ingested the endoscopic capsule (Pillcam-SB2, Given-Imaging) after an overnight fast, and 45 min after gastric exit of the capsule a liquid meal (300 ml, 1 kcal/ml) was administered. Endoluminal image analysis was performed by computer vision and machine learning techniques to define the normal range and to identify clusters of abnormal function. After training the algorithm, we used 196 patients and 48 healthy subjects, completely naive, as the test set. In the test set, 51 patients (26%) were detected outside the normal range (P < 0.001 vs. 3 healthy subjects) and clustered into hypo- and hyperdynamic subgroups compared with healthy subjects. Patients with hypodynamic behavior (n = 38) exhibited fewer luminal closure sequences (41 ± 2% of the recording time vs. 61 ± 2%; P < 0.001) and more static sequences (38 ± 3 vs. 20 ± 2%; P < 0.001); in contrast, patients with hyperdynamic behavior (n = 13) had an increased proportion of luminal closure sequences (73 ± 4 vs. 61 ± 2%; P = 0.029) and more high-motion sequences (3 ± 1 vs. 0.5 ± 0.1%; P < 0.001). Applying an original methodology, we have developed a novel classification of functional gut disorders based on objective, physiological criteria of small bowel function.
Address
Corporate Author Thesis
Publisher American Physiological Society Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; OR;MV Approved no
Call Number Admin @ si @ MDS2015 Serial 2666
 

 
Author Hugo Bertiche; Meysam Madadi; Sergio Escalera
Title Neural Cloth Simulation Type Journal Article
Year 2022 Publication ACM Transactions on Graphics Abbreviated Journal ACMTGraph
Volume 41 Issue 6 Pages 1-14
Keywords
Abstract We present a general framework for the garment animation problem through unsupervised deep learning inspired by physically based simulation. Existing trends in the literature already explore this possibility; nonetheless, these approaches do not handle cloth dynamics. Here, we propose the first methodology able to learn realistic cloth dynamics in an unsupervised manner, and hence a general formulation for neural cloth simulation. The key to achieving this is to adapt an existing optimization scheme for motion from simulation-based methodologies to deep learning. Then, analyzing the nature of the problem, we devise an architecture able to automatically disentangle static and dynamic cloth subspaces by design. We show how this improves model performance. Additionally, this opens the possibility of a novel motion augmentation technique that greatly improves generalization. Finally, we show it also allows control of the level of motion in the predictions, a useful tool for artists not previously available. We provide a detailed analysis of the problem to establish the bases of neural cloth simulation and to guide future research into the specifics of this domain.



ACM Transactions on Graphics, Volume 41, Issue 6, December 2022, Article No. 220, pp. 1-14.
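The physically based objective behind such unsupervised training can be illustrated with a mass-spring stretch energy, the standard cloth energy term; this is a generic sketch, not the paper's loss:

```python
import math

def cloth_stretch_energy(positions, edges, rest_lengths, stiffness=1.0):
    """Mass-spring stretch energy of a cloth mesh:
    sum over edges of 0.5 * k * (edge_length - rest_length)^2.
    Minimizing such a physical energy (rather than matching ground-truth
    meshes) is what makes the training unsupervised."""
    energy = 0.0
    for (i, j), rest in zip(edges, rest_lengths):
        length = math.dist(positions[i], positions[j])
        energy += 0.5 * stiffness * (length - rest) ** 2
    return energy
```

A full simulation loss would add bending, gravity and collision terms, but the stretch term alone already shows the pattern: the network's predicted vertex positions are scored by physics, not by labels.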
Address Dec 2022
Corporate Author Thesis
Publisher ACM Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number Admin @ si @ BME2022b Serial 3779
 

 
Author Wenjuan Gong; Zhang Yue; Wei Wang; Cheng Peng; Jordi Gonzalez
Title Meta-MMFNet: Meta-Learning Based Multi-Model Fusion Network for Micro-Expression Recognition Type Journal Article
Year 2022 Publication ACM Transactions on Multimedia Computing, Communications, and Applications Abbreviated Journal ACMTMC
Volume Issue Pages
Keywords Feature Fusion; Model Fusion; Meta-Learning; Micro-Expression Recognition
Abstract Despite its wide applications in criminal investigations and clinical communication with patients suffering from autism, automatic micro-expression recognition remains a challenging problem because of the lack of training data and the class imbalance problem. In this study, we propose a meta-learning based multi-model fusion network (Meta-MMFNet) to solve these problems. The proposed method is based on the metric-based meta-learning pipeline, which is specifically designed for few-shot learning and is suitable for model-level fusion. The frame difference and optical flow features are fused, deep features are extracted from the fused feature, and finally, in the meta-learning-based framework, a weighted-sum model fusion method is applied for micro-expression classification. Meta-MMFNet achieves better results than state-of-the-art methods on four datasets. The code is available at https://github.com/wenjgong/meta-fusion-based-method.
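The weighted-sum model fusion step can be sketched as a convex combination of per-model class probabilities; the function names and the softmax normalization are assumptions for illustration, not taken from Meta-MMFNet:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_sum_fusion(model_logits, weights):
    """Fuse several models' class scores by a weighted sum of their
    probability vectors; return (predicted class, fused probabilities)."""
    probs = [softmax(logits) for logits in model_logits]
    n_classes = len(probs[0])
    fused = [sum(w * p[c] for w, p in zip(weights, probs))
             for c in range(n_classes)]
    return fused.index(max(fused)), fused
```

With weights summing to one, the fused vector stays a valid probability distribution; the weights themselves are what a meta-learner would tune.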
Address May 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; 600.157 Approved no
Call Number Admin @ si @ GYW2022 Serial 3692
 

 
Author Alex Falcon; Swathikiran Sudhakaran; Giuseppe Serra; Sergio Escalera; Oswald Lanz
Title Relevance-based Margin for Contrastively-trained Video Retrieval Models Type Conference Article
Year 2022 Publication ICMR '22: Proceedings of the 2022 International Conference on Multimedia Retrieval Abbreviated Journal
Volume Issue Pages 146-157
Keywords
Abstract Video retrieval using natural language queries has attracted increasing interest due to its relevance in real-world applications, from intelligent access in private media galleries to web-scale video search. Learning the cross-similarity of video and text in a joint embedding space is the dominant approach. To do so, a contrastive loss is usually employed because it organizes the embedding space by putting similar items close and dissimilar items far apart. This framework leads to competitive recall rates, as they solely focus on the rank of the ground-truth items. Yet, assessing the quality of the ranking list is of utmost importance when considering intelligent retrieval systems, since multiple items may share similar semantics, hence a high relevance. Moreover, the aforementioned framework uses a fixed margin to separate similar and dissimilar items, treating all non-ground-truth items as equally irrelevant. In this paper we propose to use a variable margin: we argue that varying the margin used during training based on how relevant an item is to a given query, i.e. a relevance-based margin, easily improves the quality of the ranking lists measured through nDCG and mAP. We demonstrate the advantages of our technique using different models on EPIC-Kitchens-100 and YouCook2. We show that even if we carefully tuned the fixed margin, our technique (which does not have the margin as a hyper-parameter) would still achieve better performance. Finally, extensive ablation studies and qualitative analysis support the robustness of our approach. Code will be released at https://github.com/aranciokov/RelevanceMargin-ICMR22.
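The relevance-based margin idea can be sketched as a triplet-style loss whose margin shrinks as a negative item's relevance to the query grows; the exact functional form below is an assumption for illustration, not the paper's formulation:

```python
def relevance_margin_loss(sim_pos, sim_negs, relevances, base_margin=0.2):
    """Triplet-style contrastive loss with a per-negative margin.
    A negative with relevance 1.0 gets margin 0 (no push-away),
    while an irrelevant negative (relevance 0.0) gets the full margin."""
    loss = 0.0
    for sim_neg, relevance in zip(sim_negs, relevances):
        margin = base_margin * (1.0 - relevance)     # relevance-based margin
        loss += max(0.0, margin + sim_neg - sim_pos)  # hinge over the triplet
    return loss / max(1, len(sim_negs))
```

Compared to a fixed margin, this stops the model from being penalized for ranking semantically relevant "negatives" near the ground truth, which is exactly what nDCG rewards.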
Address Newark, NJ, USA, 27 June 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICMR
Notes HuPBA; no menciona Approved no
Call Number Admin @ si @ FSS2022 Serial 3808
 

 
Author Danna Xue; Fei Yang; Pei Wang; Luis Herranz; Jinqiu Sun; Yu Zhu; Yanning Zhang
Title SlimSeg: Slimmable Semantic Segmentation with Boundary Supervision Type Conference Article
Year 2022 Publication 30th ACM International Conference on Multimedia Abbreviated Journal
Volume Issue Pages 6539-6548
Keywords
Abstract Accurate semantic segmentation models typically require significant computational resources, inhibiting their use in practical applications. Recent works rely on well-crafted lightweight models to achieve fast inference. However, these models cannot flexibly adapt to varying accuracy and efficiency requirements. In this paper, we propose a simple but effective slimmable semantic segmentation (SlimSeg) method, which can be executed at different capacities during inference depending on the desired accuracy-efficiency tradeoff. More specifically, we employ parametrized channel slimming by stepwise downward knowledge distillation during training. Motivated by the observation that the differences between segmentation results of each submodel are mainly near the semantic borders, we introduce an additional boundary guided semantic segmentation loss to further improve the performance of each submodel. We show that our proposed SlimSeg with various mainstream networks can produce flexible models that provide dynamic adjustment of computational cost and better performance than independent models. Extensive experiments on semantic segmentation benchmarks, Cityscapes and CamVid, demonstrate the generalization ability of our framework.
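Slimmable inference can be sketched as running a network on only the first fraction of its channels; a toy two-layer example with NumPy (an illustration of the channel-slimming idea, not the SlimSeg code):

```python
import numpy as np

def slim_forward(x, w1, w2, width_ratio):
    """Two-layer MLP executed at a reduced width: keep only the first
    `width_ratio` fraction of hidden channels, as in slimmable inference.
    w1 has shape (hidden, in), w2 has shape (out, hidden)."""
    hidden = w1.shape[0]
    k = max(1, int(hidden * width_ratio))   # number of channels kept
    h = np.maximum(0.0, w1[:k] @ x)         # ReLU over the first k channels
    return w2[:, :k] @ h                    # matching slice of the next layer
```

Training would run every width in each step (with the distillation described above aligning the submodels), so a single weight tensor serves every capacity at inference time.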
Address Lisboa, Portugal, October 2022
Corporate Author Thesis
Publisher Association for Computing Machinery Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4503-9203-7 Medium
Area Expedition Conference MM
Notes MACO; 600.161; 601.400 Approved no
Call Number Admin @ si @ XYW2022 Serial 3758