Records
Author Anastasios Doulamis; Nikolaos Doulamis; Marco Bertini; Jordi Gonzalez; Thomas B. Moeslund
Title Introduction to the Special Issue on the Analysis and Retrieval of Events/Actions and Workflows in Video Streams Type Journal Article
Year 2016 Publication Multimedia Tools and Applications Abbreviated Journal MTAP
Volume 75 Issue 22 Pages 14985-14990
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; HUPBA Approved no
Call Number Admin @ si @ DDB2016 Serial 2934
Permanent link to this record
 

 
Author H. Martin; Jens Fagertun; Sergio Vera; Debora Gil
Title Medial structure generation for registration of anatomical structures Type Book Chapter
Year 2017 Publication Skeletonization, Theory, Methods and Applications Abbreviated Journal
Volume 11 Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.096; 600.075; 600.145 Approved no
Call Number Admin @ si @ MFV2017a Serial 2935
Permanent link to this record
 

 
Author Antonio Lopez; Atsushi Imiya; Tomas Pajdla; Jose Manuel Alvarez
Title Computer Vision in Vehicle Technology: Land, Sea & Air Type Book Whole
Year 2017 Publication Abbreviated Journal
Volume Issue Pages 161-163
Keywords
Abstract Summary This chapter examines different vision-based commercial solutions for real-life problems related to vehicles. It is worth mentioning the recent astonishing performance of deep convolutional neural networks (DCNNs) in difficult visual tasks such as image classification, object recognition/localization/detection, and semantic segmentation. In fact, different DCNN architectures are already being explored for low-level tasks such as optical flow and disparity computation, and for higher-level ones such as place recognition.
Address
Corporate Author Thesis
Publisher John Wiley & Sons, Ltd Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-118-86807-2 Medium
Area Expedition Conference
Notes ADAS; 600.118 Approved no
Call Number Admin @ si @ LIP2017a Serial 2937
Permanent link to this record
 

 
Author David Vazquez; Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Antonio Lopez; Adriana Romero; Michal Drozdzal; Aaron Courville
Title A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images Type Journal Article
Year 2017 Publication Journal of Healthcare Engineering Abbreviated Journal JHCE
Volume Issue Pages 2040-2295
Keywords Colonoscopy images; Deep Learning; Semantic Segmentation
Abstract Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform a visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset, and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further post-processing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization. (An illustrative code sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; MV; 600.075; 600.085; 600.076; 601.281; 600.118 Approved no
Call Number VBS2017b Serial 2940
Permanent link to this record
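For readers who want a concrete starting point for the kind of fully convolutional baseline mentioned in the abstract above (VBS2017b), the following is a minimal sketch in PyTorch. It assumes the 4-class setup described in the record; the architecture, layer sizes, and dummy data are illustrative placeholders, not the benchmark's actual FCN configuration.

```python
# Minimal fully convolutional segmentation baseline (illustrative only).
# The class count (4) follows the record above; everything else is a generic
# sketch, not the benchmark's actual FCN architecture.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # 1x1 convolution produces per-pixel class scores at 1/4 resolution.
        self.classifier = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        scores = self.classifier(self.encoder(x))
        # Upsample back to the input resolution for dense per-pixel prediction.
        return nn.functional.interpolate(scores, size=x.shape[2:],
                                         mode="bilinear", align_corners=False)

model = TinyFCN()
images = torch.randn(2, 3, 224, 224)          # dummy colonoscopy frames
labels = torch.randint(0, 4, (2, 224, 224))   # dummy per-pixel annotations
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
```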
 

 
Author Marta Diez-Ferrer; Debora Gil; Elena Carreño; Susana Padrones; Samantha Aso; Vanesa Vicens; Noelia Cubero de Frutos; Rosa Lopez Lisbona; Carles Sanchez; Agnes Borras; Antoni Rosell
Title Positive Airway Pressure-Enhanced CT to Improve Virtual Bronchoscopic Navigation Type Journal Article
Year 2017 Publication European Respiratory Journal Abbreviated Journal ERJ
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM Approved no
Call Number Admin @ si @ DGC2017b Serial 3632
Permanent link to this record
 

 
Author Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades
Title Sparse representation over learned dictionary for symbol recognition Type Journal Article
Year 2016 Publication Signal Processing Abbreviated Journal SP
Volume 125 Issue Pages 36-47
Keywords Symbol Recognition; Sparse Representation; Learned Dictionary; Shape Context; Interest Points
Abstract In this paper we propose an original sparse vector model for the symbol retrieval task. More specifically, we apply the K-SVD algorithm to learn a visual dictionary based on symbol descriptors locally computed around interest points. Results on benchmark datasets show that the obtained sparse representation is competitive with state-of-the-art methods. Moreover, our sparse representation is invariant to rotation and scale transforms and is also robust to degraded images and distorted symbols. Thereby, the learned visual dictionary is able to represent instances of unseen classes of symbols. (An illustrative code sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.061; 600.077 Approved no
Call Number Admin @ si @ DTR2016 Serial 2946
Permanent link to this record
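As a rough illustration of the dictionary-learning and sparse-coding pipeline described in DTR2016 above, the sketch below uses scikit-learn's MiniBatchDictionaryLearning as a stand-in for K-SVD and random vectors in place of descriptors computed around interest points; it is an analogue of the idea, not the paper's implementation.

```python
# Sketch of dictionary learning + sparse coding for symbol retrieval.
# MiniBatchDictionaryLearning stands in for K-SVD; descriptors are synthetic.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
descriptors = rng.standard_normal((500, 64))   # e.g. local shape descriptors

# Learn an overcomplete dictionary of 128 atoms.
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
dictionary = dico.fit(descriptors).components_

# Encode a query symbol's descriptors as sparse combinations of atoms.
query = rng.standard_normal((10, 64))
codes = sparse_encode(query, dictionary, algorithm="omp", n_nonzero_coefs=5)

# Pool the sparse codes (here, max-pooling) into a single symbol signature
# that can be compared against database signatures for retrieval.
signature = np.abs(codes).max(axis=0)
print(signature.shape)   # (128,)
```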
 

 
Author Jorge Bernal; Nima Tajbakhsh; F. Javier Sanchez; Bogdan J. Matuszewski; Hao Chen; Lequan Yu; Quentin Angermann; Olivier Romain; Bjorn Rustad; Ilangko Balasingham; Konstantin Pogorelov; Sungbin Choi; Quentin Debard; Lena Maier Hein; Stefanie Speidel; Danail Stoyanov; Patrick Brandao; Henry Cordova; Cristina Sanchez Montes; Suryakanth R. Gurudu; Gloria Fernandez Esparrach; Xavier Dray; Jianming Liang; Aymeric Histace
Title Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results from the MICCAI 2015 Endoscopic Vision Challenge Type Journal Article
Year 2017 Publication IEEE Transactions on Medical Imaging Abbreviated Journal TMI
Volume 36 Issue 6 Pages 1231 - 1249
Keywords Endoscopic vision; Polyp Detection; Handcrafted features; Machine Learning; Validation Framework
Abstract Colonoscopy is the gold standard for colon cancer screening, though some polyps are still missed, thus preventing early disease detection and treatment. Several computational systems have been proposed to assist polyp detection during colonoscopy, but so far without consistent evaluation. The lack of publicly available annotated databases has made it difficult to compare methods and to assess whether they achieve performance levels acceptable for clinical use. The Automatic Polyp Detection subchallenge, conducted as part of the Endoscopic Vision Challenge (http://endovis.grand-challenge.org) at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2015, was an effort to address this need. In this paper, we report the results of this comparative evaluation of polyp detection methods, as well as describe additional experiments to further explore differences between methods. We define performance metrics and provide evaluation databases that allow comparison of multiple methodologies. Results show that convolutional neural networks (CNNs) are the state of the art. Nevertheless, it is also demonstrated that combining different methodologies can lead to an improved overall performance. (An illustrative code sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MV; 600.096; 600.075 Approved no
Call Number Admin @ si @ BTS2017 Serial 2949
Permanent link to this record
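The evaluation described in BTS2017 above relies on detection-level performance metrics. The toy sketch below counts true positives, false positives, and false negatives per frame under a simple rule (a detection is correct if its centroid falls inside an annotated polyp mask); the exact challenge protocol may differ, and all data here are synthetic.

```python
# Toy per-frame TP/FP/FN counting for polyp detection, in the spirit of the
# evaluation described above. Data and acceptance rule are illustrative only.
import numpy as np

def frame_counts(detections, gt_mask):
    """detections: list of (x, y) centroids; gt_mask: labeled mask (0 = background)."""
    hit_labels = set()
    tp = fp = 0
    for x, y in detections:
        label = gt_mask[y, x]
        if label > 0:
            tp += 1
            hit_labels.add(label)
        else:
            fp += 1
    fn = len(set(np.unique(gt_mask)) - {0} - hit_labels)  # missed polyps
    return tp, fp, fn

mask = np.zeros((100, 100), dtype=int)
mask[20:40, 20:40] = 1                     # one annotated polyp
tp, fp, fn = frame_counts([(30, 25), (80, 80)], mask)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(precision, recall)                   # 0.5 1.0
```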
 

 
Author Pau Riba; Alicia Fornes; Josep Llados
Title Towards the Alignment of Handwritten Music Scores Type Book Chapter
Year 2017 Publication International Workshop on Graphics Recognition. GREC 2015. Graphic Recognition. Current Trends and Challenges Abbreviated Journal
Volume 9657 Issue Pages 103-116
Keywords Optical Music Recognition; Handwritten Music Scores; Dynamic Time Warping alignment
Abstract It is very common to find different versions of the same music work in the archives of opera theaters. These differences correspond to modifications and annotations by the musicians. From the musicologist's point of view, these variations are very interesting and deserve study.
This paper explores the alignment of music scores as a tool for automatically detecting the passages that contain such differences. Given the difficulties in the recognition of handwritten music scores, our goal is to align the music scores while avoiding the recognition of music elements as much as possible. After removing the staff lines, braces, and ties, the bar lines are detected. Then, the bar units are described as a whole using the Blurred Shape Model. The bar-unit alignment is performed using Dynamic Time Warping. The analysis of the alignment path is used to detect the variations between the music scores. The method has been evaluated on a subset of the CVC-MUSCIMA dataset, showing encouraging results. (An illustrative code sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor Bart Lamiroy; R Dueire Lins
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN 978-3-319-52158-9 Medium
Area Expedition Conference
Notes DAG; 600.097; 602.006; 600.121 Approved no
Call Number Admin @ si @ RFL2017 Serial 2955
Permanent link to this record
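The alignment step described in RFL2017 above (bar units matched with Dynamic Time Warping) can be illustrated with the minimal sketch below; the descriptors are random stand-ins for Blurred Shape Model vectors, and the implementation is a generic DTW, not the authors' code.

```python
# Minimal Dynamic Time Warping between two sequences of bar-unit descriptors.
# Descriptors here are random stand-ins for Blurred Shape Model vectors.
import numpy as np

def dtw_path(seq_a, seq_b):
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack to recover the warping path (pairs of aligned bar indices).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return cost[n, m], path[::-1]

rng = np.random.default_rng(1)
score_a = rng.random((12, 32))   # 12 bar units, 32-D descriptors
score_b = rng.random((14, 32))   # a second version with extra bars
dist, path = dtw_path(score_a, score_b)
# Bars mapped many-to-one along the path are candidate passages where
# the two versions of the score differ.
```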
 

 
Author Joan Serrat; Felipe Lumbreras; Francisco Blanco; Manuel Valiente; Montserrat Lopez-Mesas
Title myStone: A system for automatic kidney stone classification Type Journal Article
Year 2017 Publication Expert Systems with Applications Abbreviated Journal ESA
Volume 89 Issue Pages 41-51
Keywords Kidney stone; Optical device; Computer vision; Image classification
Abstract Kidney stone formation is a common disease and the incidence rate is constantly increasing worldwide. It has been shown that the classification of kidney stones can lead to an important reduction of the recurrence rate. The classification of kidney stones by human experts on the basis of certain visual color and texture features is one of the most employed techniques. However, the knowledge of how to analyze kidney stones is not widespread, and the experts learn only after being trained on a large number of samples of the different classes. In this paper we describe a new device specifically designed for capturing images of expelled kidney stones, and a method to learn and apply the experts' knowledge with regard to their classification. We show that with off-the-shelf components, a carefully selected set of features, and a state-of-the-art classifier it is possible to automate this difficult task to a good degree. We report results on a collection of 454 kidney stones, achieving an overall accuracy of 63% for a set of eight classes covering almost all of the kidney stone taxonomy. Moreover, for more than 80% of the samples the real class is the first or the second most probable class according to the system, so the patient recommendations for the two top classes would then be similar. This is the first attempt towards the automatic visual classification of kidney stones, and based on the current results we foresee better accuracies as the dataset size increases. (An illustrative code sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; MSIAU; 603.046; 600.122; 600.118 Approved no
Call Number Admin @ si @ SLB2017 Serial 3026
Permanent link to this record
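To make the reported top-1/top-2 evaluation of SLB2017 above concrete, the sketch below trains a generic classifier on synthetic feature vectors and computes both accuracies; the random forest, the feature dimensionality, and the data are placeholders rather than the paper's descriptors or classifier.

```python
# Sketch of a feature + classifier pipeline with top-1 and top-2 accuracy,
# in the spirit of the record above. Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((454, 40))                 # e.g. colour + texture descriptors
y = rng.integers(0, 8, size=454)          # eight stone classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)
top2 = np.argsort(proba, axis=1)[:, -2:]              # two most probable classes
top1_acc = (top2[:, -1] == y_te).mean()
top2_acc = np.any(top2 == y_te[:, None], axis=1).mean()
print(f"top-1: {top1_acc:.2f}  top-2: {top2_acc:.2f}")
```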
 

 
Author Mireia Sole; Joan Blanco; Debora Gil; G. Fonseka; Richard Frodsham; Oliver Valero; Francesca Vidal; Zaida Sarrate
Title Unraveling the enigmas of chromosome territoriality during spermatogenesis Type Conference Article
Year 2017 Publication IX Jornada del Departament de Biologia Cel·lular, Fisiologia i Immunologia Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address UAB; Barcelona; June 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.145 Approved no
Call Number Admin @ si @ SBG2017b Serial 2959
Permanent link to this record
 

 
Author Pau Rodriguez; Guillem Cucurull; Josep M. Gonfaus; Xavier Roca; Jordi Gonzalez
Title Age and gender recognition in the wild with deep attention Type Journal Article
Year 2017 Publication Pattern Recognition Abbreviated Journal PR
Volume 72 Issue Pages 563-571
Keywords Age recognition; Gender recognition; Deep neural networks; Attention mechanisms
Abstract Face analysis of images in the wild still poses a challenge for automatic age and gender recognition tasks, mainly due to the high variability in resolution, deformation, and occlusion. Although performance has greatly increased thanks to Convolutional Neural Networks (CNNs), it is still far from optimal when compared to other image recognition tasks, mainly because of the high sensitivity of CNNs to facial variations. In this paper, inspired by biology and by the recent success of attention mechanisms in visual question answering and fine-grained recognition, we propose a novel feedforward attention mechanism that is able to discover the most informative and reliable parts of a given face for improving age and gender classification. In particular, given a downsampled facial image, the proposed model is trained with a novel end-to-end learning framework to extract the most discriminative patches from the original high-resolution image. Experimental validation on the standard Adience, Images of Groups, and MORPH II benchmarks shows that including attention mechanisms enhances the performance of CNNs in terms of robustness and accuracy. (An illustrative code sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; 600.098; 602.133; 600.119 Approved no
Call Number Admin @ si @ RCG2017b Serial 2962
Permanent link to this record
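A minimal sketch of the general idea behind the feedforward attention described in RCG2017b above: score each spatial location of a CNN feature map, normalize the scores, and pool the map with those weights before classification. The module below is a generic attention-weighted pooling layer, not the architecture proposed in the paper; all sizes are arbitrary.

```python
# Generic feedforward spatial-attention pooling over a CNN feature map,
# illustrating how informative face regions can be weighted. Not the paper's
# architecture; channel counts and head sizes are placeholders.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # one score per location

    def forward(self, feats):                  # feats: (B, C, H, W)
        b, c, h, w = feats.shape
        attn = torch.softmax(self.score(feats).view(b, -1), dim=1)   # (B, H*W)
        weighted = feats.view(b, c, -1) * attn.unsqueeze(1)          # weight each location
        return weighted.sum(dim=2)             # (B, C) attended descriptor

feats = torch.randn(4, 128, 14, 14)            # e.g. backbone feature maps of faces
pooled = AttentionPool(128)(feats)
age_gender_logits = nn.Linear(128, 8 + 2)(pooled)   # e.g. 8 age groups + 2 genders
```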
 

 
Author Sergio Escalera; Jordi Gonzalez; Hugo Jair Escalante; Xavier Baro; Isabelle Guyon
Title Looking at People Special Issue Type Journal Article
Year 2018 Publication International Journal of Computer Vision Abbreviated Journal IJCV
Volume 126 Issue 2-4 Pages 141-143
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA; ISE; 600.119 Approved no
Call Number Admin @ si @ EGJ2018 Serial 3093
Permanent link to this record
 

 
Author Mireia Sole; Joan Blanco; Debora Gil; G. Fonseka; Richard Frodsham; Francesca Vidal; Zaida Sarrate
Title Noves perspectives en l'estudi de la territorialitat cromosòmica de cèl·lules germinals masculines: estudis tridimensionals [New perspectives in the study of the chromosome territoriality of male germ cells: three-dimensional studies] Type Journal
Year 2017 Publication Biologia de la Reproduccio Abbreviated Journal JBR
Volume 15 Issue Pages 73-78
Keywords
Abstract In somatic cells, chromosomes occupy specific nuclear regions called chromosome territories, which are involved in the maintenance and regulation of the genome. Preliminary data in male germ cells also suggest the importance of chromosome territoriality for cell functionality. Nevertheless, the specific characteristics of testicular tissue (the presence of different cell types with different morphological characteristics, at different stages of development and with different ploidy) make it difficult to achieve conclusive results. In this study we have developed a methodology to approach the three-dimensional study of all chromosome territories in male germ cells from C57BL/6J mice (Mus musculus). The method includes the following steps: i) optimized cell fixation to obtain an optimal preservation of the three-dimensional cell morphology; ii) chromosome identification by FISH (Chromoprobe Multiprobe® OctoChrome™ Murine System; Cytocell) and confocal microscopy (TCS-SP5, Leica Microsystems); iii) cell type identification by immunofluorescence; iv) image analysis using Matlab scripts; v) extraction of numerical data related to chromosome features, chromosome radial position, and chromosome relative position. This methodology allows the unequivocal identification and analysis of the chromosome territories at all spermatogenic stages. Results will provide information about the features that determine chromosomal position, preferred associations between chromosomes, and the relationship between chromosome positioning and genome regulation. (An illustrative code sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-697-3767-5 Medium
Area Expedition Conference
Notes IAM; 600.096; 600.145 Approved no
Call Number Admin @ si @ SBG2017c Serial 2961
Permanent link to this record
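Step v) of the methodology in SBG2017c above extracts numerical features such as chromosome radial position. The toy sketch below computes a normalised radial position (0 = nuclear centre, 1 = periphery) for each labelled territory in a 2-D mask; the original analysis used 3-D confocal stacks and Matlab scripts, so this is only an illustration of the measurement.

```python
# Toy normalised radial position of each labelled chromosome territory inside
# a nucleus mask (2-D for brevity). Illustrative only; not the paper's scripts.
import numpy as np

def radial_positions(territory_labels, nucleus_mask):
    ys, xs = np.nonzero(nucleus_mask)
    centre = np.array([ys.mean(), xs.mean()])
    nucleus_radius = np.max(np.linalg.norm(np.c_[ys, xs] - centre, axis=1))
    positions = {}
    for lab in np.unique(territory_labels):
        if lab == 0:
            continue
        ty, tx = np.nonzero(territory_labels == lab)
        centroid = np.array([ty.mean(), tx.mean()])
        # 0 = nuclear centre, 1 = nuclear periphery
        positions[lab] = np.linalg.norm(centroid - centre) / nucleus_radius
    return positions

nucleus = np.zeros((64, 64), dtype=bool)
yy, xx = np.ogrid[:64, :64]
nucleus[(yy - 32) ** 2 + (xx - 32) ** 2 <= 30 ** 2] = True   # circular nucleus
territories = np.zeros((64, 64), dtype=int)
territories[30:35, 30:35] = 1      # a central territory
territories[30:35, 50:55] = 2      # a more peripheral territory
print(radial_positions(territories, nucleus))
```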
 

 
Author Xinhang Song; Shuqiang Jiang; Luis Herranz
Title Multi-Scale Multi-Feature Context Modeling for Scene Recognition in the Semantic Manifold Type Journal Article
Year 2017 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 26 Issue 6 Pages 2721-2735
Keywords
Abstract Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, and so on). One such approach is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned resorting to weak supervision via image labels, which leads to the problem of scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns that are consistent across the images in that category. Thus, discovering and modeling these patterns are critical to improve the recognition performance in this representation. Since the emergence of large data sets, such as ImageNet and Places, these approaches have been relegated in favor of the much more powerful convolutional neural networks (CNNs), which can automatically learn multi-layered representations from the data. In this paper, we address many limitations of the original SM approach and related works. We propose discriminative patch representations using neural networks and further propose a hybrid architecture in which the semantic manifold is built on top of multiscale CNNs. Both representations can be computed significantly faster than the Gaussian mixture models of the original SM. To combine multiple scales, spatial relations, and multiple features, we formulate rich context models using Markov random fields. To solve the optimization problem, we analyze global and local approaches, where a top-down hierarchical algorithm has the best performance. Experimental results show that exploiting different types of contextual relations jointly yields consistent improvements in recognition accuracy. (An illustrative code sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.120 Approved no
Call Number Admin @ si @ SJH2017a Serial 2963
Permanent link to this record
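As a rough illustration of the semantic-manifold representation discussed in SJH2017a above, the sketch below turns per-patch class scores into points on the class-probability simplex and pools them per scale; the Markov random field context models of the paper are not sketched, and the class counts and patch grids are arbitrary.

```python
# Toy "semantic manifold" image descriptor: per-patch classifier scores become
# points on the class-probability simplex, pooled per scale and concatenated.
# The paper's MRF context models are not included in this sketch.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def semantic_descriptor(patch_scores_per_scale):
    """patch_scores_per_scale: list of (n_patches, n_classes) raw score arrays."""
    pooled = [softmax(scores).mean(axis=0) for scores in patch_scores_per_scale]
    return np.concatenate(pooled)          # one simplex point per scale, concatenated

rng = np.random.default_rng(0)
scales = [rng.standard_normal((n, 15)) for n in (49, 25, 9)]   # 15 scene classes
descriptor = semantic_descriptor(scales)
print(descriptor.shape)                    # (45,)
```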
 

 
Author Weiqing Min; Shuqiang Jiang; Jitao Sang; Huayang Wang; Xinda Liu; Luis Herranz
Title Being a Supercook: Joint Food Attributes and Multimodal Content Modeling for Recipe Retrieval and Exploration Type Journal Article
Year 2017 Publication IEEE Transactions on Multimedia Abbreviated Journal TMM
Volume 19 Issue 5 Pages 1100 - 1113
Keywords
Abstract This paper considers the problem of recipe-oriented image-ingredient correlation learning with multiple attributes for recipe retrieval and exploration. Existing methods mainly focus on food visual information for recognition, while we model visual information, textual content (e.g., ingredients), and attributes (e.g., cuisine and course) together to solve extended recipe-oriented problems, such as multimodal cuisine classification and attribute-enhanced food image retrieval. As a solution, we propose a multimodal multitask deep belief network (M3TDBN) to learn a joint image-ingredient representation regularized by different attributes. By grouping ingredients into visible ingredients (which are visible in the food image, e.g., “chicken” and “mushroom”) and nonvisible ingredients (e.g., “salt” and “oil”), M3TDBN is capable of learning both the mid-level visual representation between images and visible ingredients and the nonvisual representation. Furthermore, in order to utilize different attributes to improve the inter-modality correlation, M3TDBN incorporates multitask learning to make different attributes collaborate with each other. Based on the proposed M3TDBN, we exploit the derived deep features and the discovered correlations for three novel extended applications: 1) multimodal cuisine classification; 2) attribute-augmented cross-modal recipe image retrieval; and 3) ingredient and attribute inference from food images. The proposed approach is evaluated on the constructed Yummly dataset, and the evaluation results validate the effectiveness of the proposed approach. (An illustrative code sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.120 Approved no
Call Number Admin @ si @ MJS2017 Serial 2964
Permanent link to this record
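The sketch below is only a loose analogue of the joint image-ingredient, multi-attribute modeling described in MJS2017 above: it fuses image and ingredient features and trains two attribute heads (cuisine and course) jointly with multitask learning. It is not the paper's multimodal deep belief network, and all dimensions and class counts are invented for illustration.

```python
# Loose analogue of joint image-ingredient, multi-attribute modeling: fuse the
# two modalities and train cuisine and course heads jointly (multitask).
# NOT the paper's M3TDBN; all sizes are placeholders.
import torch
import torch.nn as nn

class JointRecipeModel(nn.Module):
    def __init__(self, img_dim=512, ingr_dim=300, hidden=256,
                 n_cuisines=20, n_courses=8):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(img_dim + ingr_dim, hidden), nn.ReLU())
        self.cuisine_head = nn.Linear(hidden, n_cuisines)
        self.course_head = nn.Linear(hidden, n_courses)

    def forward(self, img_feat, ingr_feat):
        z = self.fuse(torch.cat([img_feat, ingr_feat], dim=1))  # joint representation
        return self.cuisine_head(z), self.course_head(z), z

model = JointRecipeModel()
img = torch.randn(16, 512)        # e.g. CNN image features
ingr = torch.randn(16, 300)       # e.g. averaged ingredient embeddings
cuisine, course, joint = model(img, ingr)
loss = (nn.CrossEntropyLoss()(cuisine, torch.randint(0, 20, (16,))) +
        nn.CrossEntropyLoss()(course, torch.randint(0, 8, (16,))))
loss.backward()
# The joint representation `joint` could also serve cross-modal retrieval.
```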