Author German Ros; Laura Sellart; Gabriel Villalonga; Elias Maidanik; Francisco Molero; Marc Garcia; Adriana Cedeño; Francisco Perez; Didier Ramirez; Eduardo Escobar; Jose Luis Gomez; David Vazquez; Antonio Lopez
Title Semantic Segmentation of Urban Scenes via Domain Adaptation of SYNTHIA Type Book Chapter
Year 2017 Publication Domain Adaptation in Computer Vision Applications Abbreviated Journal
Volume 12 Issue Pages 227-241
Keywords SYNTHIA; Virtual worlds; Autonomous Driving
Abstract Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. Recent revolutionary results of deep convolutional neural networks (DCNNs) foreshadow the advent of reliable classifiers to perform such visual tasks. However, DCNNs require learning of many parameters from raw images; thus, a sufficient amount of diverse images with class annotations is needed. These annotations are obtained via cumbersome human labour, which is particularly challenging for semantic segmentation since pixel-level annotations are required. In this chapter, we propose to use a combination of a virtual world to automatically generate realistic synthetic images with pixel-level annotations, and domain adaptation to transfer the models learnt so that they operate correctly in real scenarios. We address the question of how useful synthetic data can be for semantic segmentation – in particular, when using a DCNN paradigm. In order to answer this question we have generated a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations and object identifiers. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments with DCNNs that show that combining SYNTHIA with simple domain adaptation techniques in the training stage significantly improves performance on semantic segmentation.
Address
Corporate Author Thesis
Publisher Springer Place of Publication Editor Gabriela Csurka
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.085; 600.082; 600.076; 600.118 Approved no
Call Number ADAS @ adas @ RSV2017 Serial 2882
Permanent link to this record
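As an illustration of the simplest training-stage domain adaptation the chapter above evaluates – mixing SYNTHIA's synthetic images with real annotated images in the same training batches – here is a minimal PyTorch-style sketch. The dataset objects, the segmentation network and the loss settings are placeholders, not the authors' actual configuration.

```python
# Minimal sketch (not the authors' code) of mixing synthetic (SYNTHIA) and real
# annotated images in the same training batches for a segmentation DCNN.
# The dataset classes and the network are hypothetical placeholders.
import torch
from torch.utils.data import ConcatDataset, DataLoader

def make_mixed_loader(synthetic_ds, real_ds, batch_size=8):
    """Concatenate a synthetic and a real segmentation dataset so each
    epoch draws samples from both domains."""
    mixed = ConcatDataset([synthetic_ds, real_ds])
    return DataLoader(mixed, batch_size=batch_size, shuffle=True)

def train_epoch(model, loader, optimizer):
    criterion = torch.nn.CrossEntropyLoss(ignore_index=255)  # 255 = unlabeled pixels (assumption)
    model.train()
    for images, labels in loader:           # labels: per-pixel class ids
        optimizer.zero_grad()
        logits = model(images)              # (B, C, H, W) class scores
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
```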
 

 
Author Lluis Gomez; Dimosthenis Karatzas
Title TextProposals: a Text-specific Selective Search Algorithm for Word Spotting in the Wild Type Journal Article
Year 2017 Publication Pattern Recognition Abbreviated Journal PR
Volume 70 Issue Pages 60-74
Keywords
Abstract Motivated by the success of powerful yet expensive techniques to recognize words in a holistic way (Goel et al., 2013; Almazán et al., 2014; Jaderberg et al., 2016), object proposal techniques emerge as an alternative to the traditional text detectors. In this paper we introduce a novel object proposals method that is specifically designed for text. We rely on a similarity-based region grouping algorithm that generates a hierarchy of word hypotheses. Over the nodes of this hierarchy it is possible to apply a holistic word recognition method in an efficient way.

Our experiments demonstrate that the presented method is superior in its ability to produce good-quality word proposals when compared with class-independent algorithms. We show impressive recall rates with a few thousand proposals on different standard benchmarks, including focused and incidental text datasets, and multi-language scenarios. Moreover, the combination of our object proposals with existing whole-word recognizers (Almazán et al., 2014; Jaderberg et al., 2016) shows competitive performance in end-to-end word spotting and, in some benchmarks, outperforms previously published results. Concretely, on the challenging ICDAR2015 Incidental Text dataset, we surpass the best-performing method of the last ICDAR Robust Reading Competition (Karatzas, 2015) by more than 10% in F-score. Source code of the complete end-to-end system is available at https://github.com/lluisgomez/TextProposals.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.084; 601.197; 600.121; 600.129 Approved no
Call Number Admin @ si @ GoK2017 Serial 2886
Permanent link to this record
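The core of TextProposals, as the abstract describes it, is a similarity-based region grouping that yields a hierarchy of word hypotheses whose nodes become the proposals. The following is a rough sketch of that idea under my own assumptions (SciPy single-linkage clustering over hand-picked region features); the released code at https://github.com/lluisgomez/TextProposals uses its own grouping strategies and features.

```python
# Rough sketch (under assumptions, not the released TextProposals code) of the
# core idea: group initial regions by feature similarity into a hierarchy and
# emit every node of the dendrogram as a word-box proposal.
import numpy as np
from scipy.cluster.hierarchy import linkage

def hierarchy_proposals(boxes, features):
    """boxes: (N, 4) array of x0, y0, x1, y1 per initial region.
    features: (N, D) similarity features (e.g. colour, stroke width, size)."""
    Z = linkage(features, method='single')      # agglomerative merge tree
    proposals = [b.astype(float) for b in boxes]
    for left, right, _, _ in Z:                 # each merge = one word hypothesis
        a, b = proposals[int(left)], proposals[int(right)]
        merged = np.array([min(a[0], b[0]), min(a[1], b[1]),
                           max(a[2], b[2]), max(a[3], b[3])])
        proposals.append(merged)
    return np.stack(proposals)                  # initial regions + all merged nodes
```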
 

 
Author Lluis Gomez; Anguelos Nicolaou; Dimosthenis Karatzas
Title Improving patch-based scene text script identification with ensembles of conjoined networks Type Journal Article
Year 2017 Publication Pattern Recognition Abbreviated Journal PR
Volume 67 Issue Pages 85-96
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.084; 600.121; 600.129 Approved no
Call Number Admin @ si @ GNK2017 Serial 2887
Permanent link to this record
 

 
Author Ivet Rafegas; Javier Vazquez; Robert Benavente; Maria Vanrell; Susana Alvarez
Title Enhancing spatio-chromatic representation with more-than-three color coding for image description Type Journal Article
Year 2017 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A
Volume 34 Issue 5 Pages 827-837
Keywords
Abstract Extraction of spatio-chromatic features from color images is usually performed independently on each color channel. Usual 3D color spaces, such as RGB, present a high inter-channel correlation for natural images. This correlation can be reduced using color-opponent representations, but the spatial structure of regions with small color differences is not fully captured in two generic Red-Green and Blue-Yellow channels. To overcome these problems, we propose a new color coding that is adapted to the specific content of each image. Our proposal is based on two steps: (a) setting the number of channels to the number of distinctive colors we find in each image (avoiding the problem of channel correlation), and (b) building a channel representation that maximizes contrast differences within each color channel (avoiding the problem of low local contrast). We call this approach more-than-three color coding (MTT) to emphasize the fact that the number of channels is adapted to the image content. The higher the color complexity of an image, the more channels can be used to represent it. Here we select the distinctive colors as the most predominant in the image, which we call color pivots, and we build the new color coding using these color pivots as a basis. To evaluate the proposed approach we measure its efficiency in an image categorization task. We show how a generic descriptor improves its performance at the description level when applied on the MTT coding.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC; 600.087 Approved no
Call Number Admin @ si @ RVB2017 Serial 2892
Permanent link to this record
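To make the more-than-three (MTT) coding described above concrete, here is an illustrative sketch, not the authors' implementation: predominant colors are taken as pivots and one channel is built per pivot. Using k-means to find the pivots and a Gaussian similarity as the channel response are my assumptions.

```python
# Illustrative sketch of the MTT idea: pick the image's predominant colours as
# pivots and build one channel per pivot. K-means for pivot selection and the
# Gaussian similarity are assumptions for the example.
import numpy as np
from sklearn.cluster import KMeans

def mtt_channels(image, n_pivots=6, sigma=25.0):
    """image: (H, W, 3) uint8 RGB. Returns (H, W, n_pivots) float channels."""
    pixels = image.reshape(-1, 3).astype(float)
    pivots = KMeans(n_clusters=n_pivots, n_init=10).fit(pixels).cluster_centers_
    dists = np.linalg.norm(pixels[:, None, :] - pivots[None, :, :], axis=2)
    channels = np.exp(-(dists ** 2) / (2 * sigma ** 2))   # response around each pivot
    return channels.reshape(image.shape[0], image.shape[1], n_pivots)
```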
 

 
Author Marçal Rusiñol; Josep Llados
Title Flowchart Recognition in Patent Information Retrieval Type Book Chapter
Year 2017 Publication Current Challenges in Patent Information Retrieval Abbreviated Journal
Volume 37 Issue Pages 351-368
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor M. Lupu; K. Mayer; N. Kando; A.J. Trippe
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.097; 600.121 Approved no
Call Number Admin @ si @ RuL2017 Serial 2896
Permanent link to this record
 

 
Author Cristhian A. Aguilera-Carrasco; Angel Sappa; Cristhian Aguilera; Ricardo Toledo
Title Cross-Spectral Local Descriptors via Quadruplet Network Type Journal Article
Year 2017 Publication Sensors Abbreviated Journal SENS
Volume 17 Issue 4 Pages 873
Keywords
Abstract This paper presents a novel CNN-based architecture, referred to as Q-Net, to learn local feature descriptors that are useful for matching image patches from two different spectral bands. Given correctly matched and non-matching cross-spectral image pairs, a quadruplet network is trained to map input image patches to a common Euclidean space, regardless of the input spectral band. Our approach is inspired by the recent success of triplet networks in the visible spectrum, but adapted for cross-spectral scenarios, where, for each matching pair, there are always two possible non-matching patches: one for each spectrum. Experimental evaluations on a public cross-spectral VIS-NIR dataset show that the proposed approach improves the state of the art. Moreover, the proposed technique can also be used in mono-spectral settings, obtaining a similar performance to triplet network descriptors but requiring less training data.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.086; 600.118 Approved no
Call Number Admin @ si @ ASA2017 Serial 2914
Permanent link to this record
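A hedged sketch of a quadruplet margin loss in the spirit of Q-Net follows: one matching cross-spectral pair plus one non-matching patch per spectrum, with the matching distance pushed below the hardest of the two non-matching distances. The exact loss and network used in the paper may differ; the shared embedding CNN is left as a placeholder.

```python
# Quadruplet margin loss sketch (illustration only, not necessarily the exact
# loss of the paper). All descriptors come from a shared embedding CNN.
import torch
import torch.nn.functional as F

def quadruplet_loss(f_vis, f_nir, f_vis_neg, f_nir_neg, margin=1.0):
    """All inputs: (B, D) L2-normalised descriptors."""
    d_pos = F.pairwise_distance(f_vis, f_nir)            # matching cross-spectral pair
    d_neg_vis = F.pairwise_distance(f_nir, f_vis_neg)    # non-match in the VIS band
    d_neg_nir = F.pairwise_distance(f_vis, f_nir_neg)    # non-match in the NIR band
    hardest_neg = torch.min(d_neg_vis, d_neg_nir)        # push against the hardest one
    return F.relu(d_pos - hardest_neg + margin).mean()
```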
 

 
Author Mikhail Mozerov; Joost Van de Weijer
Title Improved Recursive Geodesic Distance Computation for Edge Preserving Filter Type Journal Article
Year 2017 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 26 Issue 8 Pages 3696 - 3706
Keywords Geodesic distance filter; color image filtering; image enhancement
Abstract All known recursive filters based on the geodesic distance affinity are realized by two 1D recursions applied in two orthogonal directions of the image plane. The 2D extension of the filter is not valid and has theoretical drawbacks, which lead to known artifacts. In this paper, a maximum influence propagation method is proposed to approximate the 2D extension of the geodesic distance-based recursive filter. The method partially overcomes the drawbacks of the 1D recursion approach. We show that our improved recursion better approximates the true geodesic distance filter, and that applying this improved filter to image denoising outperforms the existing recursive implementation of the geodesic distance. As an application, we consider a geodesic distance-based filter for image denoising. Experimental evaluation of our denoising method demonstrates results comparable to, and for several test images better than, state-of-the-art approaches, while our algorithm is considerably faster, with computational complexity O(8P).
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; ISE; 600.120; 600.098; 600.119 Approved no
Call Number Admin @ si @ Moz2017 Serial 2921
Permanent link to this record
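The abstract above builds on 1D recursions with geodesic-distance affinity. As a toy illustration only (my own simplification, not the paper's maximum influence propagation method), a single left-to-right pass can be written as follows; 2D filters chain such passes along rows and columns.

```python
# Toy sketch of a single left-to-right 1D recursion with geodesic-distance
# affinity: the feedback weight shrinks across strong intensity edges so the
# smoothing does not cross them. A simplification, not the paper's algorithm.
import numpy as np

def geodesic_recursion_1d(signal, sigma_s=10.0, sigma_r=0.1):
    """signal: 1D float array in [0, 1]. Returns the filtered signal."""
    out = signal.astype(float).copy()
    for i in range(1, len(out)):
        # geodesic step cost: spatial step (1) plus scaled intensity difference
        cost = 1.0 + abs(signal[i] - signal[i - 1]) * (sigma_s / sigma_r)
        a = np.exp(-cost / sigma_s)            # affinity to the previous sample
        out[i] = (1.0 - a) * signal[i] + a * out[i - 1]
    return out
```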
 

 
Author Pau Rodriguez; Guillem Cucurull; Jordi Gonzalez; Josep M. Gonfaus; Kamal Nasrollahi; Thomas B. Moeslund; Xavier Roca
Title Deep Pain: Exploiting Long Short-Term Memory Networks for Facial Expression Classification Type Journal Article
Year 2017 Publication IEEE Transactions on cybernetics Abbreviated Journal Cyber
Volume Issue Pages 1-11
Keywords
Abstract Pain is an unpleasant feeling that has been shown to be an important factor for the recovery of patients. Since measuring it is costly in human resources and difficult to do objectively, there is a need for automatic systems to measure it. In this paper, contrary to current state-of-the-art techniques in pain assessment, which are based on facial features only, we suggest that performance can be enhanced by feeding the raw frames to deep learning models, outperforming the latest state-of-the-art results while also directly facing the problem of imbalanced data. As a baseline, our approach first uses convolutional neural networks (CNNs) to learn facial features from VGG_Faces, which are then linked to a long short-term memory (LSTM) network to exploit the temporal relation between video frames. We further compare the performance of using the popular scheme based on the canonically normalized appearance versus taking the whole image into account. As a result, we outperform the current state-of-the-art area under the curve performance on the UNBC-McMaster Shoulder Pain Expression Archive Database. In addition, to evaluate the generalization properties of our proposed methodology on facial motion recognition, we also report competitive results on the Cohn-Kanade+ facial expression database.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; 600.119; 600.098 Approved no
Call Number Admin @ si @ RCG2017a Serial 2926
Permanent link to this record
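A minimal sketch of the CNN-to-LSTM pipeline the abstract describes: per-frame features from a face CNN (VGG_Faces in the paper; any frozen backbone here) are fed to an LSTM that models the temporal relation between frames. Layer sizes, shapes and the classification head are illustrative assumptions, not the published configuration.

```python
# Minimal CNN -> LSTM sketch for clip-level facial expression / pain scoring.
# Backbone, feature size and head are placeholders, not the paper's settings.
import torch
import torch.nn as nn

class FramesToPain(nn.Module):
    def __init__(self, cnn_backbone, feat_dim=4096, hidden=512, n_levels=2):
        super().__init__()
        self.cnn = cnn_backbone                      # maps (B*T, 3, H, W) -> (B*T, feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_levels)      # pain / no-pain scores

    def forward(self, clips):                        # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.lstm(feats)
        return self.head(seq[:, -1])                 # classify from the last time step
```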
 

 
Author Karim Lekadir; Alfiia Galimzianova; Angels Betriu; Maria del Mar Vila; Laura Igual; Daniel L. Rubin; Elvira Fernandez-Giraldez; Petia Radeva; Sandy Napel
Title A Convolutional Neural Network for Automatic Characterization of Plaque Composition in Carotid Ultrasound Type Journal Article
Year 2017 Publication IEEE Journal Biomedical and Health Informatics Abbreviated Journal J-BHI
Volume 21 Issue 1 Pages 48-55
Keywords
Abstract Characterization of carotid plaque composition, more specifically the amount of lipid core, fibrous tissue, and calcified tissue, is an important task for the identification of plaques that are prone to rupture, and thus for early risk estimation of cardiovascular and cerebrovascular events. Due to its low costs and wide availability, carotid ultrasound has the potential to become the modality of choice for plaque characterization in clinical practice. However, its significant image noise, coupled with the small size of the plaques and their complex appearance, makes it difficult for automated techniques to discriminate between the different plaque constituents. In this paper, we propose to address this challenging problem by exploiting the unique capabilities of the emerging deep learning framework. More specifically, and unlike existing works which require a priori definition of specific imaging features or thresholding values, we propose to build a convolutional neural network (CNN) that will automatically extract from the images the information that is optimal for the identification of the different plaque constituents. We used approximately 90 000 patches extracted from a database of images and corresponding expert plaque characterizations to train and to validate the proposed CNN. The results of cross-validation experiments show a correlation of about 0.90 with the clinical assessment for the estimation of lipid core, fibrous cap, and calcified tissue areas, indicating the potential of deep learning for the challenging task of automatic characterization of plaque composition in carotid ultrasound.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no menciona Approved no
Call Number Admin @ si @ LGB2017 Serial 2931
Permanent link to this record
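As a loose sketch of the patch-based pipeline outlined above – classify small ultrasound patches into plaque constituents with a CNN, then estimate each constituent's area from the predictions – the following uses a hypothetical cnn_predict callable; the patch size and class set are placeholders, not the paper's settings.

```python
# Loose sketch: per-patch CNN classification accumulated into constituent areas.
# cnn_predict, patch size and class names are hypothetical placeholders.
import numpy as np

CLASSES = ("lipid_core", "fibrous_tissue", "calcified_tissue")

def constituent_areas(image, plaque_mask, cnn_predict, patch=15):
    """Slide a patch over pixels inside the plaque mask and accumulate the
    predicted class of each centre pixel into per-class areas (in pixels)."""
    h, w = plaque_mask.shape
    r = patch // 2
    areas = {c: 0 for c in CLASSES}
    for y in range(r, h - r):
        for x in range(r, w - r):
            if not plaque_mask[y, x]:
                continue
            window = image[y - r:y + r + 1, x - r:x + r + 1]
            label = cnn_predict(window)          # returns an index into CLASSES
            areas[CLASSES[label]] += 1
    return areas
```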
 

 
Author Umut Guclu; Yagmur Gucluturk; Meysam Madadi; Sergio Escalera; Xavier Baro; Jordi Gonzalez; Rob van Lier; Marcel A. J. van Gerven
Title End-to-end semantic face segmentation with conditional random fields as convolutional, recurrent and adversarial networks Type Miscellaneous
Year 2017 Publication Arxiv Abbreviated Journal
Volume Issue Pages
Keywords
Abstract arXiv:1703.03305
Recent years have seen a sharp increase in the number of related yet distinct advances in semantic segmentation. Here, we tackle this problem by leveraging the respective strengths of these advances. That is, we formulate a conditional random field over a four-connected graph as end-to-end trainable convolutional and recurrent networks, and estimate them via an adversarial process. Importantly, our model learns not only unary potentials but also pairwise potentials, while aggregating multi-scale contexts and controlling higher-order inconsistencies. We evaluate our model on two standard benchmark datasets for semantic face segmentation, achieving state-of-the-art results on both of them.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA; ISE; 600.098; 600.119 Approved no
Call Number Admin @ si @ GGM2017 Serial 2932
Permanent link to this record
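A very reduced sketch of one mean-field update on a four-connected grid CRF, in the spirit of the convolutional/recurrent CRF formulation the abstract mentions (the adversarial estimation is omitted). Modelling the pairwise term with a fixed 4-neighbour aggregation followed by a learnable 1x1 label-compatibility convolution is an assumption made for illustration.

```python
# Reduced mean-field step on a 4-connected grid CRF; adversarial training omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldStep(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        # message passing from the 4 neighbours, one depthwise filter per class
        kernel = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
        self.register_buffer("neigh", kernel.expand(n_classes, 1, 3, 3).clone())
        self.compat = nn.Conv2d(n_classes, n_classes, kernel_size=1)  # label compatibility

    def forward(self, unary_logits, q):
        # q: current marginals (B, C, H, W); aggregate neighbour beliefs, mix labels,
        # subtract from the unaries and renormalise
        msgs = F.conv2d(q, self.neigh, padding=1, groups=q.shape[1])
        pairwise = self.compat(msgs)
        return F.softmax(unary_logits - pairwise, dim=1)
```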
 

 
Author H. Martin Kjer; Jens Fagertun; Sergio Vera; Debora Gil
Title Medial structure generation for registration of anatomical structures Type Book Chapter
Year 2017 Publication Skeletonization, Theory, Methods and Applications Abbreviated Journal
Volume 11 Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.096; 600.075; 600.145 Approved no
Call Number Admin @ si @ MFV2017a Serial 2935
Permanent link to this record
 

 
Author Antonio Lopez; Atsushi Imiya; Tomas Pajdla; Jose Manuel Alvarez
Title Computer Vision in Vehicle Technology: Land, Sea & Air Type Book Whole
Year 2017 Publication Abbreviated Journal
Volume Issue Pages 161-163
Keywords
Abstract This chapter examines different vision-based commercial solutions for real-life problems related to vehicles. It is worth mentioning the recent astonishing performance of deep convolutional neural networks (DCNNs) in difficult visual tasks such as image classification, object recognition/localization/detection, and semantic segmentation. In fact, different DCNN architectures are already being explored for low-level tasks such as optical flow and disparity computation, and for higher-level ones such as place recognition.
Address
Corporate Author Thesis
Publisher John Wiley & Sons, Ltd Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-118-86807-2 Medium
Area Expedition Conference
Notes ADAS; 600.118 Approved no
Call Number Admin @ si @ LIP2017a Serial 2937
Permanent link to this record
 

 
Author David Vazquez; Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Antonio Lopez; Adriana Romero; Michal Drozdzal; Aaron Courville
Title A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images Type Journal Article
Year 2017 Publication Journal of Healthcare Engineering Abbreviated Journal JHCE
Volume Issue Pages 2040-2295
Keywords Colonoscopy images; Deep Learning; Semantic Segmentation
Abstract Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss-rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset, and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCN). We perform a comparative study to show that FCNs significantly outperform, without any further post-processing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; MV; 600.075; 600.085; 600.076; 601.281; 600.118 Approved no
Call Number VBS2017b Serial 2940
Permanent link to this record
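Benchmarks like the one above are typically reported with per-class overlap measures. The snippet below sketches a plain per-class intersection-over-union computed from predicted and ground-truth label maps; the 4-class layout comes from the abstract, while the official evaluation protocol may use additional metrics.

```python
# Per-class IoU sketch for a 4-class endoluminal segmentation benchmark.
import numpy as np

def per_class_iou(pred, gt, n_classes=4):
    """pred, gt: integer label maps of identical shape."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union else float('nan'))
    return ious
```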
 

 
Author Marta Diez-Ferrer; Debora Gil; Elena Carreño; Susana Padrones; Samantha Aso; Vanesa Vicens; Noelia Cubero de Frutos; Rosa Lopez Lisbona; Carles Sanchez; Agnes Borras; Antoni Rosell
Title Positive Airway Pressure-Enhanced CT to Improve Virtual Bronchoscopic Navigation Type Journal Article
Year 2017 Publication European Respiratory Journal Abbreviated Journal ERJ
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM Approved no
Call Number Admin @ si @ DGC2017b Serial 3632
Permanent link to this record
 

 
Author Jorge Bernal; Nima Tajkbaksh; F. Javier Sanchez; Bogdan J. Matuszewski; Hao Chen; Lequan Yu; Quentin Angermann; Olivier Romain; Bjorn Rustad; Ilangko Balasingham; Konstantin Pogorelov; Sungbin Choi; Quentin Debard; Lena Maier Hein; Stefanie Speidel; Danail Stoyanov; Patrick Brandao; Henry Cordova; Cristina Sanchez Montes; Suryakanth R. Gurudu; Gloria Fernandez Esparrach; Xavier Dray; Jianming Liang; Aymeric Histace
Title Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results from the MICCAI 2015 Endoscopic Vision Challenge Type Journal Article
Year 2017 Publication IEEE Transactions on Medical Imaging Abbreviated Journal TMI
Volume 36 Issue 6 Pages 1231 - 1249
Keywords Endoscopic vision; Polyp Detection; Handcrafted features; Machine Learning; Validation Framework
Abstract Colonoscopy is the gold standard for colon cancer screening, though still some polyps are missed, thus preventing early disease detection and treatment. Several computational systems have been proposed to assist polyp detection during colonoscopy, but so far without consistent evaluation. The lack of publicly available annotated databases has made it difficult to compare methods and to assess if they achieve performance levels acceptable for clinical use. The Automatic Polyp Detection subchallenge, conducted as part of the Endoscopic Vision Challenge (http://endovis.grand-challenge.org) at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2015, was an effort to address this need. In this paper, we report the results of this comparative evaluation of polyp detection methods, as well as describe additional experiments to further explore differences between methods. We define performance metrics and provide evaluation databases that allow comparison of multiple methodologies. Results show that convolutional neural networks (CNNs) are the state of the art. Nevertheless, it is also demonstrated that combining different methodologies can lead to an improved overall performance.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MV; 600.096; 600.075 Approved no
Call Number Admin @ si @ BTS2017 Serial 2949
Permanent link to this record
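Frame-level polyp detection scoring of the kind this study relies on reduces to bookkeeping over true positives, false positives and false negatives (a detection counting as correct when it falls inside a ground-truth polyp region). The sketch below shows only that basic bookkeeping; the official challenge protocol includes further rules such as reaction time.

```python
# Basic precision / recall / F1 bookkeeping from accumulated detection counts.
def detection_scores(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```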