Author Veronica Romero; Alicia Fornes; Enrique Vidal; Joan Andreu Sanchez
Title Information Extraction in Handwritten Marriage Licenses Books Using the MGGI Methodology Type Conference Article
Year 2017 Publication 8th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal
Volume 10255 Issue Pages 287-294
Keywords Handwritten Text Recognition; Information extraction; Language modeling; MGGI; Categories-based language model
Abstract Historical records of daily activities provide intriguing insights into the life of our ancestors, useful for demographic and genealogical research. For example, marriage license books have been used for centuries by ecclesiastical and secular institutions to register marriages. These books follow a simple textual structure in the records, with an evolving vocabulary mainly composed of proper names that change over time. This distinctive vocabulary makes automatic transcription and semantic information extraction difficult. In previous works we studied the use of category-based language models and how a Grammatical Inference technique known as MGGI could improve the accuracy of these tasks. In this work we analyze the main causes of the semantic errors observed in previous results and apply an improved implementation of the MGGI technique to solve these problems. Using the resulting language model, transcription and information extraction experiments have been carried out, and the results support the proposed approach.
Address Faro; Portugal; June 2017
Corporate Author Thesis
Publisher Place of Publication Editor L.A. Alexandre; J. Salvador Sanchez; Joao M. F. Rodrigues
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN 978-3-319-58837-7 Medium
Area Expedition Conference IbPRIA
Notes DAG; 602.006; 600.097; 600.121 Approved no
Call Number Admin @ si @ RFV2017 Serial 2952
 

 
Author Antonio Lopez; Jiaolong Xu; Jose Luis Gomez; David Vazquez; German Ros
Title From Virtual to Real World Visual Perception using Domain Adaptation -- The DPM as Example Type Book Chapter
Year 2017 Publication Domain Adaptation in Computer Vision Applications Abbreviated Journal
Volume Issue 13 Pages 243-258
Keywords Domain Adaptation
Abstract Supervised learning tends to produce more accurate classifiers than unsupervised learning, which implies that annotated training data is preferred. When addressing visual perception challenges, such as localizing certain object classes within an image, the learning of the involved classifiers turns out to be a practical bottleneck: at a minimum, object examples must be framed with bounding boxes in thousands of images. A priori, the more complex the model is in terms of its number of parameters, the more annotated examples are required. This annotation task is performed by human oracles, and since it is inherently very cumbersome and sometimes ambiguous, it introduces inaccuracies and errors into the annotations (aka ground truth). As an alternative, we have pioneered the use of virtual worlds for collecting such annotations automatically and with high precision. However, since the models learned with virtual data must operate in the real world, we still need to perform domain adaptation (DA). In this chapter we revisit the DA of a deformable part-based model (DPM) as an exemplifying case of virtual-to-real-world DA. As a use case, we address the challenge of vehicle detection for driver assistance, using different publicly available virtual-world data. While doing so, we investigate how the domain gap behaves for virtual-vs-real data with respect to the dominant object appearance per domain, as well as the role of photo-realism in the virtual world.
Address
Corporate Author Thesis
Publisher Springer Place of Publication Editor Gabriela Csurka
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.085; 601.223; 600.076; 600.118 Approved no
Call Number ADAS @ adas @ LXG2017 Serial 2872
 

 
Author Pau Riba; Josep Llados; Alicia Fornes; Anjan Dutta
Title Large-scale graph indexing using binary embeddings of node contexts for information spotting in document image databases Type Journal Article
Year 2017 Publication Pattern Recognition Letters Abbreviated Journal PRL
Volume 87 Issue Pages 203-211
Keywords
Abstract Graph-based representations are seeing growing use in visual recognition and retrieval due to their representational power compared with classical appearance-based representations. However, retrieving a query graph from a large dataset of graphs implies a high computational complexity, and the key requirement for large-scale retrieval is a search time complexity that is sub-linear in the number of database examples. With this aim, in this paper we propose a graph indexing formalism applied to visual retrieval. A binary embedding is defined as hashing keys for graph nodes: given a database of labeled graphs, graph nodes are complemented with vectors of attributes representing their local context, and each attribute vector is converted to a binary code by applying a binary-valued hash function. Graph retrieval is then formulated as finding target graphs in the database whose nodes have a small Hamming distance from the query nodes, easily computed with bitwise logical operators. As an application example, we validate the performance of the proposed methods in different real scenarios, such as handwritten word spotting in images of historical documents and symbol spotting in architectural floor plans. (A minimal illustrative sketch of the hashing-and-matching idea follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.097; 602.006; 603.053; 600.121 Approved no
Call Number RLF2017b Serial 2873
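A minimal Python sketch of the idea described in the abstract above: node-context attribute vectors are hashed to binary codes, and graphs are retrieved by counting query nodes that find a database node within a small Hamming radius. This is an illustration of the general scheme, not the authors' implementation; the thresholding hash, `max_dist`, and the scoring rule are assumptions.

```python
import numpy as np

def binary_node_codes(attr_vectors, thresholds):
    # Hash real-valued node-context attributes to binary codes by
    # per-dimension thresholding (one simple binary-valued hash choice).
    return (attr_vectors > thresholds).astype(np.uint8)

def hamming(a, b):
    # Hamming distance via bitwise XOR, as in the abstract.
    return int(np.count_nonzero(np.bitwise_xor(a, b)))

def score_graph(query_codes, target_codes, max_dist=2):
    # Fraction of query nodes that find a target node within max_dist bits;
    # database graphs scoring high become retrieval candidates.
    hits = sum(
        1 for q in query_codes
        if min(hamming(q, t) for t in target_codes) <= max_dist
    )
    return hits / len(query_codes)
```

In a real index the binary codes would be stored in hash tables so that candidate graphs are found in sub-linear time rather than by exhaustive comparison.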
 

 
Author Daniel Hernandez; Antonio Espinosa; David Vazquez; Antonio Lopez; Juan Carlos Moure
Title Embedded Real-time Stixel Computation Type Conference Article
Year 2017 Publication GPU Technology Conference Abbreviated Journal
Volume Issue Pages
Keywords GPU; CUDA; Stixels; Autonomous Driving
Abstract
Address Silicon Valley; USA; May 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference GTC
Notes ADAS; 600.118 Approved no
Call Number ADAS @ adas @ HEV2017a Serial 2879
 

 
Author David Vazquez; Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Antonio Lopez; Adriana Romero; Michal Drozdzal; Aaron Courville
Title A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images Type Conference Article
Year 2017 Publication 31st International Congress and Exhibition on Computer Assisted Radiology and Surgery Abbreviated Journal
Volume Issue Pages
Keywords Deep Learning; Medical Imaging
Abstract Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reducing CRC-related mortality is to perform regular screening in search for polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) that help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper we introduce an extended benchmark of colonoscopy images, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. We provide new baselines on this dataset by training standard fully convolutional networks (FCN) for semantic segmentation, significantly outperforming prior results in endoluminal scene segmentation without any further post-processing.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CARS
Notes ADAS; MV; 600.075; 600.085; 600.076; 601.281; 600.118 Approved no
Call Number ADAS @ adas @ VBS2017a Serial 2880
 

 
Author David Geronimo; David Vazquez; Arturo de la Escalera
Title Vision-Based Advanced Driver Assistance Systems Type Book Chapter
Year 2017 Publication Computer Vision in Vehicle Technology: Land, Sea, and Air Abbreviated Journal
Volume Issue Pages
Keywords ADAS; Autonomous Driving
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.118 Approved no
Call Number ADAS @ adas @ GVE2017 Serial 2881
 

 
Author German Ros; Laura Sellart; Gabriel Villalonga; Elias Maidanik; Francisco Molero; Marc Garcia; Adriana Cedeño; Francisco Perez; Didier Ramirez; Eduardo Escobar; Jose Luis Gomez; David Vazquez; Antonio Lopez
Title Semantic Segmentation of Urban Scenes via Domain Adaptation of SYNTHIA Type Book Chapter
Year 2017 Publication Domain Adaptation in Computer Vision Applications Abbreviated Journal
Volume 12 Issue Pages 227-241
Keywords SYNTHIA; Virtual worlds; Autonomous Driving
Abstract Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. Recent revolutionary results of deep convolutional neural networks (DCNNs) foreshadow the advent of reliable classifiers to perform such visual tasks. However, DCNNs require learning many parameters from raw images; thus, a sufficient amount of diverse images with class annotations is needed. These annotations are obtained via cumbersome human labour, which is particularly challenging for semantic segmentation, since pixel-level annotations are required. In this chapter, we propose to use a combination of a virtual world, to automatically generate realistic synthetic images with pixel-level annotations, and domain adaptation, to transfer the learnt models so they operate correctly in real scenarios. We address the question of how useful synthetic data can be for semantic segmentation, in particular when using a DCNN paradigm. In order to answer this question we have generated a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations and object identifiers. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments with DCNNs that show that combining SYNTHIA with simple domain adaptation techniques in the training stage significantly improves performance on semantic segmentation.
Address
Corporate Author Thesis
Publisher Springer Place of Publication Editor Gabriela Csurka
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.085; 600.082; 600.076; 600.118 Approved no
Call Number ADAS @ adas @ RSV2017 Serial 2882
 

 
Author Lluis Gomez; Dimosthenis Karatzas
Title TextProposals: a Text‐specific Selective Search Algorithm for Word Spotting in the Wild Type Journal Article
Year 2017 Publication Pattern Recognition Abbreviated Journal PR
Volume 70 Issue Pages 60-74
Keywords
Abstract Motivated by the success of powerful yet expensive techniques to recognize words in a holistic way (Goel et al., 2013; Almazán et al., 2014; Jaderberg et al., 2016), object proposal techniques have emerged as an alternative to traditional text detectors. In this paper we introduce a novel object proposals method that is specifically designed for text. We rely on a similarity-based region grouping algorithm that generates a hierarchy of word hypotheses; over the nodes of this hierarchy it is possible to apply a holistic word recognition method in an efficient way.

Our experiments demonstrate that the presented method is superior in its ability to produce good-quality word proposals when compared with class-independent algorithms. We show impressive recall rates with a few thousand proposals on different standard benchmarks, including focused and incidental text datasets, and multi-language scenarios. Moreover, the combination of our object proposals with existing whole-word recognizers (Almazán et al., 2014; Jaderberg et al., 2016) shows competitive performance in end-to-end word spotting and, in some benchmarks, outperforms previously published results. Concretely, on the challenging ICDAR2015 Incidental Text dataset we surpass the best-performing method of the last ICDAR Robust Reading Competition (Karatzas, 2015) by more than 10% in F-score. Source code of the complete end-to-end system is available at https://github.com/lluisgomez/TextProposals. (A minimal sketch of the similarity-based grouping follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.084; 601.197; 600.121; 600.129 Approved no
Call Number Admin @ si @ GoK2017 Serial 2886
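The hierarchy of word hypotheses mentioned in the abstract above can be sketched with generic agglomerative grouping: every merge of two region clusters contributes one box proposal. The single-linkage clustering and plain Euclidean similarity below are placeholders; the actual method relies on text-specific similarity cues over several diversified feature spaces.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def word_proposals(boxes, features):
    # boxes: list of (x1, y1, x2, y2); features: (n, d) array of region features.
    # Build a similarity hierarchy over regions; every internal node of the
    # dendrogram yields the bounding box of its grouped regions as a proposal.
    Z = linkage(features, method="single")  # (n - 1) merges
    merged = [tuple(b) for b in boxes]
    proposals = []
    for a, b, _, _ in Z:
        ba, bb = merged[int(a)], merged[int(b)]
        union = (min(ba[0], bb[0]), min(ba[1], bb[1]),
                 max(ba[2], bb[2]), max(ba[3], bb[3]))
        merged.append(union)     # the new cluster gets the next index
        proposals.append(union)
    return proposals
```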
 

 
Author Lluis Gomez; Anguelos Nicolaou; Dimosthenis Karatzas
Title Improving patch‐based scene text script identification with ensembles of conjoined networks Type Journal Article
Year 2017 Publication Pattern Recognition Abbreviated Journal PR
Volume 67 Issue Pages 85-96
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.084; 600.121; 600.129 Approved no
Call Number Admin @ si @ GNK2017 Serial 2887
 

 
Author Lluis Gomez; Y. Patel; Marçal Rusiñol; C.V. Jawahar; Dimosthenis Karatzas
Title Self‐supervised learning of visual features through embedding images into text topic spaces Type Conference Article
Year 2017 Publication 30th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages
Keywords
Abstract End-to-end training from scratch of current deep architectures for new computer vision problems would require ImageNet-scale datasets, which are not always available. In this paper we present a method that is able to take advantage of freely available multi-modal content to train computer vision algorithms without human supervision. We put forward the idea of performing self-supervised learning of visual features by mining a large-scale corpus of multi-modal (text and image) documents. We show that discriminative visual features can be learnt efficiently by training a CNN to predict the semantic context in which a particular image is most likely to appear as an illustration. For this we leverage the hidden semantic structures discovered in the text corpus with a well-known topic modeling technique. Our experiments demonstrate state-of-the-art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised or naturally supervised approaches. (A minimal sketch of the topic-space supervision follows this record.)
Address Honolulu; Hawaii; July 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPR
Notes DAG; 600.084; 600.121 Approved no
Call Number Admin @ si @ GPR2017 Serial 2889
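A minimal sketch of the training signal described in the abstract above: a topic model (here scikit-learn's LDA, one possible choice for the "well-known topic modeling technique") turns each document into a topic distribution, and a CNN is trained to predict that distribution from the document's image. The placeholder corpus, the number of topics, and the soft cross-entropy loss are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# 1) Discover hidden semantic structure in the text corpus (placeholder docs).
texts = ["text of the article illustrated by image one",
         "text of the article illustrated by image two"]
counts = CountVectorizer().fit_transform(texts)
lda = LatentDirichletAllocation(n_components=40).fit(counts)
topic_targets = torch.tensor(lda.transform(counts), dtype=torch.float32)

# 2) Train a CNN to predict each image's topic distribution (no human labels).
def topic_loss(cnn_logits, targets):
    # Cross-entropy against the soft topic distribution of the source document.
    return -(targets * F.log_softmax(cnn_logits, dim=1)).sum(dim=1).mean()
```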
 

 
Author Ivet Rafegas; Javier Vazquez; Robert Benavente; Maria Vanrell; Susana Alvarez
Title Enhancing spatio-chromatic representation with more-than-three color coding for image description Type Journal Article
Year 2017 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A
Volume 34 Issue 5 Pages 827-837
Keywords
Abstract Extraction of spatio-chromatic features from color images is usually performed independently on each color channel. Usual 3D color spaces, such as RGB, present a high inter-channel correlation for natural images. This correlation can be reduced using color-opponent representations, but the spatial structure of regions with small color differences is not fully captured in two generic Red-Green and Blue-Yellow channels. To overcome these problems, we propose a new color coding that is adapted to the specific content of each image. Our proposal is based on two steps: (a) setting the number of channels to the number of distinctive colors found in the image (avoiding the problem of channel correlation), and (b) building a channel representation that maximizes contrast differences within each color channel (avoiding the problem of low local contrast). We call this approach more-than-three color coding (MTT) to stress that the number of channels is adapted to the image content: the higher the color complexity of an image, the more channels are used to represent it. Here we select the most predominant colors in the image as distinctive colors, which we call color pivots, and build the new color coding using these pivots as a basis. To evaluate the proposed approach we measure its efficiency in an image categorization task, and show how a generic descriptor improves its performance at the description level when applied on the MTT coding. (A minimal sketch of pivot-based channel construction follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC; 600.087 Approved no
Call Number Admin @ si @ RVB2017 Serial 2892
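A minimal sketch of the two-step coding described in the abstract above: (a) pick color pivots, here simply as k-means centers in RGB (the paper selects the most predominant colors, so k-means is an assumption of this sketch), and (b) build one channel per pivot whose response is high near that pivot.

```python
import numpy as np
from sklearn.cluster import KMeans

def mtt_channels(image, n_pivots):
    # image: (h, w, 3) uint8 RGB array; returns (n_pivots, h, w) channels.
    pixels = image.reshape(-1, 3).astype(np.float32)
    pivots = KMeans(n_clusters=n_pivots, n_init=10).fit(pixels).cluster_centers_
    h, w, _ = image.shape
    channels = np.empty((n_pivots, h, w), dtype=np.float32)
    for k, pivot in enumerate(pivots):
        dist = np.linalg.norm(pixels - pivot, axis=1).reshape(h, w)
        channels[k] = 1.0 - dist / (dist.max() + 1e-8)  # high response near pivot
    return channels
```

Following the idea in the abstract, n_pivots would be set per image, so that more chromatically complex images get more channels.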
 

 
Author Marçal Rusiñol; Josep Llados
Title Flowchart Recognition in Patent Information Retrieval Type Book Chapter
Year 2017 Publication Current Challenges in Patent Information Retrieval Abbreviated Journal
Volume 37 Issue Pages 351-368
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor M. Lupu; K. Mayer; N. Kando; A.J. Trippe
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.097; 600.121 Approved no
Call Number Admin @ si @ RuL2017 Serial 2896
 

 
Author Victor Vaquero; German Ros; Francesc Moreno-Noguer; Antonio Lopez; Alberto Sanfeliu
Title Joint coarse-and-fine reasoning for deep optical flow Type Conference Article
Year 2017 Publication 24th International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 2558-2562
Keywords
Abstract We propose a novel representation for dense pixel-wise estimation tasks using CNNs that boosts accuracy and reduces training time by explicitly exploiting joint coarse-and-fine reasoning. The coarse reasoning is performed over a discrete classification space to obtain a rough general solution, while the fine details of the solution are obtained over a continuous regression space. In our approach both components are estimated jointly, which proves beneficial for improving estimation accuracy. Additionally, we propose a new network architecture that combines the coarse and fine components by treating the fine estimation as a refinement built on top of the coarse solution, thereby adding details to the general prediction. We apply our approach to the challenging problem of optical flow estimation and empirically validate it against state-of-the-art CNN-based solutions trained from scratch and tested on large optical flow datasets. (A minimal sketch of a coarse-plus-fine head follows this record.)
Address Beijing; China; September 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes ADAS; 600.118 Approved no
Call Number Admin @ si @ VRM2017 Serial 2898
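A minimal sketch of the joint coarse-and-fine idea from the abstract above, for a single flow component (real optical flow has two; the binning scheme and the 1x1 convolutional heads are illustrative assumptions): a classification head scores discrete flow bins, and a regression head adds a continuous refinement on top of the coarse solution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseFineHead(nn.Module):
    # Coarse: per-pixel classification over discrete flow bins.
    # Fine: per-pixel continuous residual added on top of the coarse solution.
    def __init__(self, feat_ch, bin_centers):
        super().__init__()
        self.cls = nn.Conv2d(feat_ch, len(bin_centers), 1)
        self.reg = nn.Conv2d(feat_ch, 1, 1)
        self.register_buffer("centers", torch.as_tensor(bin_centers))

    def forward(self, feats):
        probs = F.softmax(self.cls(feats), dim=1)          # bin probabilities
        coarse = (probs * self.centers.view(1, -1, 1, 1)).sum(1, keepdim=True)
        return coarse + self.reg(feats)                    # refined estimate
```

Training would combine a classification loss on the bin scores with a regression loss on the refined output, matching the jointly estimated coarse and fine components the abstract describes.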
 

 
Author Cristhian A. Aguilera-Carrasco; Angel Sappa; Cristhian Aguilera; Ricardo Toledo
Title Cross-Spectral Local Descriptors via Quadruplet Network Type Journal Article
Year 2017 Publication Sensors Abbreviated Journal SENS
Volume 17 Issue 4 Pages 873
Keywords
Abstract This paper presents a novel CNN-based architecture, referred to as Q-Net, to learn local feature descriptors that are useful for matching image patches from two different spectral bands. Given correctly matched and non-matching cross-spectral image pairs, a quadruplet network is trained to map input image patches to a common Euclidean space, regardless of the input spectral band. Our approach is inspired by the recent success of triplet networks in the visible spectrum, but adapted for cross-spectral scenarios, where, for each matching pair, there are always two possible non-matching patches: one for each spectrum. Experimental evaluations on a public cross-spectral VIS-NIR dataset show that the proposed approach improves on the state of the art. Moreover, the proposed technique can also be used in mono-spectral settings, obtaining performance similar to triplet network descriptors while requiring less training data. (A minimal sketch of a quadruplet-style loss follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.086; 600.118 Approved no
Call Number Admin @ si @ ASA2017 Serial 2914
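The quadruplet constraint described in the abstract above (each matching cross-spectral pair comes with two possible non-matching patches, one per spectrum) can be sketched as a loss over embedded patches. The margin form and the min over the two negatives are assumptions of this illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def quadruplet_loss(a_vis, a_nir, n_vis, n_nir, margin=1.0):
    # a_vis / a_nir: embeddings of a matching cross-spectral pair, shape (B, D).
    # n_vis / n_nir: embeddings of the two non-matching patches, one per spectrum.
    d_pos = F.pairwise_distance(a_vis, a_nir)
    d_neg = torch.min(F.pairwise_distance(a_vis, n_nir),
                      F.pairwise_distance(a_nir, n_vis))  # harder negative
    return F.relu(margin + d_pos - d_neg).mean()
```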
 

 
Author Patricia Suarez; Angel Sappa; Boris X. Vintimilla
Title Cross-Spectral Image Patch Similarity using Convolutional Neural Network Type Conference Article
Year 2017 Publication IEEE International Workshop of Electronics, Control, Measurement, Signals and their application to Mechatronics Abbreviated Journal
Volume Issue Pages
Keywords
Abstract The ability to compare image regions (patches) has been the basis of many approaches to core computer vision problems, including object, texture, and scene categorization; hence, developing representations for image patches has been of interest in several works. The current work focuses on learning the similarity between cross-spectral image patches with a 2-channel convolutional neural network (CNN) model. The proposed approach is an adaptation of a previous work, aiming to obtain results similar to the state of the art but with low-cost hardware. The obtained results are compared with both classical approaches, showing improvements, and a state-of-the-art CNN-based approach. (A minimal sketch of a 2-channel network follows this record.)
Address San Sebastian; Spain; May 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECMSM
Notes ADAS; 600.086; 600.118 Approved no
Call Number Admin @ si @ SSV2017a Serial 2916
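A minimal sketch of the 2-channel architecture named in the abstract above: the two cross-spectral patches are stacked as the two input channels of a single small CNN that outputs a match score. Layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TwoChannelNet(nn.Module):
    # Stacks a visible and an infrared patch as a 2-channel input and
    # predicts a similarity logit (match / non-match).
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))

    def forward(self, vis_patch, nir_patch):
        # vis_patch, nir_patch: (B, 1, H, W) single-channel patches.
        x = torch.cat([vis_patch, nir_patch], dim=1)   # (B, 2, H, W)
        return self.head(self.features(x))
```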