Author Juan Ignacio Toledo; Alicia Fornes; Jordi Cucurull; Josep Llados
Title Election Tally Sheets Processing System Type Conference Article
Year 2016 Publication 12th IAPR Workshop on Document Analysis Systems Abbreviated Journal
Volume Issue Pages 364-368
Keywords
Abstract In paper-based elections, manual tallies at the polling-station level produce myriads of documents. These documents share a common form-like structure and a reduced vocabulary worldwide. On the other hand, each tally sheet is filled in by a different writer, and different countries use different scripts. We present a complete document analysis system for electoral tally sheet processing that combines state-of-the-art techniques with a new handwriting recognition subprocess based on unsupervised feature discovery with Variational Autoencoders and sequence classification with BLSTM neural networks. The whole system is designed to be script independent and allows a fast and reliable results consolidation process with reduced operational cost.
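A minimal sketch, not the authors' implementation, of the BLSTM sequence classification stage described above; it assumes the per-frame features (in the paper, learned unsupervised by a Variational Autoencoder) are already available as a (batch, time, feature) tensor, and all layer sizes are illustrative.

    # Sketch of a BLSTM sequence classifier over precomputed frame features.
    import torch
    import torch.nn as nn

    class BLSTMClassifier(nn.Module):
        def __init__(self, feat_dim=64, hidden=128, num_classes=50):
            super().__init__()
            self.blstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                                 bidirectional=True, batch_first=True)
            self.fc = nn.Linear(2 * hidden, num_classes)

        def forward(self, x):                  # x: (B, T, feat_dim)
            out, _ = self.blstm(x)             # (B, T, 2*hidden)
            return self.fc(out)                # per-frame class scores

    model = BLSTMClassifier()
    scores = model(torch.randn(4, 100, 64))    # 4 sequences of 100 frames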
Address Santorini; Greece; April 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 602.006; 600.061; 601.225; 600.077; 600.097 Approved no
Call Number TFC2016 Serial 2752
 

 
Author Anders Hast; Alicia Fornes
Title A Segmentation-free Handwritten Word Spotting Approach by Relaxed Feature Matching Type Conference Article
Year 2016 Publication 12th IAPR Workshop on Document Analysis Systems Abbreviated Journal
Volume Issue Pages 150-155
Keywords
Abstract The automatic recognition of historical handwritten documents is still considered a challenging task. For this reason, word spotting emerges as a good alternative for making the information contained in these documents available to the user. Word spotting is defined as the task of retrieving all instances of the query word in a document collection, making it a useful tool for information retrieval. In this paper we propose a segmentation-free word spotting approach able to deal with large document collections. Our method is inspired by feature matching algorithms that have been applied to image matching and retrieval. Since handwritten words have different shapes, there is no exact transformation to be obtained. However, a sufficient degree of relaxation is achieved by using a Fourier-based descriptor and an alternative approach to RANSAC called PUMA. The proposed approach is evaluated on historical marriage records, achieving promising results.
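A minimal sketch, not the authors' code, of the kind of Fourier-based descriptor such relaxed matching relies on: a translation-, scale- and rotation-invariant signature of a closed contour; the number of coefficients is illustrative.

    # Fourier descriptor of a closed contour given as (N, 2) boundary points.
    import numpy as np

    def fourier_descriptor(contour, n_coeffs=16):
        z = contour[:, 0] + 1j * contour[:, 1]          # complex representation
        coeffs = np.fft.fft(z)
        coeffs[0] = 0                                   # drop DC term: translation invariance
        coeffs = coeffs / (np.abs(coeffs[1]) + 1e-12)   # normalize: scale invariance
        return np.abs(coeffs[1:n_coeffs + 1])           # magnitudes only: rotation invariance

    # Candidate word matches can then be ranked by Euclidean distance between descriptors.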
Address Santorini; Greece; April 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 602.006; 600.061; 600.077; 600.097 Approved no
Call Number HaF2016 Serial 2753
 

 
Author Dimosthenis Karatzas; V. Poulain d'Andecy; Marçal Rusiñol
Title Human-Document Interaction – a new frontier for document image analysis Type Conference Article
Year 2016 Publication 12th IAPR Workshop on Document Analysis Systems Abbreviated Journal
Volume Issue Pages 369-374
Keywords
Abstract All indications show that paper documents will not cede in favour of their digital counterparts, but will instead be used increasingly in conjunction with digital information. An open challenge is how to seamlessly link the physical with the digital – how to continue taking advantage of the important affordances of paper without missing out on digital functionality. This paper presents the authors’ experience with developing systems for Human-Document Interaction based on augmented document interfaces, and examines new challenges and opportunities arising for the document image analysis field in this area. The system presented combines state-of-the-art camera-based document image analysis techniques with a range of complementary technologies to offer fluid Human-Document Interaction. Both fixed and nomadic setups that have gone through user testing in real-life environments are discussed, and use cases are presented that span the spectrum from business to educational applications.
Address Santorini; Greece; April 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 600.084; 600.077 Approved no
Call Number KPR2016 Serial 2756
 

 
Author Q. Bao; Marçal Rusiñol; M. Coustaty; Muhammad Muzzamil Luqman; C.D. Tran; Jean-Marc Ogier
Title Delaunay triangulation-based features for Camera-based document image retrieval system Type Conference Article
Year 2016 Publication 12th IAPR Workshop on Document Analysis Systems Abbreviated Journal
Volume Issue Pages 1-6
Keywords Camera-based Document Image Retrieval; Delaunay Triangulation; Feature descriptors; Indexing
Abstract In this paper, we propose a new feature vector, named DElaunay TRIangulation-based Features (DETRIF), for real-time camera-based document image retrieval. DETRIF is computed from the geometrical constraints of each pair of adjacent triangles in a Delaunay triangulation constructed from the centroids of connected components. In addition, we employ a hashing-based indexing system in order to evaluate the performance of DETRIF and to compare it with other systems such as LLAH and SRIF. The experimentation is carried out on two datasets comprising 400 heterogeneous-content complex linguistic map images (huge size, 9800 x 11768 pixel resolution) and 700 textual document images.
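A minimal sketch, not the DETRIF reference implementation, of the triangulation step: connected-component centroids are triangulated with Delaunay, and a simple scale-invariant quantity (here the area ratio, chosen for illustration) is computed for every pair of adjacent triangles.

    import numpy as np
    from scipy.spatial import Delaunay

    def triangle_area(p):
        (x1, y1), (x2, y2), (x3, y3) = p
        return 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))

    def adjacency_features(centroids):
        """centroids: (N, 2) array of connected-component centroids."""
        tri = Delaunay(centroids)
        feats = []
        for i, simplex in enumerate(tri.simplices):
            for j in tri.neighbors[i]:
                if j == -1 or j < i:                    # -1 = boundary; skip duplicate pairs
                    continue
                a1 = triangle_area(centroids[simplex])
                a2 = triangle_area(centroids[tri.simplices[j]])
                feats.append(min(a1, a2) / max(a1, a2))
        return np.array(feats)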
Address Santorini; Greece; April 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 600.061; 600.084; 600.077 Approved no
Call Number Admin @ si @ BRC2016 Serial 2757
 

 
Author Lluis Gomez; Dimosthenis Karatzas
Title A fine-grained approach to scene text script identification Type Conference Article
Year 2016 Publication 12th IAPR Workshop on Document Analysis Systems Abbreviated Journal
Volume Issue Pages 192-197
Keywords
Abstract This paper focuses on the problem of script identification in unconstrained scenarios. Script identification is an important prerequisite to recognition, and an indispensable condition for automatic text understanding systems designed for multi-language environments. Although widely studied for document images and handwritten documents, it remains an almost unexplored territory for scene text images. We detail a novel method for script identification in natural images that combines convolutional features and the Naive-Bayes Nearest Neighbor classifier. The proposed framework efficiently exploits the discriminative power of small stroke-parts in a fine-grained classification framework. In addition, we propose a new public benchmark dataset for the evaluation of joint text detection and script identification in natural scenes. Experiments on this new dataset demonstrate that the proposed method yields state-of-the-art results, while it generalizes well to different datasets and a variable number of scripts. The evidence provided shows that multi-lingual scene text recognition in the wild is a viable proposition. Source code of the proposed method is made available online.
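A minimal sketch, not the paper's code, of the Naive-Bayes Nearest Neighbor (NBNN) decision rule applied on top of the convolutional stroke-part descriptors: an image is assigned to the script whose descriptor pool minimizes the sum of squared nearest-neighbor distances over all query descriptors.

    import numpy as np
    from scipy.spatial import cKDTree

    def nbnn_classify(query_descs, class_pools):
        """query_descs: (M, D) descriptors of one image.
        class_pools: dict {script: (N_c, D) training descriptors}."""
        best, best_cost = None, np.inf
        for script, pool in class_pools.items():
            dists, _ = cKDTree(pool).query(query_descs, k=1)
            cost = np.sum(dists ** 2)
            if cost < best_cost:
                best, best_cost = script, cost
        return best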
Address Santorini; Greece; April 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 601.197; 600.084 Approved no
Call Number Admin @ si @ GoK2016b Serial 2863
 

 
Author Ivet Rafegas; Maria Vanrell
Title Color spaces emerging from deep convolutional networks Type Conference Article
Year 2016 Publication 24th Color and Imaging Conference Abbreviated Journal
Volume Issue Pages 225-230
Keywords
Abstract Award for the best interactive session.
Defining color spaces that provide a good encoding of spatio-chromatic properties of color surfaces is an open problem in color science [8, 22]. Related to this, in computer vision the fusion of color with local image features has been studied and evaluated [16]. In human vision research, the cells which are selective to specific color hues along the visual pathway are also a focus of attention [7, 14]. In line with these research aims, in this paper we study how color is encoded in a deep Convolutional Neural Network (CNN) that has been trained on more than one million natural images for object recognition. These convolutional nets achieve impressive performance in computer vision and rival the representations in the human brain. In this paper we explore how color is represented in a CNN architecture, which can give some intuition about efficient spatio-chromatic representations. In convolutional layers the activation of a neuron is related to a spatial filter that combines spatio-chromatic representations; we use an inverted version of it to explore its properties. Using a series of unsupervised methods we classify different types of neurons depending on the color axes they define, and we propose an index of the color-selectivity of a neuron. We estimate the main color axes that emerge from this trained net and we prove that the color-selectivity of neurons decreases from early to deeper layers.
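A minimal sketch of one plausible proxy for such a color-selectivity index, not the definition used in the paper: the relative drop of a neuron's mean activation when the same images are presented in grayscale; `activations` is an assumed user-supplied function returning that neuron's responses to a batch of images.

    import numpy as np

    def color_selectivity(activations, images):
        """images: (N, H, W, 3) float array in [0, 1]."""
        gray = images @ np.array([0.299, 0.587, 0.114])     # luminance
        gray = np.repeat(gray[..., None], 3, axis=-1)       # back to 3 channels
        a_color = np.mean(activations(images))
        a_gray = np.mean(activations(gray))
        return (a_color - a_gray) / (abs(a_color) + abs(a_gray) + 1e-12)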
Address San Diego; USA; November 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CIC
Notes CIC Approved no
Call Number Admin @ si @ RaV2016a Serial 2894
 

 
Author Daniel Hernandez; Alejandro Chacon; Antonio Espinosa; David Vazquez; Juan Carlos Moure; Antonio Lopez
Title Embedded real-time stereo estimation via Semi-Global Matching on the GPU Type Conference Article
Year 2016 Publication 16th International Conference on Computational Science Abbreviated Journal
Volume 80 Issue Pages 143-153
Keywords Autonomous Driving; Stereo; CUDA; 3d reconstruction
Abstract Dense, robust and real-time computation of depth information from stereo-camera systems is a computationally demanding requirement for robotics, advanced driver assistance systems (ADAS) and autonomous vehicles. Semi-Global Matching (SGM) is a widely used algorithm that propagates consistency constraints along several paths across the image. This work presents a real-time system producing reliable disparity estimation results on new embedded, energy-efficient GPU devices. Our design runs on a Tegra X1 at 41 frames per second for an image size of 640x480, 128 disparity levels, and using 4 path directions for the SGM method.
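A minimal CPU/NumPy sketch of the SGM cost aggregation recurrence along one path direction (left to right); the paper's contribution is a CUDA implementation of this and three other path directions on the Tegra X1, and the penalties P1/P2 below are illustrative defaults.

    import numpy as np

    def aggregate_left_to_right(cost, P1=10, P2=120):
        """cost: (H, W, D) matching cost volume; returns aggregated costs."""
        cost = np.asarray(cost, dtype=np.float32)
        H, W, D = cost.shape
        L = np.empty_like(cost)
        L[:, 0] = cost[:, 0]
        for x in range(1, W):
            prev = L[:, x - 1]                              # (H, D)
            prev_min = prev.min(axis=1, keepdims=True)
            up = np.pad(prev[:, :-1], ((0, 0), (1, 0)), constant_values=np.inf)
            down = np.pad(prev[:, 1:], ((0, 0), (0, 1)), constant_values=np.inf)
            best = np.minimum.reduce([prev, up + P1, down + P1, prev_min + P2])
            L[:, x] = cost[:, x] + best - prev_min
        return L   # the final SGM cost sums such aggregations over several path directions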
Address San Diego; CA; USA; June 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCS
Notes ADAS; 600.085; 600.082; 600.076 Approved no
Call Number ADAS @ adas @ HCE2016a Serial 2740
 

 
Author Victor Campmany; Sergio Silva; Antonio Espinosa; Juan Carlos Moure; David Vazquez; Antonio Lopez
Title GPU-based pedestrian detection for autonomous driving Type Conference Article
Year 2016 Publication 16th International Conference on Computational Science Abbreviated Journal
Volume 80 Issue Pages 2377-2381
Keywords Pedestrian detection; Autonomous Driving; CUDA
Abstract We propose a real-time pedestrian detection system for the embedded Nvidia Tegra X1 GPU-CPU hybrid platform. The pipeline is composed of the following state-of-the-art algorithms: Histogram of Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG) features extracted from the input image; a Pyramidal Sliding Window technique for foreground segmentation; and a Support Vector Machine (SVM) for classification. Results show an 8x speedup on the target Tegra X1 platform and a better performance/watt ratio than the desktop CUDA platforms under study.
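A minimal CPU sketch, not the CUDA pipeline from the paper, of the same detection chain: window-level HOG features (LBP histograms would be concatenated in the same way) scored by a linear SVM over a sliding window; `clf` is assumed to be an already-trained sklearn LinearSVC.

    import numpy as np
    from skimage.feature import hog

    def detect(gray, clf, win=(128, 64), stride=8, thresh=0.0):
        """gray: 2-D grayscale image; returns (row, col, score) of positive windows."""
        H, W = gray.shape
        hits = []
        for r in range(0, H - win[0] + 1, stride):
            for c in range(0, W - win[1] + 1, stride):
                patch = gray[r:r + win[0], c:c + win[1]]
                feat = hog(patch, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                score = clf.decision_function(feat.reshape(1, -1))[0]
                if score > thresh:
                    hits.append((r, c, score))
        return hits   # run over an image pyramid to cover multiple pedestrian scales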
Address San Diego; CA; USA; June 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCS
Notes ADAS; 600.085; 600.082; 600.076 Approved no
Call Number ADAS @ adas @ CSE2016 Serial 2741
 

 
Author Albert Berenguel; Oriol Ramos Terrades; Josep Llados; Cristina Cañero
Title Banknote counterfeit detection through background texture printing analysis Type Conference Article
Year 2016 Publication 12th IAPR Workshop on Document Analysis Systems Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This paper is focused on the detection of counterfeit photocopied banknotes. The main difficulty is working in a real industrial scenario, without any constraint on the acquisition device and with a single image. The main contributions of this paper are twofold: first, the adaptation and performance evaluation of existing approaches to classify genuine and photocopied banknotes using background texture printing analysis, which had not been applied in this context before; second, a new dataset of Euro banknote images acquired with several cameras under different luminance conditions to evaluate these methods. Experiments on the proposed algorithms show that mixing SIFT features and sparse coding dictionaries achieves quasi-perfect classification using a linear SVM with the created dataset. Approaches using dictionaries to cover all possible texture variations have proven to be robust and outperform the state-of-the-art methods on the proposed benchmark.
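A minimal sketch, not the paper's pipeline, of the best-performing combination reported above: SIFT descriptors encoded against a learned dictionary with sparse coding, max-pooled into one vector per banknote image, and classified with a linear SVM; all parameters and the training outline are illustrative.

    import cv2
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.svm import LinearSVC

    sift = cv2.SIFT_create()

    def describe(img_gray):
        _, desc = sift.detectAndCompute(img_gray, None)
        return desc if desc is not None else np.zeros((1, 128), np.float32)

    def encode(descs, dico):
        codes = dico.transform(descs)        # one sparse code per descriptor
        return np.abs(codes).max(axis=0)     # max-pooling over the image

    # Training outline (train_imgs / labels assumed available):
    # all_descs = np.vstack([describe(im) for im in train_imgs])
    # dico = MiniBatchDictionaryLearning(n_components=256,
    #                                    transform_algorithm='omp').fit(all_descs)
    # X = np.array([encode(describe(im), dico) for im in train_imgs])
    # clf = LinearSVC().fit(X, labels)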
Address Santorini; Greece; April 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 600.061; 601.269; 600.097 Approved no
Call Number Admin @ si @ BRL2016 Serial 2950
 

 
Author Carles Sanchez; Debora Gil; T. Gache; N. Koufos; Marta Diez-Ferrer; Antoni Rosell
Title SENSA: a System for Endoscopic Stenosis Assessment Type Conference Article
Year 2016 Publication 28th Conference of the international Society for Medical Innovation and Technology Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Documenting the severity of a static or dynamic Central Airway Obstruction (CAO) is crucial to establish a proper diagnosis and treatment, predict possible treatment effects and better follow up patients. The subjective visual evaluation of a stenosis during video-bronchoscopy still remains the most common way to assess a CAO, in spite of a consensus among experts on the need to standardize all calculations [1].
The Computer Vision Center, in cooperation with the «Hospital de Bellvitge», has developed a System for Endoscopic Stenosis Assessment (SENSA), which computes the CAO directly by analyzing standard bronchoscopic data, without the need for other imaging technologies.
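A minimal sketch of the usual clinical measure behind such an assessment (not necessarily the exact quantity SENSA reports): the obstruction degree relates the lumen area at the stenosis to the lumen area of a healthy reference section of the same airway.

    def obstruction_degree(area_stenosis, area_reference):
        """Both areas in the same units, e.g. pixels of the segmented lumen."""
        return 100.0 * (1.0 - area_stenosis / area_reference)

    # e.g. a 30 px^2 lumen against a 120 px^2 healthy section -> 75.0 % obstruction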
Address Rotterdam; The Netherlands; October 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference SMIT
Notes IAM; Approved no
Call Number Admin @ si @ SGG2016 Serial 2942
 

 
Author Carles Sanchez; Debora Gil; Jorge Bernal; F. Javier Sanchez; Marta Diez-Ferrer; Antoni Rosell
Title Navigation Path Retrieval from Videobronchoscopy using Bronchial Branches Type Conference Article
Year 2016 Publication 19th International Conference on Medical Image Computing and Computer Assisted Intervention Workshops Abbreviated Journal
Volume 9401 Issue Pages 62-70
Keywords Bronchoscopy navigation; Lumen center; Bronchial branches; Navigation path; Videobronchoscopy
Abstract Bronchoscopy biopsy can be used to diagnose lung cancer without risking the complications of other interventions, such as transthoracic needle aspiration. During bronchoscopy, the clinician has to navigate through the bronchial tree to the target lesion. A main drawback is the difficulty of checking whether the exploration is following the correct path. The usual guidance using fluoroscopy implies repeated radiation of the clinician, while alternative systems (like electromagnetic navigation) require specific equipment that increases intervention costs. We propose to compute the navigated path using anatomical landmarks extracted from the sole analysis of videobronchoscopy images. Such landmarks allow matching the current exploration to the path previously planned on a CT, to indicate to the clinician whether the planning is being correctly followed or not. We present a feasibility study of our landmark-based CT-video matching using bronchoscopic videos simulated on a virtual bronchoscopy interactive interface.
Address Quebec; Canada; September 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MICCAIW
Notes IAM; MV; 600.060; 600.075 Approved no
Call Number Admin @ si @ SGB2016 Serial 2885
 

 
Author Marc Masana; Joost Van de Weijer; Andrew Bagdanov
Title On-the-fly Network pruning for object detection Type Conference Article
Year 2016 Publication International conference on learning representations Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Object detection with deep neural networks is often performed by passing a few thousand candidate bounding boxes through a deep neural network for each image. These bounding boxes are highly correlated since they originate from the same image. In this paper we investigate how to exploit feature occurrence at the image scale to prune the neural network which is subsequently applied to all bounding boxes. We show that removing units which have near-zero activation in the image allows us to significantly reduce the number of parameters in the network. Results on the PASCAL 2007 Object Detection Challenge demonstrate that up to 40% of units in some fully-connected layers can be entirely eliminated with little change in the detection result.
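A minimal NumPy sketch, not the authors' code, of the pruning idea: fully-connected units whose activation on the image-level feature is near zero are dropped, and the next layer's weights are shrunk accordingly before being applied to the thousands of per-box features of that image; the threshold is illustrative.

    import numpy as np

    def prune_fc(image_activation, W_next, eps=1e-3):
        """image_activation: (H,) layer activations for the whole image.
        W_next: (O, H) weight matrix of the following layer."""
        keep = np.abs(image_activation) > eps
        return W_next[:, keep], keep

    def apply_pruned(box_activations, W_pruned, b_next, keep):
        """box_activations: (N_boxes, H) activations of the candidate boxes."""
        return box_activations[:, keep] @ W_pruned.T + b_next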
Address Puerto Rico; May 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICLR
Notes LAMP; 600.068; 600.106; 600.079 Approved no
Call Number Admin @ si @MWB2016 Serial 2758
 

 
Author Esteve Cervantes; Long Long Yu; Andrew Bagdanov; Marc Masana; Joost Van de Weijer
Title Hierarchical Part Detection with Deep Neural Networks Type Conference Article
Year 2016 Publication 23rd IEEE International Conference on Image Processing Abbreviated Journal
Volume Issue Pages
Keywords Object Recognition; Part Detection; Convolutional Neural Networks
Abstract Part detection is an important aspect of object recognition. Most approaches apply object proposals to generate hundreds of possible part bounding box candidates which are then evaluated by part classifiers. Recently, several methods have investigated directly regressing to a limited set of bounding boxes from a deep neural network representation. However, for object parts such methods may be unfeasible due to their relatively small size with respect to the image. We propose a hierarchical method for object and part detection. In a single network we first detect the object and then regress to part location proposals based only on the feature representation inside the object. Experiments show that our hierarchical approach outperforms a network which directly regresses the part locations. We also show that our approach obtains part detection accuracy comparable to or better than the state of the art on the CUB-200 bird and Fashionista clothing item datasets, with only a fraction of the number of part proposals.
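A minimal PyTorch sketch, not the paper's architecture, of the hierarchical idea: one network predicts the object box from globally pooled features and then regresses part locations only from features pooled inside that box; the backbone and layer sizes are stand-ins.

    import torch
    import torch.nn as nn
    from torchvision.ops import roi_align

    class HierarchicalPartDetector(nn.Module):
        def __init__(self, feat_ch=256, n_parts=15):
            super().__init__()
            self.backbone = nn.Conv2d(3, feat_ch, 3, stride=8, padding=1)   # stand-in CNN
            self.obj_head = nn.Linear(feat_ch, 4)                  # object box (x1, y1, x2, y2)
            self.part_head = nn.Linear(feat_ch * 7 * 7, n_parts * 2)

        def forward(self, images):                                 # images: (B, 3, H, W)
            fmap = self.backbone(images)
            obj_box = self.obj_head(fmap.mean(dim=(2, 3)))         # global pooling -> box
            idx = torch.arange(images.size(0), dtype=fmap.dtype).unsqueeze(1)
            rois = torch.cat([idx, obj_box], dim=1)                # (B, 5) with batch index
            obj_feat = roi_align(fmap, rois, output_size=7, spatial_scale=1.0 / 8)
            parts = self.part_head(obj_feat.flatten(1))            # part coordinates
            return obj_box, parts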
Address Phoenix; Arizona; USA; September 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes LAMP; 600.106 Approved no
Call Number Admin @ si @ CLB2016 Serial 2762
 

 
Author Pejman Rasti; Tonis Uiboupin; Sergio Escalera; Gholamreza Anbarjafari
Title Convolutional Neural Network Super Resolution for Face Recognition in Surveillance Monitoring Type Conference Article
Year 2016 Publication 9th Conference on Articulated Motion and Deformable Objects Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Palma de Mallorca; Spain; July 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference AMDO
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ RUE2016 Serial 2846
 

 
Author Dennis H. Lundtoft; Kamal Nasrollahi; Thomas B. Moeslund; Sergio Escalera
Title Spatiotemporal Facial Super-Pixels for Pain Detection Type Conference Article
Year 2016 Publication 9th Conference on Articulated Motion and Deformable Objects Abbreviated Journal
Volume Issue Pages
Keywords Facial images; Super-pixels; Spatiotemporal filters; Pain detection
Abstract Best student paper award.
Pain detection using facial images is of critical importance in many health applications. Since pain is a spatiotemporal process, recent works on this topic employ facial spatiotemporal features to detect pain. These systems extract such features from the entire area of the face. In this paper, we show that by employing super-pixels we can divide the face into three regions, such that only one of these regions (about one third of the face) contributes to the pain estimation and the other two regions can be discarded. The experimental results on the UNBC-McMaster database show that the proposed system using this single region outperforms state-of-the-art systems in detecting no-pain scenarios, while it reaches comparable results in detecting weak and severe pain scenarios.
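A minimal sketch, not the paper's exact procedure, of the super-pixel step: SLIC super-pixels are computed on the cropped face and grouped into three facial regions by the vertical position of their centroids, so that a single region can be kept for pain estimation; the grouping rule is illustrative.

    import numpy as np
    from skimage.segmentation import slic

    def three_face_regions(face_rgb, n_segments=100):
        """face_rgb: (H, W, 3) cropped face image; returns an (H, W) map with values 0-2."""
        labels = slic(face_rgb, n_segments=n_segments, compactness=10)
        H = face_rgb.shape[0]
        regions = np.zeros_like(labels)
        for lab in np.unique(labels):
            rows = np.where(labels == lab)[0]
            regions[labels == lab] = int(3 * rows.mean() / H)   # 0: top, 1: middle, 2: bottom
        return np.clip(regions, 0, 2)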
Address Palma de Mallorca; Spain; July 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference AMDO
Notes HUPBA;MILAB Approved no
Call Number Admin @ si @ LNM2016 Serial 2847