Author Sergio Escalera; Junior Fabian; Pablo Pardo; Xavier Baro; Jordi Gonzalez; Hugo Jair Escalante; Marc Oliu; Dusan Misevic; Ulrich Steiner; Isabelle Guyon
Title ChaLearn Looking at People 2015: Apparent Age and Cultural Event Recognition Datasets and Results Type Conference Article
Year 2015 Publication 16th IEEE International Conference on Computer Vision Workshops
Pages 243-251
Abstract Following previous series on Looking at People (LAP) competitions [14, 13, 11, 12, 2], in 2015 ChaLearn ran two new competitions within the field of Looking at People: (1) age estimation, and (2) cultural event recognition, both in still images. We developed a crowd-sourcing application to collect and label data about the apparent age of people (as opposed to the real age). In terms of cultural event recognition, one hundred categories had to be recognized. These tasks involved scene understanding and human body analysis. This paper summarizes both challenges and data, as well as the results achieved by the participants of the competition.
Address Santiago de Chile; December 2015
Conference ICCVW
Notes ISE; 600.063; 600.078; MV Approved no
Call Number Admin @ si @ EFP2015 Serial 2704
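The apparent-age task rests on vote statistics rather than ground-truth ages. A minimal sketch of how crowd-sourced votes become a label and how a prediction can be scored against the annotators' spread; the Gaussian form of the error is an assumption based on the challenge series, and the vote values are made up:

```python
import math
import statistics

def label_from_votes(votes):
    """Collapse crowd-sourced age votes for one image into (mean, std)."""
    mu = statistics.mean(votes)
    sigma = statistics.stdev(votes) if len(votes) > 1 else 0.0
    return mu, sigma

def apparent_age_error(prediction, mu, sigma):
    """Gaussian-weighted error: ~0 when the prediction falls within the
    annotators' spread, approaching 1 far outside it (assumed metric form)."""
    if sigma == 0:
        return float(prediction != mu)
    return 1.0 - math.exp(-((prediction - mu) ** 2) / (2.0 * sigma ** 2))

mu, sigma = label_from_votes([23, 25, 27, 24, 26])   # made-up votes
print(apparent_age_error(26.0, mu, sigma))           # ~0.18
```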
 

 
Author Suman Ghosh; Ernest Valveny
Title Query by String word spotting based on character bi-gram indexing Type Conference Article
Year 2015 Publication 13th International Conference on Document Analysis and Recognition ICDAR2015
Pages 881-885
Abstract In this paper we propose a segmentation-free query by string word spotting method. Both the documents and query strings are encoded using a recently proposed word representation that projects images and strings into a common attribute space based on a pyramidal histogram of characters (PHOC). These attribute models are learned using linear SVMs over the Fisher Vector representation of the images along with the PHOC labels of the corresponding strings. In order to search through the whole page, document regions are indexed per character bi-gram using a similar attribute representation. On top of that, we propose an integral image representation of the document using a simplified version of the attribute model for efficient computation. Finally we introduce a re-ranking step in order to boost retrieval performance. We show state-of-the-art results for segmentation-free query by string word spotting in single-writer and multi-writer standard datasets.
Address Nancy; France; August 2015
Conference ICDAR
Notes DAG; 600.077 Approved no
Call Number Admin @ si @ GhV2015a Serial 2715
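The PHOC encoding this method builds on is compact enough to sketch. A minimal version, assuming the standard lowercase-plus-digits alphabet and assigning characters to spatial regions by their center position (the word-attributes literature uses a slightly more involved overlap rule):

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def phoc(word, levels=(2, 3, 4, 5)):
    """Simplified pyramidal histogram of characters (PHOC).

    Level L splits the word into L regions; a bit is set when a
    character of the alphabet occurs inside a region.
    """
    word = word.lower()
    vec = []
    for L in levels:
        hist = np.zeros((L, len(ALPHABET)))
        for i, ch in enumerate(word):
            if ch in ALPHABET:
                center = (i + 0.5) / len(word)       # normalized position
                hist[min(int(center * L), L - 1), ALPHABET.index(ch)] = 1.0
        vec.append(hist.ravel())
    return np.concatenate(vec)

print(phoc("query").shape)   # (2 + 3 + 4 + 5) * 36 = 504 dimensions
```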
 

 
Author Suman Ghosh; Ernest Valveny
Title A Sliding Window Framework for Word Spotting Based on Word Attributes Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference, IbPRIA 2015
Volume 9117 Pages 652-661
Keywords Word spotting; Sliding window; Word attributes
  Abstract In this paper we propose a segmentation-free approach to word spotting. Word images are first encoded into feature vectors using Fisher Vector. Then, these feature vectors are used together with pyramidal histogram of characters labels (PHOC) to learn SVM-based attribute models. Documents are represented by these PHOC based word attributes. To efficiently compute the word attributes over a sliding window, we propose to use an integral image representation of the document using a simplified version of the attribute model. Finally we re-rank the top word candidates using the more discriminative full version of the word attributes. We show state-of-the-art results for segmentation-free query-by-example word spotting in single-writer and multi-writer standard datasets.  
Address Santiago de Compostela; June 2015
Publisher Springer International Publishing
Abbreviated Series Title LNCS
ISSN 0302-9743 ISBN 978-3-319-19389-2
Conference IbPRIA
Notes DAG; 600.077 Approved no
Call Number Admin @ si @ GhV2015b Serial 2716
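The integral-image trick the abstract mentions is generic and worth a sketch: precompute cumulative sums of per-pixel attribute contributions so that any sliding-window score costs four lookups. The per-pixel scores below are random placeholders, not the paper's attribute model:

```python
import numpy as np

def integral_image(score_map):
    """Cumulative row+column sums, padded so lookups need no edge cases."""
    ii = np.cumsum(np.cumsum(score_map, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def box_sum(ii, top, left, bottom, right):
    """Sum of score_map[top:bottom, left:right] in O(1)."""
    return (ii[bottom, right] - ii[top, right]
            - ii[bottom, left] + ii[top, left])

scores = np.random.rand(1000, 800)        # placeholder per-pixel scores
ii = integral_image(scores)
print(box_sum(ii, 200, 300, 240, 420))    # 40x120 window at (200, 300)
```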
 

 
Author Juan Ramon Terven Salinas; Bogdan Raducanu; Maria Elena Meza-de-Luna; Joaquin Salas
Title Evaluating Real-Time Mirroring of Head Gestures using Smart Glasses Type Conference Article
Year 2015 Publication 16th IEEE International Conference on Computer Vision Workshops
Pages 452-460
  Abstract Mirroring occurs when one person tends to mimic the non-verbal communication of their counterparts. Even though mirroring is a complex phenomenon, in this study, we focus on the detection of head-nodding as a simple non-verbal communication cue due to its significance as a gesture displayed during social interactions. This paper introduces a computer vision-based method to detect mirroring through the analysis of head gestures using wearable cameras (smart glasses). In addition, we study how such a method can be used to explore perceived competence. The proposed method has been evaluated and the experiments demonstrate how static and wearable cameras seem to be equally effective to gather the information required for the analysis.  
Address Santiago de Chile; December 2015
Conference ICCVW
Notes OR; 600.068; 600.072; MV Approved no
Call Number Admin @ si @ TRM2015 Serial 2722
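Head-nodding as described here can be caricatured in a few lines: track the vertical position of the face center and look for sustained oscillation. This is a toy heuristic, not the authors' method, and every threshold is hypothetical:

```python
import numpy as np

def is_nod(y_track, min_cycles=2, min_amp=3.0):
    """Flag a nod from a short window of vertical face positions (pixels)."""
    y = np.convolve(y_track, np.ones(5) / 5, mode="valid")   # smooth jitter
    v = np.diff(y)                                           # vertical velocity
    moving = v[np.abs(v) > 0.1]                              # drop near-zero drift
    sign_changes = int(np.sum(np.diff(np.sign(moving)) != 0))
    return sign_changes >= 2 * min_cycles and (y.max() - y.min()) >= min_amp

# Mirroring would then be both wearers' detections firing within a short lag.
t = np.linspace(0, 3 * 2 * np.pi, 60)
print(is_nod(100 + 5 * np.sin(t)))   # True: three up-down cycles
```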
 

 
Author M. Campos-Taberner; Adriana Romero; Carlo Gatta; Gustavo Camps-Valls
Title Shared feature representations of LiDAR and optical images: Trading sparsity for semantic discrimination Type Conference Article
Year 2015 Publication IEEE International Geoscience and Remote Sensing Symposium IGARSS2015
Pages 4169-4172
Abstract This paper studies the level of complementary information conveyed by extremely high resolution LiDAR and optical images. We pursue this goal following an indirect approach via unsupervised spatial-spectral feature extraction. We used a recently presented unsupervised convolutional neural network trained to enforce both population and lifetime sparsity in the feature representation. We derived independent and joint feature representations, and analyzed the sparsity scores and the discriminative power. Interestingly, the obtained results revealed that the RGB+LiDAR representation is no longer sparse, and the derived basis functions merge color and elevation, yielding a set of more expressive colored edge filters. The joint feature representation is also more discriminative when used for clustering and topological data visualization.
Address Milan; Italy; July 2015
Conference IGARSS
Notes LAMP; 600.079; MILAB Approved no
Call Number Admin @ si @ CRG2015 Serial 2724
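"Sparsity scores" can be made concrete in several ways; a common choice is Hoyer's sparseness measure, applied per sample (population sparsity) and per feature across samples (lifetime sparsity). A sketch under that assumption, not necessarily the authors' exact score:

```python
import numpy as np

def hoyer(x):
    """Hoyer sparseness: 1 for a one-hot vector, 0 for a uniform one."""
    n = x.size
    ratio = np.abs(x).sum() / (np.linalg.norm(x) + 1e-12)
    return (np.sqrt(n) - ratio) / (np.sqrt(n) - 1)

def sparsity_scores(F):
    """F: (n_samples, n_features) activations of one representation."""
    population = np.mean([hoyer(row) for row in F])   # per-sample sparsity
    lifetime = np.mean([hoyer(col) for col in F.T])   # per-feature sparsity
    return population, lifetime

rgb_feats = np.maximum(np.random.randn(500, 64), 0)   # stand-in activations
print(sparsity_scores(rgb_feats))
```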
 

 
Author R. Bertrand; Oriol Ramos Terrades; P. Gomez-Kramer; P. Franco; Jean-Marc Ogier
Title A Conditional Random Field model for font forgery detection Type Conference Article
Year 2015 Publication 13th International Conference on Document Analysis and Recognition ICDAR2015
Pages 576-580
Abstract Nowadays, document forgery is becoming a real issue. A large amount of documents that contain critical information, such as payment slips, invoices or contracts, are constantly subject to fraudster manipulation because of the lack of security for this kind of document. Previously, a system to detect fraudulent documents based on their intrinsic features was presented. It was especially designed to retrieve copy-move forgery and imperfections due to fraudster manipulation. However, when a set of characters is not present in the original document, copy-move forgery is not feasible. Hence, the fraudster will use a text toolbox to add or modify information in the document by imitating the font, or will cut and paste characters from another document where the font properties are similar. This often results in font type errors. Thus, a clue to detecting document forgery consists of finding characters, words or sentences in a document with font properties different from their surroundings. To this end, we present in this paper an automatic forgery detection method based on document font features. Using a Conditional Random Field, the probability that a character belongs to a specific font is estimated by comparing the character's font features to a knowledge database. The character is then classified as genuine or fake by comparing its probability of belonging to a certain font type with those of the neighboring characters.
Address Nancy; France; August 2015
Conference ICDAR
Notes DAG; 600.077 Approved no
Call Number Admin @ si @ BRG2015 Serial 2725
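The intuition "a character whose font disagrees with its neighbours is suspicious" can be sketched without the full CRF. A toy neighbour-consistency check, with hypothetical window and threshold values:

```python
import numpy as np

def flag_suspicious(font_probs, window=5, margin=0.3):
    """font_probs: (n_chars, n_fonts) per-character font probabilities,
    in reading order.  Flags characters that are unlikely under the font
    preferred by their neighbours (toy stand-in for the paper's CRF)."""
    n = len(font_probs)
    flags = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        ctx = [j for j in range(lo, hi) if j != i]        # neighbours only
        local_font = font_probs[ctx].mean(axis=0).argmax()
        flags.append(font_probs[i, local_font] < margin)
    return flags
```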
 

 
Author Lluis Pere de las Heras; Oriol Ramos Terrades; Josep Llados; David Fernandez; Cristina Cañero
Title Use case visual Bag-of-Words techniques for camera based identity document classification Type Conference Article
Year 2015 Publication 13th International Conference on Document Analysis and Recognition ICDAR2015
Pages 721-725
Abstract Nowadays, automatic identity document recognition, including passport and driving license recognition, is at the core of many applications within the administrative and service sectors, such as police, hospitality, car renting, etc. In former years, document information was extracted manually, whereas today this data is recognized automatically from images obtained by flat-bed scanners. Yet, since these scanners tend to be expensive and voluminous, companies in the sector have recently turned their attention to cheaper, smaller and yet computationally powerful scanners: mobile devices. Identity document recognition from mobile images entails several new difficulties w.r.t. traditional scanned images, such as the loss of a controlled background, perspective, blurring, etc. In this paper we present a real application for identity document classification of images taken from mobile devices. This classification process is of extreme importance since prior knowledge of the document type and origin strongly facilitates the subsequent information extraction. The proposed method is based on a traditional Bag-of-Words approach in which we have taken into consideration several key aspects to enhance the recognition rate. The method's performance has been studied on three datasets containing more than 2000 images from 129 different document classes.
Address Nancy; France; August 2015
Conference ICDAR
Notes DAG; 600.077; 600.061 Approved no
Call Number Admin @ si @ HRL2015a Serial 2726
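The Bag-of-Words pipeline underlying the method is standard and easy to outline: quantize local descriptors into visual words, histogram them per image, and train a linear classifier. A sketch with scikit-learn; descriptor extraction (e.g. SIFT) is assumed to happen elsewhere:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_vocabulary(all_descriptors, k=200):
    """Cluster local descriptors from the training set into k visual words."""
    return KMeans(n_clusters=k, n_init=4).fit(all_descriptors)

def bow_histogram(descriptors, vocab):
    """L1-normalized visual-word histogram for one document image."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_classifier(desc_per_image, labels, k=200):
    """desc_per_image: list of (n_i, d) descriptor arrays; labels: class ids."""
    vocab = build_vocabulary(np.vstack(desc_per_image), k)
    X = np.array([bow_histogram(d, vocab) for d in desc_per_image])
    return vocab, LinearSVC().fit(X, labels)
```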
 

 
Author Lluis Pere de las Heras; Oriol Ramos Terrades; Josep Llados
Title Attributed Graph Grammar for floor plan analysis Type Conference Article
Year 2015 Publication 13th International Conference on Document Analysis and Recognition ICDAR2015
Pages 726-730
Abstract In this paper, we propose the use of an Attributed Graph Grammar as a single framework to model and recognize the structure of floor plans. This grammar represents a building as a hierarchical composition of structurally and semantically related elements, where common representations are learned stochastically from annotated data. Given an input image, parsing consists of constructing the graph representation that best agrees with the probabilistic model defined by the grammar. The proposed method provides several advantages with respect to traditional floor plan analysis techniques. It uses an unsupervised statistical approach for detecting walls that adapts to different graphical notations and relaxes strong structural assumptions such as straightness and orthogonality. Moreover, the independence between the knowledge model and the parsing implementation allows the method to learn different building configurations automatically and thus to cope with the existing variability. These advantages are clearly demonstrated by comparing it with the most recent floor plan interpretation techniques on 4 datasets of real floor plans with different notations.
Address Nancy; France; August 2015
Conference ICDAR
Notes DAG; 600.077; 600.061 Approved no
Call Number Admin @ si @ HRL2015b Serial 2727
 

 
Author Jiaolong Xu; David Vazquez; Krystian Mikolajczyk; Antonio Lopez
Title Hierarchical online domain adaptation of deformable part-based models Type Conference Article
Year 2016 Publication IEEE International Conference on Robotics and Automation
Pages 5536-5541
Keywords Domain Adaptation; Pedestrian Detection
Abstract We propose an online domain adaptation method for the deformable part-based model (DPM). The online domain adaptation is based on a two-level hierarchical adaptation tree, which consists of instance detectors in the leaf nodes and a category detector at the root node. Moreover, combined with a multiple object tracking (MOT) procedure, our proposal neither requires target-domain annotated data nor revisits the source-domain data for performing the source-to-target domain adaptation of the DPM. From a practical point of view this means that, given a source-domain DPM and a new video for training on a new domain without object annotations, our procedure outputs a new DPM adapted to the domain represented by the video. As proof of concept we apply our proposal to the challenging task of pedestrian detection. In this case, each instance detector is an exemplar classifier trained online with only one pedestrian per frame. The pedestrian instances are collected by MOT and the hierarchical model is constructed dynamically according to the pedestrian trajectories. Our experimental results show that the adapted detector achieves the accuracy of recent supervised domain adaptation methods (i.e., requiring manually annotated target-domain data), and improves the source detector by more than 10 percentage points.
Address Stockholm; Sweden; May 2016
Conference ICRA
Notes ADAS; 600.085; 600.082; 600.076 Approved no
Call Number ADAS @ adas @ XVM2016 Serial 2728
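The two-level adaptation tree can be caricatured as score fusion between a root category detector and online-trained exemplar leaves. A toy sketch; the fusion rule and weight are hypothetical, not the paper's formulation:

```python
def hierarchy_score(x, category_det, instance_dets, alpha=0.5):
    """x: features of one detection window.
    category_det, instance_dets: callables returning a confidence score;
    leaves are exemplar detectors trained online on tracked pedestrians."""
    root = category_det(x)
    if not instance_dets:
        return root
    best_leaf = max(det(x) for det in instance_dets)
    return alpha * root + (1.0 - alpha) * best_leaf
```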
 

 
Author Carlos David Martinez Hinarejos; Josep Llados; Alicia Fornes; Francisco Casacuberta; Lluis de Las Heras; Joan Mas; Moises Pastor; Oriol Ramos Terrades; Joan Andreu Sanchez; Enrique Vidal; Fernando Vilariño
Title Context, multimodality, and user collaboration in handwritten text processing: the CoMUN-HaT project Type Conference Article
Year 2016 Publication 3rd IberSPEECH
Abstract Processing of handwritten documents is a task of wide interest for many purposes, such as those related to preserving cultural heritage. Handwritten text recognition techniques have been successfully applied during the last decade to obtain transcriptions of handwritten documents, and keyword spotting techniques have been applied for searching specific terms in image collections of handwritten documents. However, results on transcription and indexing are far from perfect. In this framework, the use of new data sources arises as a new paradigm that will allow for better transcription and indexing of handwritten documents. Three main data sources could be considered: context of the document (style, writer, historical time, topics, ...), multimodal data (representations of the document in a different modality, such as the speech signal of the dictation of the text), and user feedback (corrections, amendments, ...). The CoMUN-HaT project aims at the integration of these different data sources into the transcription and indexing task for handwritten documents: the use of context derived from the analysis of the documents, how multimodality can aid the recognition process to obtain more accurate transcriptions (including transcription into a modern version of the language), and integration into a user-in-the-loop assisted text transcription framework. This will be reflected in the construction of a transcription and indexing platform that can be used by both professional and non-professional users, contributing to crowd-sourcing activities to preserve cultural heritage and to obtain an accessible version of the involved corpus.
Address Lisboa; Portugal; November 2016
Conference IberSPEECH
Notes DAG; MV; 600.097; SIAI Approved no
Call Number Admin @ si @ MLF2016 Serial 2813
 

 
Author Gloria Fernandez Esparrach; Jorge Bernal; Cristina Rodriguez de Miguel; Debora Gil; Fernando Vilariño; Henry Cordova; Cristina Sanchez Montes; I. Araujo; Maria Lopez Ceron; J. Llach; F. Javier Sanchez
Title Colonic polyps are correctly identified by a computer vision method using wm-dova energy maps Type Conference Article
Year 2015 Publication Proceedings of the 23rd United European Gastroenterology (UEG) Week 2015
Conference UEG
Notes MV; IAM; 600.075; SIAI Approved no
Call Number Admin @ si @ FBR2015 Serial 2732
 

 
Author Aleksandr Setkov; Fabio Martinez Carillo; Michele Gouiffes; Christian Jacquemin; Maria Vanrell; Ramon Baldrich
Title DAcImPro: A Novel Database of Acquired Image Projections and Its Application to Object Recognition Type Conference Article
Year 2015 Publication Advances in Visual Computing. Proceedings of 11th International Symposium, ISVC 2015, Part II
Volume 9475 Pages 463-473
Keywords Projector-camera systems; Feature descriptors; Object recognition
Abstract Projector-camera systems are designed to improve the projection quality by comparing original images with their captured projections, which is usually complicated due to high photometric and geometric variations. Many research works address this problem using their own test data, which makes it extremely difficult to compare different proposals. This paper has two main contributions. Firstly, we introduce a new database of acquired image projections (DAcImPro) that, covering photometric and geometric conditions and providing data for ground-truth computation, can serve to evaluate different algorithms in projector-camera systems. Secondly, a new object recognition scenario from acquired projections is presented, which could be of great interest in such domains as home video projections and public presentations. We show that the task is more challenging than the classical recognition problem and thus requires additional pre-processing, such as color compensation or projection area selection.
Publisher Springer International Publishing
Abbreviated Series Title LNCS
ISSN 0302-9743 ISBN 978-3-319-27862-9
Conference ISVC
Notes CIC Approved no
Call Number Admin @ si @ SMG2015 Serial 2736
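One of the pre-processing steps the abstract names, color compensation, has a simple least-squares baseline: fit an affine map from captured colors back to projected colors on corresponding pixels. A sketch of that baseline, not necessarily what the paper uses:

```python
import numpy as np

def fit_color_compensation(projected, captured):
    """Fit M (3x4) so that [captured RGB, 1] @ M.T ~= projected RGB.
    Both inputs: (H, W, 3) float arrays of corresponding pixels."""
    X = captured.reshape(-1, 3)
    X = np.hstack([X, np.ones((len(X), 1))])          # affine bias term
    Y = projected.reshape(-1, 3)
    M, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return M.T

def compensate(captured, M):
    """Apply the fitted correction to a captured image."""
    X = captured.reshape(-1, 3)
    X = np.hstack([X, np.ones((len(X), 1))])
    return (X @ M.T).reshape(captured.shape)
```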
 

 
Author Victor Campmany; Sergio Silva; Juan Carlos Moure; Toni Espinosa; David Vazquez; Antonio Lopez
Title GPU-based pedestrian detection for autonomous driving Type Conference Article
Year 2016 Publication GPU Technology Conference
Keywords Pedestrian Detection; GPU
  Abstract Pedestrian detection for autonomous driving is one of the hardest tasks within computer vision, and involves huge computational costs. Obtaining acceptable real-time performance, measured in frames per second (fps), for the most advanced algorithms is nowadays a hard challenge. Taking the work in [1] as our baseline, we propose a CUDA implementation of a pedestrian detection system that includes LBP and HOG as feature descriptors and SVM and Random forest as classifiers. We introduce significant algorithmic adjustments and optimizations to adapt the problem to the NVIDIA GPU architecture. The aim is to deploy a real-time system providing reliable results.  
Address Silicon Valley; San Francisco; USA; April 2016
Conference GTC
Notes ADAS; 600.085; 600.082; 600.076 Approved no
Call Number ADAS @ adas @ CSM2016 Serial 2737
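The classification stage of such a detector is embarrassingly data-parallel, which is what makes it a good GPU fit: every candidate window is scored independently by the same linear model. A numpy stand-in for the CUDA kernel; descriptor length 3780 is the usual HOG size for a 64x128 pedestrian window, and the data is random:

```python
import numpy as np

def score_windows(descriptors, w, b):
    """Linear-SVM scores for all candidate windows at once.
    descriptors: (n_windows, d); on the GPU, each thread block would
    score one window, with the weight vector cached in shared memory."""
    return descriptors @ w + b

descriptors = np.random.rand(10000, 3780)
scores = score_windows(descriptors, np.random.rand(3780), -1.0)
candidates = np.flatnonzero(scores > 0.5)   # survivors go to NMS
```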
 

 
Author Daniel Hernandez; Juan Carlos Moure; Toni Espinosa; Alejandro Chacon; David Vazquez; Antonio Lopez
Title Real-time 3D Reconstruction for Autonomous Driving via Semi-Global Matching Type Conference Article
Year 2016 Publication GPU Technology Conference
Keywords Stereo; Autonomous Driving; GPU; 3D reconstruction
Abstract Robust and dense computation of depth information from stereo-camera systems is a computationally demanding requirement for real-time autonomous driving. Semi-Global Matching (SGM) [1] approximates the results of heavy-computation global algorithms but with lower computational complexity, so it is a good candidate for a real-time implementation. SGM minimizes energy along several 1D paths across the image. The aim of this work is to provide a real-time system producing reliable results on energy-efficient hardware. Our design runs on an NVIDIA Titan X GPU at 104.62 FPS and on an NVIDIA Drive PX at 6.7 FPS, promising for real-time platforms.
Address Silicon Valley; San Francisco; USA; April 2016
Conference GTC
Notes ADAS; 600.085; 600.082; 600.076 Approved no
Call Number ADAS @ adas @ HME2016 Serial 2738
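The core of SGM is a dynamic-programming recurrence along 1D paths. A minimal numpy sketch of the left-to-right path; a full implementation sums several path directions, and the penalties P1/P2 are tuning parameters:

```python
import numpy as np

def aggregate_left_to_right(cost, P1=10.0, P2=120.0):
    """cost: (H, W, D) matching costs over D disparities.
    Recurrence: L(p,d) = C(p,d) - min_k L(q,k)
                + min(L(q,d), L(q,d-1)+P1, L(q,d+1)+P1, min_k L(q,k)+P2),
    with q the previous pixel on the path."""
    H, W, D = cost.shape
    L = cost.astype(np.float64).copy()
    for x in range(1, W):
        prev = L[:, x - 1, :]
        min_prev = prev.min(axis=1, keepdims=True)
        d_minus = np.pad(prev, ((0, 0), (1, 0)), constant_values=np.inf)[:, :D]
        d_plus = np.pad(prev, ((0, 0), (0, 1)), constant_values=np.inf)[:, 1:]
        best = np.minimum(np.minimum(prev, np.minimum(d_minus, d_plus) + P1),
                          min_prev + P2)
        L[:, x, :] = cost[:, x, :] + best - min_prev
    return L

# Disparity map = argmin over D of the cost summed across all paths.
```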
 

 
Author German Ros; Laura Sellart; Joanna Materzynska; David Vazquez; Antonio Lopez
Title The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes Type Conference Article
Year 2016 Publication 29th IEEE Conference on Computer Vision and Pattern Recognition
Pages 3234-3243
Keywords Domain Adaptation; Autonomous Driving; Virtual Data; Semantic Segmentation
Abstract Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. The irruption of deep convolutional neural networks (DCNNs) allows us to foresee obtaining reliable classifiers to perform such a visual task. However, DCNNs require learning many parameters from raw images; thus, having a sufficient amount of diversified images with class annotations is needed. These annotations are obtained through cumbersome human labour, especially challenging for semantic segmentation, since pixel-level annotations are required. In this paper, we propose to use a virtual world for automatically generating realistic synthetic images with pixel-level annotations. Then, we address the question of how useful such data can be for the task of semantic segmentation, in particular when using a DCNN paradigm. In order to answer this question we have generated a synthetic, diversified collection of urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments on a DCNN setting that show how the inclusion of SYNTHIA in the training stage significantly improves the performance of the semantic segmentation task.
Address Las Vegas; USA; June 2016
Conference CVPR
Notes ADAS; 600.085; 600.082; 600.076 Approved no
Call Number ADAS @ adas @ RSM2016 Serial 2739
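Training "in combination with publicly available real-world urban images" boils down to mixing the two sources when sampling batches. A sketch of one simple recipe; the mixing ratio is a hyperparameter assumed here, not taken from the paper:

```python
import random

def mixed_batches(real_items, synthetic_items, batch_size=8, synth_ratio=0.5):
    """Yield training batches mixing real and synthetic annotated images."""
    n_synth = int(batch_size * synth_ratio)
    while True:
        batch = (random.sample(synthetic_items, n_synth)
                 + random.sample(real_items, batch_size - n_synth))
        random.shuffle(batch)   # avoid a fixed synthetic/real ordering
        yield batch
```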