Author Anders Hast; Alicia Fornes
Title A Segmentation-free Handwritten Word Spotting Approach by Relaxed Feature Matching Type Conference Article
Year 2016 Publication 12th IAPR Workshop on Document Analysis Systems Abbreviated Journal
Volume Issue Pages 150-155
Keywords
Abstract The automatic recognition of historical handwritten documents is still considered a challenging task. For this reason, word spotting emerges as a good alternative for making the information contained in these documents available to the user. Word spotting is defined as the task of retrieving all instances of the query word in a document collection, becoming a useful tool for information retrieval. In this paper we propose a segmentation-free word spotting approach able to deal with large document collections. Our method is inspired by feature matching algorithms that have been applied to image matching and retrieval. Since handwritten words vary in shape, there is no exact transformation to be obtained. However, a sufficient degree of relaxation is achieved by using a Fourier-based descriptor and an alternative approach to RANSAC called PUMA. The proposed approach is evaluated on historical marriage records, achieving promising results.
Address Santorini; Greece; April 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 602.006; 600.061; 600.077; 600.097 Approved no
Call Number HaF2016 Serial 2753
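As an aside for readers who want to experiment, here is a minimal Python sketch (not the authors' code) of the two ingredients the abstract names: a translation-invariant Fourier-based patch descriptor and a relaxed matching criterion. Keypoint extraction and the PUMA verification step are omitted, and all names are illustrative.

    import numpy as np

    def fourier_descriptor(patch, keep=8):
        # Magnitudes of the low-frequency 2D Fourier coefficients of a grayscale
        # patch; the magnitude spectrum is invariant to translation of the stroke.
        f = np.fft.fftshift(np.fft.fft2(patch))
        cy, cx = f.shape[0] // 2, f.shape[1] // 2
        block = np.abs(f[cy - keep:cy + keep, cx - keep:cx + keep]).ravel()
        return block / (np.linalg.norm(block) + 1e-9)

    def relaxed_match(desc_a, desc_b, ratio=0.8):
        # Lowe-style ratio test: accept a match when the best candidate is clearly
        # better than the second best -- a soft criterion rather than demanding one
        # exact geometric transformation between the two word images.
        matches = []
        for i, d in enumerate(desc_a):
            dists = np.linalg.norm(desc_b - d, axis=1)
            j, k = np.argsort(dists)[:2]
            if dists[j] < ratio * dists[k]:
                matches.append((i, j))
        return matches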
 

 
Author Alejandro Gonzalez Alzate; Zhijie Fang; Yainuvis Socarras; Joan Serrat; David Vazquez; Jiaolong Xu; Antonio Lopez
Title Pedestrian Detection at Day/Night Time with Visible and FIR Cameras: A Comparison Type Journal Article
Year 2016 Publication Sensors Abbreviated Journal SENS
Volume 16 Issue 6 Pages 820
Keywords Pedestrian Detection; FIR
Abstract Despite all the significant advances in pedestrian detection brought by computer vision for driving assistance, it is still a challenging problem. One reason is the extremely varying lighting conditions under which such a detector should operate, namely daytime and nighttime. Recent research has shown that the combination of visible and non-visible imaging modalities may increase detection accuracy, where the infrared spectrum plays a critical role. The goal of this paper is to assess the accuracy gain of different pedestrian models (holistic, part-based, patch-based) when training with images in the far infrared spectrum. Specifically, we want to compare detection accuracy on test images recorded at daytime and nighttime when trained (and tested) using (a) plain color images, (b) just infrared images, and (c) both of them. In order to obtain results for the last item we propose an early fusion approach to combine features from both modalities. We base the evaluation on a new dataset we have built for this purpose as well as on the publicly available KAIST multispectral dataset.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1424-8220 ISBN Medium
Area Expedition Conference
Notes ADAS; 600.085; 600.076; 600.082; 601.281 Approved no
Call Number ADAS @ adas @ GFS2016 Serial 2754
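To make the early-fusion idea concrete, a hedged Python sketch follows: descriptors are computed independently on aligned visible and FIR crops of the same candidate window and concatenated before classification. HOG plus a linear SVM stand in for the paper's holistic/part-based/patch-based models; windows_vis, windows_fir and labels are assumed inputs.

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def early_fusion_features(gray_vis, gray_fir, cell=(8, 8)):
        # One HOG descriptor per modality for the same window, concatenated
        # into a single feature vector ("early" fusion).
        h_vis = hog(gray_vis, pixels_per_cell=cell, cells_per_block=(2, 2))
        h_fir = hog(gray_fir, pixels_per_cell=cell, cells_per_block=(2, 2))
        return np.concatenate([h_vis, h_fir])

    # windows_vis / windows_fir: aligned grayscale crops of the same regions;
    # labels: 1 = pedestrian, 0 = background.
    # X = np.stack([early_fusion_features(v, f)
    #               for v, f in zip(windows_vis, windows_fir)])
    # clf = LinearSVC().fit(X, labels)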
 

 
Author Daniel Hernandez; Antonio Espinosa; David Vazquez; Antonio Lopez; Juan Carlos Moure
Title GPU-accelerated real-time stixel computation Type Conference Article
Year 2017 Publication IEEE Winter Conference on Applications of Computer Vision Abbreviated Journal
Volume Issue Pages 1054-1062
Keywords Autonomous Driving; GPU; Stixel
Abstract The Stixel World is a medium-level, compact representation of road scenes that abstracts millions of disparity pixels into hundreds or thousands of stixels. The goal of this work is to implement and evaluate a complete multi-stixel estimation pipeline on an embedded, energy-efficient, GPU-accelerated device. This work presents a full GPU-accelerated implementation of stixel estimation that produces reliable results at 26 frames per second (real-time) on the Tegra X1 for disparity images of 1024×440 pixels and stixel widths of 5 pixels, and achieves more than 400 frames per second on a high-end Titan X GPU card.
Address Santa Rosa; CA; USA; March 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WACV
Notes ADAS; 600.118 Approved no
Call Number ADAS @ adas @ HEV2017b Serial 2812
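For intuition only, a drastically simplified CPU-side Python sketch of the data reduction that stixels perform: each 5-pixel-wide column of the disparity map is cut into vertical runs of similar disparity. The paper solves this per column with dynamic programming on the GPU; the greedy cut and the jump threshold below are our simplifications.

    import numpy as np

    def greedy_stixels(disparity, width=5, jump=4.0):
        # Returns, per 5-px stripe, a list of (top_row, bottom_row, mean_disparity).
        h, w = disparity.shape
        stixels = []
        for x0 in range(0, w - width + 1, width):
            stripe = disparity[:, x0:x0 + width].mean(axis=1)
            cuts = [0] + [y for y in range(1, h)
                          if abs(stripe[y] - stripe[y - 1]) > jump] + [h]
            stixels.append([(a, b, float(stripe[a:b].mean()))
                            for a, b in zip(cuts[:-1], cuts[1:])])
        return stixels

    # disp = np.random.rand(440, 1024) * 64   # stand-in 1024x440 disparity map
    # print(sum(len(s) for s in greedy_stixels(disp)), "stixels vs", disp.size, "pixels")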
 

 
Author Marçal Rusiñol; J. Chazalon; Jean-Marc Ogier
Title Filtrage de descripteurs locaux pour l'amélioration de la détection de documents [Filtering local descriptors to improve document detection] Type Conference Article
Year 2016 Publication Colloque International Francophone sur l'Écrit et le Document Abbreviated Journal
Volume Issue Pages
Keywords Local descriptors; mobile capture; document matching; keypoint selection
Abstract In this paper we propose an effective method aimed at reducing the amount of local descriptors to be indexed in a document matching framework. In an off-line training stage, the matching between the model document and incoming images is computed, retaining the local descriptors from the model that steadily produce good matches. We have evaluated this approach using the ICDAR2015 SmartDOC dataset, containing nearly 25,000 images of documents captured with a mobile device. We have tested the performance of this filtering step using ORB and SIFT local detectors and descriptors. The results show an important gain both in the quality of the final matching and in time and space requirements.
Address Toulouse; France; March 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CIFED
Notes DAG; 600.084; 600.077 Approved no
Call Number Admin @ si @ RCO2016 Serial 2755
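A minimal OpenCV sketch of the filtering idea as described in the abstract (our variable names, not the authors' code): each model keypoint receives a vote whenever it produces a cross-checked match against a training capture, and only the steadiest fraction is kept for indexing.

    import cv2
    import numpy as np

    def filter_model_keypoints(model_img, train_imgs, keep_ratio=0.25):
        orb = cv2.ORB_create(nfeatures=2000)
        kp_m, des_m = orb.detectAndCompute(model_img, None)
        votes = np.zeros(len(kp_m))
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        for img in train_imgs:
            _, des_t = orb.detectAndCompute(img, None)
            for m in matcher.match(des_m, des_t):
                votes[m.queryIdx] += 1          # this model keypoint matched again
        order = np.argsort(-votes)[: int(len(kp_m) * keep_ratio)]
        return [kp_m[i] for i in order], des_m[order]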
 

 
Author Dimosthenis Karatzas; V. Poulain d'Andecy; Marçal Rusiñol
Title Human-Document Interaction – a new frontier for document image analysis Type Conference Article
Year 2016 Publication 12th IAPR Workshop on Document Analysis Systems Abbreviated Journal
Volume Issue Pages 369-374
Keywords
Abstract All indications show that paper documents will not cede in favour of their digital counterparts, but will instead be used increasingly in conjunction with digital information. An open challenge is how to seamlessly link the physical with the digital – how to continue taking advantage of the important affordances of paper, without missing out on digital functionality. This paper presents the authors' experience with developing systems for Human-Document Interaction based on augmented document interfaces and examines new challenges and opportunities arising for the document image analysis field in this area. The system presented combines state-of-the-art camera-based document image analysis techniques with a range of complementary technologies to offer fluid Human-Document Interaction. Both fixed and nomadic setups are discussed that have gone through user testing in real-life environments, and use cases are presented that span the spectrum from business to educational applications.
Address Santorini; Greece; April 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 600.084; 600.077 Approved no
Call Number KPR2016 Serial 2756
 

 
Author Q. Bao; Marçal Rusiñol; M. Coustaty; Muhammad Muzzamil Luqman; C.D. Tran; Jean-Marc Ogier
Title Delaunay triangulation-based features for Camera-based document image retrieval system Type Conference Article
Year 2016 Publication 12th IAPR Workshop on Document Analysis Systems Abbreviated Journal
Volume Issue Pages 1-6
Keywords Camera-based Document Image Retrieval; Delaunay Triangulation; Feature descriptors; Indexing
Abstract In this paper, we propose a new feature vector, named DElaunay TRIangulation-based Features (DETRIF), for real-time camera-based document image retrieval. DETRIF is computed based on the geometrical constraints from each pair of adjacent triangles in a Delaunay triangulation constructed from the centroids of connected components. Besides, we employ a hashing-based indexing system in order to evaluate the performance of DETRIF and to compare it with other systems such as LLAH and SRIF. The experimentation is carried out on two datasets comprising 400 heterogeneous-content complex linguistic map images (huge size, 9800 × 11768 pixels resolution) and 700 textual document images.
Address Santorini; Greece; April 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 600.061; 600.084; 600.077 Approved no
Call Number Admin @ si @ BRC2016 Serial 2757
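A short SciPy/OpenCV sketch of the DETRIF front end, under our reading of the abstract: connected-component centroids are Delaunay-triangulated, and the adjacent-triangle pairs the descriptor is computed on come directly from the triangulation.

    import cv2
    from scipy.spatial import Delaunay

    def component_triangulation(binary_doc):
        # binary_doc: uint8 image with ink as 255; label 0 (background) is dropped.
        _, _, _, centroids = cv2.connectedComponentsWithStats(binary_doc)
        pts = centroids[1:]
        return pts, Delaunay(pts)

    # pts, tri = component_triangulation(binary_doc)
    # tri.simplices: (m, 3) centroid indices, one row per triangle;
    # tri.neighbors: the triangle adjacent across each edge (-1 on the hull).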
 

 
Author Marc Masana; Joost Van de Weijer; Andrew Bagdanov
Title On-the-fly Network pruning for object detection Type Conference Article
Year 2016 Publication International Conference on Learning Representations Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Object detection with deep neural networks is often performed by passing a few thousand candidate bounding boxes through a deep neural network for each image. These bounding boxes are highly correlated since they originate from the same image. In this paper we investigate how to exploit feature occurrence at the image scale to prune the neural network which is subsequently applied to all bounding boxes. We show that removing units which have near-zero activation in the image allows us to significantly reduce the number of parameters in the network. Results on the PASCAL 2007 Object Detection Challenge demonstrate that up to 40% of units in some fully-connected layers can be entirely eliminated with little change in the detection result.
Address Puerto Rico; May 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICLR
Notes LAMP; 600.068; 600.106; 600.079 Approved no
Call Number Admin @ si @ MWB2016 Serial 2758
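A hedged PyTorch sketch of the pruning idea (the paper predates PyTorch; the layer pair, threshold, and names are illustrative): output units of a fully-connected layer whose activation is near zero across the image's candidate boxes are removed, and the next layer is rewired to match.

    import torch
    import torch.nn as nn

    def prune_fc_pair(fc1: nn.Linear, fc2: nn.Linear, feats: torch.Tensor, thresh=1e-3):
        # feats: (n_boxes, in_features) inputs to fc1 for one image.
        with torch.no_grad():
            act = torch.relu(fc1(feats)).amax(dim=0)       # per-unit peak activation
            keep = (act > thresh).nonzero(as_tuple=True)[0]
            new1 = nn.Linear(fc1.in_features, len(keep))
            new1.weight.copy_(fc1.weight[keep]); new1.bias.copy_(fc1.bias[keep])
            new2 = nn.Linear(len(keep), fc2.out_features)  # rewired input side
            new2.weight.copy_(fc2.weight[:, keep]); new2.bias.copy_(fc2.bias)
        return new1, new2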
 

 
Author Egils Avots; M. Daneshmand; Andres Traumann; Sergio Escalera; G. Anbarjafari
Title Automatic garment retexturing based on infrared information Type Journal Article
Year 2016 Publication Computers & Graphics Abbreviated Journal CG
Volume 59 Issue Pages 28-38
Keywords Garment Retexturing; Texture Mapping; Infrared Images; RGB-D Acquisition Devices; Shading
Abstract This paper introduces a new automatic technique for garment retexturing using a single static image along with the depth and infrared information obtained using the Microsoft Kinect II as the RGB-D acquisition device. First, the garment is segmented out from the image using either the Breadth-First Search algorithm or the semi-automatic procedure provided by the GrabCut method. Then texture domain coordinates are computed for each pixel belonging to the garment using normalised 3D information. Afterwards, shading is applied to the new colours from the texture image. As the main contribution of the proposed method, the latter information is obtained based on extracting a linear map transforming the colour present in the infrared image to that of the RGB colour channels. One of the most important impacts of this strategy is that the resulting retexturing algorithm is colour-, pattern- and lighting-invariant. The experimental results show that it can be used to produce realistic representations, which is substantiated through implementing it under various experimentation scenarios, involving varying lighting intensities and directions. Successful results are also achieved on video sequences, as well as on images of subjects taking different poses. Based on the Mean Opinion Score analysis conducted on many randomly chosen users, it has been shown to produce more realistic-looking results compared to the existing state-of-the-art methods suggested in the literature. From a wide perspective, the proposed method can be used for retexturing all sorts of segmented surfaces, although the focus of this study is on garment retexturing, and the investigation of the configurations is steered accordingly, since the experiments target an application in the context of virtual fitting rooms.
Address
Corporate Author Thesis
Publisher Elsevier Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA;MILAB; Approved no
Call Number Admin @ si @ ADT2016 Serial 2759
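A numpy sketch of the shading step as we read the abstract: an affine map from infrared intensity to the RGB channels is fitted by least squares on the segmented garment pixels and then used to shade the replacement texture. Variable names and the 8-bit scaling are our assumptions.

    import numpy as np

    def fit_ir_to_rgb(ir_vals, rgb_vals):
        # ir_vals: (n,) infrared intensities; rgb_vals: (n, 3) colours of the
        # same garment pixels. Returns a (2, 3) affine map ir -> (r, g, b).
        A = np.stack([ir_vals, np.ones_like(ir_vals)], axis=1)
        coeffs, *_ = np.linalg.lstsq(A, rgb_vals, rcond=None)
        return coeffs

    def shade_texture(texture_rgb, ir_patch, coeffs):
        # Modulate the new texture by the shading predicted from infrared.
        shade = ir_patch[..., None] * coeffs[0] + coeffs[1]   # (h, w, 3)
        return np.clip(texture_rgb * shade / 255.0, 0, 255).astype(np.uint8)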
 

 
Author Katerine Diaz; Aura Hernandez-Sabate; Antonio Lopez
Title A reduced feature set for driver head pose estimation Type Journal Article
Year 2016 Publication Applied Soft Computing Abbreviated Journal ASOC
Volume 45 Issue Pages 98-107
Keywords Head pose estimation; driving performance evaluation; subspace based methods; linear regression
Abstract Evaluation of driving performance is of utmost importance in order to reduce the road accident rate. Since driving ability includes visual-spatial and operational attention, among others, head pose estimation of the driver is a crucial indicator of driving performance. This paper proposes a new automatic method for coarse and fine estimation of the driver's head yaw angle. We rely on a set of geometric features computed from just three representative facial keypoints, namely the centers of the eyes and the nose tip. With these geometric features, our method combines two manifold embedding methods and a linear regression method. In addition, the method has a confidence mechanism to decide whether the classification of a sample is reliable. The approach has been tested using the CMU-PIE dataset and our own driver dataset. Despite the very few facial keypoints required, the results are comparable to state-of-the-art techniques. The low computational cost of the method and its robustness make it feasible to integrate it into mass consumer devices as a real-time application.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.085; 600.076; Approved no
Call Number Admin @ si @ DHL2016 Serial 2760
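An illustrative Python sketch of geometric features from the three keypoints the paper relies on (the eye centers and the nose tip); the paper's exact feature set and its manifold embedding stages are not reproduced here.

    import numpy as np

    def geometric_features(left_eye, right_eye, nose_tip):
        le, re, nt = map(np.asarray, (left_eye, right_eye, nose_tip))
        mid = (le + re) / 2.0
        iod = np.linalg.norm(re - le) + 1e-9       # inter-ocular distance
        dx, dy = re - le
        return np.array([
            (nt - mid)[0] / iod,                   # horizontal nose offset: yaw cue
            (nt - mid)[1] / iod,                   # vertical nose offset: pitch cue
            np.arctan2(dy, dx),                    # eye-line angle: roll cue
        ])

    # With features X (n, 3) and annotated yaw angles y (n,), a linear
    # regressor (e.g. np.linalg.lstsq on [X, 1]) gives the coarse yaw estimate.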
 

 
Author Marc Oliu; Ciprian Corneanu; Laszlo A. Jeni; Jeffrey F. Cohn; Takeo Kanade; Sergio Escalera
Title Continuous Supervised Descent Method for Facial Landmark Localisation Type Conference Article
Year 2016 Publication 13th Asian Conference on Computer Vision Abbreviated Journal
Volume 10112 Issue Pages 121-135
Keywords
Abstract Recent methods for facial landmark localisation perform well on close-to-frontal faces but have problems generalising to large head rotations. In order to address this issue we propose a second-order linear regression method that is both compact and robust against strong rotations. We provide a closed-form solution, making the method fast to train. We test the method's performance on two challenging datasets. The first has been intensively used by the community. The second has been specially generated from a well-known 3D face dataset. It is considerably more challenging, including a high diversity of rotations and more samples than any other existing public dataset. The proposed method is compared against state-of-the-art approaches, including RCPR, CGPRT, LBF, CFSS, and GSDM. Results on both datasets show that the proposed method offers state-of-the-art performance on near-frontal view data, improves on state-of-the-art methods for more challenging head rotation problems and keeps a compact model size.
Address Taipei; Taiwan; November 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACCV
Notes HuPBA;MILAB; Approved no
Call Number Admin @ si @ OCJ2016 Serial 2838
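The paper's regressor is second-order with a closed-form solution; as a stand-in, here is the closed form of the first-order ridge regression that cascaded landmark regressors such as SDM are built on (names and shapes are ours).

    import numpy as np

    def ridge_closed_form(Phi, delta_x, lam=1e-3):
        # R = argmin ||Phi R - delta_x||^2 + lam ||R||^2, in closed form.
        d = Phi.shape[1]
        return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ delta_x)

    # Phi: (n, d) image features at the current landmark estimates;
    # delta_x: (n, 2L) offsets to the L ground-truth landmark positions.
    # At test time: landmarks += features(image, landmarks) @ R, iterated.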
 

 
Author Ozan Caglayan; Walid Aransa; Yaxing Wang; Marc Masana; Mercedes Garcia-Martinez; Fethi Bougares; Loic Barrault; Joost Van de Weijer
Title Does Multimodality Help Human and Machine for Translation and Image Captioning? Type Conference Article
Year 2016 Publication 1st Conference on Machine Translation Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR.
Address Berlin; Germany; August 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WMT
Notes LAMP; 600.106 ; 600.068 Approved no
Call Number Admin @ si @ CAW2016 Serial 2761
 

 
Author Esteve Cervantes; Long Long Yu; Andrew Bagdanov; Marc Masana; Joost Van de Weijer
Title Hierarchical Part Detection with Deep Neural Networks Type Conference Article
Year 2016 Publication 23rd IEEE International Conference on Image Processing Abbreviated Journal
Volume Issue Pages
Keywords Object Recognition; Part Detection; Convolutional Neural Networks
Abstract Part detection is an important aspect of object recognition. Most approaches apply object proposals to generate hundreds of possible part bounding-box candidates which are then evaluated by part classifiers. Recently, several methods have investigated directly regressing to a limited set of bounding boxes from the deep neural network representation. However, for object parts such methods may be unfeasible due to their relatively small size with respect to the image. We propose a hierarchical method for object and part detection. In a single network we first detect the object and then regress to part location proposals based only on the feature representation inside the object. Experiments show that our hierarchical approach outperforms a network which directly regresses the part locations. We also show that our approach obtains part detection accuracy comparable to or better than the state of the art on the CUB-200 bird and Fashionista clothing item datasets with only a fraction of the number of part proposals.
Address Phoenix; Arizona; USA; September 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes LAMP; 600.106 Approved no
Call Number Admin @ si @ CLB2016 Serial 2762
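A hedged PyTorch sketch of the hierarchical idea: part boxes are regressed only from features pooled inside the detected object box rather than from the whole image. ROIAlign stands in for whatever pooling the original network used, and all sizes and names are illustrative.

    import torch.nn as nn
    from torchvision.ops import roi_align

    class PartRegressor(nn.Module):
        def __init__(self, channels=256, n_parts=15):
            super().__init__()
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(channels * 7 * 7, 1024), nn.ReLU(),
                nn.Linear(1024, n_parts * 4))      # one box per part

        def forward(self, fmap, obj_boxes):
            # obj_boxes: (k, 5) rows of [batch_idx, x1, y1, x2, y2] in image coords.
            feats = roi_align(fmap, obj_boxes, output_size=(7, 7), spatial_scale=1 / 16)
            return self.head(feats).view(-1, 4)    # part boxes, relative to the object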
 

 
Author Sergio Escalera; Vassilis Athitsos; Isabelle Guyon
Title Challenges in multimodal gesture recognition Type Journal Article
Year 2016 Publication Journal of Machine Learning Research Abbreviated Journal JMLR
Volume 17 Issue Pages 1-54
Keywords Gesture Recognition; Time Series Analysis; Multimodal Data Analysis; Computer Vision; Pattern Recognition; Wearable sensors; Infrared Cameras; Kinect™
Abstract This paper surveys the state of the art on multimodal gesture recognition and introduces the JMLR special topic on gesture recognition 2011-2015. We began right at the start of the Kinect™ revolution, when inexpensive infrared cameras providing image depth recordings became available. We published papers using this technology and other more conventional methods, including regular video cameras, to record data, thus providing a good overview of uses of machine learning and computer vision using multimodal data in this area of application. Notably, we organized a series of challenges and made available several datasets we recorded for that purpose, including tens of thousands of videos, which are available to conduct further research. We also overview recent state-of-the-art work on gesture recognition based on a proposed taxonomy for gesture recognition, discussing challenges and future lines of research.
Address
Corporate Author Thesis
Publisher Place of Publication Editor Zhuowen Tu
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA;MILAB; Approved no
Call Number Admin @ si @ EAG2016 Serial 2764
 

 
Author Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen
Title Combining Holistic and Part-based Deep Representations for Computational Painting Categorization Type Conference Article
Year 2016 Publication 6th International Conference on Multimedia Retrieval Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Automatic analysis of visual art, such as paintings, is a challenging inter-disciplinary research problem. Conventional approaches rely only on global scene characteristics, encoding holistic information for computational painting categorization. We argue that such approaches are sub-optimal and that discriminative common visual structures provide complementary information for painting classification. We present an approach that encodes both the global scene layout and discriminative latent common structures for computational painting categorization. The regions of interest are automatically extracted, without any manual part labeling, by training class-specific deformable part-based models. Both the holistic image and the regions of interest are then described using multi-scale dense convolutional features. These features are pooled separately using Fisher vector encoding and concatenated afterwards into a single image representation. Experiments are performed on a challenging dataset with 91 different painters and 13 diverse painting styles. Our approach outperforms the standard method, which employs only the global scene characteristics. Furthermore, our method achieves state-of-the-art results, outperforming a recent multi-scale deep-features-based approach [11] by 6.4% and 3.8% on artist and style classification, respectively.
Address New York; USA; June 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICMR
Notes LAMP; 600.068; 600.079;ADAS Approved no
Call Number Admin @ si @ RKW2016 Serial 2763
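A tiny numpy sketch of the representation assembly as we read the abstract: Fisher vectors of the whole painting and of the detected regions of interest are encoded separately and concatenated; pooling the part vectors by averaging is our assumption.

    import numpy as np

    def combined_representation(fv_holistic, fv_parts):
        # fv_holistic: (d,) Fisher vector of the full painting;
        # fv_parts: list of (d,) Fisher vectors, one per region of interest.
        part_pool = (np.mean(fv_parts, axis=0) if fv_parts
                     else np.zeros_like(fv_holistic))
        rep = np.concatenate([fv_holistic, part_pool])
        return rep / (np.linalg.norm(rep) + 1e-9)  # L2-normalise before the SVM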
 

 
Author Cristina Palmero; Jordi Esquirol; Vanessa Bayo; Miquel Angel Cos; Pouya Ahmadmonfared; Joan Salabert; David Sanchez; Sergio Escalera
Title Automatic Sleep System Recommendation by Multi-modal RGB-Depth-Pressure Anthropometric Analysis Type Journal Article
Year 2017 Publication International Journal of Computer Vision Abbreviated Journal IJCV
Volume 122 Issue 2 Pages 212–227
Keywords Sleep system recommendation; RGB-Depth data; Pressure imaging; Anthropometric landmark extraction; Multi-part human body segmentation
Abstract This paper presents a novel system for automatic sleep system recommendation using RGB, depth and pressure information. It consists of a validated clinical knowledge-based model that, along with a set of prescription variables extracted automatically, obtains a personalized bed design recommendation. The automatic process starts by performing multi-part human body RGB-D segmentation combining GrabCut, 3D Shape Context descriptor and Thin Plate Splines, to then extract a set of anthropometric landmark points by applying orthogonal plates to the segmented human body. The extracted variables are introduced to the computerized clinical model to calculate body circumferences, weight, morphotype and Body Mass Index categorization. Furthermore, pressure image analysis is performed to extract pressure values and at-risk points, which are also introduced to the model to eventually obtain the final prescription of mattress, topper, and pillow. We validate the complete system in a set of 200 subjects, showing accurate category classification and high correlation results with respect to manual measures.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA;MILAB; 303.100 Approved no
Call Number Admin @ si @ PEB2017 Serial 2765
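A short OpenCV sketch of the GrabCut step that initialises the paper's multi-part body segmentation (the rectangle and iteration count are illustrative; the full pipeline additionally uses 3D Shape Context and Thin Plate Splines, which are not reproduced here).

    import cv2
    import numpy as np

    def grabcut_person(img_bgr, rect):
        # rect = (x, y, w, h) roughly enclosing the person; returns a 0/1 mask.
        mask = np.zeros(img_bgr.shape[:2], np.uint8)
        bgd = np.zeros((1, 65), np.float64)
        fgd = np.zeros((1, 65), np.float64)
        cv2.grabCut(img_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
        fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
        return fg.astype(np.uint8)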