Author Mariella Dimiccoli
  Title Figure-ground segregation: A fully nonlocal approach Type Journal Article
  Year 2016 Publication Vision Research Abbreviated Journal VR  
  Volume 126 Issue Pages 308-317  
  Keywords Figure-ground segregation; Nonlocal approach; Directional linear voting; Nonlinear diffusion  
  Abstract We present a computational model that computes and integrates in a nonlocal fashion several configural cues for automatic figure-ground segregation. Our working hypothesis is that the figural status of each pixel is a nonlocal function of several geometric shape properties and it can be estimated without explicitly relying on object boundaries. The methodology is grounded on two elements: multi-directional linear voting and nonlinear diffusion. A first estimation of the figural status of each pixel is obtained as a result of a voting process, in which several differently oriented line-shaped neighborhoods vote to express their belief about the figural status of the pixel. A nonlinear diffusion process is then applied to enforce the coherence of figural status estimates among perceptually homogeneous regions. Computer simulations fit human perception and match the experimental evidence that several cues cooperate in defining figure-ground segregation. The results of this work suggest that figure-ground segregation involves feedback from cells with larger receptive fields in higher visual cortical areas.  
  Notes MILAB; Approved no  
  Call Number Admin @ si @ Dim2016b Serial 2623  
Author Egils Avots; M. Daneshmanda; Andres Traumann; Sergio Escalera; G. Anbarjafaria
  Title Automatic garment retexturing based on infrared information Type Journal Article
  Year 2016 Publication Computers & Graphics Abbreviated Journal CG  
  Volume 59 Issue Pages 28-38  
  Keywords Garment Retexturing; Texture Mapping; Infrared Images; RGB-D Acquisition Devices; Shading  
  Abstract This paper introduces a new automatic technique for garment retexturing using a single static image along with the depth and infrared information obtained using the Microsoft Kinect II as the RGB-D acquisition device. First, the garment is segmented out from the image using either the Breadth-First Search algorithm or the semi-automatic procedure provided by the GrabCut method. Then texture domain coordinates are computed for each pixel belonging to the garment using normalised 3D information. Afterwards, shading is applied to the new colours from the texture image. As the main contribution of the proposed method, the latter information is obtained based on extracting a linear map transforming the colour present on the infrared image to that of the RGB colour channels. One of the most important impacts of this strategy is that the resulting retexturing algorithm is colour-, pattern- and lighting-invariant. The experimental results show that it can be used to produce realistic representations, which is substantiated through implementing it under various experimentation scenarios, involving varying lighting intensities and directions. Successful results are accomplished also on video sequences, as well as on images of subjects taking different poses. Based on the Mean Opinion Score analysis conducted on many randomly chosen users, it has been shown to produce more realistic-looking results compared to the existing state-of-the-art methods suggested in the literature. From a wide perspective, the proposed method can be used for retexturing all sorts of segmented surfaces, although the focus of this study is on garment retexturing, and the investigation of the configurations is steered accordingly, since the experiments target an application in the context of virtual fitting rooms.  
  Publisher Elsevier  
  Notes HuPBA;MILAB; Approved no  
  Call Number Admin @ si @ ADT2016 Serial 2759  
Author Miguel Angel Bautista; Antonio Hernandez; Sergio Escalera; Laura Igual; Oriol Pujol; Josep Moya; Veronica Violant; Maria Teresa Anguera
  Title A Gesture Recognition System for Detecting Behavioral Patterns of ADHD Type Journal Article
  Year 2016 Publication IEEE Transactions on Systems, Man, and Cybernetics, Part B Abbreviated Journal TSMCB  
  Volume 46 Issue 1 Pages 136-147  
  Keywords Gesture Recognition; ADHD; Gaussian Mixture Models; Convex Hulls; Dynamic Time Warping; Multi-modal RGB-Depth data  
  Abstract We present an application of gesture recognition using an extension of Dynamic Time Warping (DTW) to recognize behavioural patterns of Attention Deficit Hyperactivity Disorder (ADHD). We propose an extension of DTW using one-class classifiers in order to encode the variability of a gesture category and thus perform an alignment between a gesture sample and a gesture class. We model the set of gesture samples of a certain gesture category using either GMMs or an approximation of Convex Hulls. Thus, we add a theoretical contribution to the classical warping path in DTW by including local modeling of intra-class gesture variability. This methodology is applied in a clinical context, detecting a group of ADHD behavioural patterns defined by experts in psychology/psychiatry, to provide support to clinicians in the diagnosis procedure. The proposed methodology is tested on a novel multi-modal dataset (RGB plus Depth) of recordings of children with ADHD displaying behavioural patterns. We obtain satisfactory results when compared to standard state-of-the-art approaches in the DTW context.  
  Notes HuPBA; MILAB; Approved no  
  Call Number Admin @ si @ BHE2016 Serial 2566  
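The classical DTW alignment that this record extends with one-class classifiers (GMMs or approximate Convex Hulls) can be sketched in a few lines; the 1-D sequences and the absolute-difference local cost are illustrative choices, not the paper's multi-modal features.

```python
# Classic Dynamic Time Warping: cumulative alignment cost between
# two sequences, allowing non-linear time warping.

def dtw_distance(seq_a, seq_b):
    """Cumulative alignment cost between two 1-D sequences."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    # cost[i][j] = best cost aligning seq_a[:i] with seq_b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])  # local cost
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A repeated sample in one sequence is absorbed by the warping path:
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # -> 0.0
```

The paper's contribution replaces the fixed local cost `d` with a probabilistic distance to a one-class model of the gesture category.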
Author Gerard Canal; Sergio Escalera; Cecilio Angulo
  Title A Real-time Human-Robot Interaction system based on gestures for assistive scenarios Type Journal Article
  Year 2016 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU  
  Volume 149 Issue Pages 65-77  
  Keywords Gesture recognition; Human Robot Interaction; Dynamic Time Warping; Pointing location estimation  
  Abstract Natural and intuitive human interaction with robotic systems is key to developing robots that assist people easily and effectively. In this paper, a Human Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture consisting of pointing at an object is also recognized. The pointed location is then estimated in order to detect candidate objects the user may refer to. When the pointed object is unclear to the robot, a disambiguation procedure by means of either a verbal or gestural dialogue is performed. This skill allows the robot to pick up an object on behalf of a user who may have difficulty doing so alone. The overall system, composed of NAO and Wifibot robots, a Kinect™ v2 sensor and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests is conducted, which allows assessing correct performance in terms of recognition rates, ease of use and response times.  
  Publisher Elsevier B.V.  
  Notes HuPBA;MILAB; Approved no  
  Call Number Admin @ si @ CEA2016 Serial 2768  
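The pointed-location estimation step described in this record can be illustrated with simple ray-plane geometry: cast a ray through two arm joints and intersect it with the floor. The joint names and the floor-plane convention (y up, floor at y = 0) are assumptions for illustration, not the paper's exact model.

```python
# Hedged sketch: intersect the elbow->hand ray with the floor plane y = 0
# to estimate where the user is pointing.

def pointed_floor_location(elbow, hand):
    """Return the (x, 0, z) floor point the elbow->hand ray hits, or None."""
    dx, dy, dz = (hand[0] - elbow[0], hand[1] - elbow[1], hand[2] - elbow[2])
    if dy >= 0:
        return None  # arm not pointing downwards: no floor intersection
    t = -hand[1] / dy  # parameter moving from the hand down to the floor
    return (hand[0] + t * dx, 0.0, hand[2] + t * dz)

# Hand at 1 m height, ray descending 45 degrees along +x:
print(pointed_floor_location((0.0, 1.5, 0.0), (0.5, 1.0, 0.0)))  # -> (1.5, 0.0, 0.0)
```

Candidate objects near the returned point would then be considered for the disambiguation dialogue.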
Author Sergio Escalera; Vassilis Athitsos; Isabelle Guyon
  Title Challenges in multimodal gesture recognition Type Journal Article
  Year 2016 Publication Journal of Machine Learning Research Abbreviated Journal JMLR  
  Volume 17 Issue Pages 1-54  
  Keywords Gesture Recognition; Time Series Analysis; Multimodal Data Analysis; Computer Vision; Pattern Recognition; Wearable sensors; Infrared Cameras; Kinect™  
  Abstract This paper surveys the state of the art on multimodal gesture recognition and introduces the JMLR special topic on gesture recognition 2011-2015. We began right at the start of the Kinect™ revolution, when inexpensive infrared cameras providing image depth recordings became available. We published papers using this technology and other more conventional methods, including regular video cameras, to record data, thus providing a good overview of uses of machine learning and computer vision using multimodal data in this area of application. Notably, we organized a series of challenges and made available several datasets we recorded for that purpose, including tens of thousands of videos, which are available to conduct further research. We also review recent state-of-the-art work on gesture recognition based on a proposed taxonomy, discussing challenges and future lines of research.  
  Editor Zhuowen Tu  
  Notes HuPBA;MILAB; Approved no  
  Call Number Admin @ si @ EAG2016 Serial 2764  
Author Juan Ramon Terven Salinas; Bogdan Raducanu; Maria Elena Meza-de-Luna; Joaquin Salas
  Title Head-gestures mirroring detection in dyadic social interactions with computer vision-based wearable devices Type Journal Article
  Year 2016 Publication Neurocomputing Abbreviated Journal NEUCOM  
  Volume 175 Issue B Pages 866–876  
  Keywords Head gestures recognition; Mirroring detection; Dyadic social interaction analysis; Wearable devices  
  Abstract During face-to-face human interaction, nonverbal communication plays a fundamental role. A relevant aspect that takes part during social interactions is represented by mirroring, in which a person tends to mimic the non-verbal behavior (head and body gestures, vocal prosody, etc.) of the counterpart. In this paper, we introduce a computer vision-based system to detect mirroring in dyadic social interactions with the use of a wearable platform. In our context, mirroring is inferred as simultaneous head noddings displayed by the interlocutors. Our approach consists of the following steps: (1) facial features extraction; (2) facial features stabilization; (3) head nodding recognition; and (4) mirroring detection. Our system achieves a mirroring detection accuracy of 72% on a custom mirroring dataset.  
  Notes LAMP; 600.072; 600.068; Approved no  
  Call Number Admin @ si @ TRM2016 Serial 2721  
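Step (4) of this record's pipeline infers mirroring as simultaneous head nods by the two interlocutors. Representing each person's detected nods as (start, end) time intervals, mirroring events are simply overlapping intervals; the interval representation and the tolerance parameter are assumptions for illustration.

```python
# Sketch of mirroring detection as temporal overlap of nod intervals,
# one interval list per interlocutor.

def mirroring_events(nods_a, nods_b, tolerance=0.0):
    """Pairs of nod intervals (one per person) that overlap in time."""
    events = []
    for sa, ea in nods_a:
        for sb, eb in nods_b:
            # intervals overlap iff each starts before the other ends
            if sa <= eb + tolerance and sb <= ea + tolerance:
                events.append(((sa, ea), (sb, eb)))
    return events

# Nods at 0.0-1.0 s and 0.8-1.5 s overlap, so one mirroring event:
print(mirroring_events([(0.0, 1.0), (5.0, 6.0)], [(0.8, 1.5)]))
```

A nonzero `tolerance` would count nods as mirrored even when they start slightly apart, which matches the intuition that mimicry lags the original gesture.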
Author Katerine Diaz; Aura Hernandez-Sabate; Antonio Lopez
  Title A reduced feature set for driver head pose estimation Type Journal Article
  Year 2016 Publication Applied Soft Computing Abbreviated Journal ASOC  
  Volume 45 Issue Pages 98-107  
  Keywords Head pose estimation; driving performance evaluation; subspace based methods; linear regression  
  Abstract Evaluation of driving performance is of utmost importance in order to reduce the road accident rate. Since driving ability includes visual-spatial and operational attention, among others, head pose estimation of the driver is a crucial indicator of driving performance. This paper proposes a new automatic method for coarse and fine estimation of the driver's head yaw angle. We rely on a set of geometric features computed from just three representative facial keypoints, namely the centers of the eyes and the nose tip. With these geometric features, our method combines two manifold embedding methods and a linear regression method. In addition, the method has a confidence mechanism to decide whether the classification of a sample is reliable. The approach has been tested using the CMU-PIE dataset and our own driver dataset. Despite the very few facial keypoints required, the results are comparable to state-of-the-art techniques. The low computational cost of the method and its robustness make it feasible to integrate it into mass consumer devices as a real-time application.  
  Notes ADAS; 600.085; 600.076; Approved no  
  Call Number Admin @ si @ DHL2016 Serial 2760  
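One kind of geometric feature that can be derived from the three keypoints this record uses (two eye centers and the nose tip) is the normalized horizontal offset of the nose tip from the eye midline, a common coarse yaw cue. This specific feature is an illustrative assumption, not the paper's exact feature set.

```python
# Hypothetical yaw cue from three facial keypoints, given as (x, y)
# image coordinates: signed nose offset in inter-eye units.

def yaw_feature(left_eye, right_eye, nose_tip):
    """Signed nose offset from the eye midpoint, in inter-eye units."""
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    inter_eye = right_eye[0] - left_eye[0]  # normalizes out face scale
    return (nose_tip[0] - mid_x) / inter_eye

print(yaw_feature((40, 50), (80, 50), (60, 80)))  # frontal face -> 0.0
print(yaw_feature((40, 50), (80, 50), (70, 80)))  # nose shifted right -> 0.25
```

In the paper such geometric features feed manifold embeddings plus a linear regressor rather than being thresholded directly.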
Author Cristina Palmero; Albert Clapes; Chris Bahnsen; Andreas Møgelmose; Thomas B. Moeslund; Sergio Escalera
  Title Multi-modal RGB-Depth-Thermal Human Body Segmentation Type Journal Article
  Year 2016 Publication International Journal of Computer Vision Abbreviated Journal IJCV  
  Volume 118 Issue 2 Pages 217-239  
  Keywords Human body segmentation; RGB; Depth; Thermal  
  Abstract This work addresses the problem of human body segmentation from multi-modal visual cues as a first stage of automatic human behavior analysis. We propose a novel RGB–depth–thermal dataset along with a multi-modal segmentation baseline. The several modalities are registered using a calibration device and a registration algorithm. Our baseline extracts regions of interest using background subtraction, defines a partitioning of the foreground regions into cells, computes a set of image features on those cells using different state-of-the-art feature extractions, and models the distribution of the descriptors per cell using probabilistic models. A supervised learning algorithm then fuses the output likelihoods over cells in a stacked feature vector representation. The baseline, using Gaussian mixture models for the probabilistic modeling and Random Forest for the stacked learning, is superior to other state-of-the-art methods, obtaining an overlap above 75 % on the novel dataset when compared to the manually annotated ground-truth of human segmentations.  
  Publisher Springer US  
  Notes HuPBA;MILAB; Approved no  
  Call Number Admin @ si @ PCB2016 Serial 2767  
Author Wenjuan Gong; Xuena Zhang; Jordi Gonzalez; Andrews Sobral; Thierry Bouwmans; Changhe Tu; El-hadi Zahzah
  Title Human Pose Estimation from Monocular Images: A Comprehensive Survey Type Journal Article
  Year 2016 Publication Sensors Abbreviated Journal SENS  
  Volume 16 Issue 12 Pages 1966  
  Keywords human pose estimation; human body models; generative methods; discriminative methods; top-down methods; bottom-up methods  
  Abstract Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used.  
  Notes ISE; 600.098; 600.119 Approved no  
  Call Number Admin @ si @ GZG2016 Serial 2933  
Author Angel Sappa; P. Carvajal; Cristhian A. Aguilera-Carrasco; Miguel Oliveira; Dennis Romero; Boris X. Vintimilla
  Title Wavelet based visible and infrared image fusion: a comparative study Type Journal Article
  Year 2016 Publication Sensors Abbreviated Journal SENS  
  Volume 16 Issue 6 Pages 1-15  
  Keywords Image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform  
  Abstract This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).  
  Notes ADAS; 600.086; 600.076 Approved no  
  Call Number Admin @ si @SCA2016 Serial 2807  
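The wavelet-fusion idea this record evaluates can be sketched with a one-level 2-D Haar decomposition computed per 2x2 block, using the common "average the approximations, pick the max-magnitude detail" rule. The setups in the paper use deeper decompositions and several fusion rules; this particular combination is just one illustrative choice.

```python
# One-level Haar fusion sketch: per 2x2 block, average the approximation
# coefficients of the two inputs, keep the stronger detail coefficients,
# then invert the transform.

def haar_fuse(img_a, img_b):
    """Fuse two equal-size images (even dimensions, lists of lists)."""
    rows, cols = len(img_a), len(img_a[0])
    fused = [[0.0] * cols for _ in range(rows)]
    for i in range(0, rows, 2):
        for j in range(0, cols, 2):
            coeffs = []
            for img in (img_a, img_b):
                p = [img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1]]
                a = (p[0] + p[1] + p[2] + p[3]) / 4.0  # approximation
                h = (p[0] - p[1] + p[2] - p[3]) / 4.0  # horizontal detail
                v = (p[0] + p[1] - p[2] - p[3]) / 4.0  # vertical detail
                d = (p[0] - p[1] - p[2] + p[3]) / 4.0  # diagonal detail
                coeffs.append((a, h, v, d))
            (a1, *d1), (a2, *d2) = coeffs
            a = (a1 + a2) / 2.0  # average the approximations
            h, v, d = (x if abs(x) >= abs(y) else y for x, y in zip(d1, d2))
            # inverse Haar transform of the fused coefficients
            fused[i][j] = a + h + v + d
            fused[i][j + 1] = a - h + v - d
            fused[i + 1][j] = a + h - v - d
            fused[i + 1][j + 1] = a - h - v + d
    return fused
```

Fusing an image with itself reproduces the image, a quick sanity check that the forward and inverse transforms match.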
Author Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias; A. Moreira
  Title Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives Type Journal Article
  Year 2016 Publication Robotics and Autonomous Systems Abbreviated Journal RAS  
  Volume 83 Issue Pages 312-325  
  Keywords Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives  
  Abstract When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.  
  Publisher Elsevier B.V.  
  Notes ADAS; 600.086, 600.076 Approved no  
  Call Number Admin @ si @OSS2016a Serial 2806  
Author Tadashi Araki; Sumit K. Banchhor; Narendra D. Londhe; Nobutaka Ikeda; Petia Radeva; Devarshi Shukla; Luca Saba; Antonella Balestrieri; Andrew Nicolaides; Shoaib Shafique; John R. Laird; Jasjit S. Suri
  Title Reliable and Accurate Calcium Volume Measurement in Coronary Artery Using Intravascular Ultrasound Videos Type Journal Article
  Year 2016 Publication Journal of Medical Systems Abbreviated Journal JMS  
  Volume 40 Issue 3 Pages 51:1-51:20  
  Keywords Interventional cardiology; Atherosclerosis; Coronary arteries; IVUS; calcium volume; Soft computing; Performance Reliability; Accuracy  
  Abstract Quantitative assessment of calcified atherosclerotic volume within the coronary artery wall is vital for cardiac interventional procedures. The goal of this study is to automatically measure the calcium volume, given the borders of the coronary vessel wall for all the frames of the intravascular ultrasound (IVUS) video. Three soft computing fuzzy classification techniques were adapted, namely Fuzzy c-Means (FCM), K-means, and Hidden Markov Random Field (HMRF), for automated segmentation of calcium regions and volume computation. These methods were benchmarked against a previously developed threshold-based method. IVUS image data sets (around 30,600 IVUS frames) from 15 patients were collected using a 40 MHz IVUS catheter (Atlantis® SR Pro, Boston Scientific®, pullback speed of 0.5 mm/s). Calcium mean volume for FCM, K-means, HMRF and the threshold-based method was 37.84 ± 17.38 mm3, 27.79 ± 10.94 mm3, 46.44 ± 19.13 mm3 and 35.92 ± 16.44 mm3, respectively. Cross-correlation, Jaccard Index and Dice Similarity were highest between FCM and the threshold-based method: 0.99, 0.92 ± 0.02 and 0.95 ± 0.02, respectively. Student's t-test, z-test and Wilcoxon test were also performed to demonstrate the consistency, reliability and accuracy of the results. Given the vessel wall region, the system reliably and automatically measures the calcium volume in IVUS videos. Further, we validated our system against a trained expert using scoring: K-means showed the best performance with an accuracy of 92.80%. Our procedure and protocol are in line with methods previously published clinically.  
  Notes MILAB; Approved no  
  Call Number Admin @ si @ ABL2016 Serial 2729  
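Once calcium regions are segmented per IVUS frame, the volume computation implied by this record's setup is a simple integration of per-frame cross-sectional area along the pullback. The frame rate and the per-frame areas below are hypothetical values for illustration; only the 0.5 mm/s pullback speed comes from the abstract.

```python
# Back-of-the-envelope calcium volume: per-frame segmented area times
# the inter-frame pullback step along the catheter axis.

def calcium_volume_mm3(areas_mm2, pullback_speed_mm_s=0.5, frame_rate_hz=30.0):
    """Sum per-frame calcium areas times the inter-frame pullback step."""
    frame_spacing_mm = pullback_speed_mm_s / frame_rate_hz
    return sum(areas_mm2) * frame_spacing_mm

# 600 frames (10 s of pullback) each with 2 mm^2 of calcium: ~20 mm^3
print(calcium_volume_mm3([2.0] * 600))
```

The segmentation methods compared in the paper (FCM, K-means, HMRF, thresholding) differ only in how `areas_mm2` is produced per frame.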
Author Antoni Gurgui; Debora Gil; Enric Marti; Vicente Grau
  Title Left-Ventricle Basal Region Constrained Parametric Mapping to Unitary Domain Type Conference Article
  Year 2016 Publication 7th International Workshop on Statistical Atlases & Computational Modelling of the Heart Abbreviated Journal  
  Volume 10124 Issue Pages 163-171  
  Keywords Laplacian; Constrained maps; Parameterization; Basal ring  
  Abstract Due to its complex geometry, the basal ring is often omitted when putting different heart geometries into correspondence. In this paper, we present the first results on a new mapping of left ventricle basal rings onto a normalized coordinate system using a fold-over-free approach to the solution of the Laplacian. To guarantee correspondences between different basal rings, we impose internal constrained positions at anatomical landmarks in the normalized coordinate system. To prevent internal fold-overs, constraints are handled by cutting the volume into regions defined by anatomical features and mapping each piece of the volume separately. Initial results presented in this paper indicate that our method is able to handle internal constraints without introducing fold-overs and thus guarantees one-to-one mappings between different basal ring geometries.  
  Address Athens; October 2016  
  Abbreviated Series Title LNCS  
  Conference STACOM  
  Notes IAM; Approved no  
  Call Number Admin @ si @ GGM2016 Serial 2884  
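The numerical core of this record, solving the Laplace equation with fixed (Dirichlet) values at constrained positions, can be shown on a toy grid. In the paper the domain is the 3-D basal-ring volume cut at anatomical landmarks; here a small 2-D grid with a fixed boundary stands in for it, and the `fixed` mask could equally pin interior landmark cells.

```python
# Jacobi relaxation for the Laplace equation on a grid: cells marked
# in `fixed` keep their prescribed values, all others relax to the
# average of their four neighbors.

def solve_laplace(grid, fixed, iters=500):
    """Relax non-fixed interior values toward the harmonic solution."""
    rows, cols = len(grid), len(grid[0])
    u = [row[:] for row in grid]
    for _ in range(iters):
        nxt = [row[:] for row in u]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                if not fixed[i][j]:
                    nxt[i][j] = (u[i - 1][j] + u[i + 1][j] +
                                 u[i][j - 1] + u[i][j + 1]) / 4.0
        u = nxt
    return u

# 5x5 grid whose boundary is fixed to a linear ramp in x: the harmonic
# interior converges to the same ramp (value j/4 in column j).
n = 5
grid = [[j / 4.0 if (i in (0, n - 1) or j in (0, n - 1)) else 0.0
         for j in range(n)] for i in range(n)]
fixed = [[i in (0, n - 1) or j in (0, n - 1) for j in range(n)]
         for i in range(n)]
u = solve_laplace(grid, fixed)
print(round(u[2][2], 6))  # center converges to 0.5
```

The harmonic (maximum-principle) behavior of this solution is what makes fold-over-free mappings possible in the constrained parameterization setting.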
Author Sounak Dey; Anguelos Nicolaou; Josep Llados; Umapada Pal
  Title Local Binary Pattern for Word Spotting in Handwritten Historical Document Type Conference Article
  Year 2016 Publication Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR) Abbreviated Journal  
  Volume Issue Pages 574-583  
  Keywords Local binary patterns; Spatial sampling; Learning-free; Word spotting; Handwritten; Historical document analysis; Large-scale data  
  Abstract Digital libraries store images which can be highly degraded, and to index this kind of image we resort to word spotting as our information retrieval system. Information retrieval for handwritten document images is more challenging due to the difficulties of complex layout analysis, large variations in writing style, and the degradation or low quality of historical manuscripts. This paper presents a simple, innovative, learning-free method for word spotting in large-scale historical documents combining Local Binary Patterns (LBP) and spatial sampling. This method offers three advantages: firstly, it operates in a completely learning-free paradigm, which is very different from unsupervised learning methods; secondly, the computational time is significantly low because of the LBP features, which are very fast to compute; and thirdly, the method can be used in scenarios where annotations are not available. Finally, we compare the results of our proposed retrieval method with other methods in the literature and obtain the best results in the learning-free paradigm.  
  Address Merida; Mexico; December 2016  
  Abbreviated Series Title LNCS  
  Conference S+SSPR  
  Notes DAG; 600.097; 602.006; 603.053 Approved no  
  Call Number Admin @ si @ DNL2016 Serial 2876  
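The basic 3x3 Local Binary Pattern this record builds on assigns each pixel an 8-bit code, one bit per neighbor, set when the neighbor is greater than or equal to the center. The paper combines LBP histograms with spatial sampling; only the raw per-pixel code is shown here, with a toy image.

```python
# Minimal 3x3 LBP: 8-bit code for one pixel, neighbors taken
# clockwise starting at the top-left.

def lbp_code(img, i, j):
    """8-bit LBP code for pixel (i, j) of a 2-D list-of-lists image."""
    center = img[i][j]
    neighbors = [img[i - 1][j - 1], img[i - 1][j], img[i - 1][j + 1],
                 img[i][j + 1], img[i + 1][j + 1], img[i + 1][j],
                 img[i + 1][j - 1], img[i][j - 1]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:  # bit set when neighbor >= center
            code |= 1 << bit
    return code

img = [[9, 9, 9],
       [1, 5, 9],
       [1, 1, 1]]
print(lbp_code(img, 1, 1))  # bright top edge -> 0b00001111 = 15
```

Histograms of these codes over sampled sub-regions give the fast, annotation-free descriptors used for spotting.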
Author Marçal Rusiñol; J. Chazalon; Jean-Marc Ogier
  Title Filtrage de descripteurs locaux pour l'amélioration de la détection de documents (Local descriptor filtering to improve document detection) Type Conference Article
  Year 2016 Publication Colloque International Francophone sur l'Écrit et le Document Abbreviated Journal  
  Volume Issue Pages  
  Keywords Local descriptors; mobile capture; document matching; keypoint selection  
  Abstract In this paper we propose an effective method aimed at reducing the number of local descriptors to be indexed in a document matching framework. In an off-line training stage, the matching between the model document and incoming images is computed, retaining the local descriptors from the model that steadily produce good matches. We have evaluated this approach using the ICDAR2015 SmartDOC dataset, containing nearly 25,000 images of documents to be captured by a mobile device. We have tested the performance of this filtering step using ORB and SIFT local detectors and descriptors. The results show an important gain both in the quality of the final matching and in time and space requirements.  
  Address Toulouse; France; March 2016  
  Conference CIFED  
  Notes DAG; 600.084; 600.077 Approved no  
  Call Number Admin @ si @ RCO2016 Serial 2755  
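The off-line filtering stage of this record reduces to a simple selection rule: match the model document against training captures, count how often each model keypoint produces a correct match, and index only the stable ones. The match-count data and the ratio threshold below are illustrative assumptions; the real pipeline would obtain the counts from ORB or SIFT matching.

```python
# Sketch of stability-based keypoint filtering: keep only model
# keypoints that matched correctly in enough training images.

def stable_keypoints(match_counts, n_train_images, min_ratio=0.5):
    """Keypoint ids matched in at least `min_ratio` of training images."""
    return [kp for kp, count in match_counts.items()
            if count / n_train_images >= min_ratio]

# keypoint id -> number of training images where it matched correctly
counts = {"kp0": 9, "kp1": 2, "kp2": 7, "kp3": 0}
print(sorted(stable_keypoints(counts, n_train_images=10)))  # -> ['kp0', 'kp2']
```

Only the surviving keypoints' descriptors are indexed, which is where the reported time and space gains come from.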