Author Rada Deeb; Damien Muselet; Mathieu Hebert; Alain Tremeau; Joost Van de Weijer
  Title 3D color charts for camera spectral sensitivity estimation Type Conference Article
  Year 2017 Publication 28th British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Estimating spectral data such as camera sensor responses or the illuminant spectral power distribution from raw RGB camera outputs is crucial in many computer vision applications. Usually, 2D color charts with various patches of known spectral reflectance are used as references for this purpose. Deducing n-dimensional spectral data (n ≫ 3) from 3D RGB inputs is an ill-posed problem that requires a large number of inputs. Unfortunately, most natural color surfaces have spectral reflectances that are well described by low-dimensional linear models, i.e. each spectral reflectance can be approximated by a weighted sum of the others. It has been shown that adding patches to color charts does not help in practice, because the information they add is redundant with the information provided by the first set of patches. In this paper, we propose to obtain spectral data of higher dimensionality by using 3D color charts that create inter-reflections between the surfaces. These inter-reflections multiply natural spectral curves with each other and thus produce non-linear spectral curves. We show that such data provide enough information for accurate spectral data estimation.
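For readers who want to see the dimensionality argument concretely, the following minimal Python sketch illustrates how element-wise products of reflectances lying in a low-dimensional linear subspace raise the numerical rank of the observed spectral data. All sizes and spectra below are synthetic, hypothetical stand-ins, not data or code from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n_wavelengths, n_basis, n_patches = 61, 6, 24   # hypothetical sizes

    # Flat-chart case: every reflectance is a weighted sum of a few basis spectra.
    basis = rng.random((n_wavelengths, n_basis))
    reflectances = basis @ rng.random((n_basis, n_patches))

    # Inter-reflections between facing surfaces multiply their spectra element-wise,
    # taking the data outside the original linear subspace.
    pairs = rng.integers(0, n_patches, size=(2, 40))
    inter_reflections = reflectances[:, pairs[0]] * reflectances[:, pairs[1]]

    def numerical_rank(m, tol=1e-8):
        s = np.linalg.svd(m, compute_uv=False)
        return int((s / s[0] > tol).sum())

    print("rank, flat chart only       :", numerical_rank(reflectances))
    print("rank, with inter-reflections:", numerical_rank(np.hstack([reflectances, inter_reflections])))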
 
  Address London; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes LAMP; 600.109; 600.120 Approved no  
  Call Number Admin @ si @ DMH2017b Serial 3037  
 

 
Author Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen
  Title Tex-Nets: Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition Type Conference Article
  Year 2017 Publication 19th International Conference on Multimodal Interaction Abbreviated Journal  
  Volume Issue Pages  
  Keywords Convolutional Neural Networks; Texture Recognition; Local Binary Patterns  
  Abstract Recognizing materials and textures in realistic imaging conditions is a challenging computer vision problem. For many years, local-feature-based orderless representations were the dominant approach for texture recognition. Recently, deep local features extracted from the intermediate layers of a Convolutional Neural Network (CNN) have been used as filter banks. These dense local descriptors from a deep model, when encoded with Fisher Vectors, have been shown to provide excellent results for texture recognition. The CNN models employed in such approaches take RGB patches as input and are trained on a large number of labeled images. We show that CNN models, which we call TEX-Nets, trained using mapped coded images with explicit texture information provide complementary information to the standard deep models trained on RGB patches. We further investigate two deep architectures, namely early and late fusion, to combine the texture and color information. Experiments on benchmark texture datasets clearly demonstrate that TEX-Nets provide complementary information to the standard RGB deep network. Our approach yields gains of 4.8%, 3.5%, 2.6% and 4.1% in accuracy on the DTD, KTH-TIPS-2a, KTH-TIPS-2b and Texture-10 datasets, respectively, compared to the standard RGB network of the same architecture. Further, our final combination leads to consistent improvements over the state-of-the-art on all four datasets.  
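For readers unfamiliar with texture-coded inputs, here is a minimal sketch of producing a local binary pattern (LBP) code map that could be fed to a network like an ordinary image channel. The exact mapping and coding scheme used for TEX-Nets is not reproduced here; scikit-image and its sample image are only convenient stand-ins.

    import numpy as np
    from skimage import data
    from skimage.color import rgb2gray
    from skimage.feature import local_binary_pattern

    # Any RGB image works; the astronaut sample is just a placeholder.
    rgb = data.astronaut()
    gray = (rgb2gray(rgb) * 255).astype(np.uint8)

    # Uniform LBP with 8 neighbours at radius 1: each pixel gets a small integer texture code.
    P, R = 8, 1
    lbp = local_binary_pattern(gray, P, R, method='uniform')

    # Rescale the code map to [0, 255] so it can be treated like an image channel.
    lbp_image = (255 * lbp / lbp.max()).astype(np.uint8)
    print(lbp_image.shape, lbp_image.dtype)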
  Address Glasgow; Scotland; November 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ACM  
  Notes LAMP; 600.109; 600.068; 600.120 Approved no  
  Call Number Admin @ si @ RKW2017 Serial 3038  
 

 
Author Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen
  Title Top-Down Deep Appearance Attention for Action Recognition Type Conference Article
  Year 2017 Publication 20th Scandinavian Conference on Image Analysis Abbreviated Journal  
  Volume 10269 Issue Pages 297-309  
  Keywords Action recognition; CNNs; Feature fusion  
  Abstract Recognizing human actions in videos is a challenging problem in computer vision. Recently, convolutional neural network based deep features have shown promising results for action recognition. In this paper, we investigate the problem of fusing deep appearance and motion cues for action recognition. We propose a video representation which combines deep appearance and motion based local convolutional features within the bag-of-deep-features framework. Firstly, dense deep appearance and motion based local convolutional features are extracted from spatial (RGB) and temporal (flow) networks, respectively. Both visual cues are processed in parallel by constructing separate visual vocabularies for appearance and motion. A category-specific appearance map is then learned to modulate the weights of the deep motion features. The proposed representation is discriminative and binds the deep local convolutional features to their spatial locations. Experiments are performed on two challenging datasets: the JHMDB dataset with 21 action classes and the ACT dataset with 43 categories. The results clearly demonstrate that our approach outperforms both standard approaches of early and late feature fusion. Further, our approach uses only action labels, without exploiting body part information, yet achieves competitive performance compared to state-of-the-art deep-feature-based approaches.  
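The core fusion step, a category-specific appearance map re-weighting the deep motion features before aggregation, can be sketched as follows. All array shapes and the random data are assumptions for illustration; the actual bag-of-deep-features encoding in the paper is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    H, W, C = 14, 14, 512                    # hypothetical conv-layer grid and channel count

    motion_feats = rng.random((H, W, C))     # dense local features from the temporal (flow) network
    appearance_map = rng.random((H, W))      # category-specific appearance attention (assumed given)
    appearance_map /= appearance_map.sum()

    # Modulate each local motion descriptor by the appearance weight at its location,
    # then pool into a single video-level descriptor (a crude stand-in for the BoW encoding).
    weighted = motion_feats * appearance_map[..., None]
    video_descriptor = weighted.sum(axis=(0, 1))
    print(video_descriptor.shape)            # (512,)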
  Address Tromso; June 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference SCIA  
  Notes LAMP; 600.109; 600.068; 600.120 Approved no  
  Call Number Admin @ si @ RKW2017b Serial 3039  
 

 
Author N. Nayef; F. Yin; I. Bizid; H. Choi; Y. Feng; Dimosthenis Karatzas; Z. Luo; Umapada Pal; Christophe Rigaud; J. Chazalon; W. Khlif; Muhammad Muzzamil Luqman; Jean-Christophe Burie; C. L. Liu; Jean-Marc Ogier
  Title ICDAR2017 Robust Reading Challenge on Multi-Lingual Scene Text Detection and Script Identification – RRC-MLT Type Conference Article
  Year 2017 Publication 14th International Conference on Document Analysis and Recognition Abbreviated Journal  
  Volume Issue Pages 1454-1459  
  Keywords  
  Abstract Text detection and recognition in a natural environment are key components of many applications, ranging from business card digitization to shop indexation in a street. This competition aims at assessing the ability of state-of-the-art methods to detect Multi-Lingual Text (MLT) in scene images, such as in contents gathered from Internet media and in modern cities where multiple cultures live and communicate together. This competition is an extension of the Robust Reading Competition (RRC), which has been held since 2003 both at ICDAR and in an online context. The proposed competition is presented as a new challenge of the RRC. The dataset built for this challenge largely extends the previous RRC editions in many aspects: the multi-lingual text, the size of the dataset, the multi-oriented text, and the wide variety of scenes. The dataset comprises 18,000 images containing text in 9 languages. The challenge comprises three tasks related to text detection and script classification. We received a total of 16 submissions from the research and industrial communities. This paper presents the dataset, the tasks and the findings of this RRC-MLT challenge.  
  Address Kyoto; Japan; November 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-5386-3586-5 Medium  
  Area Expedition Conference ICDAR  
  Notes DAG; 600.121 Approved no  
  Call Number Admin @ si @ NYB2017 Serial 3097  
 

 
Author Debora Gil; Rosa Maria Ortiz; Carles Sanchez; Antoni Rosell
  Title Objective endoscopic measurements of central airway stenosis. A pilot study Type Journal Article
  Year 2018 Publication Respiration Abbreviated Journal RES  
  Volume 95 Issue Pages 63–69  
  Keywords Bronchoscopy; Tracheal stenosis; Airway stenosis; Computer-assisted analysis  
  Abstract Endoscopic estimation of the degree of stenosis in central airway obstruction is subjective and highly variable. Objective: To determine the benefits of using SENSA (System for Endoscopic Stenosis Assessment), an image-based computational software tool, for obtaining objective stenosis index (SI) measurements among a group of expert bronchoscopists and general pulmonologists. Methods: A total of 7 expert bronchoscopists and 7 general pulmonologists were enrolled to validate SENSA usage. The SI obtained by the physicians and by SENSA were compared with a reference SI to assess their precision in SI computation. We used SENSA to efficiently obtain this reference SI in 11 selected cases of benign stenosis. A Web platform with three user-friendly microtasks was designed to gather the data. The users had to visually estimate the SI from videos with and without contours of the normal and the obstructed area provided by SENSA. The users were able to modify the SENSA contours to define the reference SI using morphometric bronchoscopy. Results: Visual SI estimation accuracy was associated with neither bronchoscopic experience (p = 0.71) nor the contours of the normal and the obstructed area provided by the system (p = 0.13). The precision of the SI by SENSA was 97.7% (95% CI: 92.4-103.7), which is significantly better than the precision of the SI by visual estimation (p < 0.001), with an improvement of at least 15%. Conclusion: SENSA provides objective SI measurements with a precision of up to 99.5%, which can be calculated from any bronchoscope using an affordable scalable interface. Providing normal and obstructed contours on bronchoscopic videos does not improve physicians' visual estimation of the SI.  
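A common way to express a stenosis index from the two delineated areas (normal lumen vs. obstructed lumen) is the percentage of lumen loss; whether SENSA computes the SI exactly this way is an assumption here, so the snippet below is only an illustrative sketch with made-up area values.

    def stenosis_index(area_obstructed, area_normal):
        # Percentage of lumen lost relative to the normal airway cross-section.
        return 100.0 * (1.0 - area_obstructed / area_normal)

    # Example with hypothetical areas in mm^2:
    print(round(stenosis_index(area_obstructed=35.0, area_normal=120.0), 1))  # 70.8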
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes IAM; 600.075; 600.096; 600.145 Approved no  
  Call Number Admin @ si @ GOS2018 Serial 3043  
 

 
Author Rosa Maria Ortiz; Debora Gil; Elisa Minchole; Marta Diez-Ferrer; Noelia Cubero de Frutos
  Title Classification of Confocal Endomicroscopy Patterns for Diagnosis of Lung Cancer Type Conference Article
  Year 2017 Publication 18th World Conference on Lung Cancer Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Confocal Laser Endomicroscopy (CLE) is an emerging imaging technique that allows the in-vivo acquisition of cell patterns of potentially malignant lesions. Such patterns could discriminate between inflammatory and neoplastic lesions and, thus, serve as a first in-vivo biopsy to discard cases that do not actually require a cell biopsy.

The goal of this work is to explore whether CLE images obtained during videobronchoscopy contain enough visual information to discriminate between benign and malignant peripheral lesions for lung cancer diagnosis. To do so, we have performed a pilot comparative study with 12 patients (6 adenocarcinoma and 6 benign-inflammatory) using 2 different methods for CLE pattern analysis: visual analysis by 3 experts and a novel methodology that uses graph methods to find patterns in pre-trained feature spaces. Our preliminary results indicate that, although visual analysis can only achieve an accuracy of 60.2%, the accuracy of the proposed unsupervised image pattern classification rises to 84.6%.

We conclude that the visual information in CLE images allows in-vivo detection of neoplastic lesions, and that graph structural analysis applied to deep-learning feature spaces can achieve competitive results.
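As a rough illustration of what graph analysis over a pre-trained feature space can mean, the sketch below builds a k-nearest-neighbour graph over stand-in deep features and groups frames by connected components. It is not the methodology of the paper, only a minimal example of graph-structural grouping in a feature space; the features here are random placeholders.

    import numpy as np
    from scipy.spatial.distance import cdist
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    rng = np.random.default_rng(0)
    feats = rng.random((40, 128))            # hypothetical deep features, one row per CLE frame

    # k-nearest-neighbour affinity graph over the feature space.
    k = 5
    d = cdist(feats, feats)
    np.fill_diagonal(d, np.inf)
    rows = np.repeat(np.arange(len(feats)), k)
    cols = np.argsort(d, axis=1)[:, :k].ravel()
    graph = csr_matrix((np.ones_like(rows, dtype=float), (rows, cols)), shape=(len(feats),) * 2)

    # Graph-structural grouping: here simply the connected components of the symmetrised graph.
    n_groups, labels = connected_components(graph + graph.T, directed=False)
    print(n_groups, labels[:10])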
 
  Address Yokohama; Japan; October 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IASLC WCLC  
  Notes IAM; 600.096; 600.075; 600.145 Approved no  
  Call Number Admin @ si @ OGM2017 Serial 3044  
 

 
Author Debora Gil; Aura Hernandez-Sabate; David Castells; Jordi Carrabina
  Title CYBERH: Cyber-Physical Systems in Health for Personalized Assistance Type Conference Article
  Year 2017 Publication International Symposium on Symbolic and Numeric Algorithms for Scientific Computing Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Assistance systems for e-Health applications have specific requirements that demand new methods for data gathering, analysis and modeling able to deal with SmallData:
1) systems should dynamically collect data from both the environment and the user to issue personalized recommendations; 2) data analysis should be able to tackle a limited number of samples, prone to include non-informative data and possibly evolving in time due to changes in patient condition; 3) algorithms should run in real time with possibly limited computational resources and fluctuating internet access.
Electronic medical devices (and cyber-physical devices in general) can enhance data gathering and analysis in several ways: (i) acquiring multiple sensor streams simultaneously instead of single magnitudes; (ii) filtering data; (iii) providing real-time implementations by isolating tasks on individual processors of multiprocessor Systems-on-Chip (MPSoC) platforms; and (iv) combining information through sensor fusion techniques.
Our approach focuses on both aspects of the complementary role of cyber-physical devices and SmallData analysis when building personalized models for e-Health applications. In particular, we address the design of Cyber-Physical Systems in Health for Personalized Assistance (CyberHealth) in two specific application cases: 1) a Smart Assisted Driving System (SADs) for dynamical assessment of the driving capabilities of people with Mild Cognitive Impairment (MCI); 2) an Intelligent Operating Room (iOR) for improving the yield of bronchoscopic interventions for in-vivo lung cancer diagnosis.
 
  Address Timisoara; Romania; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference SYNASC  
  Notes IAM; 600.085; 600.096; 600.075; 600.145 Approved no  
  Call Number Admin @ si @ GHC2017 Serial 3045  
 

 
Author Jose M. Armingol; Jorge Alfonso; Nourdine Aliane; Miguel Clavijo; Sergio Campos-Cordobes; Arturo de la Escalera; Javier del Ser; Javier Fernandez; Fernando Garcia; Felipe Jimenez; Antonio Lopez; Mario Mata
  Title Environmental Perception for Intelligent Vehicles Type Book Chapter
  Year 2018 Publication Intelligent Vehicles. Enabling Technologies and Future Developments Abbreviated Journal  
  Volume Issue Pages 23–101  
  Keywords Computer vision; laser techniques; data fusion; advanced driver assistance systems; traffic monitoring systems; intelligent vehicles  
  Abstract Because of its complexity, environmental perception is a challenge for Intelligent Transport Systems: road environments present a great variety of situations and elements that these systems must handle. In connection with this, a variety of solutions exist so far in terms of sensors and methods, and the precision, complexity, cost and computational load obtained by these works differ. In this chapter, some systems based on computer vision and laser techniques are presented. Fusion methods are also introduced in order to provide advanced and reliable perception systems.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.118 Approved no  
  Call Number Admin @ si @AAA2018 Serial 3046  
 

 
Author Antonio Lopez; David Vazquez; Gabriel Villalonga
  Title Data for Training Models, Domain Adaptation Type Book Chapter
  Year 2018 Publication Intelligent Vehicles. Enabling Technologies and Future Developments Abbreviated Journal  
  Volume Issue Pages 395–436  
  Keywords Driving simulator; hardware; software; interface; traffic simulation; macroscopic simulation; microscopic simulation; virtual data; training data  
  Abstract Simulation can enable several developments in the field of intelligent vehicles. This chapter is divided into three main subsections. The first one deals with driving simulators. The continuous improvement of hardware performance is a well-known fact that is allowing the development of more complex driving simulators. The immersion in the simulation scene is increased by high fidelity feedback to the driver. In the second subsection, traffic simulation is explained as well as how it can be used for intelligent transport systems. Finally, it is rather clear that sensor-based perception and action must be based on data-driven algorithms. Simulation could provide data to train and test algorithms that are afterwards implemented in vehicles. These tools are explained in the third subsection.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.118 Approved no  
  Call Number Admin @ si @ LVV2018 Serial 3047  
 

 
Author Albert Berenguel; Oriol Ramos Terrades; Josep Llados; Cristina Cañero
  Title e-Counterfeit: a mobile-server platform for document counterfeit detection Type Conference Article
  Year 2017 Publication 14th IAPR International Conference on Document Analysis and Recognition Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract This paper presents a novel application to detect counterfeit identity documents forged by a scan-printing operation. Texture analysis approaches are proposed to extract validation features from the security background that is usually printed on documents such as IDs or banknotes. The main contribution of this work is the end-to-end mobile-server architecture, which provides a service for non-expert users and can therefore be used in several scenarios. The system also provides a crowdsourcing mode so labeled images can be gathered, generating databases for incremental training of the algorithms.  
  Address Kyoto; Japan; November 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICDAR  
  Notes DAG; 600.061; 600.097; 600.121 Approved no  
  Call Number Admin @ si @ BRL2018 Serial 3084  
 

 
Author Antonio Lopez; Atsushi Imiya; Tomas Pajdla; Jose Manuel Alvarez
  Title Computer Vision in Vehicle Technology: Land, Sea & Air Type Book Whole
  Year Publication Computer Vision in Vehicle Technology: Land, Sea & Air Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract A unified view of the use of computer vision technology for different types of vehicles

Computer Vision in Vehicle Technology focuses on computer vision as on-board technology, bringing together fields of research where computer vision is progressively penetrating: the automotive sector, unmanned aerial vehicles and underwater vehicles. It also serves as a reference for researchers on current developments and challenges in vehicle-related applications of computer vision, such as advanced driver assistance (pedestrian detection, lane departure warning, traffic sign recognition), autonomous driving and robot navigation (with visual simultaneous localization and mapping), or unmanned aerial vehicles (obstacle avoidance, landscape classification and mapping, fire risk assessment).

The overall role of computer vision for the navigation of different vehicles, as well as technology to address on-board applications, is analysed.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-118-86807-2 Medium  
  Area Expedition Conference  
  Notes DAG Approved no  
  Call Number Admin @ si @ LIP2017b Serial 3049  
 

 
Author Cesar de Souza; Adrien Gaidon; Yohann Cabon; Antonio Lopez
  Title Procedural Generation of Videos to Train Deep Action Recognition Networks Type Conference Article
  Year 2017 Publication 30th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 2594-2604  
  Keywords  
  Abstract Deep learning for human action recognition in videos is making significant progress, but is slowed down by its dependency on expensive manual labeling of large video collections. In this work, we investigate the generation of synthetic training data for action recognition, as it has recently shown promising results for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation and other computer graphics techniques of modern game engines. We generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for "Procedural Human Action Videos". It contains a total of 39,982 videos, with more than 1,000 examples for each action of 35 categories. Our approach is not limited to existing motion capture sequences, and we procedurally define 14 synthetic actions. We introduce a deep multi-task representation learning architecture to mix synthetic and real videos, even if the action categories differ. Our experiments on the UCF101 and HMDB51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance, significantly outperforming fine-tuning state-of-the-art unsupervised generative models of videos.
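The idea of mixing synthetic and real videos whose action categories differ can be sketched as a shared trunk with one classification head per label space. This PyTorch toy (dimensions, class counts and data are assumptions) is not the architecture of the paper, only an illustration of the multi-task setup.

    import torch
    import torch.nn as nn

    class TwoHeadNet(nn.Module):
        # Shared representation with one head per dataset/label space.
        def __init__(self, in_dim=512, feat_dim=256, n_synth=35, n_real=101):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
            self.head_synth = nn.Linear(feat_dim, n_synth)   # e.g. synthetic (PHAV) categories
            self.head_real = nn.Linear(feat_dim, n_real)     # e.g. real (UCF101) categories

        def forward(self, x, domain):
            h = self.trunk(x)
            return self.head_synth(h) if domain == "synthetic" else self.head_real(h)

    net = TwoHeadNet()
    clips = torch.randn(8, 512)                              # stand-in clip descriptors
    labels = torch.randint(0, 35, (8,))
    loss = nn.functional.cross_entropy(net(clips, "synthetic"), labels)
    loss.backward()
    print(float(loss))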
 
  Address Honolulu; Hawaii; July 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPR  
  Notes ADAS; 600.076; 600.085; 600.118 Approved no  
  Call Number Admin @ si @ SGC2017 Serial 3051  
 

 
Author Alicia Fornes; Veronica Romero; Arnau Baro; Juan Ignacio Toledo; Joan Andreu Sanchez; Enrique Vidal; Josep Llados
  Title ICDAR2017 Competition on Information Extraction in Historical Handwritten Records Type Conference Article
  Year 2017 Publication 14th International Conference on Document Analysis and Recognition Abbreviated Journal  
  Volume Issue Pages 1389-1394  
  Keywords  
  Abstract The extraction of relevant information from historical handwritten document collections is one of the key steps in making these manuscripts available for access and searches. In this competition, the goal is to detect the named entities and assign each of them a semantic category, thereby simulating the filling in of a knowledge database. This paper describes the dataset, the tasks, the evaluation metrics, the participants' methods and the results.  
  Address Kyoto; Japan; November 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICDAR  
  Notes DAG; 600.097; 601.225; 600.121 Approved no  
  Call Number Admin @ si @ FRB2017 Serial 3052  
 

 
Author Pau Riba; Anjan Dutta; Josep Llados; Alicia Fornes; Sounak Dey
  Title Improving Information Retrieval in Multiwriter Scenario by Exploiting the Similarity Graph of Document Terms Type Conference Article
  Year 2017 Publication 14th International Conference on Document Analysis and Recognition Abbreviated Journal  
  Volume Issue Pages 475-480  
  Keywords document terms; information retrieval; affinity graph; graph of document terms; multiwriter; graph diffusion  
  Abstract Information Retrieval (IR) is the activity of obtaining information resources relevant to an information need. It usually retrieves a set of objects ranked according to their relevance to the query. In document analysis, information retrieval receives a lot of attention in terms of symbol and word spotting. However, over the decades the community has mostly focused either on printed documents or on the single-writer scenario, where state-of-the-art results achieve reasonable performance on the available datasets. Nevertheless, the existing algorithms do not perform accordingly in the multiwriter scenario. A graph representing relations between a set of objects is a structure where each node delineates an individual element and the similarity between them is represented as a weight on the connecting edge. In this paper, we explore different analytics of graphs constructed from words or graphical symbols, such as diffusion, shortest path, etc., to improve the performance of information retrieval methods in the multiwriter scenario.
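A minimal sketch of one of the graph analytics mentioned above, diffusion over an affinity graph of word instances, is shown below. The affinity matrix is random and the damping parameter is an arbitrary choice, so this only illustrates the re-ranking mechanism, not the paper's exact procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 30                                         # word instances, possibly from several writers
    W = rng.random((n, n)); W = (W + W.T) / 2      # hypothetical symmetric affinity graph
    np.fill_diagonal(W, 0)

    S = W / W.sum(axis=1, keepdims=True)           # row-stochastic transition matrix
    y = np.zeros(n); y[0] = 1.0                    # one-hot vector marking the query instance

    # Diffusion: propagate the query's relevance through the graph so that instances
    # connected only indirectly (e.g. written by other hands) can still be ranked highly.
    alpha, f = 0.85, y.copy()
    for _ in range(50):
        f = alpha * S.T @ f + (1 - alpha) * y

    print(np.argsort(-f)[:5])                      # top-5 retrieved instances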
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICDAR  
  Notes DAG; 600.097; 601.302; 600.121 Approved no  
  Call Number Admin @ si @ RDL2017a Serial 3053  
 

 
Author Anjan Dutta; Pau Riba; Josep Llados; Alicia Fornes
  Title Pyramidal Stochastic Graphlet Embedding for Document Pattern Classification Type Conference Article
  Year 2017 Publication 14th International Conference on Document Analysis and Recognition Abbreviated Journal  
  Volume Issue Pages 33-38  
  Keywords graph embedding; hierarchical graph representation; graph clustering; stochastic graphlet embedding; graph classification  
  Abstract Document pattern classification methods using graphs have received a lot of attention because of their robust representation paradigm and rich theoretical background. However, the way documents are preserved and delineated as graphs introduces noise in the rendition of the underlying data, which creates instability in the graph representation. To deal with such unreliability in representation, in this paper we propose Pyramidal Stochastic Graphlet Embedding (PSGE). Given a graph representing a document pattern, our method first computes a graph pyramid by successively reducing the base graph. Once the graph pyramid is computed, we apply Stochastic Graphlet Embedding (SGE) at each level of the pyramid and combine the embedded representations to obtain a global delineation of the original graph. Considering a pyramid of graphs rather than just a base graph extends the representational power of the graph embedding, which reduces the instability caused by noise and distortion. When combined with a support vector machine, our proposed PSGE outperforms the state-of-the-art results in recognition of handwritten words as well as graphical symbols.
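The pyramid-and-concatenate scheme can be sketched as follows. The graph reduction and the Stochastic Graphlet Embedding itself are not reproduced (a plain degree histogram stands in for SGE, and the "pyramid" below is just three random graphs of decreasing size), so this only illustrates how per-level embeddings are combined into a global representation.

    import numpy as np
    import networkx as nx

    def level_descriptor(g, bins=8):
        # Stand-in for SGE: a normalised degree histogram of the level graph.
        degrees = [d for _, d in g.degree()]
        hist, _ = np.histogram(degrees, bins=bins, range=(0, bins))
        return hist / max(hist.sum(), 1)

    # Hypothetical 3-level pyramid: coarser and coarser versions of a base graph.
    pyramid = [nx.erdos_renyi_graph(60, 0.10, seed=0),
               nx.erdos_renyi_graph(30, 0.15, seed=1),
               nx.erdos_renyi_graph(15, 0.20, seed=2)]

    # Global representation: concatenation of the per-level embeddings.
    embedding = np.concatenate([level_descriptor(g) for g in pyramid])
    print(embedding.shape)                         # (24,)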
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICDAR  
  Notes DAG; 600.097; 601.302; 600.121 Approved no  
  Call Number Admin @ si @ DRL2017 Serial 3054  
Permanent link to this record