Records | |||||
---|---|---|---|---|---|
Author | Jose Carlos Rubio; Joan Serrat; Antonio Lopez; Daniel Ponsa | ||||
Title | Multiple target tracking for intelligent headlights control | Type | Journal Article | ||
Year | 2012 | Publication | IEEE Transactions on Intelligent Transportation Systems | Abbreviated Journal | TITS |
Volume | 13 | Issue | 2 | Pages | 594-605 |
Keywords | Intelligent Headlights | ||||
Abstract | Intelligent vehicle lighting systems aim at automatically regulating the headlights' beam to illuminate as much of the road ahead as possible while avoiding dazzling other drivers. A key component of such a system is computer vision software that is able to distinguish blobs due to vehicles' headlights and rear lights from those due to road lamps and reflective elements such as poles and traffic signs. In a previous work, we devised a set of specialized supervised classifiers to make such decisions based on blob features related to their intensity and shape. Despite the overall good performance, there remain challenging cases that have yet to be solved, notably faint and tiny blobs corresponding to quite distant vehicles. In fact, for such distant blobs, classification decisions can only be taken after observing them for a few frames. Hence, incorporating tracking could improve the overall lighting system performance by enforcing the temporal consistency of the classifier decisions. Accordingly, this paper focuses on the problem of constructing blob tracks, which is actually one of multiple-target tracking (MTT), but under two special conditions: we have to deal with frequent occlusions, as well as blob splits and merges. We approach it in a novel way by formulating the problem as maximum a posteriori inference on a Markov random field. The qualitative (in video form) and quantitative evaluation of our new MTT method shows good tracking results. In addition, we also see that the classification performance on the problematic blobs improves thanks to the proposed MTT algorithm. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1524-9050 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ RLP2012; ADAS @ adas @ rsl2012g | Serial | 1877 | ||
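The record above casts multiple-target tracking as maximum a posteriori (MAP) inference on a Markov random field. For intuition, MAP on a tiny chain-structured model can be found by brute-force enumeration; the sketch below uses made-up unary and pairwise costs and is not the paper's formulation, which additionally handles occlusions and blob splits and merges.

```python
import itertools

# MAP inference on a tiny chain MRF by exhaustive enumeration.
# The cost values below are invented purely for illustration.

def map_by_enumeration(unary, pairwise):
    """unary[i][l]: cost of giving node i the label l.
    pairwise[a][b]: cost of consecutive nodes taking labels a, b.
    Returns the minimum-cost labeling and its cost."""
    n_labels = len(unary[0])
    best, best_cost = None, float("inf")
    for labels in itertools.product(range(n_labels), repeat=len(unary)):
        cost = sum(unary[i][l] for i, l in enumerate(labels))
        cost += sum(pairwise[a][b] for a, b in zip(labels, labels[1:]))
        if cost < best_cost:
            best, best_cost = labels, cost
    return best, best_cost

unary = [[0.1, 1.0], [0.6, 0.5], [1.0, 0.1]]   # 3 nodes, 2 labels each
pairwise = [[0.0, 0.4], [0.4, 0.0]]            # smoothness between neighbors
labeling, cost = map_by_enumeration(unary, pairwise)
print(labeling)   # -> (0, 1, 1)
```

Real MTT instances are far too large for enumeration; this only illustrates what the MAP objective is.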
Author | Francesco Ciompi | ||||
Title | Multi-Class Learning for Vessel Characterization in Intravascular Ultrasound | Type | Book Whole | ||
Year | 2012 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In this thesis we tackle the problem of automatic characterization of human coronary vessels in the Intravascular Ultrasound (IVUS) image modality. The basis for the whole characterization process is machine learning applied to multi-class problems. In all the presented approaches, the Error-Correcting Output Codes (ECOC) framework is used as the central element for the design of multi-class classifiers.
Two main topics are tackled in this thesis. First, the automatic detection of the vessel borders is presented. For this purpose, a novel context-aware classifier for multi-class classification of the vessel morphology, namely ECOC-DRF, is presented. Based on ECOC-DRF, the lumen border and the media-adventitia border in IVUS are robustly detected by means of a novel holistic approach, achieving an error comparable with inter-observer variability and with state-of-the-art methods. The two vessel borders define the atheroma area of the vessel, where tissue characterization is required. For this purpose, we present a framework for automatic plaque characterization that processes both texture in IVUS images and spectral information in raw Radio Frequency data. Furthermore, a novel method for fusing in-vivo and in-vitro IVUS data for plaque characterization, namely pSFFS, is presented. The method is shown to effectively fuse the data, generating a classifier that improves tissue characterization on both in-vitro and in-vivo datasets. A novel method for automatic video summarization in IVUS sequences is also presented. The method aims to detect the key frames of the sequence, i.e., the frames representative of morphological changes. It provides the basis for video summarization in IVUS, as well as markers for partitioning the vessel into morphologically and clinically interesting events. Finally, multi-class learning based on ECOC is applied to lung tissue characterization in Computed Tomography. The novel proposed approach, based on supervised and unsupervised learning, achieves accurate tissue classification on a large and heterogeneous dataset. |
||||
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Petia Radeva;Oriol Pujol | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ Cio2012 | Serial | 2146 | ||
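The thesis above uses the Error-Correcting Output Codes framework as the central element of its multi-class classifiers. A minimal sketch of the ECOC decoding step (not the thesis's ECOC-DRF, and with a made-up coding matrix) is:

```python
import numpy as np

# Minimal ECOC decoding: each class is assigned a binary codeword;
# a sample is classified by running the binary classifiers and
# picking the class whose codeword is nearest in Hamming distance.
# The 4-class, 5-bit coding matrix below is invented for this sketch.

CODE = np.array([
    [+1, +1, +1, +1, +1],
    [+1, -1, -1, +1, -1],
    [-1, +1, -1, -1, +1],
    [-1, -1, +1, -1, -1],
])

def ecoc_decode(bit_predictions, code=CODE):
    """Return the class index whose codeword best matches the
    vector of binary classifier outputs (+1/-1)."""
    bits = np.asarray(bit_predictions)
    dists = np.sum(code != bits, axis=1)   # Hamming distance per class
    return int(np.argmin(dists))

# Even with one flipped bit the correct class is recovered, which
# is the error-correcting property ECOC is named for.
noisy = [+1, -1, -1, -1, -1]   # codeword of class 1 with bit 3 flipped
print(ecoc_decode(noisy))      # -> 1
```

In practice each column of the matrix corresponds to a trained binary classifier; here the bit vector is supplied directly.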
Author | Cesar Isaza; Joaquin Salas; Bogdan Raducanu | ||||
Title | Synthetic ground truth dataset to detect shadow cast by static objects in outdoor | Type | Conference Article | ||
Year | 2012 | Publication | 1st International Workshop on Visual Interfaces for Ground Truth Collection in Computer Vision Applications | Abbreviated Journal | |
Volume | Issue | Pages | art. 11 | ||
Keywords | |||||
Abstract | In this paper, we propose a precise synthetic ground truth dataset for studying the problem of detecting the shadows cast by static objects in outdoor environments over extended periods of time (days). For our dataset, we have created a virtual scenario using rendering software. To increase the realism of the simulated environment, we have placed the scenario at a precise geographical location. In our dataset the sun is by far the main illumination source. The sun position during the simulation takes into account factors related to the geographical location, such as the latitude, longitude, elevation above sea level, and the precise image capturing day and time. In our simulation the camera remains fixed. The dataset consists of seven days of simulation, from 10:00am to 5:00pm, with images captured every 10 seconds. The shadows' ground truth is automatically computed by the rendering software. | ||||
Address | Capri, Italy | ||||
Corporate Author | Thesis | ||||
Publisher | ACM | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4503-1405-3 | Medium | ||
Area | Expedition | Conference | VIGTA | ||
Notes | OR;MV | Approved | no | ||
Call Number | Admin @ si @ ISR2012a | Serial | 2037 | ||
Author | Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades | ||||
Title | Text/graphic separation using a sparse representation with multi-learned dictionaries | Type | Conference Article | ||
Year | 2012 | Publication | 21st International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Graphics Recognition; Layout Analysis; Document Understanding | ||||
Abstract | In this paper, we propose a new approach to extract text regions from graphical documents. In our method, we first empirically construct two sequences of learned dictionaries, for the text and the graphical parts respectively. Then, we compute the sparse representations of all non-overlapping document patches of different sizes in these learned dictionaries. Based on these representations, each patch can be classified into the text or graphic category by comparing its reconstruction errors. Same-sized patches in one category are then merged together to define the corresponding text or graphic layers, which are combined to create the final text/graphic layers. Finally, in a post-processing step, text regions are further filtered using learned thresholds. | ||||
Address | Tsukuba | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ DTR2012a | Serial | 2135 | ||
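The patch classification step described above, comparing reconstruction errors under a text dictionary versus a graphics dictionary, can be sketched as follows. This is a simplification: plain least-squares stands in for true sparse coding, and the two tiny dictionaries are invented for illustration.

```python
import numpy as np

# Classification by reconstruction error, as in the text/graphic
# separation abstract above, but with ordinary least-squares in
# place of sparse coding and toy 4-D "patches" (both assumptions).

def residual(patch, dictionary):
    """L2 reconstruction error of `patch` in the span of the
    dictionary columns."""
    coef, *_ = np.linalg.lstsq(dictionary, patch, rcond=None)
    return float(np.linalg.norm(dictionary @ coef - patch))

def classify_patch(patch, D_text, D_graphic):
    """Assign the patch to the dictionary that reconstructs it
    with the smaller error."""
    return "text" if residual(patch, D_text) <= residual(patch, D_graphic) else "graphic"

# Toy dictionaries: text atoms span the first two dimensions,
# graphic atoms the last two.
D_text = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])
D_graphic = np.array([[0., 0.], [0., 0.], [1., 0.], [0., 1.]])

print(classify_patch(np.array([0.9, 0.2, 0.0, 0.1]), D_text, D_graphic))  # -> text
```

In the paper the dictionaries are learned from data and the codes are sparse; the decision rule by residual comparison is the same idea.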
Author | Antonio Hernandez; Miguel Reyes; Victor Ponce; Sergio Escalera | ||||
Title | GrabCut-Based Human Segmentation in Video Sequences | Type | Journal Article | ||
Year | 2012 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 12 | Issue | 11 | Pages | 15376-15393 |
Keywords | segmentation; human pose recovery; GrabCut; GraphCut; Active Appearance Models; Conditional Random Field | ||||
Abstract | In this paper, we present a fully-automatic Spatio-Temporal GrabCut human segmentation methodology that combines tracking and segmentation. GrabCut initialization is performed by HOG-based subject detection, face detection, and a skin color model. Spatial information is included by Mean Shift clustering, whereas temporal coherence is enforced using a history of Gaussian Mixture Models. Moreover, full face and pose recovery is obtained by combining human segmentation with Active Appearance Models and Conditional Random Fields. Results on public datasets and on a new Human Limb dataset show robust segmentation and recovery of both face and pose using the presented methodology. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ HRP2012 | Serial | 2147 | ||
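The pipeline above includes spatial information via Mean Shift clustering. The mode-seeking step at its core can be sketched with a flat-kernel mean shift in NumPy; the bandwidth value and the toy 2-D points are illustrative assumptions.

```python
import numpy as np

# Flat-kernel mean-shift mode seeking: repeatedly move a point to
# the mean of its neighbors within a fixed bandwidth until it
# settles on a local density mode.

def mean_shift_mode(points, start, bandwidth=1.0, n_iter=50):
    """Converge `start` to the nearest density mode of `points`."""
    x = np.asarray(start, dtype=float)
    for _ in range(n_iter):
        mask = np.linalg.norm(points - x, axis=1) <= bandwidth
        new_x = points[mask].mean(axis=0)
        if np.allclose(new_x, x):
            break
        x = new_x
    return x

# Two well-separated 2-D clusters; a start near either one
# converges to that cluster's centroid.
pts = np.array([[0., 0.], [0.2, 0.], [0., 0.2],
                [5., 5.], [5.2, 5.], [5., 5.2]])
mode = mean_shift_mode(pts, start=[0.5, 0.5])
print(mode)   # near the first cluster's centroid
```

Full mean-shift clustering runs this procedure from every sample and groups samples whose modes coincide.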
Author | Marina Alberti; Simone Balocco; Carlo Gatta; Francesco Ciompi; Oriol Pujol; Joana Silva; Xavier Carrillo; Petia Radeva | ||||
Title | Automatic Bifurcation Detection in Coronary IVUS Sequences | Type | Journal Article | ||
Year | 2012 | Publication | IEEE Transactions on Biomedical Engineering | Abbreviated Journal | TBME |
Volume | 59 | Issue | 4 | Pages | 1022-1031 |
Keywords | |||||
Abstract | In this paper, we present a fully automatic method which identifies every bifurcation in an intravascular ultrasound (IVUS) sequence, the corresponding frames, the angular orientation with respect to the IVUS acquisition, and the extension. This goal is reached using a two-level classification scheme: first, a classifier is applied to a set of textural features extracted from each image of a sequence. A comparison among three state-of-the-art discriminative classifiers (AdaBoost, random forest, and support vector machine) is performed to identify the most suitable method for the branching detection task. Second, the results are improved by exploiting contextual information using a multiscale stacked sequential learning scheme. The results are then successively refined using a priori information about branching dimensions and geometry. The proposed approach provides a robust tool for the quick review of pullback sequences, facilitating the evaluation of the lesion at bifurcation sites. The proposed method reaches an F-measure score of 86.35%, while the F-measure scores for inter- and intra-observer variability are 71.63% and 76.18%, respectively. The obtained results are positive, especially considering that the branching detection task is very challenging due to the high variability in bifurcation dimensions and appearance. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0018-9294 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | MILAB;HuPBA | Approved | no | ||
Call Number | Admin @ si @ ABG2012 | Serial | 1996 | ||
Author | Diego Cheda; Daniel Ponsa; Antonio Lopez | ||||
Title | Monocular Depth-based Background Estimation | Type | Conference Article | ||
Year | 2012 | Publication | 7th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | Issue | Pages | 323-328 | ||
Keywords | |||||
Abstract | In this paper, we address the problem of reconstructing the background of a scene from a video sequence with occluding objects. The images are taken by hand-held cameras. Our method composes the background by selecting the appropriate pixels from previously aligned input images. To do so, we minimize a cost function that penalizes deviations from the following assumptions: the background represents the objects whose distance to the camera is maximal, and background objects are stationary. Distance information is roughly obtained by a supervised learning approach that allows us to distinguish between close and distant image regions. Moving foreground objects are filtered out using stationarity and motion-boundary-constancy measurements. The cost function is minimized by a graph cuts method. We demonstrate the applicability of our approach by recovering an occlusion-free background in a set of sequences. | ||||
Address | Roma | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ CPL2012b; ADAS @ adas @ cpl2012e | Serial | 2012 | ||
Author | Michael Holte; Bhaskar Chakraborty; Jordi Gonzalez; Thomas B. Moeslund | ||||
Title | A Local 3D Motion Descriptor for Multi-View Human Action Recognition from 4D Spatio-Temporal Interest Points | Type | Journal Article | ||
Year | 2012 | Publication | IEEE Journal of Selected Topics in Signal Processing | Abbreviated Journal | J-STSP |
Volume | 6 | Issue | 5 | Pages | 553-565 |
Keywords | |||||
Abstract | In this paper, we address the problem of human action recognition in reconstructed 3-D data acquired by multi-camera systems. We contribute to this field by introducing a novel 3-D action recognition approach based on detection of 4-D (3-D space + time) spatio-temporal interest points (STIPs) and local description of 3-D motion features. STIPs are detected in multi-view images and extended to 4-D using 3-D reconstructions of the actors and pixel-to-vertex correspondences of the multi-camera setup. Local 3-D motion descriptors, histogram of optical 3-D flow (HOF3D), are extracted from estimated 3-D optical flow in the neighborhood of each 4-D STIP and made view-invariant. The local HOF3D descriptors are divided using 3-D spatial pyramids to capture and improve the discrimination between arm- and leg-based actions. Based on these pyramids of HOF3D descriptors we build a bag-of-words (BoW) vocabulary of human actions, which is compressed and classified using agglomerative information bottleneck (AIB) and support vector machines (SVMs), respectively. Experiments on the publicly available i3DPost and IXMAS datasets show promising state-of-the-art results and validate the performance and view-invariance of the approach. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1932-4553 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ HCG2012 | Serial | 1994 | ||
Author | Jon Almazan; Albert Gordo; Alicia Fornes; Ernest Valveny | ||||
Title | Efficient Exemplar Word Spotting | Type | Conference Article | ||
Year | 2012 | Publication | 23rd British Machine Vision Conference | Abbreviated Journal | |
Volume | Issue | Pages | 67.1- 67.11 | ||
Keywords | |||||
Abstract | In this paper we propose an unsupervised segmentation-free method for word spotting in document images.
Documents are represented with a grid of HOG descriptors, and a sliding window approach is used to locate the document regions that are most similar to the query. We use the exemplar SVM framework to produce a better representation of the query in an unsupervised way. Finally, the document descriptors are precomputed and compressed with Product Quantization. This offers two advantages: first, a large number of documents can be kept in RAM at the same time; second, the sliding window becomes significantly faster, since distances between quantized HOG descriptors can be precomputed. Our results significantly outperform other segmentation-free methods in the literature, both in accuracy and in speed and memory usage. |
||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 1-901725-46-4 | Medium | ||
Area | Expedition | Conference | BMVC | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ AGF2012 | Serial | 1984 | ||
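The Product Quantization compression mentioned above is what makes the sliding window fast: squared distances from the query to every sub-codebook centroid are computed once, after which each database distance is a handful of table lookups. A toy sketch follows, with hand-made codebooks and hypothetical sizes (M = 2 subvectors, K = 2 centroids each).

```python
import numpy as np

# Product Quantization sketch: split vectors into subvectors,
# quantize each against a small codebook, and answer queries with
# a precomputed table of query-to-centroid distances (asymmetric
# distance computation).  Codebooks are tiny and hand-made here.

M, K = 2, 2
codebooks = np.array([
    [[0., 0.], [1., 1.]],    # codebook for subvector 0
    [[0., 1.], [1., 0.]],    # codebook for subvector 1
])

def encode(x):
    """Code a 4-D vector as M centroid indices."""
    subs = np.asarray(x, dtype=float).reshape(M, -1)
    return [int(np.argmin(np.linalg.norm(codebooks[m] - subs[m], axis=1)))
            for m in range(M)]

def distance_table(query):
    """Squared distance from each query subvector to every centroid;
    lookups then replace per-item distance computations."""
    subs = np.asarray(query, dtype=float).reshape(M, -1)
    return np.array([[np.sum((codebooks[m][k] - subs[m]) ** 2)
                      for k in range(K)] for m in range(M)])

def adc(table, code):
    """Asymmetric distance: sum the table entries the code selects."""
    return float(sum(table[m][code[m]] for m in range(M)))

db = [encode([0., 0., 1., 0.]), encode([1., 1., 0., 1.])]
table = distance_table([0.1, 0.0, 0.9, 0.1])
print([round(adc(table, c), 2) for c in db])   # -> [0.03, 3.43]
```

Real deployments use many more subvectors and centroids (e.g. K = 256, one byte per subvector), but the lookup structure is identical.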
Author | Jon Almazan; David Fernandez; Alicia Fornes; Josep Llados; Ernest Valveny | ||||
Title | A Coarse-to-Fine Approach for Handwritten Word Spotting in Large Scale Historical Documents Collection | Type | Conference Article | ||
Year | 2012 | Publication | 13th International Conference on Frontiers in Handwriting Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 453-458 | ||
Keywords | |||||
Abstract | In this paper we propose an approach for word spotting in handwritten document images. We state the problem from a focused retrieval perspective, i.e. locating instances of a query word in a large-scale dataset of digitized manuscripts. We combine two approaches, one based on word segmentation and the other segmentation-free. The first approach uses a hashing strategy to coarsely prune word images that are unlikely to be instances of the query word. This process is fast but has a low precision due to the errors introduced in the segmentation step. The regions containing candidate words are then passed to the second process, based on a state-of-the-art technique from the visual object detection field. This discriminative model represents the appearance of the query word and computes a similarity score. In this way we propose a coarse-to-fine approach that achieves a compromise between efficiency and accuracy. The model is validated on a collection of old handwritten manuscripts. We observe a substantial improvement in precision over the previously proposed method, with a low increase in computational cost. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4673-2262-1 | Medium | ||
Area | Expedition | Conference | ICFHR | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ AFF2012 | Serial | 1983 | ||
Author | Marçal Rusiñol; Josep Llados | ||||
Title | The Role of the Users in Handwritten Word Spotting Applications: Query Fusion and Relevance Feedback | Type | Conference Article | ||
Year | 2012 | Publication | 13th International Conference on Frontiers in Handwriting Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 55-60 | ||
Keywords | |||||
Abstract | In this paper we present the importance of including the user in the loop in a handwritten word spotting framework. Several off-the-shelf query fusion and relevance feedback strategies have been tested in the handwritten word spotting context. The increase in terms of precision when the user is included in the loop is assessed using two datasets of historical handwritten documents and a baseline word spotting approach based on a bag-of-visual-words model. | ||||
Address | Bari, Italy | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4673-2262-1 | Medium | ||
Area | Expedition | Conference | ICFHR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ RuL2012 | Serial | 2054 | ||
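One classic off-the-shelf relevance-feedback strategy of the kind evaluated above is the Rocchio update, which moves the query vector toward user-marked relevant results and away from non-relevant ones. The abstract does not name its exact strategies, so both the choice of Rocchio and the alpha/beta/gamma weights below are illustrative assumptions.

```python
import numpy as np

# Rocchio-style relevance feedback for a bag-of-visual-words query.
# alpha/beta/gamma are conventional textbook defaults, assumed here.

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Shift the query toward relevant results and away from
    non-relevant ones; clip so BoW counts stay non-negative."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0.0, None)

q0 = np.array([1.0, 0.0, 0.0])          # initial BoW query
rel = np.array([[0.8, 0.6, 0.0]])       # user-marked relevant result
non = np.array([[0.0, 0.0, 1.0]])       # user-marked non-relevant result
print(rocchio(q0, rel, non))
```

The refined query is then re-issued against the same word-spotting index, so the user's feedback loop costs only one extra retrieval pass.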
Author | Marçal Rusiñol; Dimosthenis Karatzas; Andrew Bagdanov; Josep Llados | ||||
Title | Multipage Document Retrieval by Textual and Visual Representations | Type | Conference Article | ||
Year | 2012 | Publication | 21st International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 521-524 | ||
Keywords | |||||
Abstract | In this paper we present a multipage administrative document image retrieval system based on textual and visual representations of document pages. Individual pages are represented by textual or visual information using a bag-of-words framework. Different fusion strategies are evaluated which allow the system to perform multipage document retrieval on the basis of a single page retrieval system. Results are reported on a large dataset of document images sampled from a banking workflow. | ||||
Address | Tsukuba Science City, Japan | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1051-4651 | ISBN | 978-1-4673-2216-4 | Medium | |
Area | Expedition | Conference | ICPR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ RKB2012 | Serial | 2053 | ||
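A simple fusion strategy for turning page-level scores into a document-level ranking, of the kind the abstract above evaluates, is score summation over pages. The abstract does not commit to this exact rule, so it is an illustrative choice; max-fusion or Borda counting are equally simple variants.

```python
# Late fusion for multipage retrieval: combine per-page similarity
# scores into one document score by summation, then rank documents.

def fuse_by_sum(page_scores):
    """page_scores: {doc_id: [per-page scores]} -> ranked doc ids."""
    doc_scores = {doc: sum(scores) for doc, scores in page_scores.items()}
    return sorted(doc_scores, key=doc_scores.get, reverse=True)

pages = {
    "doc_a": [0.9, 0.2, 0.1],   # one highly relevant page
    "doc_b": [0.4, 0.4, 0.3],   # uniformly mildly relevant pages
}
print(fuse_by_sum(pages))   # -> ['doc_a', 'doc_b']
```

The choice of fusion rule decides how a document with one strongly matching page compares against one with many weak matches, which is exactly the trade-off such a system must evaluate.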
Author | Ekaterina Zaytseva; Santiago Segui; Jordi Vitria | ||||
Title | Sketchable Histograms of Oriented Gradients for Object Detection | Type | Conference Article | ||
Year | 2012 | Publication | 17th Iberomerican Conference on Pattern Recognition | Abbreviated Journal | |
Volume | 7441 | Issue | Pages | 374-381 | |
Keywords | |||||
Abstract | In this paper we investigate a new representation approach for visual object recognition. The new representation, called sketchable-HoG, extends the classical histogram of oriented gradients (HoG) feature by adding two different aspects: the stability of the majority orientation and the continuity of gradient orientations. In this way, the sketchable-HoG locally characterizes the complexity of an object model and introduces global structure information while still keeping simplicity, compactness and robustness. We evaluated the proposed image descriptor on the public Caltech-101 dataset. The obtained results outperform the classical HoG descriptor as well as other descriptors reported in the literature. | ||||
Address | Buenos Aires, Argentina | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-33274-6 | Medium | |
Area | Expedition | Conference | CIARP | ||
Notes | OR; MILAB;MV | Approved | no | ||
Call Number | Admin @ si @ ZSV2012 | Serial | 2048 | ||
Author | Yainuvis Socarras; David Vazquez; Antonio Lopez; David Geronimo; Theo Gevers | ||||
Title | Improving HOG with Image Segmentation: Application to Human Detection | Type | Conference Article | ||
Year | 2012 | Publication | 11th International Conference on Advanced Concepts for Intelligent Vision Systems | Abbreviated Journal | |
Volume | 7517 | Issue | Pages | 178-189 | |
Keywords | Segmentation; Pedestrian Detection | ||||
Abstract | In this paper we improve the histogram of oriented gradients (HOG), a core descriptor of state-of-the-art object detection, by the use of higher-level information coming from image segmentation. The idea is to re-weight the descriptor while computing it, without increasing its size. The benefits of the proposal are two-fold: (i) improving the performance of the detector by enriching the descriptor information, and (ii) taking advantage of the information of image segmentation, which is in fact likely to be used in other stages of the detection system, such as candidate generation or refinement.
We test our technique on the INRIA person dataset, which was originally developed to test HOG, embedding it in a human detection system. The well-known mean-shift segmentation method (from smaller to larger super-pixels) and different methods to re-weight the original descriptor (constant, region-luminance, color- or texture-dependent) have been evaluated. We achieve performance improvements of 4.47% in detection rate through the use of differences of color between contour pixel neighborhoods as the re-weighting function. |
||||
Address | Brno, Czech Republic | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | J. Blanc-Talon et al. | |
Language | English | Summary Language | Original Title | ||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-33139-8 | Medium | |
Area | Expedition | Conference | ACIVS | ||
Notes | ADAS;ISE | Approved | no | ||
Call Number | ADAS @ adas @ SLV2012 | Serial | 1980 | ||
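The re-weighting idea above, scaling each pixel's gradient vote by segmentation-derived information without changing the descriptor size, can be sketched for a single HOG cell. The weight map below is invented for illustration and is not the paper's luminance-, color-, or texture-dependent rule.

```python
import numpy as np

# Segmentation-weighted HOG cell histogram: each pixel's gradient
# magnitude vote is multiplied by a per-pixel weight before binning,
# so the histogram (and hence descriptor) size is unchanged.

def weighted_hog_cell(mag, ang, weights, n_bins=9):
    """Orientation histogram of one cell.  `mag`, `ang` (radians in
    [0, pi)) and `weights` are arrays of identical shape."""
    bins = np.floor(ang / np.pi * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), (mag * weights).ravel())  # weighted votes
    return hist

mag = np.ones((4, 4))                      # uniform gradient magnitudes
ang = np.full((4, 4), np.pi / 2)           # all gradients in one direction
w_flat = np.ones((4, 4))                   # plain, unweighted HOG
w_seg = np.where(np.arange(16).reshape(4, 4) < 8, 2.0, 0.5)  # toy weights

print(weighted_hog_cell(mag, ang, w_flat))  # all mass in one bin
print(weighted_hog_cell(mag, ang, w_seg))   # same bin, re-weighted mass
```

A full descriptor would repeat this per cell, add block normalization, and derive the weights from the actual super-pixel segmentation.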
Author | Francisco Cruz; Oriol Ramos Terrades | ||||
Title | Document segmentation using relative location features | Type | Conference Article | ||
Year | 2012 | Publication | 21st International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1562-1565 | ||
Keywords | |||||
Abstract | In this paper we evaluate the use of Relative Location Features (RLF) on a historical document segmentation task, and compare the quality of the results obtained with and without RLF on structured and unstructured documents. We show that using these features improves the final segmentation of documents with a strong structure, while their application to unstructured documents does not show significant improvement. Although this paper is not focused on segmenting unstructured documents, the results obtained on a benchmark dataset match or even surpass previous results of similar works. | ||||
Address | Tsukuba Science City, Japan | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ CrR2012 | Serial | 2051 | ||