Records
Author | Albert Clapes; Alex Pardo; Oriol Pujol; Sergio Escalera | ||||
Title | Action detection fusing multiple Kinects and a WIMU: an application to in-home assistive technology for the elderly | Type | Journal Article | ||
Year | 2018 | Publication | Machine Vision and Applications | Abbreviated Journal | MVAP |
Volume | 29 | Issue | 5 | Pages | 765–788 |
Keywords | Multimodal activity detection; Computer vision; Inertial sensors; Dense trajectories; Dynamic time warping; Assistive technology | ||||
Abstract | We present a vision-inertial system which combines two RGB-Depth devices with a wearable inertial movement unit in order to detect activities of daily living. From multi-view videos, we extract dense trajectories enriched with a histogram-of-normals description computed from the depth cue and bag them into multi-view codebooks. During the subsequent classification step, a multi-class support vector machine with an RBF-χ2 kernel combines the descriptions at kernel level. In order to perform action detection from the videos, a sliding-window approach is utilized. On the other hand, we extract accelerations, rotation angles, and jerk features from the inertial data collected by the wearable placed on the user's dominant wrist. During gesture spotting, dynamic time warping is applied and the alignment costs to a set of pre-selected gesture sub-classes are thresholded to determine possible detections. The outputs of the two modules are combined in a late-fusion fashion. The system is validated in a real-case scenario with elderly people from a nursing home. Learning-based fusion results improve on those of the single modalities, demonstrating the success of such a multimodal approach. (A toy sketch of the DTW gesture-spotting step follows this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ CPP2018 | Serial | 3125 | ||
Permanent link to this record | |||||
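A compact way to see the gesture-spotting step above: dynamic time warping aligns each sliding window of inertial features against a set of gesture templates, and windows with low alignment cost become candidate detections. The sketch below is a minimal illustration; the 6-D feature layout, window size, stride, and threshold are assumptions for the example, not the paper's settings.

```python
import numpy as np

def dtw_cost(seq, template):
    """Classic dynamic-time-warping alignment cost between two
    multivariate sequences (rows = time steps, cols = features)."""
    n, m = len(seq), len(template)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq[i - 1] - template[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)  # length-normalized cost

def spot_gestures(stream, templates, window=64, stride=8, threshold=1.5):
    """Slide a window over the inertial stream and report windows whose
    best alignment cost to any gesture template falls below a threshold
    (threshold value is an illustrative assumption)."""
    detections = []
    for start in range(0, len(stream) - window + 1, stride):
        segment = stream[start:start + window]
        costs = {name: dtw_cost(segment, t) for name, t in templates.items()}
        name, cost = min(costs.items(), key=lambda kv: kv[1])
        if cost < threshold:
            detections.append((start, start + window, name, cost))
    return detections

# Toy usage with random 6-D inertial features (e.g., accelerations + angles).
rng = np.random.default_rng(0)
stream = rng.normal(size=(500, 6))
templates = {"drink": rng.normal(size=(60, 6)), "comb": rng.normal(size=(55, 6))}
print(spot_gestures(stream, templates)[:3])
```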
Author | Manuel Carbonell; Mauricio Villegas; Alicia Fornes; Josep Llados | ||||
Title | Joint Recognition of Handwritten Text and Named Entities with a Neural End-to-end Model | Type | Conference Article | ||
Year | 2018 | Publication | 13th IAPR International Workshop on Document Analysis Systems | Abbreviated Journal | |
Volume | Issue | Pages | 399-404 | ||
Keywords | Named entity recognition; Handwritten Text Recognition; neural networks | ||||
Abstract | When extracting information from handwritten documents, text transcription and named entity recognition are usually addressed as separate, subsequent tasks. This has the disadvantage that errors in the first module heavily affect the performance of the second. In this work we propose to do both tasks jointly, using a single neural network with a common architecture used for plain text recognition. Experimentally, the work has been tested on a collection of historical marriage records. Results of experiments are presented to show the effect on performance of different configurations: different ways of encoding the information, whether or not transfer learning is used, and processing at the text-line or multi-line region level. The results are comparable to the state of the art reported in the ICDAR 2017 Information Extraction competition, even though the proposed technique does not use any dictionaries, language modeling, or post-processing. (A sketch of one possible joint target encoding follows this record.) | ||||
Address | Vienna; Austria; April 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.097; 603.057; 601.311; 600.121 | Approved | no | ||
Call Number | Admin @ si @ CVF2018 | Serial | 3170 | ||
Permanent link to this record | |||||
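The "ways of encoding the information" mentioned in the abstract can be illustrated with one plausible scheme: interleaving entity tags with the character sequence, so a single transcription network emits both text and entities. This is a hypothetical encoding for illustration; the tag names and tagging scheme are assumptions, not the paper's exact format.

```python
def encode_target(words, entities):
    """Interleave named-entity tags with the transcription so that one
    CTC/seq2seq network can emit both text and entity labels.
    `entities` maps a word index to an entity tag (hypothetical scheme)."""
    tokens = []
    for i, word in enumerate(words):
        if i in entities:                 # open the entity span
            tokens.append(f"<{entities[i]}>")
        tokens.extend(list(word))         # character-level targets
        tokens.append("<sp>")             # explicit space symbol
    return tokens[:-1]                    # drop the trailing space

# Example line from a marriage record, with a hypothetical tag set.
words = ["Antoni", "Puig", "pages", "de", "Barcelona"]
entities = {0: "name", 1: "surname", 4: "location"}
print(encode_target(words, entities))
```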
Author | Arnau Baro; Pau Riba; Alicia Fornes | ||||
Title | A Starting Point for Handwritten Music Recognition | Type | Conference Article | ||
Year | 2018 | Publication | 1st International Workshop on Reading Music Systems | Abbreviated Journal | |
Volume | Issue | Pages | 5-6 | ||
Keywords | Optical Music Recognition; Long Short-Term Memory; Convolutional Neural Networks; MUSCIMA++; CVCMUSCIMA | ||||
Abstract | In the last years, the interest in Optical Music Recognition (OMR) has reawakened, especially since the appearance of deep learning. However, there are very few works addressing handwritten scores. In this work we describe a full OMR pipeline for handwritten music scores by using Convolutional and Recurrent Neural Networks that could serve as a baseline for the research community. | ||||
Address | Paris; France; September 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | WORMS | ||
Notes | DAG; 600.097; 601.302; 601.330; 600.121 | Approved | no | ||
Call Number | Admin @ si @ BRF2018 | Serial | 3223 | ||
Permanent link to this record | |||||
Author | Arnau Baro; Pau Riba; Jorge Calvo-Zaragoza; Alicia Fornes | ||||
Title | Optical Music Recognition by Long Short-Term Memory Networks | Type | Book Chapter | ||
Year | 2018 | Publication | Graphics Recognition. Current Trends and Evolutions | Abbreviated Journal | |
Volume | 11009 | Issue | Pages | 81-95 | |
Keywords | Optical Music Recognition; Recurrent Neural Network; Long Short-Term Memory | ||||
Abstract | Optical Music Recognition refers to the task of transcribing the image of a music score into a machine-readable format. Many music scores are written on a single staff and can therefore be treated as a sequence. This work thus explores the use of Long Short-Term Memory (LSTM) Recurrent Neural Networks for reading the music score sequentially, where the LSTM helps in keeping the context. For training, we have used a synthetic dataset of more than 40,000 images, labeled at the primitive level. The experimental results are promising, showing the benefits of our approach. (A sketch of such a CNN+LSTM reader follows this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer | Place of Publication | Editor | A. Fornes, B. Lamiroy | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-030-02283-9 | Medium | ||
Area | Expedition | Conference | GREC | ||
Notes | DAG; 600.097; 601.302; 601.330; 600.121 | Approved | no | ||
Call Number | Admin @ si @ BRC2018 | Serial | 3227 | ||
Permanent link to this record | |||||
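A minimal sketch of the kind of model the chapter describes, a CNN that collapses the staff image into a column sequence read by a bidirectional LSTM, is given below in PyTorch. The layer sizes, input height, and primitive vocabulary size are illustrative assumptions, not the chapter's configuration.

```python
import torch
import torch.nn as nn

class StaffReader(nn.Module):
    """Sketch: a small CNN collapses the staff image vertically, a
    bidirectional LSTM reads the resulting column sequence, and a
    linear layer scores music primitives per column (CTC-style)."""
    def __init__(self, n_primitives, height=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat = 64 * (height // 4)
        self.lstm = nn.LSTM(feat, 128, bidirectional=True, batch_first=True)
        self.head = nn.Linear(256, n_primitives + 1)  # +1 for the CTC blank

    def forward(self, x):                     # x: (B, 1, H, W)
        f = self.conv(x)                      # (B, 64, H/4, W/4)
        f = f.permute(0, 3, 1, 2).flatten(2)  # (B, W/4, 64*H/4)
        out, _ = self.lstm(f)
        return self.head(out)                 # per-column primitive logits

logits = StaffReader(n_primitives=50)(torch.randn(2, 1, 64, 256))
print(logits.shape)  # (2, 64, 51): one primitive distribution per column
```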
Author | Abel Gonzalez-Garcia; Davide Modolo; Vittorio Ferrari | ||||
Title | Objects as context for detecting their semantic parts | Type | Conference Article | ||
Year | 2018 | Publication | 31st IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 6907 - 6916 | ||
Keywords | Proposals; Semantics; Wheels; Automobiles; Context modeling; Task analysis; Object detection | ||||
Abstract | We present a semantic part detection approach that effectively leverages object information. We use the object appearance and its class as indicators of what parts to expect. We also model the expected relative location of parts inside the objects based on their appearance. We achieve this with a new network module, called OffsetNet, that efficiently predicts a variable number of part locations within a given object. Our model incorporates all these cues to detect parts in the context of their objects. This leads to considerably higher performance for the challenging task of part detection compared to using part appearance alone (+5 mAP on the PASCAL-Part dataset). We also compare to other part detection methods on both the PASCAL-Part and CUB-200-2011 datasets. (A toy offset-regression module is sketched after this record.) | ||||
Address | Salt Lake City; USA; June 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | LAMP; 600.109; 600.120 | Approved | no | ||
Call Number | Admin @ si @ GMF2018 | Serial | 3229 | ||
Permanent link to this record | |||||
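The role of OffsetNet can be illustrated with a toy module that regresses relative part locations from pooled object features and the object class. The sketch below is an assumption-laden stand-in (layer sizes, feature dimension, and the fixed-part output are all illustrative), not the paper's architecture, which predicts a variable number of part locations.

```python
import torch
import torch.nn as nn

class OffsetNetSketch(nn.Module):
    """Illustrative: given pooled object features and a one-hot object
    class, regress relative (dx, dy, log-scale) offsets for each part."""
    def __init__(self, feat_dim, n_classes, n_parts):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, n_parts * 3),   # (dx, dy, log-scale) per part
        )
        self.n_parts = n_parts

    def forward(self, obj_feat, obj_class):
        x = torch.cat([obj_feat, obj_class], dim=1)
        return self.mlp(x).view(-1, self.n_parts, 3)

# Toy usage: 4 objects, 512-D pooled features, 20 classes, 8 parts each.
offsets = OffsetNetSketch(512, 20, 8)(
    torch.randn(4, 512), torch.eye(20)[torch.tensor([3, 6, 6, 11])])
print(offsets.shape)  # (4, 8, 3)
```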
Author | Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Matthieu Molinier; Jorma Laaksonen | ||||
Title | Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification | Type | Journal Article | ||
Year | 2018 | Publication | ISPRS Journal of Photogrammetry and Remote Sensing | Abbreviated Journal | ISPRS J |
Volume | 138 | Issue | Pages | 74-85 | |
Keywords | Remote sensing; Deep learning; Scene classification; Local Binary Patterns; Texture analysis | ||||
Abstract | Designing discriminative, powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP-based texture information, provide complementary information to standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories, and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to a standard RGB deep model of the same network architecture. Our late-fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state of the art for remote sensing scene classification. (A sketch of LBP-coded input images follows this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.109; 600.106; 600.120 | Approved | no | ||
Call Number | Admin @ si @ RKW2018 | Serial | 3158 | ||
Permanent link to this record | |||||
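The mapped coded images that TEX-Nets consume can be approximated with a plain local-binary-pattern coding of the luminance channel, as sketched below with scikit-image. This is only an approximation for illustration; the paper's exact LBP mapping and the early/late fusion networks are not reproduced here.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_coded_image(rgb, points=8, radius=1):
    """Map an RGB patch to a texture-coded image that could feed a CNN
    stream alongside (or instead of) the raw RGB channels."""
    gray = rgb.astype(float) @ np.array([0.299, 0.587, 0.114])
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    # Normalize codes to [0, 1] so they behave like an image channel.
    return codes / codes.max()

rgb = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
tex = lbp_coded_image(rgb)
print(tex.shape, tex.min(), tex.max())   # (224, 224) 0.0 1.0
```

In a late-fusion setup of the kind the abstract describes, such a texture channel would feed a second network stream whose predictions are combined with those of the RGB stream.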
Author | Xavier Soria; Angel Sappa; Riad I. Hammoud | ||||
Title | Wide-Band Color Imagery Restoration for RGB-NIR Single Sensor Images | Type | Journal Article | ||
Year | 2018 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 18 | Issue | 7 | Pages | 2059 |
Keywords | RGB-NIR sensor; multispectral imaging; deep learning; CNNs | ||||
Abstract | Multi-spectral RGB-NIR sensors have become ubiquitous in recent years. These sensors allow the visible and near-infrared spectral bands of a given scene to be captured at the same time. With such cameras, the acquired imagery has a compromised RGB color representation due to the near-infrared bands (700–1100 nm) cross-talking with the visible bands (400–700 nm). This paper proposes two deep learning-based architectures to recover the full RGB color images, thus removing the NIR information from the visible bands. The proposed approaches directly restore the high-resolution RGB image by means of convolutional neural networks. They are evaluated with several outdoor images; both architectures reach a similar performance when evaluated in different scenarios and using different similarity metrics. Both of them improve on state-of-the-art approaches. (A minimal residual restoration network is sketched after this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; MSIAU; 600.086; 600.130; 600.122; 600.118 | Approved | no | ||
Call Number | Admin @ si @ SSH2018 | Serial | 3145 | ||
Permanent link to this record | |||||
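A minimal residual CNN in the spirit of the restoration networks described above (predicting a correction that removes NIR contamination from the visible bands) might look like the sketch below in PyTorch; the depth, width, and residual formulation are assumptions, not the paper's two architectures.

```python
import torch
import torch.nn as nn

class RestorationCNN(nn.Module):
    """Sketch: fully convolutional net that predicts a residual
    correction to remove NIR contamination from the RGB channels."""
    def __init__(self, width=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(3, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, 3, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)   # input plus learned correction

restored = RestorationCNN()(torch.rand(1, 3, 128, 128))
print(restored.shape)  # (1, 3, 128, 128)
```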
Author | Lluis Gomez; Marçal Rusiñol; Dimosthenis Karatzas | ||||
Title | Cutting Sayre's Knot: Reading Scene Text without Segmentation. Application to Utility Meters | Type | Conference Article | ||
Year | 2018 | Publication | 13th IAPR International Workshop on Document Analysis Systems | Abbreviated Journal | |
Volume | Issue | Pages | 97-102 | ||
Keywords | Robust Reading; End-to-end Systems; CNN; Utility Meters | ||||
Abstract | In this paper we present a segmentation-free system for reading text in natural scenes. A CNN architecture is trained in an end-to-end manner, and is able to directly output readings without any explicit text localization step. In order to validate our proposal, we focus on the specific case of reading utility meters. We present our results on a large dataset of images acquired by different users and devices, so text appears in any location, with different sizes, fonts and lengths, and the images present several distortions such as dirt, illumination highlights or blur. (A sketch of a per-digit readout head follows this record.) | ||||
Address | Vienna; Austria; April 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.084; 600.121; 600.129 | Approved | no | ||
Call Number | Admin @ si @ GRK2018 | Serial | 3102 | ||
Permanent link to this record | |||||
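One way to realize a segmentation-free readout, sketched below, is a CNN with one classifier head per digit position, so the full reading comes out of a single forward pass with no localization step. The digit count, symbol set, and backbone here are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class MeterReader(nn.Module):
    """Sketch: one classifier head per digit position; the network maps
    the whole image to a reading without explicit text localization."""
    def __init__(self, n_digits=5, n_symbols=11):  # 0-9 plus a 'blank'
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(64, n_symbols) for _ in range(n_digits)])

    def forward(self, x):
        f = self.backbone(x)
        return torch.stack([h(f) for h in self.heads], dim=1)

logits = MeterReader()(torch.rand(2, 3, 128, 256))
reading = logits.argmax(dim=2)   # (2, 5): predicted symbol per position
print(reading)
```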
Author | Sangheeta Roy; Palaiahnakote Shivakumara; Namita Jain; Vijeta Khare; Anjan Dutta; Umapada Pal; Tong Lu | ||||
Title | Rough-Fuzzy based Scene Categorization for Text Detection and Recognition in Video | Type | Journal Article | ||
Year | 2018 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 80 | Issue | Pages | 64-82 | |
Keywords | Rough set; Fuzzy set; Video categorization; Scene image classification; Video text detection; Video text recognition | ||||
Abstract | Scene image or video understanding is a challenging task, especially when the number of video types increases drastically with high variations in background and foreground. This paper proposes a new method for categorizing scene videos into different classes, namely Animation, Outlet, Sports, e-Learning, Medical, Weather, Defense, Economics, Animal Planet and Technology, to improve the performance of text detection and recognition, which is an effective approach to scene image or video understanding. For this purpose, we first present a new combination of rough and fuzzy concepts to study the irregular shapes of edge components in input scene videos, which helps to classify edge components into several groups. Next, the proposed method explores gradient direction information of each pixel in each edge component group to extract stroke-based features by dividing each group into several intra and inter planes. We further extract correlation and covariance features to encode semantic features located inside planes or between planes. Features of intra and inter planes of groups are then concatenated to get a feature matrix. Finally, the feature matrix is verified with temporal frames and fed to a neural network for categorization. Experimental results show that the proposed method outperforms the existing state-of-the-art methods; at the same time, the performance of text detection and recognition methods is also improved significantly thanks to categorization. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ RSJ2018 | Serial | 3096 | ||
Permanent link to this record | |||||
Author | Mark Philip Philipsen; Jacob Velling Dueholm; Anders Jorgensen; Sergio Escalera; Thomas B. Moeslund | ||||
Title | Organ Segmentation in Poultry Viscera Using RGB-D | Type | Journal Article | ||
Year | 2018 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 18 | Issue | 1 | Pages | 117 |
Keywords | semantic segmentation; RGB-D; random forest; conditional random field; 2D; 3D; CNN | ||||
Abstract | We present a pattern recognition framework for semantic segmentation of visual structures, that is, multi-class labelling at pixel level, and apply it to the task of segmenting organs in the eviscerated viscera from slaughtered poultry in RGB-D images. This is a step towards replacing the current strenuous manual inspection at poultry processing plants. Features are extracted from feature maps such as activation maps from a convolutional neural network (CNN). A random forest classifier assigns class probabilities, which are further refined by utilizing context in a conditional random field. The presented method is compatible with both 2D and 3D features, which allows us to explore the value of adding 3D and CNN-derived features. The dataset consists of 604 RGB-D images showing 151 unique sets of eviscerated viscera from four different perspectives. A mean Jaccard index of 78.11% is achieved across the four classes of organs by using features derived from 2D, 3D and a CNN, compared to 74.28% using only basic 2D image features. (A sketch of the per-pixel random-forest stage follows this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ PVJ2018 | Serial | 3072 | ||
Permanent link to this record | |||||
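The random-forest stage of the framework (per-pixel class probabilities from stacked feature maps) can be sketched with scikit-learn as below. The toy color-plus-depth features stand in for the paper's 2D, 3D, and CNN-derived features, and the conditional-random-field refinement is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pixel_features(rgb, depth):
    """Toy per-pixel feature stack: color channels plus depth. In the
    paper, CNN activation maps and 3D features enter the same way."""
    return np.concatenate([rgb.reshape(-1, 3), depth.reshape(-1, 1)], axis=1)

rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))
depth = rng.random((64, 64))
labels = rng.integers(0, 4, size=(64, 64))        # 4 organ classes (toy)

X = pixel_features(rgb, depth)
clf = RandomForestClassifier(n_estimators=50).fit(X, labels.ravel())
proba = clf.predict_proba(X).reshape(64, 64, 4)   # per-pixel class probabilities
print(proba.shape)  # (64, 64, 4): these would then feed the CRF
```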
Author | Maedeh Aghaei; Mariella Dimiccoli; C. Canton-Ferrer; Petia Radeva | ||||
Title | Towards social pattern characterization from egocentric photo-streams | Type | Journal Article | ||
Year | 2018 | Publication | Computer Vision and Image Understanding | Abbreviated Journal | CVIU |
Volume | 171 | Issue | Pages | 104-117 | |
Keywords | Social pattern characterization; Social signal extraction; Lifelogging; Convolutional and recurrent neural networks | ||||
Abstract | Following the increasingly popular trend of social interaction analysis in egocentric vision, this article presents a comprehensive pipeline for automatic social pattern characterization of a wearable photo-camera user. The proposed framework relies solely on the visual analysis of egocentric photo-streams and consists of three major steps. The first step is to detect social interactions of the user, where the impact of several social signals on the task is explored. The detected social events are inspected in the second step for categorization into different social meetings. These two steps act at event level, where each potential social event is modeled as a multi-dimensional time series whose dimensions correspond to a set of relevant features for each task; finally, an LSTM is employed to classify the time series. The last step of the framework is to characterize the social patterns of the user. Our goal is to quantify the duration, the diversity and the frequency of the user's social relations in various social situations. This goal is achieved by the discovery of recurrences of the same people across the whole set of social events related to the user. Experimental evaluation over EgoSocialStyle (the dataset proposed in this work) and EGO-GROUP demonstrates promising results on the task of social pattern characterization from egocentric photo-streams. (A sketch of the LSTM event classifier follows this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ ADC2018 | Serial | 3022 | ||
Permanent link to this record | |||||
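The event-level classification step can be sketched as an LSTM reading a multi-dimensional time series of per-frame social signals and classifying the event from its final hidden state. The feature dimension, hidden size, and class count below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SocialEventLSTM(nn.Module):
    """Sketch: classify a social event, modeled as a time series of
    per-frame social signals, from the last LSTM hidden state."""
    def __init__(self, n_features=8, n_classes=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):          # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])    # logits over event categories

# Toy batch: 4 events, 30 frames each, 8 social-signal features.
logits = SocialEventLSTM()(torch.randn(4, 30, 8))
print(logits.shape)  # (4, 2)
```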
Author | Anjan Dutta; Hichem Sahbi | ||||
Title | Stochastic Graphlet Embedding | Type | Journal Article | ||
Year | 2018 | Publication | IEEE Transactions on Neural Networks and Learning Systems | Abbreviated Journal | TNNLS |
Volume | Issue | Pages | 1-14 | ||
Keywords | Stochastic graphlets; Graph embedding; Graph classification; Graph hashing; Betweenness centrality | ||||
Abstract | Graph-based methods are known to be successful in many machine learning and pattern classification tasks. These methods consider semi-structured data as graphs where nodes correspond to primitives (parts, interest points, segments, etc.) and edges characterize the relationships between these primitives. However, these non-vectorial graph data cannot be straightforwardly plugged into off-the-shelf machine learning algorithms without a preliminary step of explicit or implicit graph vectorization and embedding. This embedding process should be resilient to intra-class graph variations while being highly discriminant. In this paper, we propose a novel high-order stochastic graphlet embedding (SGE) that maps graphs into vector spaces. Our main contribution includes a new stochastic search procedure that efficiently parses a given graph and extracts/samples graphlets of arbitrarily high order. We consider these graphlets, with increasing orders, to model local primitives as well as their increasingly complex interactions. In order to build our graph representation, we measure the distribution of these graphlets in a given graph, using particular hash functions that efficiently assign sampled graphlets to isomorphic sets with a very low probability of collision. When combined with maximum-margin classifiers, these graphlet-based representations have a positive impact on the performance of pattern comparison and recognition, as corroborated through extensive experiments using standard benchmark databases. (A toy graphlet-sampling sketch follows this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 602.167; 602.168; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ DuS2018 | Serial | 3225 | ||
Permanent link to this record | |||||
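The core loop of such an embedding (sample connected graphlets stochastically, hash each into a bucket of near-isomorphic graphlets, count buckets) can be sketched with networkx. The sorted degree-sequence hash below is a deliberately weak stand-in for the paper's hash functions, and the sampler is only in the spirit of its stochastic parsing, not the exact procedure.

```python
import random
from collections import Counter
import networkx as nx

def sample_graphlet(g, order):
    """Grow a connected subgraph by random frontier expansion."""
    nodes = {random.choice(list(g.nodes))}
    while len(nodes) < order:
        frontier = {v for u in nodes for v in g.neighbors(u)} - nodes
        if not frontier:
            break
        nodes.add(random.choice(sorted(frontier)))
    return g.subgraph(nodes)

def graphlet_embedding(g, order=4, samples=500):
    """Histogram over hashed graphlets; the sorted degree sequence is a
    cheap hash with collisions (the paper uses stronger functions)."""
    counts = Counter()
    for _ in range(samples):
        sub = sample_graphlet(g, order)
        key = tuple(sorted(d for _, d in sub.degree()))
        counts[key] += 1
    return counts

random.seed(0)
g = nx.karate_club_graph()
print(graphlet_embedding(g).most_common(3))
```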
Author | Xialei Liu; Joost Van de Weijer; Andrew Bagdanov | ||||
Title | Leveraging Unlabeled Data for Crowd Counting by Learning to Rank | Type | Conference Article | ||
Year | 2018 | Publication | 31st IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 7661 - 7669 | ||
Keywords | Task analysis; Training; Computer vision; Visualization; Estimation; Head; Context modeling | ||||
Abstract | We propose a novel crowd counting approach that leverages abundantly available unlabeled crowd imagery in a learning-to-rank framework. To induce a ranking of cropped images, we use the observation that any sub-image of a crowded scene image is guaranteed to contain the same number of or fewer persons than the super-image. This allows us to address the problem of the limited size of existing datasets for crowd counting. We collect two crowd scene datasets from Google using keyword searches and query-by-example image retrieval, respectively. We demonstrate how to efficiently learn from these unlabeled datasets by incorporating learning-to-rank in a multi-task network which simultaneously ranks images and estimates crowd density maps. Experiments on two of the most challenging crowd counting datasets show that our approach obtains state-of-the-art results. (A sketch of the ranking loss follows this record.) | ||||
Address | Salt Lake City; USA; June 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | LAMP; 600.109; 600.106; 600.120 | Approved | no | ||
Call Number | Admin @ si @ LWB2018 | Serial | 3159 | ||
Permanent link to this record | |||||
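The self-supervised constraint above (a crop contains at most as many people as the image it was cropped from) translates directly into a hinge ranking loss on summed predicted density maps. A minimal sketch, assuming a batch of predicted density maps for paired crops; the margin and shapes are illustrative:

```python
import torch

def crowd_ranking_loss(density_sub, density_super, margin=0.0):
    """Hinge loss enforcing count(sub-image) <= count(super-image),
    where a count is the sum over a predicted density map."""
    count_sub = density_sub.flatten(1).sum(dim=1)
    count_super = density_super.flatten(1).sum(dim=1)
    return torch.clamp(count_sub - count_super + margin, min=0).mean()

# Toy predicted density maps for a batch of (sub, super) crop pairs.
sub = torch.rand(8, 1, 56, 56)
sup = torch.rand(8, 1, 112, 112)
print(crowd_ranking_loss(sub, sup))
```

In the multi-task network the abstract describes, this ranking term on unlabeled crops is optimized jointly with a supervised density-estimation loss on the labeled data.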
Author | Oscar Argudo; Marc Comino; Antonio Chica; Carlos Andujar; Felipe Lumbreras | ||||
Title | Segmentation of aerial images for plausible detail synthesis | Type | Journal Article | ||
Year | 2018 | Publication | Computers & Graphics | Abbreviated Journal | CG |
Volume | 71 | Issue | Pages | 23-34 | |
Keywords | Terrain editing; Detail synthesis; Vegetation synthesis; Terrain rendering; Image segmentation | ||||
Abstract | The visual enrichment of digital terrain models with plausible synthetic detail requires the segmentation of aerial images into a suitable collection of categories. In this paper we present a complete pipeline for segmenting high-resolution aerial images into a user-defined set of categories distinguishing e.g. terrain, sand, snow, water, and different types of vegetation. This segmentation-for-synthesis problem implies that per-pixel categories must be established according to the algorithms chosen for rendering the synthetic detail. This precludes the definition of a universal set of labels and hinders the construction of large training sets. Since artists might choose to add new categories on the fly, the whole pipeline must be robust against unbalanced datasets, and fast on both training and inference. Under these constraints, we analyze the contribution of common per-pixel descriptors, and compare the performance of state-of-the-art supervised learning algorithms. We report the findings of two user studies. The first one was conducted to analyze human accuracy when manually labeling aerial images. The second user study compares detailed terrains built using different segmentation strategies, including official land cover maps. These studies demonstrate that our approach can be used to turn digital elevation models into fully-featured, detailed terrains with minimal authoring effort. (A sketch of a class-weighted per-pixel baseline follows this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0097-8493 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | MSIAU; 600.086; 600.118 | Approved | no | ||
Call Number | Admin @ si @ ACC2018 | Serial | 3147 | ||
Permanent link to this record | |||||
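Given the stated constraint of unbalanced, user-extended label sets, a class-weighted classifier over per-pixel descriptors is a natural baseline. The sketch below is an illustration with synthetic features and a hypothetical rare category, not the paper's chosen learner or descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy per-pixel descriptors (e.g., color + texture) with unbalanced labels:
# category 3 ('snow', say) is rare, as when an artist adds it on the fly.
rng = np.random.default_rng(1)
X = rng.random((5000, 6))
y = rng.choice(4, size=5000, p=[0.5, 0.3, 0.18, 0.02])

# Balanced class weights keep the rare category from being ignored.
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced")
clf.fit(X, y)
print(dict(zip(*np.unique(clf.predict(X), return_counts=True))))
```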
Author | David Aldavert; Marçal Rusiñol | ||||
Title | Manuscript text line detection and segmentation using second-order derivatives analysis | Type | Conference Article | ||
Year | 2018 | Publication | 13th IAPR International Workshop on Document Analysis Systems | Abbreviated Journal | |
Volume | Issue | Pages | 293 - 298 | ||
Keywords | text line detection; text line segmentation; text region detection; second-order derivatives | ||||
Abstract | In this paper, we explore the use of second-order derivatives to detect text lines in handwritten document images. Taking advantage of the fact that the second derivative gives a minimum response when a dark linear element over a bright background has the same orientation as the filter, we use this operator to create a map with the local orientation and strength of putative text lines in the document. Then, we detect line segments by selecting and merging the filter responses that have a similar orientation and scale. Finally, text lines are found by merging the segments that lie within the same text region. The proposed segmentation algorithm is learning-free while showing performance similar to state-of-the-art methods on publicly available datasets. (A sketch of the Hessian-based filter follows this record.) | ||||
Address | Vienna; Austria; April 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.084; 600.129; 302.065; 600.121 | Approved | no | ||
Call Number | Admin @ si @ AlR2018a | Serial | 3104 | ||
Permanent link to this record |
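The second-order response described above can be reproduced with Gaussian second-derivative filters via per-pixel Hessian eigen-analysis. A sketch with SciPy; the smoothing scale and the strength/orientation readout are assumptions for illustration, not the paper's exact filter bank.

```python
import numpy as np
from scipy import ndimage

def line_strength_orientation(img, sigma=3.0):
    """Gaussian second derivatives give the Hessian per pixel. For a
    dark linear element on a bright background, the principal curvature
    across the line dominates; its value acts as line strength and the
    associated angle as a local line-orientation estimate."""
    ixx = ndimage.gaussian_filter(img, sigma, order=(0, 2))  # d2/dx2
    iyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))  # d2/dy2
    ixy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    half_diff = np.sqrt(((ixx - iyy) / 2.0) ** 2 + ixy ** 2)
    strength = (ixx + iyy) / 2.0 + half_diff     # dominant eigenvalue
    orientation = 0.5 * np.arctan2(2.0 * ixy, ixx - iyy)
    return strength, orientation

# Toy "page": bright background with one dark horizontal stroke.
page = np.ones((128, 128))
page[60:64, 20:100] = 0.0
s, o = line_strength_orientation(page)
print(s.max(), o.shape)   # strong response concentrated along the stroke
```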