Records | |||||
---|---|---|---|---|---|
Author | R. Valenti; N. Sebe; Theo Gevers | ||||
Title | What are you looking at? Improving Visual Gaze Estimation by Saliency | Type | Journal Article | |
Year | 2012 | Publication | International Journal of Computer Vision | Abbreviated Journal | IJCV |
Volume | 98 | Issue | 3 | Pages | 324-334 |
Keywords | |||||
Abstract | Impact factor 2010: 5.15
Impact factor 2011/12?: 5.36 In this paper we present a novel mechanism to obtain enhanced gaze estimation for subjects looking at a scene or an image. The system makes use of prior knowledge about the scene (e.g. an image on a computer screen) to define a probability map of the scene the subject is gazing at, in order to find the most probable location. The proposed system helps correct fixations that are erroneously estimated by the gaze estimation device, employing a saliency framework to adjust the resulting gaze point vector. The system is tested on three scenarios: using eye tracking data, enhancing a low-accuracy webcam-based eye tracker, and using a head pose tracker. The correlation between the subjects in the commercial eye tracking data is improved by an average of 13.91%. The correlation on the low-accuracy eye gaze tracker is improved by 59.85%, and for the head pose tracker we obtain an improvement of 10.23%. These results show the potential of the system as a way to enhance and self-calibrate different visual gaze estimation systems. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0920-5691 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ VSG2012 | Serial | 1848 | ||
Permanent link to this record | |||||
Author | R. Valenti; Theo Gevers | ||||
Title | Accurate Eye Center Location through Invariant Isocentric Patterns | Type | Journal Article | ||
Year | 2012 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 34 | Issue | 9 | Pages | 1785-1798 |
Keywords | |||||
Abstract | Impact factor 2010: 5.308
Impact factor 2011/12?: 5.96 Locating the center of the eyes allows for valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and impossible to use on standard (i.e., visible wavelength), low-resolution images of eyes. Systems based solely on appearance have been proposed in the literature, but their accuracy does not allow us to accurately locate and distinguish eye center movements in these low-resolution settings. Our aim is to bridge this gap by locating the center of the eye within the area of the pupil on low-resolution images taken from a webcam or a similar device. The proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve in-plane rotational invariance, and to keep computational costs low. To further gain scale invariance, the approach is applied to a scale space pyramid. In this paper, we extensively test our approach for its robustness to changes in illumination, head pose, scale, occlusion, and eye rotation. We demonstrate that our system can achieve a significant improvement in accuracy over state-of-the-art techniques for eye center location in standard low-resolution imagery. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ VaG 2012a | Serial | 1849 | ||
Permanent link to this record | |||||
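The isophote-based eye locator in the record above lends itself to a compact sketch. The snippet below is only an illustrative NumPy rendering of the core displacement-and-voting step (the function name and the synthetic test are ours; the published method additionally weights votes by curvedness and runs on a scale-space pyramid):

```python
import numpy as np

def isocenter_votes(gray):
    """Accumulate isophote-centre votes (simplified sketch: every pixel
    votes for the estimated centre of the isophote passing through it)."""
    gy, gx = np.gradient(gray.astype(float))
    gxx = np.gradient(gx, axis=1)
    gxy = np.gradient(gx, axis=0)
    gyy = np.gradient(gy, axis=0)
    # denominator of the isophote-curvature expression
    denom = gy**2 * gxx - 2 * gx * gy * gxy + gx**2 * gyy
    denom[denom == 0] = np.finfo(float).eps
    # displacement from each pixel to its isophote centre
    dx = -gx * (gx**2 + gy**2) / denom
    dy = -gy * (gx**2 + gy**2) / denom
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx = np.rint(xs + dx).astype(int)
    cy = np.rint(ys + dy).astype(int)
    acc = np.zeros_like(gray, dtype=float)
    ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
    np.add.at(acc, (cy[ok], cx[ok]), 1.0)  # unbuffered vote accumulation
    return acc
```

On a radially symmetric intensity pattern every pixel votes for the common centre, so the accumulator's argmax (after smoothing, in practice) approximates the eye centre.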
Author | Arjan Gijsenij; Theo Gevers; Joost Van de Weijer | ||||
Title | Improving Color Constancy by Photometric Edge Weighting | Type | Journal Article | ||
Year | 2012 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 34 | Issue | 5 | Pages | 918-929 |
Keywords | |||||
Abstract | Edge-based color constancy methods make use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as material, shadow and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation. Therefore, in this paper, an extensive analysis is provided of different edge types on the performance of edge-based color constancy methods. First, an edge-based taxonomy is presented, classifying edge types based on their photometric properties (e.g. material, shadow-geometry and highlights). Then, a performance evaluation of edge-based color constancy is provided using these different edge types. From this performance evaluation it is derived that specular and shadow edge types are more valuable than material edges for the estimation of the illuminant. To this end, the (iterative) weighted Grey-Edge algorithm is proposed, in which these edge types are more emphasized for the estimation of the illuminant. Images that are recorded under controlled circumstances demonstrate that the proposed iterative weighted Grey-Edge algorithm based on highlights reduces the median angular error by approximately 25%. In an uncontrolled environment, improvements in angular error up to 11% are obtained with respect to regular edge-based color constancy. | ||||
Address | Los Alamitos, CA, USA | ||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | CIC;ISE | Approved | no | ||
Call Number | Admin @ si @ GGW2012 | Serial | 1850 | ||
Permanent link to this record | |||||
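For context on the record above: the unweighted Grey-Edge estimator that the proposed weighted variant builds on can be sketched in a few lines. This is a simplified sketch under our own choices of derivative filter and Minkowski norm `p`; the paper's actual contribution, the photometric re-weighting of specular and shadow edges, is deliberately omitted:

```python
import numpy as np

def grey_edge_illuminant(img, p=6):
    """Estimate the illuminant colour from image derivatives
    (standard Grey-Edge baseline, no edge-type weighting).
    img: float array of shape (H, W, 3)."""
    est = np.zeros(3)
    for c in range(3):
        # horizontal and vertical first derivatives of one channel
        gx = np.diff(img[:, :, c], axis=1, prepend=img[:, :1, c])
        gy = np.diff(img[:, :, c], axis=0, prepend=img[:1, :, c])
        mag = np.sqrt(gx**2 + gy**2)
        # Minkowski p-norm over the derivative magnitudes
        est[c] = (mag**p).mean() ** (1.0 / p)
    return est / np.linalg.norm(est)  # unit-length illuminant estimate
```

Dividing each channel by the corresponding component of the estimate (a diagonal correction) then yields the colour-corrected image.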
Author | R. Valenti; Theo Gevers | ||||
Title | Combining Head Pose and Eye Location Information for Gaze Estimation | Type | Journal Article | ||
Year | 2012 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 21 | Issue | 2 | Pages | 802-815 |
Keywords | |||||
Abstract | Impact factor 2010: 2.92
Impact factor 2011/12?: 3.32 Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, the experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1057-7149 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ VaG 2012b | Serial | 1851 | ||
Permanent link to this record | |||||
Author | Arjan Gijsenij; R. Lu; Theo Gevers; De Xu | ||||
Title | Color Constancy for Multiple Light Sources | Type | Journal Article | |
Year | 2012 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 21 | Issue | 2 | Pages | 697-707 |
Keywords | |||||
Abstract | Impact factor 2010: 2.92
Impact factor 2011/2012?: 3.32 Color constancy algorithms are generally based on the simplifying assumption that the spectral distribution of a light source is uniform across scenes. However, in reality, this assumption is often violated due to the presence of multiple light sources. In this paper, we will address more realistic scenarios where the uniform light-source assumption is too restrictive. First, a methodology is proposed to extend existing algorithms by applying color constancy locally to image patches, rather than globally to the entire image. After local (patch-based) illuminant estimation, these estimates are combined into more robust estimations, and a local correction is applied based on a modified diagonal model. Quantitative and qualitative experiments on spectral and real images show that the proposed methodology reduces the influence of two light sources simultaneously present in one scene. If the chromatic difference between these two illuminants is more than 1°, the proposed framework outperforms algorithms based on the uniform light-source assumption (with error reduction of up to approximately 30%). Otherwise, when the chromatic difference is less than 1° and the scene can be considered to contain one (approximately) uniform light source, the performance of the proposed framework is similar to that of global color constancy methods. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1057-7149 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ GLG2012a | Serial | 1852 | ||
Permanent link to this record | |||||
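The patch-based strategy described in the record above can be illustrated with a deliberately naive sketch: grey-world applied per patch, followed by a diagonal (von Kries) correction of that patch. The patch size and normalisation here are our own illustrative choices, and the paper's key robustness step of combining neighbouring local estimates is not reproduced:

```python
import numpy as np

def local_grey_world(img, patch=16):
    """Per-patch grey-world illuminant estimation with a diagonal
    correction (minimal local version of the paper's idea).
    img: float array of shape (H, W, 3) with positive values."""
    out = np.empty_like(img, dtype=float)
    h, w, _ = img.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            block = img[y:y+patch, x:x+patch].astype(float)
            ill = block.reshape(-1, 3).mean(axis=0)   # grey-world estimate
            ill = ill / ill.sum() * 3.0               # normalise so grey stays grey
            out[y:y+patch, x:x+patch] = block / ill   # diagonal (von Kries) correction
    return out
```

Applied to a scene lit by two differently coloured sources, each half of the image is corrected with its own local estimate, which a single global estimate cannot do.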
Author | Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers | ||||
Title | A Statistical Method for 2D Facial Landmarking | Type | Journal Article | ||
Year | 2012 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 21 | Issue | 2 | Pages | 844-858 |
Keywords | |||||
Abstract | IF = 3.32
Many facial-analysis approaches rely on robust and accurate automatic facial landmarking to correctly function. In this paper, we describe a statistical method for automatic facial-landmark localization. Our landmarking relies on a parsimonious mixture model of Gabor wavelet features, computed in coarse-to-fine fashion and complemented with a shape prior. We assess the accuracy and the robustness of the proposed approach in extensive cross-database conditions conducted on four face data sets (Face Recognition Grand Challenge, Cohn-Kanade, Bosphorus, and BioID). Our method has 99.33% accuracy on the Bosphorus database and 97.62% accuracy on the BioID database on the average, which improves the state of the art. We show that the method is not significantly affected by low-resolution images, small rotations, facial expressions, and natural occlusions such as beard and mustache. We further test the goodness of the landmarks in a facial expression recognition application and report landmarking-induced improvement over baseline on two separate databases for video-based expression recognition (Cohn-Kanade and BU-4DFE). |
Address | |||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1057-7149 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ DSG 2012 | Serial | 1853 | ||
Permanent link to this record | |||||
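The landmarking method in the record above is built on Gabor wavelet features. As a hedged illustration of what one such feature looks like (the filter parameters below are arbitrary defaults, and the paper's mixture model and shape prior are not reproduced here):

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lam=10.0, psi=0.0, gamma=0.5):
    """Real part of a 2-D Gabor wavelet: a Gaussian envelope modulating
    a cosine carrier, rotated by theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam + psi)

def gabor_response(img, kernel):
    """Magnitude of the filter response at each pixel, via a 'same'-size
    FFT convolution (img: 2-D float array)."""
    from numpy.fft import fft2, ifft2
    h, w = img.shape
    kh, kw = kernel.shape
    H, W = h + kh - 1, w + kw - 1
    full = np.real(ifft2(fft2(img, (H, W)) * fft2(kernel, (H, W))))
    return np.abs(full[kh // 2:kh // 2 + h, kw // 2:kw // 2 + w])
```

A bank of such kernels at several orientations and scales, evaluated coarse-to-fine around candidate locations, yields the local appearance features that a statistical landmark model can be trained on.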
Author | Oriol Ramos Terrades; Alejandro Hector Toselli; Nicolas Serrano; Veronica Romero; Enrique Vidal; Alfons Juan | ||||
Title | Interactive layout analysis and transcription systems for historic handwritten documents | Type | Conference Article | ||
Year | 2010 | Publication | 10th ACM Symposium on Document Engineering | Abbreviated Journal | |
Volume | Issue | Pages | 219–222 | ||
Keywords | Handwriting recognition; Interactive predictive processing; Partial supervision; Interactive layout analysis | ||||
Abstract | The amount of digitized legacy documents has been rising dramatically over recent years, due mainly to the increasing number of on-line digital libraries publishing this kind of document, waiting to be classified and finally transcribed into a textual electronic format (such as ASCII or PDF). Nevertheless, most of the available fully-automatic applications addressing this task are far from perfect, and heavy, inefficient human intervention is often required to check and correct the results of such systems. In contrast, multimodal interactive-predictive approaches may allow users to participate in the process, helping the system to improve the overall performance. With this in mind, two sets of recent advances are introduced in this work: a novel interactive method for text block detection, and two multimodal interactive handwritten text transcription systems which use active learning and interactive-predictive technologies in the recognition process. | ||||
Address | Manchester, United Kingdom | ||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACM | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @RTS2010 | Serial | 1857 | ||
Permanent link to this record | |||||
Author | Antonio Hernandez; Miguel Angel Bautista; Xavier Perez Sala; Victor Ponce; Sergio Escalera; Xavier Baro; Oriol Pujol; Cecilio Angulo | ||||
Title | Probability-based Dynamic Time Warping and Bag-of-Visual-and-Depth-Words for Human Gesture Recognition in RGB-D | Type | Journal Article | ||
Year | 2014 | Publication | Pattern Recognition Letters | Abbreviated Journal | PRL |
Volume | 50 | Issue | 1 | Pages | 112-121 |
Keywords | RGB-D; Bag-of-Words; Dynamic Time Warping; Human Gesture Recognition | ||||
Abstract | PATREC5825
We present a methodology to address the problem of human gesture segmentation and recognition in video and depth image sequences. A Bag-of-Visual-and-Depth-Words (BoVDW) model is introduced as an extension of the Bag-of-Visual-Words (BoVW) model. State-of-the-art RGB and depth features, including a newly proposed depth descriptor, are analysed and combined in a late fusion form. The method is integrated in a Human Gesture Recognition pipeline, together with a novel probability-based Dynamic Time Warping (PDTW) algorithm which is used to perform prior segmentation of idle gestures. The proposed DTW variant uses samples of the same gesture category to build a Gaussian Mixture Model driven probabilistic model of that gesture class. Results of the whole Human Gesture Recognition pipeline in a public data set show better performance in comparison to both standard BoVW model and DTW approach. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HuPBA;MV; 605.203 | Approved | no | ||
Call Number | Admin @ si @ HBP2014 | Serial | 2353 | ||
Permanent link to this record | |||||
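The PDTW algorithm in the record above is a probabilistic variant of classic dynamic time warping. The textbook DP recursion it modifies looks like this (a minimal sketch; in the paper the pointwise cost is replaced by a GMM-based probability of the frame belonging to the gesture class):

```python
import numpy as np

def dtw_distance(a, b, dist=lambda x, y: np.linalg.norm(np.atleast_1d(x - y))):
    """Classic dynamic time warping between two feature sequences.
    D[i, j] = cost(a[i-1], b[j-1]) + min of the three predecessors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(np.asarray(a[i - 1], float), np.asarray(b[j - 1], float))
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the warping path may repeat or skip frames, two sequences of the same gesture performed at different speeds can still align with zero (or low) cost.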
Author | G.D. Evangelidis; Ferran Diego; Joan Serrat; Antonio Lopez | ||||
Title | Slice Matching for Accurate Spatio-Temporal Alignment | Type | Conference Article | ||
Year | 2011 | Publication | ICCV Workshop on Visual Surveillance | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | video alignment | ||||
Abstract | Video synchronization and alignment is a rather recent topic in computer vision. It usually deals with the problem of aligning sequences recorded simultaneously by static, jointly- or independently-moving cameras. In this paper, we investigate the more difficult problem of matching videos captured at different times from independently-moving cameras whose trajectories are approximately coincident or parallel. To this end, we propose a novel method that aligns videos pixel-wise and thus allows their differences to be automatically highlighted. This primarily aims at visual surveillance, but the method can be adopted as-is by other related video applications, such as object transfer (augmented reality) or high dynamic range video. We build upon a slice matching scheme to first synchronize the sequences, and we develop a spatio-temporal alignment scheme to spatially register corresponding frames and refine the temporal mapping. We investigate the performance of the proposed method on videos recorded from vehicles driven along different types of roads, and compare with related previous works. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VS | ||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ EDS2011; ADAS @ adas @ eds2011a | Serial | 1861 | ||
Permanent link to this record | |||||
Author | Gemma Roig; Xavier Boix; F. de la Torre; Joan Serrat; C. Vilella | ||||
Title | Hierarchical CRF with product label spaces for parts-based Models | Type | Conference Article | ||
Year | 2011 | Publication | IEEE Conference on Automatic Face and Gesture Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 657-664 | ||
Keywords | Shape; Computational modeling; Principal component analysis; Random variables; Color; Upper bound; Facial features | ||||
Abstract | Non-rigid object detection is a challenging and open research problem in computer vision. It is a critical part of many applications such as image search, surveillance, human-computer interaction or image auto-annotation. Most successful approaches to non-rigid object detection make use of part-based models. In particular, Conditional Random Fields (CRF) have been successfully embedded into a discriminative parts-based model framework due to their effectiveness for learning and inference (usually based on a tree structure). However, CRF-based approaches do not incorporate global constraints and only model pairwise interactions. This is especially important when modeling object classes that may have complex parts interactions (e.g. facial features or body articulations), because neglecting them yields an oversimplified model with suboptimal performance. To overcome this limitation, this paper proposes a novel hierarchical CRF (HCRF). The main contribution is to build a hierarchy of part combinations by extending the label set to a hierarchy of product label spaces. In order to keep the inference computation tractable, we propose an effective method to reduce the new label set. We test our method on two applications: facial feature detection on the Multi-PIE database and human pose estimation on the Buffy dataset. | ||||
Address | Santa Barbara, CA, USA, 2011 | ||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | FG | ||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ RBT2011 | Serial | 1862 | ||
Permanent link to this record | |||||
Author | Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell | ||||
Title | Portmanteau Vocabularies for Multi-Cue Image Representation | Type | Conference Article | ||
Year | 2011 | Publication | 25th Annual Conference on Neural Information Processing Systems | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent, joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau words is compact, has the cue binding property, and supports individual weighting of cues in the final image representation. State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | NIPS | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ KWB2011 | Serial | 1865 | ||
Permanent link to this record | |||||
Author | Jordi Roca; C. Alejandro Parraga; Maria Vanrell | ||||
Title | Categorical Focal Colours are Structurally Invariant Under Illuminant Changes | Type | Conference Article | ||
Year | 2011 | Publication | European Conference on Visual Perception | Abbreviated Journal | |
Volume | Issue | Pages | 196 | ||
Keywords | |||||
Abstract | The visual system perceives the colour of surfaces as approximately constant under changes of illumination. In this work, we investigate how stable the perception of categorical “focal” colours and their interrelations are under varying illuminants and simple chromatic backgrounds. It has been proposed that the best examples of colour categories across languages cluster in small regions of the colour space and are restricted to a set of 11 basic terms (Kay and Regier, 2003 Proceedings of the National Academy of Sciences of the USA 100 9085–9089). Following this, we developed a psychophysical paradigm that exploits the ability of subjects to reliably reproduce the most representative examples of each category, adjusting multiple test patches embedded in a coloured Mondrian. The experiment was run on a CRT monitor (inside a dark room) under various simulated illuminants. We modelled the recorded data for each subject and adapted state as a 3D interconnected structure (graph) in Lab space. The graph nodes were the subject’s focal colours at each adaptation state. The model allowed us to obtain a better distance measure between focal structures under different illuminants. We found that perceptual focal structures tend to be preserved better than the structures of the physical “ideal” colours under illuminant changes. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Perception 40 | Abbreviated Series Title | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECVP | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ RPV2011 | Serial | 1867 | ||
Permanent link to this record | |||||
Author | N. Serrano; L. Tarazon; D. Perez; Oriol Ramos Terrades; S. Juan | ||||
Title | The GIDOC Prototype | Type | Conference Article | ||
Year | 2010 | Publication | 10th International Workshop on Pattern Recognition in Information Systems | Abbreviated Journal | |
Volume | Issue | Pages | 82-89 | ||
Keywords | |||||
Abstract | Transcription of handwritten text in (old) documents is an important, time-consuming task for digital libraries. It might be carried out by first processing all document images off-line, and then manually supervising system transcriptions to edit incorrect parts. However, current techniques for automatic page layout analysis, text line detection and handwriting recognition are still far from perfect, and thus post-editing system output is not clearly better than simply ignoring it.
A more effective approach to transcribing old text documents is to follow an interactive-predictive paradigm in which the system is guided by the user and the user is assisted by the system to complete the transcription task as efficiently as possible. Following this approach, a system prototype called GIDOC (Gimp-based Interactive transcription of old text DOCuments) has been developed to provide user-friendly, integrated support for interactive-predictive layout analysis, line detection and handwriting transcription. GIDOC is designed to work with (large) collections of homogeneous documents, that is, of similar structure and writing styles. They are annotated sequentially, by (partially) supervising hypotheses drawn from statistical models that are constantly updated with an increasing number of available annotated documents. And this is done at different annotation levels. For instance, at the level of page layout analysis, GIDOC uses a novel text block detection method in which conventional, memoryless techniques are improved with a “history” model of text block positions. Similarly, at the level of text line image transcription, GIDOC includes a handwriting recognizer which is steadily improved with a growing number of (partially) supervised transcriptions. |
Address | Funchal, Portugal | ||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-989-8425-14-0 | Medium | ||
Area | Expedition | Conference | PRIS | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ STP2010 | Serial | 1868 | ||
Permanent link to this record | |||||
Author | S. Chanda; Umapada Pal; Oriol Ramos Terrades | ||||
Title | Word-Wise Thai and Roman Script Identification | Type | Journal Article | |
Year | 2009 | Publication | ACM Transactions on Asian Language Information Processing | Abbreviated Journal | TALIP |
Volume | 8 | Issue | 3 | Pages | 1-21 |
Keywords | |||||
Abstract | In some Thai documents, a single text line of a printed document page may contain words in both Thai and Roman scripts. For the Optical Character Recognition (OCR) of such a document page it is better to first identify the Thai and Roman script portions and then to use the individual OCR systems of the respective scripts on these identified portions. In this article, an SVM-based method is proposed for word-wise identification of printed Roman and Thai scripts from a single line of a document page. Here, the document is first segmented into lines, and then the lines are segmented into character groups (words). In the proposed scheme, we identify the script of a character group by combining different character features obtained from structural shape, profile behavior, component overlapping information, topological properties, and the water reservoir concept. Based on experiments on 10,000 words, we obtained 99.62% script identification accuracy with the proposed scheme. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1530-0226 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ CPR2009f | Serial | 1869 | ||
Permanent link to this record | |||||
Author | D. Perez; L. Tarazon; N. Serrano; F.M. Castro; Oriol Ramos Terrades; A. Juan | ||||
Title | The GERMANA Database | Type | Conference Article | ||
Year | 2009 | Publication | 10th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 301-305 | ||
Keywords | |||||
Abstract | A new handwritten text database, GERMANA, is presented to facilitate empirical comparison of different approaches to text line extraction and off-line handwriting recognition. GERMANA is the result of digitising and annotating a 764-page Spanish manuscript from 1891, in which most pages only contain nearly calligraphed text written on ruled sheets of well-separated lines. To our knowledge, it is the first publicly available database for handwriting research, mostly written in Spanish and comparable in size to standard databases. Due to its sequential book structure, it is also well-suited for realistic assessment of interactive handwriting recognition systems. To provide baseline results for reference in future studies, empirical results are also reported, using standard techniques and tools for preprocessing, feature extraction, HMM-based image modelling, and language modelling. | ||||
Address | Barcelona; Spain | ||||
Corporate Author | Thesis | ||||
Publisher |
Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1520-5363 | ISBN | 978-1-4244-4500-4 | Medium | |
Area | Expedition | Conference | ICDAR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ PTS2009 | Serial | 1870 | ||
Permanent link to this record |