|
|
Author |
Juan Ignacio Toledo; Manuel Carbonell; Alicia Fornes; Josep Llados |
|
|
Title |
Information Extraction from Historical Handwritten Document Images with a Context-aware Neural Model |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
|
|
Volume |
86 |
Issue |
|
Pages |
27-36 |
|
|
Keywords |
Document image analysis; Handwritten documents; Named entity recognition; Deep neural networks |
|
|
Abstract |
Many historical manuscripts that hold trustworthy memories of past societies contain information organized in a structured layout (e.g. census, birth or marriage records). The precious information stored in these documents cannot be effectively used or accessed without costly annotation efforts. A transcription driven by the semantic categories of words is crucial for subsequent access. In this paper we describe an approach to extract information from structured historical handwritten text images and build a knowledge representation for the extraction of meaning out of historical data. The method extracts information, such as named entities, without the need for an intermediate transcription step, thanks to the incorporation of context information through language models. Our system has two variants: the first is based on bigrams, whereas the second is based on recurrent neural networks. Concretely, our second architecture integrates a Convolutional Neural Network to model visual information from word images together with a Bidirectional Long Short-Term Memory network to model the relations among the words. This integrated sequential approach is able to extract more information than just the semantic category (e.g. a semantic category can be associated with a person in a record). Our system is generic, deals with out-of-vocabulary words by design, and can be applied to structured handwritten texts from different domains. The method has been validated with the ICDAR IEHHR competition protocol, outperforming the existing approaches. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 600.097; 601.311; 603.057; 600.084; 600.140; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ TCF2019 |
Serial |
3166 |
|
Permanent link to this record |
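The first system variant in the record above uses bigram context between word categories. A minimal sketch of how bigram transitions can be combined with per-word visual scores via Viterbi decoding; all category names, scores and probabilities below are invented toy values, not taken from the paper:

```python
from math import log

# Toy per-word category scores (stand-ins for a visual classifier's output)
# and bigram transition probabilities between semantic categories.
emissions = [
    {"name": 0.6, "surname": 0.3, "other": 0.1},
    {"name": 0.2, "surname": 0.7, "other": 0.1},
]
transitions = {("name", "surname"): 0.8, ("name", "name"): 0.1,
               ("name", "other"): 0.1, ("surname", "name"): 0.3,
               ("surname", "surname"): 0.2, ("surname", "other"): 0.5,
               ("other", "name"): 0.4, ("other", "surname"): 0.3,
               ("other", "other"): 0.3}

def best_sequence(emissions, transitions):
    """Viterbi decoding: most likely category sequence under the bigram model."""
    cats = list(emissions[0])
    scores = {c: log(emissions[0][c]) for c in cats}
    back = []
    for em in emissions[1:]:
        new_scores, pointers = {}, {}
        for c in cats:
            prev = max(cats, key=lambda p: scores[p] + log(transitions[(p, c)]))
            new_scores[c] = scores[prev] + log(transitions[(prev, c)]) + log(em[c])
            pointers[c] = prev
        scores, back = new_scores, back + [pointers]
    # Backtrack from the best final category.
    seq = [max(scores, key=scores.get)]
    for pointers in reversed(back):
        seq.append(pointers[seq[-1]])
    return seq[::-1]

print(best_sequence(emissions, transitions))  # ['name', 'surname']
```

The bigram prior overrides the second word's raw visual ambiguity: context pushes the decoding toward the category sequence that is plausible in a record.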
|
|
|
|
Author |
Juan Ignacio Toledo; Sebastian Sudholt; Alicia Fornes; Jordi Cucurull; A. Fink; Josep Llados |
|
|
Title |
Handwritten Word Image Categorization with Convolutional Neural Networks and Spatial Pyramid Pooling |
Type |
Conference Article |
|
Year |
2016 |
Publication |
Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR) |
Abbreviated Journal |
|
|
|
Volume |
10029 |
Issue |
|
Pages |
543-552 |
|
|
Keywords |
Document image analysis; Word image categorization; Convolutional neural networks; Named entity detection |
|
|
Abstract |
The extraction of relevant information from historical document collections is one of the key steps in making these documents available for access and search. The usual approach combines transcription and grammars to extract semantically meaningful entities. In this paper, we describe a new method to obtain word categories directly from non-preprocessed handwritten word images. The method can be used to extract information directly, as an alternative to transcription, and can thus serve as a first step in any kind of syntactic analysis. The approach is based on Convolutional Neural Networks with a Spatial Pyramid Pooling layer to deal with the different shapes of the input images. We performed experiments on a historical marriage record dataset, obtaining promising results. |
|
|
Address |
Merida; Mexico; December 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer International Publishing |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-319-49054-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
S+SSPR |
|
|
Notes |
DAG; 600.097; 602.006 |
Approved |
no |
|
|
Call Number |
Admin @ si @ TSF2016 |
Serial |
2877 |
|
Permanent link to this record |
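The Spatial Pyramid Pooling layer named in the record above is what lets variable-sized word images map to fixed-length representations. An illustrative NumPy sketch; the pyramid levels and channel count are assumptions for the example, not the paper's configuration:

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a (channels, H, W) map into a fixed-length vector.

    For each pyramid level n, the map is split into an n x n grid and each
    cell is max-pooled, so the output length is
    channels * sum(n * n for n in levels), regardless of H and W.
    """
    c, h, w = feature_map.shape
    pooled = []
    for n in levels:
        # np.array_split tolerates H or W not divisible by n.
        for rows in np.array_split(feature_map, n, axis=1):
            for cell in np.array_split(rows, n, axis=2):
                pooled.append(cell.reshape(c, -1).max(axis=1))
    return np.concatenate(pooled)

# Word images of different widths yield the same output length.
v1 = spatial_pyramid_pool(np.random.rand(8, 30, 50))
v2 = spatial_pyramid_pool(np.random.rand(8, 17, 23))
print(v1.shape, v2.shape)  # both (168,) = 8 * (1 + 4 + 16)
```

This fixed output size is what allows a fully connected classifier to sit on top of convolutional features computed from unpadded, unscaled word images.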
|
|
|
|
Author |
Juan Ignacio Toledo; Sounak Dey; Alicia Fornes; Josep Llados |
|
|
Title |
Handwriting Recognition by Attribute embedding and Recurrent Neural Networks |
Type |
Conference Article |
|
Year |
2017 |
Publication |
14th International Conference on Document Analysis and Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1038-1043 |
|
|
Keywords |
|
|
|
Abstract |
Handwriting recognition consists of obtaining the transcription of a text image. Recent word spotting methods based on attribute embedding have shown good performance when recognizing words. However, they are holistic methods in the sense that they recognize the word as a whole (i.e. they find the closest word in the lexicon to the word image). Consequently, these kinds of approaches are not able to deal with out-of-vocabulary words, which are common in historical manuscripts. Also, they cannot be extended to recognize text lines. In order to address these issues, in this paper we propose a handwriting recognition method that adapts the attribute embedding to sequence learning. Concretely, the method learns the attribute embedding of patches of word images with a convolutional neural network. Then, these embeddings are presented as a sequence to a recurrent neural network that produces the transcription. We obtain promising results even without the use of any kind of dictionary or language model. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICDAR |
|
|
Notes |
DAG; 600.097; 601.225; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ TDF2017 |
Serial |
3055 |
|
Permanent link to this record |
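Attribute embeddings in word spotting are commonly PHOC-style vectors, which the record above adapts to sequence learning. A toy sketch of a pyramidal histogram of characters, assuming a lowercase alphabet and two pyramid levels; the paper's exact embedding may differ:

```python
import string

def phoc(word, levels=(1, 2), alphabet=string.ascii_lowercase):
    """Binary pyramidal histogram of characters (PHOC) attribute vector.

    For each pyramid level n, the word is split into n horizontal regions
    and each region gets one bit per alphabet character occurring in it.
    """
    vec = []
    for n in levels:
        for r in range(n):
            lo, hi = r / n, (r + 1) / n
            # Character i occupies the interval [i/len, (i+1)/len) of the
            # word; include it if that interval overlaps the region [lo, hi).
            region = {c for i, c in enumerate(word)
                      if lo < (i + 1) / len(word) and i / len(word) < hi}
            vec.extend(int(c in region) for c in alphabet)
    return vec

v = phoc("beyond")
print(len(v), sum(v))  # 26 * (1 + 2) = 78 attributes, 12 set for "beyond"
```

The fixed attribute length is what makes word images and text strings comparable in a common space; the paper's contribution is computing such embeddings per patch and decoding the sequence with an RNN.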
|
|
|
|
Author |
Kaida Xiao; Chenyang Fu; D. Mylonas; Dimosthenis Karatzas; S. Wuerger |
|
|
Title |
Unique Hue Data for Colour Appearance Models. Part II: Chromatic Adaptation Transform |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Color Research & Application |
Abbreviated Journal |
CRA |
|
|
Volume |
38 |
Issue |
1 |
Pages |
22-29 |
|
|
Keywords |
|
|
|
Abstract |
Unique hue settings of 185 observers under three room-lighting conditions were used to evaluate the accuracy of full and mixed chromatic adaptation transform models of CIECAM02 in terms of unique hue reproduction. Perceptual hue shifts in CIECAM02 were evaluated for both models, with no clear difference using the current Commission Internationale de l'Éclairage (CIE) recommendation for the mixed chromatic adaptation ratio. Using our large dataset of unique hue data as a benchmark, an optimised parameter is proposed for chromatic adaptation under mixed illumination conditions that produces more accurate results in unique hue reproduction. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ XFM2013 |
Serial |
1822 |
|
Permanent link to this record |
|
|
|
|
Author |
Kaida Xiao; Chenyang Fu; Dimosthenis Karatzas; Sophie Wuerger |
|
|
Title |
Visual Gamma Correction for LCD Displays |
Type |
Journal Article |
|
Year |
2011 |
Publication |
Displays |
Abbreviated Journal |
DIS |
|
|
Volume |
32 |
Issue |
1 |
Pages |
17-23 |
|
|
Keywords |
Display calibration; Psychophysics ; Perceptual; Visual gamma correction; Luminance matching; Observer-based calibration |
|
|
Abstract |
An improved method for visual gamma correction is developed for LCD displays to increase the accuracy of digital colour reproduction. Rather than utilising a photometric measurement device, we use observers’ visual luminance judgements for gamma correction. Eight half-tone patterns were designed to generate relative luminances from 1/9 to 8/9 for each colour channel. A psychophysical experiment was conducted on an LCD display to find the digital signals corresponding to each relative luminance by visually matching the half-tone background to a uniform colour patch. Both inter- and intra-observer variability for the eight luminance matches in each channel were assessed, and the luminance matches proved to be consistent across observers (ΔE00 < 3.5) and repeatable (ΔE00 < 2.2). Based on the individual observer judgements, the display opto-electronic transfer function (OETF) was estimated by using either a 3rd-order polynomial regression or linear interpolation for each colour channel. The performance of the proposed method is evaluated by predicting the CIE tristimulus values of a set of coloured patches (using the observer-based OETFs) and comparing them to the expected CIE tristimulus values (using the OETF obtained from spectro-radiometric luminance measurements). The resulting colour differences range from 2 to 4.6 ΔE00. We conclude that this observer-based method of visual gamma correction is useful to estimate the OETF for LCD displays. Its major advantage is that no particular functional relationship between digital inputs and luminance outputs has to be assumed. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ XFK2011 |
Serial |
1815 |
|
Permanent link to this record |
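The gamma-correction record above estimates the display OETF from eight visual luminance matches with a 3rd-order polynomial. A hedged sketch of that fitting step, where synthetic matches on an assumed gamma-2.2 display stand in for real observer data:

```python
import numpy as np

# Synthetic stand-in for one channel's visual matches: the digital counts
# an observer would pick to produce relative luminances 1/9 ... 8/9 on an
# idealised gamma-2.2 display (the paper uses real psychophysical matches).
target_lum = np.arange(1, 9) / 9.0
digital = 255 * target_lum ** (1 / 2.2)

# Fit the opto-electronic transfer function: relative luminance as a
# 3rd-order polynomial of the normalised digital input.
coeffs = np.polyfit(digital / 255.0, target_lum, deg=3)
oetf = np.poly1d(coeffs)

# The fitted OETF should reproduce the matched luminances closely.
max_err = float(np.max(np.abs(oetf(digital / 255.0) - target_lum)))
print(max_err)
```

Note the design point made in the abstract: the polynomial (or a linear interpolation) is fitted directly to the matches, so no parametric gamma model has to be assumed.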
|
|
|
|
Author |
Kaida Xiao; Sophie Wuerger; Chenyang Fu; Dimosthenis Karatzas |
|
|
Title |
Unique Hue Data for Colour Appearance Models. Part I: Loci of Unique Hues and Hue Uniformity |
Type |
Journal Article |
|
Year |
2011 |
Publication |
Color Research & Application |
Abbreviated Journal |
CRA |
|
|
Volume |
36 |
Issue |
5 |
Pages |
316-323 |
|
|
Keywords |
unique hues; colour appearance models; CIECAM02; hue uniformity |
|
|
Abstract |
Psychophysical experiments were conducted to assess unique hues on a CRT display for a large sample of colour-normal observers (n = 185). These data were then used to evaluate the most commonly used colour appearance model, CIECAM02, by transforming the CIEXYZ tristimulus values of the unique hues to the CIECAM02 colour appearance attributes, lightness, chroma and hue angle. We report two findings: (1) the hue angles derived from our unique hue data are inconsistent with the commonly used Natural Color System hues that are incorporated in the CIECAM02 model. We argue that our predicted unique hue angles (derived from our large dataset) provide a more reliable standard for colour management applications when the precise specification of these salient colours is important. (2) We test hue uniformity for CIECAM02 in all four unique hues and show significant disagreements for all hues, except for unique red, which seems to be invariant under lightness changes. Our dataset is useful to improve the CIECAM02 model as it provides reliable data for benchmarking. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Wiley Periodicals Inc |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ XWF2011 |
Serial |
1816 |
|
Permanent link to this record |
|
|
|
|
Author |
Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate; Marçal Rusiñol; Francesc J. Ferri |
|
|
Title |
Fast Kernel Generalized Discriminative Common Vectors for Feature Extraction |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Journal of Mathematical Imaging and Vision |
Abbreviated Journal |
JMIV |
|
|
Volume |
60 |
Issue |
4 |
Pages |
512-524 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a supervised subspace learning method called Kernel Generalized Discriminative Common Vectors (KGDCV), a novel extension of the known Discriminative Common Vectors method with kernels. Our method combines the advantages of kernel methods, which model complex data and solve nonlinear problems with moderate computational complexity, with the better generalization properties of generalized approaches for high-dimensional data. This attractive combination makes KGDCV especially suited for feature extraction and classification in computer vision, image processing and pattern recognition applications. Two different approaches to this generalization are proposed: a first one based on the kernel trick (KT), and a second one based on the nonlinear projection trick (NPT) for even higher efficiency. Both methodologies have been validated on four different image datasets containing faces, objects and handwritten digits, and compared against well-known nonlinear state-of-the-art methods. Results show better discriminant properties than other generalized approaches, both linear and kernel. In addition, the KGDCV-NPT approach presents a considerable computational gain without compromising the accuracy of the model. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; ADAS; 600.086; 600.130; 600.121; 600.118; 600.129 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DMH2018a |
Serial |
3062 |
|
Permanent link to this record |
|
|
|
|
Author |
Katerine Diaz; Jesus Martinez del Rincon; Marçal Rusiñol; Aura Hernandez-Sabate |
|
|
Title |
Feature Extraction by Using Dual-Generalized Discriminative Common Vectors |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Journal of Mathematical Imaging and Vision |
Abbreviated Journal |
JMIV |
|
|
Volume |
61 |
Issue |
3 |
Pages |
331-351 |
|
|
Keywords |
Online feature extraction; Generalized discriminative common vectors; Dual learning; Incremental learning; Decremental learning |
|
|
Abstract |
In this paper, a dual online subspace-based learning method called dual-generalized discriminative common vectors (Dual-GDCV) is presented. The method extends incremental GDCV by simultaneously exploiting the concepts of incremental and decremental learning for supervised feature extraction and classification. Our methodology is able to update the feature representation space without recalculating the full projection or accessing the previously processed training data. It allows both adding information and removing unnecessary data from a knowledge base in an efficient way, while retaining the previously acquired knowledge. The proposed method has been theoretically proved and empirically validated on six standard face recognition and classification datasets, under two scenarios: (1) removing and adding samples of existing classes, and (2) removing and adding new classes to a classification problem. Results show a considerable computational gain without compromising the accuracy of the model, in comparison with both batch methodologies and other state-of-the-art adaptive methods. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; ADAS; 600.084; 600.118; 600.121; 600.129 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DRR2019 |
Serial |
3172 |
|
Permanent link to this record |
|
|
|
|
Author |
Khanh Nguyen; Ali Furkan Biten; Andres Mafla; Lluis Gomez; Dimosthenis Karatzas |
|
|
Title |
Show, Interpret and Tell: Entity-Aware Contextualised Image Captioning in Wikipedia |
Type |
Conference Article |
|
Year |
2023 |
Publication |
Proceedings of the 37th AAAI Conference on Artificial Intelligence |
Abbreviated Journal |
|
|
|
Volume |
37 |
Issue |
2 |
Pages |
1940-1948 |
|
|
Keywords |
|
|
|
Abstract |
Humans exploit prior knowledge to describe images, and are able to adapt their explanation to the specific contextual information given, even to the extent of inventing plausible explanations when contextual information and images do not match. In this work, we propose the novel task of captioning Wikipedia images by integrating contextual knowledge. Specifically, we produce models that jointly reason over Wikipedia articles, Wikimedia images and their associated descriptions to produce contextualized captions. The same Wikimedia image can be used to illustrate different articles, and the produced caption needs to be adapted to the specific context, allowing us to explore the limits of the model in adjusting captions to different contextual information. Dealing with out-of-dictionary words and named entities is a challenging task in this domain. To address this, we propose a pre-training objective, Masked Named Entity Modeling (MNEM), and show that this pretext task results in significantly improved models. Furthermore, we verify that a model pre-trained on Wikipedia generalizes well to news captioning datasets. We further define two different test splits according to the difficulty of the captioning task. We offer insights on the role and the importance of each modality and highlight the limitations of our model. |
|
|
Address |
Washington; USA; February 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
AAAI |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ NBM2023 |
Serial |
3860 |
|
Permanent link to this record |
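The MNEM pretext task described in the record above masks named entities so a model can learn to recover them from context. A toy sketch of building such masked training pairs; the mask token and the dictionary-lookup "entity detection" are illustrative assumptions, whereas a real pipeline would use an NER tagger:

```python
MASK = "[MASK]"

def mask_entities(caption, entities):
    """Return (masked caption, list of masked-out entity tokens).

    Entities are found by plain set lookup on punctuation-stripped tokens;
    this stands in for a proper named entity recognizer.
    """
    tokens, targets = [], []
    for tok in caption.split():
        bare = tok.strip(".,")
        if bare in entities:
            tokens.append(MASK)
            targets.append(bare)
        else:
            tokens.append(tok)
    return " ".join(tokens), targets

masked, targets = mask_entities(
    "Marie Curie lectured in Paris.", {"Marie", "Curie", "Paris"})
print(masked)   # [MASK] [MASK] lectured in [MASK]
print(targets)  # ['Marie', 'Curie', 'Paris']
```

A model trained to fill these masks is pushed to exploit the surrounding article context, which is the point of the pretext task.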
|
|
|
|
Author |
Klara Janousckova; Jiri Matas; Lluis Gomez; Dimosthenis Karatzas |
|
|
Title |
Text Recognition – Real World Data and Where to Find Them |
Type |
Conference Article |
|
Year |
2020 |
Publication |
25th International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
4489-4496 |
|
|
Keywords |
|
|
|
Abstract |
We present a method for exploiting weakly annotated images to improve text extraction pipelines. The approach uses an arbitrary end-to-end text recognition system to obtain text region proposals and their, possibly erroneous, transcriptions. The method includes matching of imprecise transcriptions to weak annotations and an edit-distance-guided neighbourhood search. It produces nearly error-free, localised instances of scene text, which we treat as “pseudo ground truth” (PGT). The method is applied to two weakly annotated datasets. Training with the extracted PGT consistently improves the accuracy of a state-of-the-art recognition model, by 3.7% on average across different benchmark datasets (image domains), and by 24.5% on one of the weakly annotated datasets. |
|
|
Address |
Virtual; January 2021 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
DAG; 600.121; 600.129 |
Approved |
no |
|
|
Call Number |
Admin @ si @ JMG2020 |
Serial |
3557 |
|
Permanent link to this record |
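The pipeline in the record above matches imprecise transcriptions to weak annotations by edit distance. A minimal sketch using the standard Levenshtein distance; the acceptance threshold is an assumed parameter, not the paper's value:

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def match_to_weak_annotations(transcription, annotations, max_dist=1):
    """Accept the closest weak annotation within an edit-distance budget as
    a pseudo ground truth (PGT) label; otherwise reject the region."""
    best = min(annotations, key=lambda w: edit_distance(transcription, w))
    return best if edit_distance(transcription, best) <= max_dist else None

print(match_to_weak_annotations("STOP", ["SHOP", "STOOP", "EXIT"]))  # SHOP
print(match_to_weak_annotations("STOP", ["EXIT"]))                   # None
```

The threshold is what keeps the pseudo ground truth nearly error-free: region proposals whose transcription is not close to any weak annotation are discarded rather than mislabelled.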