Author Arnau Baro; Jialuo Chen; Alicia Fornes; Beata Megyesi
  Title Towards a generic unsupervised method for transcription of encoded manuscripts Type Conference Article
  Year 2019 Publication 3rd International Conference on Digital Access to Textual Cultural Heritage
  Pages 73-78
  Keywords A. Baró, J. Chen, A. Fornés, B. Megyesi.  
  Abstract Historical ciphers, a special type of manuscript, contain encrypted information that is important for the interpretation of our history. The first step towards decipherment is to transcribe the images, either manually or by automatic image processing techniques. Despite the improvements in handwritten text recognition (HTR) brought by deep learning methodologies, the need for labelled training data is an important limitation. Given that ciphers often use symbol sets drawn from various alphabets, as well as unique symbols without any transcription scheme available, these supervised HTR techniques are not suitable for transcribing ciphers. In this paper we propose an unsupervised method for transcribing encrypted manuscripts based on clustering and label propagation, which has been successfully applied to community detection in networks. We analyze the performance on ciphers with various symbol sets, and discuss the advantages and drawbacks compared to supervised HTR methods.
  Address Brussels; May 2019  
  Conference DATeCH
  Notes DAG; 600.097; 600.140; 600.121 Approved no  
  Call Number Admin @ si @ BCF2019 Serial 3276  
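The clustering-and-label-propagation idea summarised in the abstract above can be illustrated with a minimal sketch. This is not the authors' pipeline: the flattened-pixel features, the k-NN graph and scikit-learn's LabelPropagation are stand-ins for the paper's symbol descriptors and community-detection-style propagation.

import numpy as np
from sklearn.semi_supervised import LabelPropagation

def transcribe_symbols(symbol_images, seed_labels):
    # symbol_images: equally sized 2-D arrays, one per segmented cipher symbol.
    # seed_labels: class id for the few manually labelled symbols, -1 elsewhere.
    X = np.stack([img.astype(float).ravel() for img in symbol_images])
    X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-8     # crude placeholder features
    model = LabelPropagation(kernel="knn", n_neighbors=5, max_iter=1000)
    model.fit(X, seed_labels)                                # spread labels over the k-NN graph
    return model.transduction_                               # one predicted class per symbol

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = [rng.normal(0.2, 0.05, (8, 8)) for _ in range(10)] + \
           [rng.normal(0.8, 0.05, (8, 8)) for _ in range(10)]  # two synthetic symbol "classes"
    seeds = np.full(20, -1)
    seeds[0], seeds[10] = 0, 1                               # one labelled exemplar per class
    print(transcribe_symbols(imgs, seeds))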
 
Author Beata Megyesi; Alicia Fornes; Nils Kopal; Benedek Lang
  Title Historical Cryptology Type Book Chapter
  Year 2024 Publication Learning and Experiencing Cryptography with CrypTool and SageMath
  Abstract Historical cryptology studies original encrypted manuscripts, often handwritten sources, produced throughout our history. These historical sources can be found in archives, often hidden without any indexing and therefore hard to locate. Once found, they need to be digitized and turned into a machine-readable text format before they can be deciphered with computational methods. The focus of historical cryptology is not primarily the development of sophisticated algorithms for decipherment, but rather the entire process of analysis of the encrypted source, from collection and digitization to transcription and decryption. The process also includes the interpretation and contextualization of the message set in its historical context. There are many challenges on the way, such as mistakes made by the scribe, errors made by the transcriber, damaged pages, handwriting styles that are difficult to interpret, historical languages from various time periods, and the hidden underlying language of the message. Ciphertexts vary greatly in terms of their code system and the symbol sets used, with more or less distinguishable symbols. Ciphertexts can be embedded in clearly written text, or shorter or longer sequences of cleartext can be embedded in the ciphertext. The ciphers used mostly in historical times are substitutions (simple, homophonic, or polyphonic), with or without nomenclatures, encoded as digits or symbol sequences, with or without spaces. So the circumstances are different from those in modern cryptography, which focuses on methods (algorithms) and their strengths and assumes that the algorithm is applied correctly. For both historical and modern cryptology, attack vectors outside the algorithm, such as implementation flaws and side-channel attacks, also apply. In this chapter, we give an introduction to the field of historical cryptology and present an overview of how researchers today process historical encrypted sources.
  Notes DAG Approved no  
  Call Number Admin @ si @ MFK2024 Serial 4020  
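As a toy complement to the chapter's discussion of substitution ciphers, the snippet below decodes a digit-encoded homophonic substitution once a key has been recovered. The key, ciphertext and plaintext are invented illustrations, not material from the chapter.

def decode(ciphertext_tokens, key):
    # key maps each cipher symbol to a plaintext letter; several symbols may map
    # to the same letter (homophones). Unrecovered symbols are kept in brackets.
    return "".join(key.get(tok, f"[{tok}]") for tok in ciphertext_tokens)

toy_key = {"12": "s", "47": "e", "03": "e", "88": "c", "19": "r", "55": "t"}
toy_cipher = ["12", "47", "88", "19", "03", "55"]
print(decode(toy_cipher, toy_key))   # -> "secret"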
 
Author Alicia Fornes; Xavier Otazu; Josep Llados
  Title Show through cancellation and image enhancement by multiresolution contrast processing Type Conference Article
  Year 2013 Publication 12th International Conference on Document Analysis and Recognition
  Pages 200-204
  Abstract Historical documents suffer from different types of degradation and noise, such as background variation, uneven illumination or dark spots. In the case of double-sided documents, another common problem is that the back side of the document usually interferes with the front side because of the transparency of the document or ink bleeding. This effect is called the show-through phenomenon. Many methods have been developed to solve these problems; in the case of show-through, most rely on scanning and matching both the front and back sides of the document. In contrast, our approach is designed to use only one side of the scanned document. We hypothesize that show-through corresponds to low contrast components, while foreground components are high contrast ones. A Multiresolution Contrast (MC) decomposition is presented in order to estimate the contrast of features at different spatial scales. We cancel the show-through phenomenon by thresholding these low contrast components. This decomposition is also able to enhance the image, removing shadowed areas by weighting spatial scales. Results show that the enhanced images improve the readability of the documents, allowing scholars both to recover unreadable words and to solve ambiguities.
  Address Washington; USA; August 2013  
  ISSN 1520-5363
  Conference ICDAR
  Notes DAG; 602.006; 600.045; 600.061; 600.052;CIC Approved no  
  Call Number Admin @ si @ FOL2013 Serial 2241  
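A simplified stand-in for the multiresolution contrast idea in the abstract above: band-pass (difference-of-Gaussians) layers approximate per-scale contrast, and low-amplitude coefficients are suppressed as likely show-through. The scales and threshold are arbitrary choices for illustration, not the paper's MC decomposition.

import numpy as np
from scipy.ndimage import gaussian_filter

def suppress_show_through(gray, sigmas=(1, 2, 4, 8), rel_threshold=0.15):
    # gray: 2-D float image in [0, 1]; returns an enhanced image of the same size.
    img = gray.astype(float)
    layers, prev = [], img
    for s in sigmas:
        blurred = gaussian_filter(img, s)
        band = prev - blurred                      # detail ("contrast") at this scale
        thr = rel_threshold * np.abs(band).max()
        band[np.abs(band) < thr] = 0.0             # drop faint components (show-through)
        layers.append(band)
        prev = blurred
    base = gaussian_filter(img, sigmas[-1])        # coarse background
    return np.clip(base + sum(layers), 0.0, 1.0)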
 
Author Ekta Vats; Anders Hast; Alicia Fornes
  Title Training-Free and Segmentation-Free Word Spotting using Feature Matching and Query Expansion Type Conference Article
  Year 2019 Publication 15th International Conference on Document Analysis and Recognition
  Pages 1294-1299
  Keywords Word spotting; Segmentation-free; Training-free; Query expansion; Feature matching
  Abstract Historical handwritten text recognition is an interesting yet challenging problem. In recent times, deep learning based methods have achieved significant performance in handwritten text recognition. However, handwriting recognition using deep learning needs training data, and often, text must be previously segmented into lines (or even words). These limitations constrain the application of HTR techniques in document collections, because training data or segmented words are not always available. Therefore, this paper proposes a training-free and segmentation-free word spotting approach that can be applied in unconstrained scenarios. The proposed word spotting framework is based on document query word expansion and a relaxed feature matching algorithm, which can easily be parallelised. Since handwritten words possess distinct shapes and characteristics, this work uses a combination of different keypoint detectors and Fourier-based descriptors to obtain a sufficient degree of relaxed matching. The effectiveness of the proposed method is empirically evaluated on well-known benchmark datasets using standard evaluation measures. The use of informative features along with query expansion contributed significantly to the efficient performance of the proposed method.
  Address Sydney; Australia; September 2019  
  Conference ICDAR
  Notes DAG; 600.140; 600.121 Approved no  
  Call Number Admin @ si @ VHF2019 Serial 3356  
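The feature-matching core of the training-free, segmentation-free idea can be sketched with generic OpenCV keypoints; the paper's combination of detectors, Fourier-based descriptors and query expansion is omitted here, so this is only an illustration of the matching step, not the authors' implementation.

import cv2
import numpy as np

def match_query_to_page(query_gray, page_gray, ratio=0.75):
    # Detect and describe keypoints on the query word image and on the full page.
    sift = cv2.SIFT_create()
    kq, dq = sift.detectAndCompute(query_gray, None)
    kp, dp = sift.detectAndCompute(page_gray, None)
    if dq is None or dp is None:
        return np.empty((0, 2))
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(dq, dp, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    # Page coordinates of the surviving matches; dense clusters of these points
    # are candidate occurrences of the query word.
    return np.array([kp[m.trainIdx].pt for m in good])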
 
Author Veronica Romero; Alicia Fornes; Enrique Vidal; Joan Andreu Sanchez
  Title Information Extraction in Handwritten Marriage Licenses Books Using the MGGI Methodology Type Conference Article
  Year 2017 Publication 8th Iberian Conference on Pattern Recognition and Image Analysis
  Volume 10255 Pages 287-294
  Keywords Handwritten Text Recognition; Information extraction; Language modeling; MGGI; Categories-based language model  
  Abstract Historical records of daily activities provide intriguing insights into the life of our ancestors, useful for demographic and genealogical research. For example, marriage license books have been used for centuries by ecclesiastical and secular institutions to register marriages. These books follow a simple text structure in the records, with an evolving vocabulary mainly composed of proper names that change over time. This distinct vocabulary makes automatic transcription and semantic information extraction difficult tasks. In previous works we studied the use of category-based language models and how a Grammatical Inference technique known as MGGI could improve the accuracy of these tasks. In this work we analyze the main causes of the semantic errors observed in previous results and apply a better implementation of the MGGI technique to solve these problems. Using the resulting language model, transcription and information extraction experiments have been carried out, and the results support our proposed approach.
  Address Faro; Portugal; June 2017  
  Editor L.A. Alexandre; J. Salvador Sanchez; Joao M. F. Rodriguez
  Abbreviated Series Title LNCS
  ISBN 978-3-319-58837-7
  Conference IbPRIA
  Notes DAG; 602.006; 600.097; 600.121 Approved no  
  Call Number Admin @ si @ RFV2017 Serial 2952  
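The role of a category-based language model in this kind of information extraction can be illustrated with a toy HMM-style tagger, where the category transition table plays the part of the language model. The categories, training record and smoothing below are invented; the paper's MGGI-derived model is considerably more refined.

import math
from collections import defaultdict

CATS = ["other", "name", "surname", "occupation"]

def train(tagged_records):
    # tagged_records: list of [(word, category), ...] sequences.
    trans = defaultdict(lambda: defaultdict(int))   # category bigram counts
    emit = defaultdict(lambda: defaultdict(int))    # word counts per category
    for rec in tagged_records:
        prev = "<s>"
        for word, cat in rec:
            trans[prev][cat] += 1
            emit[cat][word.lower()] += 1
            prev = cat
    return trans, emit

def logp(table, key, value, vocab=1000):
    total = sum(table[key].values())
    return math.log((table[key][value] + 1) / (total + vocab))  # add-one smoothing

def tag(words, trans, emit):
    # Viterbi decoding over categories.
    best = {c: (logp(trans, "<s>", c) + logp(emit, c, words[0].lower()), [c]) for c in CATS}
    for w in words[1:]:
        new = {}
        for c in CATS:
            score, path = max((best[p][0] + logp(trans, p, c), best[p][1]) for p in CATS)
            new[c] = (score + logp(emit, c, w.lower()), path + [c])
        best = new
    return max(best.values())[1]

toy = [[("dia", "other"), ("joan", "name"), ("ferrer", "surname"),
        ("pages", "occupation"), ("ab", "other"), ("maria", "name")]]
trans, emit = train(toy)
words = "dia joan ferrer pages ab maria".split()
print(list(zip(words, tag(words, trans, emit))))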
 
Author Veronica Romero; Alicia Fornes; Nicolas Serrano; Joan Andreu Sanchez; A.H. Toselli; Volkmar Frinken; E. Vidal; Josep Llados
  Title The ESPOSALLES database: An ancient marriage license corpus for off-line handwriting recognition Type Journal Article
  Year 2013 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 46 Issue 6 Pages 1658-1669  
  Abstract Historical records of daily activities provide intriguing insights into the life of our ancestors, useful for demography studies and genealogical research. Automatic processing of historical documents, however, has mostly been focused on single works of literature and less on social records, which tend to have a distinct layout, structure, and vocabulary. Such information is usually collected by expert demographers who devote a lot of time to transcribing it manually. This paper presents a new database, compiled from a collection of marriage license books, to support research in automatic handwriting recognition for historical documents containing social records. Marriage license books are documents that were used for centuries by ecclesiastical institutions to register marriage licenses. Books from this collection are handwritten and span nearly half a millennium until the beginning of the 20th century. In addition, a study is presented on the capability of state-of-the-art handwritten text recognition systems when applied to the presented database. Baseline results are reported for reference in future studies.
  Publisher Elsevier Science Inc. New York, NY, USA
  ISSN 0031-3203
  Notes DAG; 600.045; 602.006; 605.203 Approved no  
  Call Number Admin @ si @ RFS2013 Serial 2298  
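Baseline handwriting recognition results such as those reported for this corpus are usually given as character error rate; the following is a generic CER computation with an invented example, not code or numbers from the paper.

def cer(reference: str, hypothesis: str) -> float:
    # Levenshtein distance between the strings, divided by the reference length.
    prev = list(range(len(hypothesis) + 1))
    for i, r in enumerate(reference, 1):
        cur = [i]
        for j, h in enumerate(hypothesis, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1] / max(len(reference), 1)

print(cer("dit dia reberen", "dit dla rebere"))      # 2 character errors -> ~0.13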
 
Author Christophe Rigaud; Dimosthenis Karatzas; Jean-Christophe Burie; Jean-Marc Ogier
  Title Color descriptor for content-based drawing retrieval Type Conference Article
  Year 2014 Publication 11th IAPR International Workshop on Document Analysis Systems
  Pages 267-271
  Abstract Human detection is an active research topic in computer vision. Extending this to human-like drawings, such as the main characters in comic book stories, is not trivial. Comics analysis is a very recent field of research at the intersection of graphics, text, object and people recognition. The detection of the main comic characters is an essential step towards fully automatic comic book understanding. This paper presents a color-based approach for comic character retrieval using content-based drawing retrieval and a color palette.
  Address Tours; France; April 2014
  ISBN 978-1-4799-3243-6
  Conference DAS
  Notes DAG; 600.056; 600.077 Approved no  
  Call Number Admin @ si @ RKB2014 Serial 2479  
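A rough sketch of a colour-palette descriptor and a histogram-intersection ranking for character retrieval; the bin counts and the similarity measure are arbitrary choices for illustration, not the paper's descriptor.

import cv2
import numpy as np

def colour_descriptor(bgr_image, bins=(12, 4, 4)):
    # 3-D HSV histogram as a compact summary of a character's colour palette.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, None).flatten()

def rank_candidates(query_desc, candidate_descs):
    # Histogram intersection: larger is more similar; return indices best-first.
    sims = [np.minimum(query_desc, d).sum() for d in candidate_descs]
    return np.argsort(sims)[::-1]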
 
Author Khanh Nguyen; Ali Furkan Biten; Andres Mafla; Lluis Gomez; Dimosthenis Karatzas
  Title Show, Interpret and Tell: Entity-Aware Contextualised Image Captioning in Wikipedia Type Conference Article
  Year 2023 Publication Proceedings of the 37th AAAI Conference on Artificial Intelligence
  Volume 37 Issue 2 Pages 1940-1948
  Abstract Humans exploit prior knowledge to describe images, and are able to adapt their explanation to the specific contextual information given, even to the extent of inventing plausible explanations when contextual information and images do not match. In this work, we propose the novel task of captioning Wikipedia images by integrating contextual knowledge. Specifically, we produce models that jointly reason over Wikipedia articles, Wikimedia images and their associated descriptions to produce contextualized captions. The same Wikimedia image can be used to illustrate different articles, and the produced caption needs to be adapted to the specific context, which allows us to explore the limits of the model in adjusting captions to different contextual information. Dealing with out-of-dictionary words and Named Entities is a challenging task in this domain. To address this, we propose a pre-training objective, Masked Named Entity Modeling (MNEM), and show that this pretext task results in significantly improved models. Furthermore, we verify that a model pre-trained on Wikipedia generalizes well to News Captioning datasets. We further define two different test splits according to the difficulty of the captioning task. We offer insights on the role and the importance of each modality and highlight the limitations of our model.
  Address Washington; USA; February 2023  
  Conference AAAI
  Notes DAG Approved no  
  Call Number Admin @ si @ NBM2023 Serial 3860  
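The data preparation behind a Masked Named Entity Modeling style objective can be sketched as below: entity tokens in the caption are replaced by a mask token and kept as prediction targets. The whitespace tokenisation, entity list and mask string are simplifying assumptions, not the paper's implementation.

def mask_named_entities(caption, entities, mask_token="[MASK]"):
    # entities: surface forms of named entities known to appear in the caption.
    entity_tokens = {t.lower() for e in entities for t in e.split()}
    inputs, targets = [], []
    for tok in caption.split():
        if tok.strip(".,").lower() in entity_tokens:
            inputs.append(mask_token)
            targets.append(tok)        # the model learns to recover this token
        else:
            inputs.append(tok)
            targets.append(None)       # ignored by the training loss
    return inputs, targets

print(mask_named_entities("Lionel Messi lifts the trophy in Doha", ["Lionel Messi", "Doha"]))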
 
Author Chun Yang; Xu Cheng Yin; Hong Yu; Dimosthenis Karatzas; Yu Cao
  Title ICDAR2017 Robust Reading Challenge on Text Extraction from Biomedical Literature Figures (DeTEXT) Type Conference Article
  Year 2017 Publication 14th International Conference on Document Analysis and Recognition
  Pages 1444-1447
  Abstract Hundreds of millions of figures are available in the biomedical literature, representing important biomedical experimental evidence. Since text is a rich source of information in figures, automatically extracting such text may assist in the task of mining figure information and understanding biomedical documents. Unlike images in the open domain, biomedical figures present a variety of unique challenges. For example, biomedical figures typically have complex layouts, small font sizes, short text, specific text, complex symbols and irregular text arrangements. This paper presents the final results of the ICDAR 2017 Competition on Text Extraction from Biomedical Literature Figures (ICDAR2017 DeTEXT Competition), which aims at extracting (detecting and recognizing) text from biomedical literature figures. Similar to text extraction from scene images and web pictures, the ICDAR2017 DeTEXT Competition includes three major tasks, i.e., text detection, cropped word recognition and end-to-end text recognition. Here, we describe in detail the data set, tasks, evaluation protocols and participants of this competition, and report the performance of the participating methods.
  ISBN 978-1-5386-3586-5
  Conference ICDAR
  Notes DAG; 600.121 Approved no  
  Call Number Admin @ si @ YCY2017 Serial 3098  
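Evaluation in such text detection tasks typically starts from IoU-based matching between detected and ground-truth boxes; the greedy matcher below is a generic sketch, not the competition's official protocol.

def iou(a, b):
    # Axis-aligned boxes as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_detections(detections, ground_truth, thr=0.5):
    # Greedy one-to-one matching; returns (true positives, false positives, false negatives).
    unmatched = list(ground_truth)
    tp = 0
    for det in detections:
        best = max(unmatched, key=lambda g: iou(det, g), default=None)
        if best is not None and iou(det, best) >= thr:
            tp += 1
            unmatched.remove(best)
    return tp, len(detections) - tp, len(unmatched)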
 
Author Antonio Lopez; Ernest Valveny; Juan J. Villanueva
  Title Real-time quality control of surgical material packaging by artificial vision Type Journal Article
  Year 2005 Publication Assembly Automation
  Volume 25 Issue 3
  Abstract (IF: 0.061)
  Notes ADAS;DAG Approved no  
  Call Number ADAS @ adas @ LVV2005 Serial 552  