Author: Dimosthenis Karatzas; V. Poulain d'Andecy; Marçal Rusiñol
Title: Human-Document Interaction – a new frontier for document image analysis
Type: Conference Article
Year: 2016
Publication: 12th IAPR Workshop on Document Analysis Systems
Pages: 369-374
Abstract: All indications show that paper documents will not cede in favour of their digital counterparts, but will instead be used increasingly in conjunction with digital information. An open challenge is how to seamlessly link the physical with the digital – how to continue taking advantage of the important affordances of paper, without missing out on digital functionality. This paper presents the authors’ experience with developing systems for Human-Document Interaction based on augmented document interfaces, and examines new challenges and opportunities arising for the document image analysis field in this area. The system presented combines state-of-the-art camera-based document image analysis techniques with a range of complementary technologies to offer fluid Human-Document Interaction. Both fixed and nomadic setups that have gone through user testing in real-life environments are discussed, and use cases are presented that span the spectrum from business to educational applications.
Address: Santorini; Greece; April 2016
Conference: DAS
Notes: DAG; 600.084; 600.077
Call Number: KPR2016  Serial: 2756
 

 
Author: Lei Kang; Pau Riba; Yaxing Wang; Marçal Rusiñol; Alicia Fornes; Mauricio Villegas
Title: GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images
Type: Conference Article
Year: 2020
Publication: 16th European Conference on Computer Vision
Abstract: Although current image generation methods have reached impressive quality levels, they are still unable to produce plausible yet diverse images of handwritten words. On the contrary, when writing by hand, a great variability is observed across different writers, and even when analyzing words scribbled by the same individual, involuntary variations are conspicuous. In this work, we take a step closer to producing realistic and varied artificially rendered handwritten words. We propose a novel method that is able to produce credible handwritten word images by conditioning the generative process with both calligraphic style features and textual content. Our generator is guided by three complementary learning objectives: to produce realistic images, to imitate a certain handwriting style and to convey a specific textual content. Our model is unconstrained to any predefined vocabulary, being able to render whatever input word. Given a sample writer, it is also able to mimic its calligraphic features in a few-shot setup. We significantly advance over prior art and demonstrate with qualitative, quantitative and human-based evaluations the realistic aspect of our synthetically produced images.
Address: Virtual; August 2020
Conference: ECCV
Notes: DAG; 600.140; 600.121; 600.129
Call Number: Admin @ si @ KPW2020  Serial: 3426
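The two-way conditioning described in the abstract is easy to see in code. Below is a minimal sketch, assuming PyTorch; module names, tensor sizes and the loss wiring are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Word-image generator conditioned on style features and textual content."""
    def __init__(self, style_dim=128, char_emb=64, max_len=10, vocab=27):
        super().__init__()
        self.embed = nn.Embedding(vocab, char_emb)
        self.fc = nn.Linear(style_dim + max_len * char_emb, 64 * 8 * 32)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh())

    def forward(self, style, chars):
        content = self.embed(chars).flatten(1)     # textual-content code
        x = self.fc(torch.cat([style, content], dim=1))
        return self.up(x.view(-1, 64, 8, 32))      # (B, 1, 32, 128) word image

G = CondGenerator()
style = torch.randn(4, 128)              # few-shot calligraphic style features
chars = torch.randint(0, 27, (4, 10))    # padded character indices of the word
fake = G(style, chars)
print(fake.shape)  # torch.Size([4, 1, 32, 128])

# Training couples the three objectives from the abstract (D, W, R are
# placeholder networks: a discriminator, a writer classifier, a recognizer):
#   loss = adv_loss(D(fake)) + writer_loss(W(fake)) + content_loss(R(fake))
```

Because the content code is built from arbitrary character indices, nothing ties the generator to a predefined vocabulary, which is the point the abstract makes.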
 

 
Author: Jaume Gibert; Ernest Valveny
Title: Graph Embedding based on Nodes Attributes Representatives and a Graph of Words Representation
Type: Conference Article
Year: 2010
Publication: 13th International Workshop on Structural and Syntactic Pattern Recognition and 8th International Workshop on Statistical Pattern Recognition
Volume: 6218
Pages: 223-232
Abstract: Although graph embedding has recently been used to extend statistical pattern recognition techniques to the graph domain, existing embeddings are usually computationally expensive as they rely on classical graph-based operations. In this paper we present a new way to embed graphs into vector spaces by first encapsulating the information stored in the original graph under another graph representation, obtained by clustering the attributes of the graphs to be processed. This new representation makes the association of graphs to vectors an easy step: both the node attributes and the adjacency matrix are simply arranged in the form of vectors. To test our method, we use two different databases of graphs whose node attributes are of different nature. A comparison with a reference method shows that this new embedding is better in terms of classification rates, while being much faster.
Publisher: Springer Berlin Heidelberg
Editor: E.R. Hancock, R.C. Wilson, T. Windeatt, I. Ulusoy and F. Escolano
Abbreviated Series Title: LNCS
ISSN: 0302-9743
ISBN: 978-3-642-14979-5
Conference: S+SSPR
Notes: DAG
Call Number: DAG @ dag @ GiV2010  Serial: 1416
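A minimal sketch of the embedding idea, under assumptions of mine rather than the paper's exact construction: cluster all node attributes into k representatives, then map each graph to a fixed-length vector built from node-to-representative counts plus edge counts between representatives.

```python
import numpy as np
from sklearn.cluster import KMeans

def embed_graphs(graphs, k=4, seed=0):
    """graphs: list of (node_attrs, edges), node_attrs an (n_i, d) array,
    edges a list of (u, v) node-index pairs."""
    all_attrs = np.vstack([attrs for attrs, _ in graphs])
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_attrs)
    vectors = []
    for attrs, edges in graphs:
        labels = km.predict(attrs)
        node_part = np.bincount(labels, minlength=k).astype(float)
        edge_part = np.zeros((k, k))            # adjacency between representatives
        for u, v in edges:
            edge_part[labels[u], labels[v]] += 1
            edge_part[labels[v], labels[u]] += 1
        vectors.append(np.concatenate([node_part, edge_part.ravel()]))
    return np.array(vectors)   # one row per graph, usable by any vector classifier

# Two toy graphs with 2-D node attributes:
g1 = (np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0]]), [(0, 1), (1, 2)])
g2 = (np.array([[1.0, 1.0], [0.9, 1.1]]), [(0, 1)])
print(embed_graphs([g1, g2], k=2).shape)   # (2, 6): 2 counts + 2x2 edge matrix
```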
 

 
Author: L. Tarazon; D. Perez; N. Serrano; V. Alabau; Oriol Ramos Terrades; A. Sanchis; A. Juan
Title: Confidence Measures for Error Correction in Interactive Transcription of Handwritten Text
Type: Conference Article
Year: 2009
Publication: 15th International Conference on Image Analysis and Processing
Volume: 5716
Pages: 567-574
Abstract: An effective approach to transcribing old text documents is to follow an interactive-predictive paradigm in which the system is guided by the human supervisor and the supervisor is assisted by the system, so as to complete the transcription task as efficiently as possible. In this paper, we focus on a particular system prototype called GIDOC, which can be seen as a first attempt to provide user-friendly, integrated support for interactive-predictive page layout analysis, text line detection and handwritten text transcription. More specifically, we focus on the handwriting recognition part of GIDOC, for which we propose the use of confidence measures to guide the human supervisor in locating possible system errors and deciding how to proceed. Empirical results are reported on two datasets, showing that a word error rate no larger than 10% can be achieved by checking only the 32% of words that are recognised with the lowest confidence.
Address: Vietri sul Mare, Italy
Publisher: Springer Berlin Heidelberg
Abbreviated Series Title: LNCS
ISSN: 0302-9743
ISBN: 978-3-642-04145-7
Conference: ICIAP
Notes: DAG
Call Number: Admin @ si @ TPS2009  Serial: 1871
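The supervision scheme the abstract describes reduces to a simple routing rule. This is a hedged sketch with hypothetical data structures, not GIDOC code: each recognised word carries a confidence score, and only words below a threshold are sent to the human supervisor.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    confidence: float   # e.g. a word posterior probability from the recognizer

def words_to_review(words, threshold=0.8):
    """Return low-confidence words, least confident first."""
    return sorted((w for w in words if w.confidence < threshold),
                  key=lambda w: w.confidence)

line = [Word("quarenta", 0.35), Word("maravedis", 0.92), Word("de", 0.99)]
for w in words_to_review(line):
    print(f"check: {w.text!r} (confidence {w.confidence:.2f})")
# Tuning the threshold trades supervision effort against residual error;
# the paper reports <= 10% WER while checking only 32% of the words.
```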
 

 
Author: Kaida Xiao; Chenyang Fu; Dimosthenis Karatzas; Sophie Wuerger
Title: Visual Gamma Correction for LCD Displays
Type: Journal Article
Year: 2011
Publication: Displays
Abbreviated Journal: DIS
Volume: 32
Issue: 1
Pages: 17-23
Keywords: Display calibration; Psychophysics; Perceptual; Visual gamma correction; Luminance matching; Observer-based calibration
Abstract: An improved method for visual gamma correction is developed for LCD displays to increase the accuracy of digital colour reproduction. Rather than utilising a photometric measurement device, we use observers’ visual luminance judgements for gamma correction. Eight half-tone patterns were designed to generate relative luminances from 1/9 to 8/9 for each colour channel. A psychophysical experiment was conducted on an LCD display to find the digital signals corresponding to each relative luminance by visually matching the half-tone background to a uniform colour patch. Both inter- and intra-observer variability for the eight luminance matches in each channel were assessed, and the luminance matches proved to be consistent across observers (ΔE00 < 3.5) and repeatable (ΔE00 < 2.2). Based on the individual observer judgements, the display opto-electronic transfer function (OETF) was estimated by using either a 3rd-order polynomial regression or linear interpolation for each colour channel. The performance of the proposed method is evaluated by predicting the CIE tristimulus values of a set of coloured patches (using the observer-based OETFs) and comparing them to the expected CIE tristimulus values (using the OETF obtained from spectro-radiometric luminance measurements). The resulting colour differences range from 2 to 4.6 ΔE00. We conclude that this observer-based method of visual gamma correction is useful for estimating the OETF of LCD displays. Its major advantage is that no particular functional relationship between digital inputs and luminance outputs has to be assumed.
Publisher: Elsevier
Notes: DAG
Call Number: Admin @ si @ XFK2011  Serial: 1815
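The 3rd-order polynomial variant of the OETF estimation is a one-liner to reproduce. The sketch below uses invented example match values (the paper's data are not in this record); only the fitting step mirrors the method described in the abstract.

```python
import numpy as np

# Digital inputs an observer chose so a uniform patch matched half-tone
# backgrounds of relative luminance 1/9 .. 8/9 (hypothetical example values).
matched_digits = np.array([68, 97, 118, 137, 154, 171, 188, 206]) / 255.0
rel_luminance = np.arange(1, 9) / 9.0

# Estimate the OETF: luminance as a cubic polynomial of normalized digital input.
oetf = np.poly1d(np.polyfit(matched_digits, rel_luminance, deg=3))

# Predict the relative luminance produced by any digital input:
print(oetf(128 / 255.0))
# Inverting this mapping (luminance -> digital input) yields the
# gamma-correction lookup table; linear interpolation over the eight matches
# is the paper's alternative to the polynomial fit.
```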
 

 
Author: Lluis Pere de las Heras; Joan Mas; Gemma Sanchez; Ernest Valveny
Title: Notation-invariant patch-based wall detector in architectural floor plans
Type: Book Chapter
Year: 2013
Publication: Graphics Recognition. New Trends and Challenges
Volume: 7423
Pages: 79-88
Abstract: Architectural floor plans exhibit a large variability in notation. Therefore, segmenting and identifying the elements of any kind of plan becomes a challenging task for approaches based on grouping structural primitives obtained by vectorization. Recently, a patch-based segmentation method working at pixel level and relying on the construction of a visual vocabulary has been proposed in [1], showing its adaptability to different notations by automatically learning the visual appearance of the elements in each notation. This paper presents an evolution of that previous work, after analyzing and testing several alternatives for each step of the method: firstly, an automatic plan-size normalization is performed; secondly, we evaluate different features to obtain the description of every patch; thirdly, we train an SVM classifier to obtain the category of every patch instead of constructing a visual vocabulary. These variations of the method have been tested for wall detection on two datasets of architectural floor plans with different notations. After studying each step of the pipeline in depth, we are able to find the best system configuration, which largely outperforms the wall-segmentation results of the original paper.
Publisher: Springer Berlin Heidelberg
Abbreviated Series Title: LNCS
ISSN: 0302-9743
ISBN: 978-3-642-36823-3
Notes: DAG; 600.045; 600.056; 605.203
Call Number: Admin @ si @ HMS2013  Serial: 2322
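The normalize / describe / classify pipeline in the abstract can be condensed into a short sketch. Patch size, the descriptor and the training data below are stand-in assumptions, not the authors' configuration; only the structure (patch descriptors fed to an SVM that labels wall vs. background) follows the record above.

```python
import numpy as np
from sklearn.svm import SVC

PATCH = 16  # assumed patch size in pixels, after plan-size normalization

def patches(img):
    h, w = img.shape
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            yield img[y:y + PATCH, x:x + PATCH]

def describe(patch):
    # Stand-in descriptor: mean ink density per row and per column.
    return np.concatenate([patch.mean(axis=0), patch.mean(axis=1)])

# Training patches from annotated plans (random stand-in data here).
rng = np.random.default_rng(0)
X = rng.random((200, 2 * PATCH))
y = rng.integers(0, 2, 200)          # 1 = wall patch, 0 = background
clf = SVC(kernel="rbf").fit(X, y)

plan = rng.random((64, 64))          # a normalized floor-plan image (stand-in)
labels = clf.predict([describe(p) for p in patches(plan)])
print(labels.reshape(4, 4))          # per-patch wall mask
```

Replacing the visual vocabulary with a discriminative SVM is exactly the change the chapter studies, which is why the classifier is the only learned component in the sketch.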
 

 
Author: Lluis Pere de las Heras; Joan Mas; Gemma Sanchez; Ernest Valveny
Title: Descriptor-based SVM Wall Detector
Type: Conference Article
Year: 2011
Publication: 9th International Workshop on Graphics Recognition
Abstract: Architectural floor plans exhibit a large variability in notation. Therefore, segmenting and identifying the elements of any kind of plan becomes a challenging task for approaches based on grouping structural primitives obtained by vectorization. Recently, a patch-based segmentation method working at pixel level and relying on the construction of a visual vocabulary has been proposed, showing its adaptability to different notations by automatically learning the visual appearance of the elements in each notation. In this paper we describe an evolution of this approach in two directions: firstly, we evaluate different features to obtain the description of every patch; secondly, we train an SVM classifier to obtain the category of every patch instead of constructing a visual vocabulary. These modifications of the method have been tested for wall detection on two datasets of architectural floor plans with different notations, and compared with the results obtained with the original approach.
Conference: GREC
Notes: DAG
Call Number: Admin @ si @ HMS2011b  Serial: 1819
 

 
Author: Suman Ghosh; Ernest Valveny
Title: Visual attention models for scene text recognition
Type: Conference Article
Year: 2017
Publication: 14th International Conference on Document Analysis and Recognition
Abstract: (arXiv:1706.01487) In this paper we propose an approach to lexicon-free recognition of text in scene images. Our approach relies on an LSTM-based soft visual attention model learned from convolutional features. A set of feature vectors is derived from an intermediate convolutional layer corresponding to different areas of the image. This permits encoding of spatial information into the image representation. In this way, the framework is able to learn how to selectively focus on different parts of the image. At every time step the recognizer emits one character using a weighted combination of the convolutional feature vectors, according to the learned attention model. Training can be done end-to-end using only word-level annotations. In addition, we show that modifying the beam search algorithm by integrating an explicit language model leads to significantly better recognition results. We validate the performance of our approach on the standard SVT and ICDAR'03 scene text datasets, showing state-of-the-art performance in unconstrained text recognition.
Conference: ICDAR
Notes: DAG; 600.121
Call Number: Admin @ si @ GhV2017b  Serial: 3080
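The per-time-step mechanism in the abstract (score every spatial location against the LSTM state, emit a character from the attention-weighted feature sum) looks like this in a PyTorch sketch. Shapes, the additive scorer and the fixed step count are assumptions for illustration, not the paper's exact model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnDecoder(nn.Module):
    def __init__(self, feat_dim=256, hidden=256, n_chars=37):
        super().__init__()
        self.score = nn.Linear(feat_dim + hidden, 1)   # location scorer
        self.rnn = nn.LSTMCell(feat_dim, hidden)
        self.out = nn.Linear(hidden, n_chars)

    def forward(self, feats, steps=10):
        # feats: (B, N, feat_dim), N = H*W locations of an intermediate conv layer
        B, N, _ = feats.shape
        h = feats.new_zeros(B, self.rnn.hidden_size)
        c = feats.new_zeros(B, self.rnn.hidden_size)
        outputs = []
        for _ in range(steps):
            hs = h.unsqueeze(1).expand(-1, N, -1)
            e = self.score(torch.cat([feats, hs], dim=-1)).squeeze(-1)
            alpha = F.softmax(e, dim=1)                 # attention over locations
            ctx = (alpha.unsqueeze(-1) * feats).sum(1)  # weighted feature sum
            h, c = self.rnn(ctx, (h, c))
            outputs.append(self.out(h))                 # one character per step
        return torch.stack(outputs, dim=1)              # (B, steps, n_chars)

logits = AttnDecoder()(torch.randn(2, 8 * 25, 256))
print(logits.shape)  # torch.Size([2, 10, 37])
```

Because supervision touches only the emitted character sequence, word-level annotations suffice to train the whole loop end to end, as the abstract notes.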
 

 
Author: Suman Ghosh; Ernest Valveny
Title: R-PHOC: Segmentation-Free Word Spotting using CNN
Type: Conference Article
Year: 2017
Publication: 14th International Conference on Document Analysis and Recognition
Keywords: Convolutional neural network; Image segmentation; Artificial neural network; Nearest neighbor search
Abstract: (arXiv:1707.01294) This paper proposes a region-based convolutional neural network for segmentation-free word spotting. Our network takes as input an image and a set of word candidate bounding boxes, and embeds all bounding boxes into an embedding space where word spotting can be cast as a simple nearest-neighbour search between the query representation and each of the candidate bounding boxes. We make use of the PHOC embedding, as it has previously achieved significant success in segmentation-based word spotting. Word candidates are generated using a simple procedure based on grouping connected components under some spatial constraints. Experiments show that R-PHOC, which operates on images directly, can improve the current state of the art on the standard GW dataset, and in some cases performs as well as PHOCNET, which was designed for segmentation-based word spotting.
Conference: ICDAR
Notes: DAG; 600.121
Call Number: Admin @ si @ GhV2017a  Serial: 3079
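To make the embedding space concrete, here is a simplified PHOC construction and the nearest-neighbour step the abstract describes. The two pyramid levels and the centre-of-character assignment rule are simplifying assumptions (real PHOCs use more levels plus bigrams, and R-PHOC predicts the embedding from pixels rather than from the transcription).

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
LEVELS = (2, 3)   # simplified; standard PHOCs use levels 2..5 and bigrams

def phoc(word):
    vec = []
    n = len(word)
    for level in LEVELS:
        hist = np.zeros((level, len(ALPHABET)))
        for i, ch in enumerate(word):
            region = int(((i + 0.5) / n) * level)   # centre-of-character rule
            hist[region, ALPHABET.index(ch)] = 1
        vec.append(hist.ravel())
    return np.concatenate(vec)

# Word spotting as nearest-neighbour search between the query embedding and
# the embeddings of candidate bounding boxes (true PHOCs stand in for the
# CNN-predicted ones here).
candidates = ["farm", "form", "house"]
embs = np.stack([phoc(w) for w in candidates])
dists = np.linalg.norm(embs - phoc("farm"), axis=1)
print(candidates[int(dists.argmin())])   # -> 'farm'
```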
 

 
Author: J. Chazalon; P. Gomez-Kramer; Jean-Christophe Burie; M. Coustaty; S. Eskenazi; Muhammad Muzzamil Luqman; N. Nayef; Marçal Rusiñol; N. Sidere; Jean-Marc Ogier
Title: SmartDoc 2017 Video Capture: Mobile Document Acquisition in Video Mode
Type: Conference Article
Year: 2017
Publication: 1st International Workshop on Open Services and Tools for Document Analysis
Abstract: As mobile document acquisition using smartphones is getting more and more common, along with the continuous improvement of mobile devices (both in terms of computing power and image quality), one may wonder to what extent mobile phones can replace desktop scanners. Modern applications can cope with perspective distortion and normalize the contrast of a document page captured with a smartphone, and in some cases, such as bottle labels or posters, smartphones even have the advantage of allowing the acquisition of non-flat or large documents. However, several cases remain hard to handle, such as reflective documents (identity cards, badges, glossy magazine covers, etc.) or large documents for which some regions require an important amount of detail. This paper introduces the SmartDoc 2017 benchmark (named “SmartDoc Video Capture”), which aims at assessing whether capturing documents using the video mode of a smartphone could solve those issues. The task under evaluation is both a stitching and a reconstruction problem, as the user can move the device over different parts of the document to capture details or try to erase highlights. The material released consists of a dataset, an evaluation method and the associated tool, a sample method, and the tools required to extend the dataset. All the components are released publicly under very permissive licenses, and we particularly cared about maximizing the ease of understanding, usage and improvement.
Address: Kyoto; Japan; November 2017
Conference: ICDAR-OST
Notes: DAG; 600.084; 600.121
Call Number: Admin @ si @ CGB2017  Serial: 2997