Author Subhajit Maity; Sanket Biswas; Siladittya Manna; Ayan Banerjee; Josep Llados; Saumik Bhattacharya; Umapada Pal
  Title SelfDocSeg: A Self-Supervised vision-based Approach towards Document Segmentation Type Conference Article
  Year 2023 Publication 17th International Conference on Document Analysis and Recognition Abbreviated Journal  
  Volume 14187 Issue Pages 342–360  
  Keywords Document Layout Analysis; Document  
  Abstract Document layout analysis is a well-known problem in the document research community and has been vastly explored, yielding a multitude of solutions ranging from text mining and recognition to graph-based representation, visual feature extraction, etc. However, most of the existing works have ignored the crucial fact regarding the scarcity of labeled data. With growing internet connectivity in personal life, an enormous amount of documents has become available in the public domain, making data annotation a tedious task. We address this challenge using self-supervision and, unlike the few existing self-supervised document segmentation approaches which use text mining and textual labels, we use a completely vision-based approach in pre-training without any ground-truth label or its derivative. Instead, we generate pseudo-layouts from the document images to pre-train an image encoder to learn the document object representation and localization in a self-supervised framework before fine-tuning it with an object detection model. We show that our pipeline sets a new benchmark in this context and performs on par with the existing methods and the supervised counterparts, if not outperforming them. The code is made publicly available at: this https URL  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICDAR  
  Notes DAG Approved no  
  Call Number Admin @ si @ MBM2023 Serial 3990  
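The pseudo-layout idea in the abstract above can be illustrated with classical image operations. The sketch below is not the authors' pipeline; it assumes a plain binarise-dilate-connected-components recipe, and the parameter values (kernel, min_area) are purely illustrative. It shows how box-level pseudo ground truth can be produced from a page image with no manual labels.

import cv2
import numpy as np

def pseudo_layout(image_path, kernel=(5, 25), min_area=500):
    """Generate pseudo bounding boxes from a document image, no labels needed."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Otsu binarisation with inversion so ink becomes foreground (white).
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Dilation fuses characters of one region into a single blob.
    blob = cv2.dilate(binary, np.ones(kernel, np.uint8))
    n, _, stats, _ = cv2.connectedComponentsWithStats(blob)
    # Keep (x, y, w, h) of every sufficiently large blob as a pseudo-box.
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

Boxes of this kind would then serve as targets when pre-training the image encoder, before fine-tuning with a standard object detector.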
 

 
Author Lluis Gomez; Y. Patel; Marçal Rusiñol; C.V. Jawahar; Dimosthenis Karatzas
  Title Self-supervised learning of visual features through embedding images into text topic spaces Type Conference Article
  Year 2017 Publication 30th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract End-to-end training from scratch of current deep architectures for new computer vision problems would require Imagenet-scale datasets, and this is not always possible. In this paper we present a method that is able to take advantage of freely available multi-modal content to train computer vision algorithms without human supervision. We put forward the idea of performing self-supervised learning of visual features by mining a large scale corpus of multi-modal (text and image) documents. We show that discriminative visual features can be learnt efficiently by training a CNN to predict the semantic context in which a particular image is more probable to appear as an illustration. For this we leverage the hidden semantic structures discovered in the text corpus with a well-known topic modeling technique. Our experiments demonstrate state of the art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised or natural-supervised approaches.  
  Address Honolulu; Hawaii; July 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPR  
  Notes DAG; 600.084; 600.121 Approved no  
  Call Number Admin @ si @ GPR2017 Serial 2889  
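The core training signal described above, predicting the topic distribution of the article an image illustrates, can be sketched in a few lines. This is a toy reconstruction, not the paper's setup: the two-document corpus, the tiny CNN and the topic count are all placeholders.

import torch
import torch.nn as nn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

texts = ["sports match goal team", "stock market economy bank"]  # toy corpus
counts = CountVectorizer().fit_transform(texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = torch.tensor(lda.fit_transform(counts), dtype=torch.float32)

cnn = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
images = torch.randn(2, 3, 64, 64)        # stand-ins for the article images
log_probs = nn.functional.log_softmax(cnn(images), dim=1)
loss = -(topics * log_probs).sum(dim=1).mean()   # soft cross-entropy on topics
loss.backward()

The key design choice this illustrates is that the topic model provides free soft targets, so the CNN can be trained on mined web documents without any human annotation.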
 

 
Author Y. Patel; Lluis Gomez; Marçal Rusiñol; Dimosthenis Karatzas; C.V. Jawahar
  Title Self-Supervised Visual Representations for Cross-Modal Retrieval Type Conference Article
  Year 2019 Publication ACM International Conference on Multimedia Retrieval Abbreviated Journal  
  Volume Issue Pages 182–186  
  Keywords  
  Abstract Cross-modal retrieval methods have been significantly improved in recent years with the use of deep neural networks and large-scale annotated datasets such as ImageNet and Places. However, collecting and annotating such datasets requires a tremendous amount of human effort and, besides, their annotations are limited to discrete sets of popular visual classes that may not be representative of the richer semantics found on large-scale cross-modal retrieval datasets. In this paper, we present a self-supervised cross-modal retrieval framework that leverages as training data the correlations between images and text on the entire set of Wikipedia articles. Our method consists of training a CNN to predict: (1) the semantic context of the article in which an image is more probable to appear as an illustration, and (2) the semantic context of its caption. Our experiments demonstrate that the proposed method is not only capable of learning discriminative visual representations for solving vision tasks like classification, but that the learned representations are better for cross-modal retrieval when compared to supervised pre-training of the network on the ImageNet dataset.  
  Address Ottawa; Canada; June 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICMR  
  Notes DAG; 600.121; 600.129 Approved no  
  Call Number Admin @ si @ PGR2019 Serial 3288  
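A minimal sketch of the two prediction tasks named in the abstract, article context (1) and caption context (2), assuming topic distributions are already available as soft targets; the random tensors below only stand in for them, and the tiny trunk is not the paper's architecture.

import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    def __init__(self, n_topics=40):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.article_head = nn.Linear(16, n_topics)   # task (1) in the abstract
        self.caption_head = nn.Linear(16, n_topics)   # task (2) in the abstract

    def forward(self, x):
        h = self.trunk(x)
        return self.article_head(h), self.caption_head(h)

def soft_ce(logits, target):
    return -(target * logits.log_softmax(dim=1)).sum(dim=1).mean()

net = TwoHeadNet()
x = torch.randn(4, 3, 64, 64)
t_article = torch.softmax(torch.randn(4, 40), dim=1)  # placeholder topic targets
t_caption = torch.softmax(torch.randn(4, 40), dim=1)
a, c = net(x)
loss = soft_ce(a, t_article) + soft_ce(c, t_caption)  # joint two-task objective
loss.backward()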
 

 
Author Raul Gomez; Lluis Gomez; Jaume Gibert; Dimosthenis Karatzas
  Title Self-Supervised Learning from Web Data for Multimodal Retrieval Type Book Chapter
  Year 2019 Publication Multi-Modal Scene Understanding Book Abbreviated Journal  
  Volume Issue Pages 279-306  
  Keywords self-supervised learning; webly supervised learning; text embeddings; multimodal retrieval; multimodal embedding  
  Abstract Self-supervised learning from multimodal image and text data allows deep neural networks to learn powerful features with no need of human annotated data. Web and Social Media platforms provide a virtually unlimited amount of this multimodal data. In this work we propose to exploit this freely available data to learn a multimodal image and text embedding, aiming to leverage the semantic knowledge learnt in the text domain and transfer it to a visual model for semantic image retrieval. We demonstrate that the proposed pipeline can learn from images with associated text without supervision and analyze the semantic structure of the learnt joint image and text embedding space. We perform a thorough analysis and performance comparison of five different state of the art text embeddings in three different benchmarks. We show that the embeddings learnt with Web and Social Media data have competitive performances over supervised methods in the text based image retrieval task, and we clearly outperform the state of the art in the MIRFlickr dataset when training in the target data. Further, we demonstrate how semantic multimodal image retrieval can be performed using the learnt embeddings, going beyond classical instance-level retrieval problems. Finally, we present a new dataset, InstaCities1M, composed of Instagram images and their associated texts that can be used for fair comparison of image-text embeddings.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes DAG; 600.129; 601.338; 601.310 Approved no  
  Call Number Admin @ si @ GGG2019 Serial 3266  
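One way to read the pipeline described above is as regressing, from the image, the embedding of its associated text, and then retrieving by similarity in that joint space. The sketch below assumes a generic pretrained text embedding (the chapter compares five of them) and uses random tensors as stand-ins for real image-text pairs; the cosine loss is one plausible choice, not necessarily the chapter's.

import torch
import torch.nn as nn
import torch.nn.functional as F

text_vecs = F.normalize(torch.randn(8, 300), dim=1)   # placeholder text embeddings
images = torch.randn(8, 3, 64, 64)                    # the paired web images

visual = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                       nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                       nn.Linear(16, 300))
pred = F.normalize(visual(images), dim=1)
loss = (1 - (pred * text_vecs).sum(dim=1)).mean()     # cosine embedding loss
loss.backward()

# Text-based image retrieval: rank images against a query text vector.
ranking = (pred.detach() @ text_vecs[0]).argsort(descending=True)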
 

 
Author Raul Gomez; Ali Furkan Biten; Lluis Gomez; Jaume Gibert; Marçal Rusiñol; Dimosthenis Karatzas
  Title Selective Style Transfer for Text Type Conference Article
  Year 2019 Publication 15th International Conference on Document Analysis and Recognition Abbreviated Journal  
  Volume Issue Pages 805-812  
  Keywords transfer; text style transfer; data augmentation; scene text detection  
  Abstract This paper explores the possibilities of image style transfer applied to text, maintaining the original transcriptions. Results on different text domains (scene text, machine printed text and handwritten text) and cross-modal results demonstrate that this is feasible and open up different research lines. Furthermore, two architectures for selective style transfer, which means transferring style to only desired image pixels, are proposed. Finally, scene text selective style transfer is evaluated as a data augmentation technique to expand scene text detection datasets, resulting in a boost of text detector performance. Our implementation of the described models is publicly available.  
  Address Sydney; Australia; September 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICDAR  
  Notes DAG; 600.129; 600.135; 601.338; 601.310; 600.121 Approved no  
  Call Number GBG2019 Serial 3265  
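The "selective" part of the method, transferring style only to desired image pixels, boils down to a masked blend once a stylised image and a text mask exist. In the paper both come from learned networks; in this sketch they are random placeholders, so only the compositing step is shown.

import numpy as np

def selective_transfer(original, stylised, text_mask):
    """original, stylised: HxWx3 float arrays; text_mask: HxW values in [0, 1]."""
    m = text_mask[..., None]               # broadcast the mask over channels
    return m * stylised + (1.0 - m) * original

h, w = 32, 32
out = selective_transfer(np.random.rand(h, w, 3),      # placeholder inputs
                         np.random.rand(h, w, 3),
                         (np.random.rand(h, w) > 0.5).astype(float))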
 

 
Author Jon Almazan; Albert Gordo; Alicia Fornes; Ernest Valveny
  Title Segmentation-free Word Spotting with Exemplar SVMs Type Journal Article
  Year 2014 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 47 Issue 12 Pages 3967–3978  
  Keywords Word spotting; Segmentation-free; Unsupervised learning; Reranking; Query expansion; Compression  
  Abstract In this paper we propose an unsupervised segmentation-free method for word spotting in document images. Documents are represented with a grid of HOG descriptors, and a sliding-window approach is used to locate the document regions that are most similar to the query. We use the Exemplar SVM framework to produce a better representation of the query in an unsupervised way. Then, we use a more discriminative representation based on the Fisher Vector to rerank the best regions retrieved, and the most promising ones are used to expand the Exemplar SVM training set and improve the query representation. Finally, the document descriptors are precomputed and compressed with Product Quantization. This offers two advantages: first, a large number of documents can be kept in RAM at the same time; second, the sliding window becomes significantly faster since distances between quantized HOG descriptors can be precomputed. Our results significantly outperform other segmentation-free methods in the literature, both in accuracy and in speed and memory usage.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes DAG; 600.045; 600.056; 600.061; 602.006; 600.077 Approved no  
  Call Number Admin @ si @ AGF2014b Serial 2485  
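The exemplar-SVM step of the abstract above can be sketched as follows: a linear SVM trained with the single query descriptor as the only positive, then dot-product scoring of sliding windows. HOG extraction, Fisher Vector reranking and Product Quantization are omitted, and all descriptors are random stand-ins, so this only shows the one-positive training trick.

import numpy as np
from sklearn.svm import LinearSVC

d = 128                                   # descriptor length per window
query = np.random.rand(1, d)              # the single positive exemplar
negatives = np.random.rand(200, d)        # windows sampled from other pages
X = np.vstack([query, negatives])
y = np.array([1] + [0] * 200)
svm = LinearSVC(C=1.0).fit(X, y)          # exemplar SVM: one positive vs. many

windows = np.random.rand(1000, d)         # sliding windows over a test page
scores = windows @ svm.coef_.ravel() + svm.intercept_[0]
top = np.argsort(scores)[::-1][:10]       # candidate regions passed to reranking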
 

 
Author Pau Torras; Mohamed Ali Souibgui; Sanket Biswas; Alicia Fornes
  Title Segmentation-Free Alignment of Arbitrary Symbol Transcripts to Images Type Conference Article
  Year 2023 Publication Document Analysis and Recognition – ICDAR 2023 Workshops Abbreviated Journal  
  Volume 14193 Issue Pages 83-93  
  Keywords Historical Manuscripts; Symbol Alignment  
  Abstract Developing arbitrary symbol recognition systems is a challenging endeavour. Even using content-agnostic architectures such as few-shot models, performance can be substantially improved by providing a number of well-annotated examples for training. In some contexts, transcripts of the symbols are available without any position information associated to them, which enables the use of line-level recognition architectures. A way of providing this position information to detection-based architectures is finding systems that can align the input symbols with the transcription. In this paper we discuss some symbol alignment techniques that are suitable for low-data scenarios and provide an insight into their perceived strengths and weaknesses. In particular, we study the usage of Connectionist Temporal Classification models and attention-based sequence-to-sequence models, and we compare them against the results obtained with a few-shot recognition system.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICDAR  
  Notes DAG Approved no  
  Call Number Admin @ si @ TSS2023 Serial 3850  
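The alignment problem itself can be made concrete with a toy forced-alignment routine: given per-frame symbol posteriors from some recogniser, find the monotonic assignment of the known transcript to frames with maximum log-probability. This is a simplification of the CTC models studied in the paper (no blank symbol, best path only), with random posteriors standing in for a real recogniser.

import numpy as np

def align(log_probs, transcript):
    """log_probs: T x V per-frame log posteriors; transcript: symbol indices."""
    T, S = log_probs.shape[0], len(transcript)
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0, 0] = log_probs[0, transcript[0]]
    for t in range(1, T):
        for s in range(S):
            stay = dp[t - 1, s]                             # symbol continues
            step = dp[t - 1, s - 1] if s > 0 else -np.inf   # next symbol starts
            back[t, s] = 0 if stay >= step else 1
            dp[t, s] = max(stay, step) + log_probs[t, transcript[s]]
    s, starts = S - 1, []
    for t in range(T - 1, 0, -1):            # trace back each symbol's start
        if back[t, s]:
            starts.append(t)
            s -= 1
    return [0] + starts[::-1]                # start frame of every symbol

frames = np.log(np.random.dirichlet(np.ones(5), size=20))  # 20 frames, 5 classes
print(align(frames, [2, 0, 4]))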
 

 
Author Dimosthenis Karatzas; Marçal Rusiñol; Coen Antens; Miquel Ferrer
  Title Segmentation Robust to the Vignette Effect for Machine Vision Systems Type Conference Article
  Year 2008 Publication 19th International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract The vignette effect (radial fall-off) is commonly encountered in images obtained through certain image acquisition setups and can seriously hinder automatic analysis processes. In this paper we present a fast and efficient method for dealing with vignetting in the context of object segmentation in an existing industrial inspection setup. The vignette effect is modelled here as a circular, non-linear gradient. The method estimates the gradient parameters and employs them to perform segmentation. Segmentation results on a variety of images indicate that the presented method is able to successfully tackle the vignette effect.  
  Address Tampa, USA  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICPR  
  Notes DAG Approved no  
  Call Number DAG @ dag @ KRA2008 Serial 1065  
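The paper models the vignette effect as a circular, non-linear gradient; a least-squares version of that idea is sketched below. The even-polynomial form I(r) = a + b r^2 + c r^4 and the fixed centre are simplifying assumptions, not the authors' exact model, and the image is synthetic.

import numpy as np

def flatten_vignette(img, cx, cy):
    """Fit I(r) = a + b*r^2 + c*r^4 around (cx, cy) and return the residual."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((xx - cx) ** 2 + (yy - cy) ** 2).ravel().astype(float)
    A = np.stack([np.ones_like(r2), r2, r2 ** 2], axis=1)
    coef, *_ = np.linalg.lstsq(A, img.ravel().astype(float), rcond=None)
    return img - (A @ coef).reshape(h, w)    # fall-off removed, objects remain

yy, xx = np.mgrid[0:64, 0:64]
img = 200.0 - 0.002 * ((xx - 32) ** 2 + (yy - 32) ** 2)   # synthetic fall-off
img[20:28, 30:38] -= 60                                    # a dark object
residual = flatten_vignette(img, 32, 32)
mask = residual < residual.mean() - 2 * residual.std()     # segment the object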
 

 
Author Gemma Sanchez; Josep Llados; Enric Marti
  Title Segmentation and analysis of linial texture in plans Type Conference Article
  Year 1997 Publication Intelligence Artificielle et Complexité. Abbreviated Journal  
  Volume Issue Pages  
  Keywords Structural Texture, Voronoi, Hierarchical Clustering, String Matching.  
  Abstract The problem of texture segmentation and interpretation is one of the main concerns in the field of document analysis. Graphical documents often contain areas characterized by a structural texture whose recognition allows both document understanding and its storage in a more compact way. In this work, we focus on structural linial textures of regular repetition contained in plan documents. Starting from an attributed graph which represents the vectorized input image, we develop a method to segment textured areas and recognize their placement rules. We wish to emphasize that the searched textures do not follow a predefined pattern. Minimal closed loops of the input graph are computed and then hierarchically clustered. In this hierarchical clustering, a distance function between two closed loops is defined in terms of their area difference and boundary resemblance, computed by a string matching procedure. Finally it is noted that, when the texture consists of isolated primitive elements, the same method can be used after computing a Voronoi tessellation of the input graph.  
  Address Paris, France  
  Corporate Author Thesis  
  Publisher Place of Publication Paris Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference AERFAI  
  Notes DAG;IAM; Approved no  
  Call Number IAM @ iam @ SLM1997 Serial 1649  
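The clustering step can be illustrated under strong simplifications: each minimal closed loop is reduced to an (area, boundary direction string) pair, the loop-to-loop distance mixes normalized area difference with string edit distance, and SciPy performs the hierarchical clustering. Loop extraction, the chain codes and the distance weighting below are all illustrative, not taken from the paper.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def edit_distance(a, b):
    """Plain Levenshtein distance between two boundary-direction strings."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return d[-1, -1]

# Toy loops: (area, chain code of the boundary).
loops = [(100, "RRDDLLUU"), (105, "RRDDLLUU"), (400, "RDRDLULU")]
n = len(loops)
dist = np.zeros(n * (n - 1) // 2)         # condensed distance matrix for SciPy
k = 0
for i in range(n):
    for j in range(i + 1, n):
        (a1, s1), (a2, s2) = loops[i], loops[j]
        dist[k] = abs(a1 - a2) / max(a1, a2) + edit_distance(s1, s2) / 8.0
        k += 1
labels = fcluster(linkage(dist, method="average"), t=0.5, criterion="distance")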
 

 
Author Partha Pratim Roy; Umapada Pal; Josep Llados
  Title Seal Object Detection in Document Images using GHT of Local Component Shapes Type Conference Article
  Year 2010 Publication 10th ACM Symposium On Applied Computing Abbreviated Journal  
  Volume Issue Pages 23–27  
  Keywords  
  Abstract Due to noise, overlapped text/signatures and their multi-oriented nature, seal (stamp) object detection poses a difficult challenge. This paper deals with automatic detection of seals in documents with cluttered backgrounds. Here, a seal object is characterized by scale- and rotation-invariant spatial feature descriptors (distance and angular position) computed from the recognition results of individual connected components (characters). Recognition of multi-scale and multi-oriented components is done using a Support Vector Machine classifier. A Generalized Hough Transform (GHT) is used to detect the seal, and a vote is cast to find possible locations of the seal object in a document based on these spatial feature descriptors of component pairs. The peak of votes in the GHT accumulator validates the hypothesis to locate the seal object in a document. Experimental results show that the method is efficient at locating seal instances of arbitrary shape and orientation in documents.  
  Address Sierre, Switzerland  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference SAC  
  Notes DAG Approved no  
  Call Number DAG @ dag @ RPL2010a Serial 1291  
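The GHT voting scheme can be sketched with a toy R-table: every recognised character casts a vote for a seal-centre hypothesis through its stored offset, and the accumulator peak locates the seal. The per-component offsets below simplify the paper's pairwise (distance, angle) descriptors, and all labels and coordinates are invented.

import numpy as np

def vote_for_centres(chars, table, shape, cell=10):
    """chars: (label, x, y) detections; table: label -> offset to seal centre."""
    acc = np.zeros((shape[0] // cell + 1, shape[1] // cell + 1))
    for label, x, y in chars:
        if label in table:
            dx, dy = table[label]
            cx, cy = x + dx, y + dy              # hypothesised seal centre
            if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                acc[cy // cell, cx // cell] += 1
    return acc

table = {"A": (30, 12), "B": (-5, 40), "C": (18, -22)}   # toy R-table
chars = [("A", 100, 90), ("B", 135, 62), ("C", 112, 124), ("A", 400, 60)]
acc = vote_for_centres(chars, table, shape=(300, 500))
print("seal centre cell:", np.unravel_index(acc.argmax(), acc.shape))

Because the three consistent detections all vote for the same accumulator cell, the peak validates the seal hypothesis while the stray character contributes only an isolated vote.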