|
Anjan Dutta and Zeynep Akata. 2019. Semantically Tied Paired Cycle Consistency for Zero-Shot Sketch-based Image Retrieval. 32nd IEEE Conference on Computer Vision and Pattern Recognition, 5089–5098.
Abstract: Zero-shot sketch-based image retrieval (SBIR) is an emerging task in computer vision, allowing the retrieval of natural images relevant to sketch queries that might not have been seen in the training phase. Existing works either require aligned sketch-image pairs or an inefficient memory fusion layer for mapping the visual information to a semantic space. In this work, we propose a semantically aligned paired cycle-consistent generative (SEM-PCYC) model for zero-shot SBIR, where each branch maps the visual information to a common semantic space via adversarial training. Each of these branches maintains a cycle consistency that only requires supervision at the category level and avoids the need for costly aligned sketch-image pairs. A classification criterion on the generators' outputs ensures that the visual-to-semantic mapping is discriminative. Furthermore, we propose to combine textual and hierarchical side information via a feature selection auto-encoder that selects discriminative side information within the same end-to-end model. Our results demonstrate a significant boost in zero-shot SBIR performance over the state of the art on the challenging Sketchy and TU-Berlin datasets.
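Code sketch: a minimal illustration (not the authors' released code) of the cycle-consistency idea described in the abstract, assuming precomputed visual features and class-level semantic embeddings; dimensions, module names and the plain MSE terms are illustrative, and the full SEM-PCYC model additionally uses adversarial and classification losses.
```python
# Minimal sketch (not the authors' released code) of the cycle-consistency idea:
# a generator maps visual features to the semantic space, a second generator maps
# them back, and reconstruction is supervised only at the category level.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, d_in, d_out, d_hid=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, d_out))
    def forward(self, x):
        return self.net(x)

d_visual, d_semantic = 2048, 300          # e.g. CNN features and word embeddings
vis2sem = MLP(d_visual, d_semantic)       # forward generator (visual -> semantic)
sem2vis = MLP(d_semantic, d_visual)       # backward generator (semantic -> visual)

def cycle_loss(visual_feats, class_embeddings):
    """Map visual features to the semantic space and back; penalise both the
    distance to the class-level embedding and the cycle reconstruction error."""
    sem_pred = vis2sem(visual_feats)                  # visual -> semantic
    vis_rec = sem2vis(sem_pred)                       # semantic -> visual (cycle)
    l_sem = nn.functional.mse_loss(sem_pred, class_embeddings)
    l_cyc = nn.functional.mse_loss(vis_rec, visual_feats)
    return l_sem + l_cyc

# Random tensors stand in for sketch/image features and class embeddings.
feats = torch.randn(8, d_visual)
embeds = torch.randn(8, d_semantic)
print(cycle_loss(feats, embeds).item())
```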
|
|
|
Subhajit Maity and 6 others. 2023. SelfDocSeg: A Self-Supervised Vision-Based Approach towards Document Segmentation. 17th International Conference on Document Analysis and Recognition, 342–360.
Abstract: Document layout analysis is a well-known problem in the document research community and has been vastly explored, yielding a multitude of solutions ranging from text mining and recognition to graph-based representation, visual feature extraction, etc. However, most of the existing works have ignored the crucial issue of the scarcity of labeled data. With growing internet connectivity in personal life, an enormous number of documents has become available in the public domain, making data annotation a tedious task. We address this challenge using self-supervision and, unlike the few existing self-supervised document segmentation approaches which use text mining and textual labels, we use a completely vision-based approach in pre-training without any ground-truth label or its derivative. Instead, we generate pseudo-layouts from the document images to pre-train an image encoder to learn the document object representation and localization in a self-supervised framework before fine-tuning it with an object detection model. We show that our pipeline sets a new benchmark in this context and performs on par with the existing methods and the supervised counterparts, if not better. The code is made publicly available at: this https URL
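Code sketch: a rough, hedged illustration of how label-free pseudo-layout boxes could be derived from a document image with classical operations (Otsu binarisation, dilation, connected components); the actual SelfDocSeg heuristics may differ, and the file path is a placeholder.
```python
# Rough sketch of deriving pseudo-layout boxes from a raw document image with
# classical operations (Otsu binarisation + dilation + connected components).
# The actual SelfDocSeg heuristics may differ; 'page.png' is a placeholder path.
import cv2

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Dilate so that characters of the same region merge into a single blob.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 9))
blobs = cv2.dilate(binary, kernel, iterations=1)

# Each connected component becomes one pseudo-layout box (no human labels used).
num, _, stats, _ = cv2.connectedComponentsWithStats(blobs, connectivity=8)
boxes = [stats[i, :4] for i in range(1, num)           # skip the background label
         if stats[i, cv2.CC_STAT_AREA] > 500]          # drop speckle noise
print(f"{len(boxes)} pseudo-layout regions")
```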
|
|
|
Lluis Gomez, Y. Patel, Marçal Rusiñol, C.V. Jawahar and Dimosthenis Karatzas. 2017. Self-supervised learning of visual features through embedding images into text topic spaces. 30th IEEE Conference on Computer Vision and Pattern Recognition.
Abstract: End-to-end training from scratch of current deep architectures for new computer vision problems would require ImageNet-scale datasets, and this is not always possible. In this paper we present a method that is able to take advantage of freely available multi-modal content to train computer vision algorithms without human supervision. We put forward the idea of performing self-supervised learning of visual features by mining a large-scale corpus of multi-modal (text and image) documents. We show that discriminative visual features can be learnt efficiently by training a CNN to predict the semantic context in which a particular image is most likely to appear as an illustration. For this we leverage the hidden semantic structures discovered in the text corpus with a well-known topic modeling technique. Our experiments demonstrate state-of-the-art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised or naturally-supervised approaches.
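Code sketch: a minimal, hedged illustration of the training signal described above, using LDA as the topic model; a topic model fitted on the text corpus yields a topic distribution per article, and that distribution becomes the self-supervised target that a CNN regresses from the article's illustration. The toy corpus and topic count are placeholders.
```python
# Minimal illustration of the training signal described in the abstract: a topic
# model (LDA here) fitted on the text corpus provides a topic distribution for
# each article, and that distribution becomes the self-supervised target for the
# CNN that sees the article's illustration. Corpus and topic count are toy values.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

articles = ["a goalkeeper saves a penalty during the football match",
            "the orchestra performed the symphony in the concert hall",
            "the spacecraft entered orbit around the red planet"]

counts = CountVectorizer(stop_words="english").fit_transform(articles)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

# One soft target per article; a CNN would then be trained to predict this
# vector from the image alone (e.g. with a cross-entropy/KL loss over topics).
topic_targets = lda.transform(counts)
print(topic_targets.round(2))
```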
|
|
|
Y. Patel, Lluis Gomez, Marçal Rusiñol, Dimosthenis Karatzas and C.V. Jawahar. 2019. Self-Supervised Visual Representations for Cross-Modal Retrieval. ACM International Conference on Multimedia Retrieval, 182–186.
Abstract: Cross-modal retrieval methods have improved significantly in recent years with the use of deep neural networks and large-scale annotated datasets such as ImageNet and Places. However, collecting and annotating such datasets requires a tremendous amount of human effort and, besides, their annotations are limited to discrete sets of popular visual classes that may not be representative of the richer semantics found in large-scale cross-modal retrieval datasets. In this paper, we present a self-supervised cross-modal retrieval framework that leverages as training data the correlations between images and text on the entire set of Wikipedia articles. Our method consists of training a CNN to predict: (1) the semantic context of the article in which an image is most likely to appear as an illustration, and (2) the semantic context of its caption. Our experiments demonstrate that the proposed method is not only capable of learning discriminative visual representations for solving vision tasks like classification, but also that the learned representations are better for cross-modal retrieval when compared to supervised pre-training of the network on the ImageNet dataset.
|
|
|
Raul Gomez, Lluis Gomez, Jaume Gibert and Dimosthenis Karatzas. 2019. Self-Supervised Learning from Web Data for Multimodal Retrieval. Multi-Modal Scene Understanding (book chapter), 279–306.
Abstract: Self-supervised learning from multimodal image and text data allows deep neural networks to learn powerful features with no need of human-annotated data. Web and Social Media platforms provide a virtually unlimited amount of this multimodal data. In this work we propose to exploit this freely available data to learn a multimodal image and text embedding, aiming to leverage the semantic knowledge learnt in the text domain and transfer it to a visual model for semantic image retrieval. We demonstrate that the proposed pipeline can learn from images with associated text without supervision and analyze the semantic structure of the learnt joint image and text embedding space. We perform a thorough analysis and performance comparison of five different state-of-the-art text embeddings in three different benchmarks. We show that the embeddings learnt with Web and Social Media data have competitive performance compared to supervised methods in the text-based image retrieval task, and that we clearly outperform the state of the art on the MIRFlickr dataset when training on the target data. Further, we demonstrate how semantic multimodal image retrieval can be performed using the learnt embeddings, going beyond classical instance-level retrieval problems. Finally, we present a new dataset, InstaCities1M, composed of Instagram images and their associated texts, which can be used for fair comparison of image-text embeddings.
Keywords: self-supervised learning; webly supervised learning; text embeddings; multimodal retrieval; multimodal embedding
|
|
|
Raul Gomez, Ali Furkan Biten, Lluis Gomez, Jaume Gibert, Marçal Rusiñol and Dimosthenis Karatzas. 2019. Selective Style Transfer for Text. 15th International Conference on Document Analysis and Recognition, 805–812.
Abstract: This paper explores the possibilities of image style transfer applied to text while maintaining the original transcriptions. Results on different text domains (scene text, machine-printed text and handwritten text) and cross-modal results demonstrate that this is feasible and open up different research lines. Furthermore, two architectures for selective style transfer, i.e. transferring style only to the desired image pixels, are proposed. Finally, scene text selective style transfer is evaluated as a data augmentation technique to expand scene text detection datasets, resulting in a boost in text detector performance. Our implementation of the described models is publicly available.
Keywords: style transfer; text style transfer; data augmentation; scene text detection
|
|
|
Jon Almazan, Albert Gordo, Alicia Fornes and Ernest Valveny. 2014. Segmentation-free Word Spotting with Exemplar SVMs. Pattern Recognition, 47(12), 3967–3978.
Abstract: In this paper we propose an unsupervised segmentation-free method for word spotting in document images. Documents are represented with a grid of HOG descriptors, and a sliding-window approach is used to locate the document regions that are most similar to the query. We use the Exemplar SVM framework to produce a better representation of the query in an unsupervised way. Then, we use a more discriminative representation based on Fisher Vectors to rerank the best regions retrieved, and the most promising ones are used to expand the Exemplar SVM training set and improve the query representation. Finally, the document descriptors are precomputed and compressed with Product Quantization. This offers two advantages: first, a large number of documents can be kept in RAM at the same time; second, the sliding window becomes significantly faster since distances between quantized HOG descriptors can be precomputed. Our results significantly outperform other segmentation-free methods in the literature, both in accuracy and in speed and memory usage.
Keywords: Word spotting; Segmentation-free; Unsupervised learning; Reranking; Query expansion; Compression
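Code sketch: a toy illustration of the segmentation-free search step, where the page is a grid of HOG cells and every window position is scored with a linear (Exemplar-SVM style) dot product; shapes and the random descriptors are placeholders, and the reranking, query expansion and Product Quantization stages are omitted.
```python
# Sketch of the segmentation-free search step: the page is a grid of HOG cells,
# the query is a smaller grid of the same descriptors (or Exemplar-SVM weights),
# and every window position is scored by a dot product. Shapes are illustrative
# placeholders; reranking, query expansion and PQ compression are not shown.
import numpy as np

def slide_and_score(page_grid, query_grid):
    """page_grid: (H, W, D) HOG cells for the page;
       query_grid: (h, w, D) HOG cells (or E-SVM weights) for the query word."""
    H, W, _ = page_grid.shape
    h, w, _ = query_grid.shape
    scores = np.full((H - h + 1, W - w + 1), -np.inf)
    q = query_grid.ravel()
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            window = page_grid[i:i + h, j:j + w].ravel()
            scores[i, j] = float(q @ window)       # linear (E-SVM style) score
    return scores                                  # top peaks -> candidate regions

page = np.random.rand(40, 120, 31)    # toy page: 40x120 cells, 31-dim HOG
query = np.random.rand(4, 12, 31)     # toy query word template
scores = slide_and_score(page, query)
best = np.unravel_index(np.argmax(scores), scores.shape)
print("best window at cell", best)
```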
|
|
|
Pau Torras, Mohamed Ali Souibgui, Sanket Biswas and Alicia Fornes. 2023. Segmentation-Free Alignment of Arbitrary Symbol Transcripts to Images. Document Analysis and Recognition – ICDAR 2023 Workshops (LNCS), 83–93.
Abstract: Developing arbitrary symbol recognition systems is a challenging endeavour. Even when using content-agnostic architectures such as few-shot models, performance can be substantially improved by providing a number of well-annotated examples during training. In some contexts, transcripts of the symbols are available without any position information associated with them, which enables the use of line-level recognition architectures. A way of providing this position information to detection-based architectures is to find systems that can align the input symbols with the transcription. In this paper we discuss some symbol alignment techniques that are suitable for low-data scenarios and provide insight into their perceived strengths and weaknesses. In particular, we study the usage of Connectionist Temporal Classification models and attention-based sequence-to-sequence models, and we compare them with the results obtained with a few-shot recognition system.
Keywords: Historical Manuscripts; Symbol Alignment
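Code sketch: one common way to obtain the position information discussed above is forced alignment with a CTC-trained line-level model; the sketch below runs a Viterbi pass over the blank-extended transcript to assign each frame (image column) to a symbol. The posterior matrix is a random placeholder, not real model output, and this is not necessarily the exact procedure used in the paper.
```python
# Compact sketch of CTC forced alignment: given per-frame symbol posteriors from
# a line-level model and the known transcript, a Viterbi pass over the
# blank-extended label sequence assigns each frame (image column) to a symbol.
# The posterior matrix below is a random placeholder, not real model output.
import numpy as np

def ctc_align(log_probs, transcript, blank=0):
    """log_probs: (T, C) log posteriors; transcript: list of label ids."""
    ext = [blank]
    for c in transcript:
        ext += [c, blank]                          # blank-extended label sequence
    T, S = log_probs.shape[0], len(ext)
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0, 0] = log_probs[0, ext[0]]
    dp[0, 1] = log_probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            cands = [dp[t - 1, s], dp[t - 1, s - 1] if s >= 1 else -np.inf]
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                cands.append(dp[t - 1, s - 2])     # skip the previous blank
            k = int(np.argmax(cands))
            dp[t, s] = log_probs[t, ext[s]] + cands[k]
            back[t, s] = k                         # 0: stay, 1: from s-1, 2: from s-2
    s = S - 1 if dp[T - 1, S - 1] >= dp[T - 1, S - 2] else S - 2
    path = [s]
    for t in range(T - 1, 0, -1):
        s -= back[t, s]
        path.append(s)
    path.reverse()
    # Non-blank extended states map back to transcript positions: state 2k+1 -> symbol k.
    return [(p - 1) // 2 if ext[p] != blank else None for p in path]

T, C = 30, 10
frames = ctc_align(np.log(np.random.dirichlet(np.ones(C), size=T)), [3, 7, 2])
print(frames)   # per-frame symbol index (None = blank)
```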
|
|
|
Dimosthenis Karatzas, Marçal Rusiñol, Coen Antens and Miquel Ferrer. 2008. Segmentation Robust to the Vignette Effect for Machine Vision Systems. 19th International Conference on Pattern Recognition.
Abstract: The vignette effect (radial fall-off) is commonly encountered in images obtained through certain image acquisition setups and can seriously hinder automatic analysis processes. In this paper we present a fast and efficient method for dealing with vignetting in the context of object segmentation in an existing industrial inspection setup. The vignette effect is modelled here as a circular, non-linear gradient. The method estimates the gradient parameters and employs them to perform segmentation. Segmentation results on a variety of images indicate that the presented method is able to successfully tackle the vignette effect.
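Code sketch: a simple, hedged illustration of modelling the radial fall-off and dividing it out before segmentation; here the gain is an even polynomial in the radius fitted by least squares, which is only a stand-in for the specific circular non-linear gradient model used in the paper.
```python
# Simple sketch of the idea: model the vignette as a smooth radial fall-off
# (here an even polynomial in the radius, fitted by least squares around the
# image centre), then divide it out before segmentation. This is a stand-in
# for the specific circular non-linear gradient model used in the paper.
import numpy as np

def correct_vignette(gray):
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(xx - w / 2.0, yy - h / 2.0)
    r /= r.max()
    # Fit gain(r) = a0 + a2*r^2 + a4*r^4 to the observed intensities.
    A = np.stack([np.ones(r.size), r.ravel() ** 2, r.ravel() ** 4], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, gray.ravel().astype(float), rcond=None)
    gain = (A @ coeffs).reshape(h, w)
    gain /= gain.max()
    return np.clip(gray / np.maximum(gain, 1e-3), 0, 255)

# Toy example: a flat page darkened towards the corners.
h, w = 200, 300
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(xx - w / 2, yy - h / 2) / np.hypot(w / 2, h / 2)
page = 200 * (1 - 0.6 * r ** 2)
flat = correct_vignette(page)
print(page.min().round(1), flat.min().round(1))   # corners recover brightness
```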
|
|
|
Gemma Sanchez, Josep Llados and Enric Marti. 1997. Segmentation and analysis of linial texture in plans. Actes de la conférence Artificielle et Complexité, Paris.
Abstract: The problem of texture segmentation and interpretation is one of the main concerns in the field of document analysis. Graphical documents often contain areas characterized by a structural texture whose recognition allows both document understanding and its storage in a more compact way. In this work, we focus on structural linial textures of regular repetition contained in plan documents. Starting from an attributed graph which represents the vectorized input image, we develop a method to segment textured areas and recognize their placement rules. We wish to emphasize that the searched textures do not follow a predefined pattern. Minimal closed loops of the input graph are computed and then hierarchically clustered. In this hierarchical clustering, a distance function between two closed loops is defined in terms of their area difference and boundary resemblance, computed by a string matching procedure. Finally, it is noted that, when the texture consists of isolated primitive elements, the same method can be used after computing a Voronoi tessellation of the input graph.
Keywords: Structural Texture; Voronoi; Hierarchical Clustering; String Matching
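Code sketch: a toy, hedged illustration of the loop distance described in the abstract, combining relative area difference with an edit distance between boundary chain codes, followed by hierarchical clustering; the loops are hand-made placeholders rather than loops parsed from a plan, and the weighting is arbitrary.
```python
# Toy sketch of the loop-distance idea described in the abstract: combine the
# relative area difference of two closed loops with an edit distance between
# their boundary chain codes, then feed the pairwise distances to hierarchical
# clustering. Loops below are hand-made placeholders, not parsed from a plan.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def edit_distance(a, b):
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return d[-1, -1]

def loop_distance(l1, l2):
    # Arbitrary 50/50 weighting of area difference and boundary resemblance.
    area = abs(l1["area"] - l2["area"]) / max(l1["area"], l2["area"])
    shape = edit_distance(l1["chain"], l2["chain"]) / max(len(l1["chain"]), len(l2["chain"]))
    return 0.5 * area + 0.5 * shape

loops = [{"area": 4.0, "chain": "RRDDLLUU"},      # small square
         {"area": 4.2, "chain": "RRDDLLUU"},      # near-identical small square
         {"area": 9.0, "chain": "RRRDDDLLLUUU"}]  # larger square
D = [loop_distance(loops[i], loops[j]) for i in range(3) for j in range(i + 1, 3)]
labels = fcluster(linkage(np.array(D), method="average"), t=0.3, criterion="distance")
print(labels)   # the two similar loops fall into the same cluster
```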
|
|