|
|
Author |
Ayan Banerjee; Sanket Biswas; Josep Llados; Umapada Pal |
|
|
Title |
GraphKD: Exploring Knowledge Distillation Towards Document Object Detection with Structured Graph Creation |
Type |
Miscellaneous |
|
Year |
2024 |
Publication |
arXiv |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Object detection in documents is a key step in automating the identification of structural elements in a digital or scanned document, through understanding the hierarchical structure and relationships between different elements. Large and complex models, while achieving high accuracy, can be computationally expensive and memory-intensive, making them impractical for deployment on resource-constrained devices. Knowledge distillation allows us to create small, more efficient models that retain much of the performance of their larger counterparts. We present a graph-based knowledge distillation framework to correctly identify and localize document objects in a document image. We design a structured graph whose nodes contain proposal-level features and whose edges represent the relationships between the different proposal regions. In addition, to reduce text bias, an adaptive node sampling strategy is designed to prune the weight distribution and put more weight on non-text nodes. We encode the complete graph as a knowledge representation and transfer it from the teacher to the student through the proposed distillation loss, effectively capturing both local and global information concurrently. Extensive experimentation on competitive benchmarks demonstrates that the proposed framework outperforms the current state-of-the-art approaches. The code will be available at: this https URL. |
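The graph-encoding idea in the abstract (nodes carrying proposal-level features, edges their pairwise relations, and a loss matching teacher and student graphs) can be sketched in a few lines of NumPy. This is a hedged illustration only: the cosine-similarity edges, the two MSE terms, and the `alpha` weighting are assumptions for exposition, not the authors' actual loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_adj(x):
    """Edges as cosine similarity between proposal features."""
    n = x / np.linalg.norm(x, axis=1, keepdims=True)
    return n @ n.T

def graph_distill_loss(t_nodes, s_nodes, t_adj, s_adj, alpha=1.0):
    """Toy graph-distillation loss: match node (local) features and
    adjacency (global structure) between teacher and student graphs.
    Hypothetical formulation, not the paper's exact objective."""
    node_term = np.mean((t_nodes - s_nodes) ** 2)   # local information
    edge_term = np.mean((t_adj - s_adj) ** 2)       # global relationships
    return node_term + alpha * edge_term

# 5 proposal regions with 8-d RoI features each
teacher = rng.normal(size=(5, 8))
student = teacher + 0.1 * rng.normal(size=(5, 8))   # imperfect student

loss = graph_distill_loss(teacher, student,
                          cosine_adj(teacher), cosine_adj(student))
```

In a real distillation setup these terms would be computed on RoI features inside the detector's training loop; here random features merely show that perfectly matched graphs drive the loss to zero.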
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ BBL2024b |
Serial |
4023 |
|
|
|
|
|
Author |
Mohamed Ali Souibgui; Sanket Biswas; Andres Mafla; Ali Furkan Biten; Alicia Fornes; Yousri Kessentini; Josep Llados; Lluis Gomez; Dimosthenis Karatzas |
|
|
Title |
Text-DIAE: a self-supervised degradation invariant autoencoder for text recognition and document enhancement |
Type |
Conference Article |
|
Year |
2023 |
Publication |
Proceedings of the 37th AAAI Conference on Artificial Intelligence |
Abbreviated Journal |
|
|
|
Volume |
37 |
Issue |
2 |
Pages |
|
|
|
Keywords |
Representation Learning for Vision; CV Applications; CV Language and Vision; ML Unsupervised; Self-Supervised Learning |
|
|
Abstract |
In this paper, we propose a Text-Degradation Invariant Auto Encoder (Text-DIAE), a self-supervised model designed to tackle two tasks: text recognition (handwritten or scene text) and document image enhancement. We start by employing a transformer-based architecture that incorporates three pretext tasks as learning objectives to be optimized during pre-training without using labelled data. Each of the pretext objectives is specifically tailored to the final downstream tasks. We conduct several ablation experiments that confirm the design choice of the selected pretext tasks. Importantly, the proposed model does not exhibit the limitations of previous state-of-the-art methods based on contrastive losses, while at the same time requiring substantially fewer data samples to converge. Finally, we demonstrate that our method surpasses the state-of-the-art in existing supervised and self-supervised settings in handwritten and scene text recognition and document image enhancement. Our code and trained models will be made publicly available at https://github.com/dali92002/SSL-OCR |
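The degradation-invariant pretext idea above can be illustrated with a toy pair generator: corrupt a clean crop and ask the model to reconstruct the original. The specific degradations used here (Gaussian noise plus random masking) and the function name are illustrative assumptions, not the paper's exact pretext tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(img, noise=0.2, mask_frac=0.3):
    """Build one self-supervised training pair (degraded input, clean
    target). Noise + masking stand in for the paper's pretext tasks;
    names and parameters are illustrative."""
    corrupted = img + noise * rng.normal(size=img.shape)
    mask = rng.random(img.shape) < mask_frac   # randomly blank pixels
    corrupted[mask] = 0.0
    return corrupted, img

page = rng.random((32, 32))   # toy grayscale document crop
x, y = degrade(page)          # input / reconstruction target
```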
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
AAAI |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ SBM2023 |
Serial |
3848 |
|
|
|
|
|
Author |
Khanh Nguyen; Ali Furkan Biten; Andres Mafla; Lluis Gomez; Dimosthenis Karatzas |
|
|
Title |
Show, Interpret and Tell: Entity-Aware Contextualised Image Captioning in Wikipedia |
Type |
Conference Article |
|
Year |
2023 |
Publication |
Proceedings of the 37th AAAI Conference on Artificial Intelligence |
Abbreviated Journal |
|
|
|
Volume |
37 |
Issue |
2 |
Pages |
1940-1948 |
|
|
Keywords |
|
|
|
Abstract |
Humans exploit prior knowledge to describe images, and are able to adapt their explanation to the specific contextual information given, even to the extent of inventing plausible explanations when contextual information and images do not match. In this work, we propose the novel task of captioning Wikipedia images by integrating contextual knowledge. Specifically, we produce models that jointly reason over Wikipedia articles, Wikimedia images and their associated descriptions to produce contextualized captions. The same Wikimedia image can be used to illustrate different articles, and the produced caption needs to be adapted to the specific context, allowing us to explore the limits of the model in adjusting captions to different contextual information. Dealing with out-of-dictionary words and Named Entities is a challenging task in this domain. To address this, we propose a pre-training objective, Masked Named Entity Modeling (MNEM), and show that this pretext task results in significantly improved models. Furthermore, we verify that a model pre-trained on Wikipedia generalizes well to News Captioning datasets. We further define two different test splits according to the difficulty of the captioning task. We offer insights on the role and the importance of each modality and highlight the limitations of our model. |
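A minimal sketch of the masked-entity idea behind MNEM: replace each named entity in a caption with a mask token, so a model must recover it from the visual and article context. The helper name, mask token, and string-level replacement are hypothetical; the paper's tokenizer-level implementation will differ.

```python
import re

def mask_named_entities(caption, entities, mask_token="[MASK]"):
    """Toy masked-entity pretext objective: hide every listed entity
    mention; the training target is to predict them back."""
    # Longest entities first, so "Marie Curie" is masked before "Curie".
    for ent in sorted(entities, key=len, reverse=True):
        caption = re.sub(re.escape(ent), mask_token, caption)
    return caption

masked = mask_named_entities(
    "Marie Curie lecturing at the Sorbonne in Paris",
    ["Marie Curie", "Sorbonne", "Paris"])
# masked == "[MASK] lecturing at the [MASK] in [MASK]"
```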
|
|
Address |
Washington; USA; February 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
AAAI |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ NBM2023 |
Serial |
3860 |
|
|
|
|
|
Author |
Sounak Dey; Anjan Dutta; Suman Ghosh; Ernest Valveny; Josep Llados |
|
|
Title |
Aligning Salient Objects to Queries: A Multi-modal and Multi-object Image Retrieval Framework |
Type |
Conference Article |
|
Year |
2018 |
Publication |
14th Asian Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
In this paper we propose an approach for multi-modal image retrieval in multi-labelled images. A multi-modal deep network architecture is formulated to jointly model sketches and text as input query modalities in a common embedding space, which is then further aligned with the image feature space. Our architecture also relies on salient object detection through a supervised LSTM-based visual attention model learned from convolutional features. Both the alignment between the queries and the image and the supervision of the attention on the images are obtained by generalizing the Hungarian Algorithm with different loss functions. This permits encoding the object-based features and their alignment with the query irrespective of whether different objects co-occur in the training set. We validate the performance of our approach on standard single- and multi-object datasets, showing state-of-the-art performance on every dataset. |
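The alignment step above rests on solving an optimal one-to-one assignment between query elements and detected salient objects under a cost (loss) matrix. As a self-contained illustration, a brute-force solver over permutations makes the objective explicit; the Hungarian Algorithm that the paper generalizes solves the same problem in polynomial time. The cost values are invented for the example.

```python
import itertools
import numpy as np

def best_assignment(cost):
    """Exhaustive optimal one-to-one assignment: rows are query
    elements, columns are detected objects; minimizes total cost.
    Fine for tiny n; the Hungarian Algorithm scales to larger n."""
    n = cost.shape[0]
    best, best_perm = float("inf"), None
    for perm in itertools.permutations(range(n)):
        c = sum(cost[i, p] for i, p in enumerate(perm))
        if c < best:
            best, best_perm = c, perm
    return best, best_perm

cost = np.array([[0.1, 0.9, 0.8],
                 [0.7, 0.2, 0.9],
                 [0.8, 0.7, 0.3]])
total, perm = best_assignment(cost)   # matches element i to object perm[i]
```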
|
|
Address |
Perth; Australia; December 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ACCV |
|
|
Notes |
DAG; 600.097; 600.121; 600.129 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DDG2018a |
Serial |
3151 |
|
|
|
|
|
Author |
Oriol Ramos Terrades; Alejandro Hector Toselli; Nicolas Serrano; Veronica Romero; Enrique Vidal; Alfons Juan |
|
|
Title |
Interactive layout analysis and transcription systems for historic handwritten documents |
Type |
Conference Article |
|
Year |
2010 |
Publication |
10th ACM Symposium on Document Engineering |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
219–222 |
|
|
Keywords |
Handwriting recognition; Interactive predictive processing; Partial supervision; Interactive layout analysis |
|
|
Abstract |
The amount of digitized legacy documents has risen dramatically in recent years, due mainly to the increasing number of on-line digital libraries publishing such documents, which wait to be classified and finally transcribed into a textual electronic format (such as ASCII or PDF). Nevertheless, most of the available fully-automatic applications addressing this task are far from perfect, and heavy, inefficient human intervention is often required to check and correct their results. In contrast, multimodal interactive-predictive approaches allow users to participate in the process, helping the system improve its overall performance. With this in mind, two sets of recent advances are introduced in this work: a novel interactive method for text block detection, and two multimodal interactive handwritten text transcription systems that use active learning and interactive-predictive technologies in the recognition process. |
|
|
Address |
Manchester, United Kingdom |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ACM |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ RTS2010 |
Serial |
1857 |
|
|
|
|
|
Author |
Raul Gomez; Yahui Liu; Marco de Nadai; Dimosthenis Karatzas; Bruno Lepri; Nicu Sebe |
|
|
Title |
Retrieval Guided Unsupervised Multi-domain Image to Image Translation |
Type |
Conference Article |
|
Year |
2020 |
Publication |
28th ACM International Conference on Multimedia |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Image to image translation aims to learn a mapping that transforms an image from one visual domain to another. Recent works assume that image descriptors can be disentangled into a domain-invariant content representation and a domain-specific style representation. Thus, translation models seek to preserve the content of source images while changing the style to a target visual domain. However, synthesizing new images is extremely challenging, especially in multi-domain translations, as the network has to compose content and style to generate reliable and diverse images in multiple domains. In this paper we propose the use of an image retrieval system to assist the image-to-image translation task. First, we train an image-to-image translation model to map images to multiple domains. Then, we train an image retrieval model using real and generated images to find images similar in content to a query but from a different domain. Finally, we exploit the image retrieval system to fine-tune the image-to-image translation model and generate higher quality images. Our experiments show the effectiveness of the proposed solution and highlight the contribution of the retrieval network, which can benefit from additional unlabeled data and help image-to-image translation models in the presence of scarce data. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ACM |
|
|
Notes |
DAG; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ GLN2020 |
Serial |
3497 |
|
|
|
|
|
Author |
Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades |
|
|
Title |
Document noise removal using sparse representations over learned dictionary |
Type |
Conference Article |
|
Year |
2013 |
Publication |
Symposium on Document engineering |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
161-168 |
|
|
Keywords |
|
|
|
Abstract |
Best paper award.
In this paper, we propose an algorithm for denoising document images using sparse representations. Given a training set, this algorithm is able to learn the main document characteristics and also the kind of noise present in the documents. In this perspective, we propose to model the noise energy based on the normalized cross-correlation between pairs of noisy and non-noisy documents. Experimental results on several datasets demonstrate the robustness of our method compared with the state-of-the-art. |
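The core mechanism of sparse-representation denoising is to reconstruct each signal as a combination of a few dictionary atoms, so that noise, which no small set of atoms explains, is discarded. A greedy orthogonal matching pursuit in NumPy sketches this; a random fixed dictionary stands in for the learned one and the noise model is plain Gaussian, both simplifying assumptions relative to the paper.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of dictionary
    D that best explain y, re-solving least squares on the support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
# Overcomplete random dictionary with unit-norm atoms (a dictionary
# learned with e.g. K-SVD would be used in practice).
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)

clean = 2.0 * D[:, 3] - 1.5 * D[:, 40]          # 2-sparse ground truth
noisy = clean + 0.05 * rng.normal(size=64)
denoised = D @ omp(D, noisy, k=2)               # sparse reconstruction

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

Because the reconstruction lives in the span of only two atoms, most of the noise energy cannot survive it, so `err_denoised` drops well below `err_noisy`.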
|
|
Address |
Barcelona; October 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-1-4503-1789-4 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ACM-DocEng |
|
|
Notes |
DAG; 600.061 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DTR2013a |
Serial |
2330 |
|
|
|
|
|
Author |
Gemma Sanchez; Josep Llados; Enric Marti |
|
|
Title |
Segmentation and analysis of linial texture in plans |
Type |
Conference Article |
|
Year |
1997 |
Publication |
Intelligence Artificielle et Complexité. |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
Structural Texture; Voronoi; Hierarchical Clustering; String Matching |
|
|
Abstract |
The problem of texture segmentation and interpretation is one of the main concerns in the field of document analysis. Graphical documents often contain areas characterized by a structural texture whose recognition enables both understanding the document and storing it more compactly. In this work, we focus on structural linial textures of regular repetition contained in plan documents. Starting from an attributed graph that represents the vectorized input image, we develop a method to segment textured areas and recognize their placement rules. We wish to emphasize that the searched textures do not follow a predefined pattern. Minimal closed loops of the input graph are computed and then hierarchically clustered. In this hierarchical clustering, a distance function between two closed loops is defined in terms of their area difference and their boundary resemblance, computed by a string matching procedure. Finally, it is noted that, when the texture consists of isolated primitive elements, the same method can be used after computing a Voronoi Tessellation of the input graph. |
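The clustering distance described above combines an area difference with a boundary resemblance obtained by string matching. A toy version, with loop boundaries written as 4-direction chain-code strings and a plain Levenshtein distance, might look as follows; the equal weighting `w` and the chain-code encoding are assumptions for illustration, not the paper's exact definitions.

```python
def edit_distance(a, b):
    """Levenshtein distance between two boundary strings (e.g.
    chain codes of closed loops) via dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def loop_distance(area_a, area_b, chain_a, chain_b, w=0.5):
    """Distance between two closed loops: area difference plus
    boundary resemblance via string matching (hypothetical weights)."""
    return w * abs(area_a - area_b) + (1 - w) * edit_distance(chain_a, chain_b)

# Two similar square-ish loops described by 4-direction chain codes
d = loop_distance(16, 18, "RRDDLLUU", "RRRDDLLU", w=0.5)
```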
|
|
Address |
Paris, France |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
Paris |
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
AERFAI |
|
|
Notes |
DAG;IAM; |
Approved |
no |
|
|
Call Number |
IAM @ iam @ SLM1997 |
Serial |
1649 |
|
|
|
|
|
Author |
Jon Almazan; Albert Gordo; Alicia Fornes; Ernest Valveny |
|
|
Title |
Efficient Exemplar Word Spotting |
Type |
Conference Article |
|
Year |
2012 |
Publication |
23rd British Machine Vision Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
67.1-67.11 |
|
|
Keywords |
|
|
|
Abstract |
In this paper we propose an unsupervised, segmentation-free method for word spotting in document images. Documents are represented with a grid of HOG descriptors, and a sliding window approach is used to locate the document regions most similar to the query. We use the exemplar SVM framework to produce a better representation of the query in an unsupervised way. Finally, the document descriptors are precomputed and compressed with Product Quantization. This offers two advantages: first, a large number of documents can be kept in RAM at the same time; second, the sliding window becomes significantly faster, since distances between quantized HOG descriptors can be precomputed. Our results significantly outperform other segmentation-free methods in the literature, in accuracy as well as in speed and memory usage. |
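The speed argument in the abstract hinges on Product Quantization: once a query's distances to every sub-space centroid are tabulated, scoring each database window reduces to a handful of table lookups. A toy quantizer in NumPy sketches the idea; random-init centroids with a few Lloyd iterations stand in for full k-means, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pq_train(X, n_sub, k=4):
    """Train a toy product quantizer: split descriptors into n_sub
    sub-vectors and fit k centroids per sub-space."""
    d = X.shape[1] // n_sub
    codebooks = []
    for s in range(n_sub):
        sub = X[:, s * d:(s + 1) * d]
        C = sub[rng.choice(len(sub), k, replace=False)]
        for _ in range(5):  # a few Lloyd iterations
            assign = np.argmin(((sub[:, None] - C[None]) ** 2).sum(-1), axis=1)
            for c in range(k):
                if np.any(assign == c):
                    C[c] = sub[assign == c].mean(0)
        codebooks.append(C)
    return codebooks

def pq_encode(X, codebooks):
    """Compress each descriptor to one centroid index per sub-space."""
    d = X.shape[1] // len(codebooks)
    return np.stack([
        np.argmin(((X[:, s*d:(s+1)*d][:, None] - C[None]) ** 2).sum(-1), axis=1)
        for s, C in enumerate(codebooks)], axis=1)

def pq_distances(q, codes, codebooks):
    """Asymmetric distances: precompute query-to-centroid tables once,
    then score every database item with cheap lookups -- the trick
    that makes the sliding window fast."""
    d = len(q) // len(codebooks)
    tables = [((C - q[s*d:(s+1)*d]) ** 2).sum(-1) for s, C in enumerate(codebooks)]
    return np.array([sum(tables[s][c] for s, c in enumerate(code)) for code in codes])

X = rng.normal(size=(100, 8))   # e.g. HOG descriptors of 100 windows
cb = pq_train(X, n_sub=2)
codes = pq_encode(X, cb)
dists = pq_distances(X[0], codes, cb)   # query = window 0
```

Item 0's own code is, by construction, the per-sub-space nearest reconstruction of the query, so it scores lowest.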
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
1-901725-46-4 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
DAG @ dag @ AGF2012 |
Serial |
1984 |
|
|
|
|
|
Author |
Marçal Rusiñol; Lluis Gomez; A. Landman; M. Silva Constenla; Dimosthenis Karatzas |
|
|
Title |
Automatic Structured Text Reading for License Plates and Utility Meters |
Type |
Conference Article |
|
Year |
2019 |
Publication |
BMVC Workshop on Visual Artificial Intelligence and Entrepreneurship |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Reading text in images has attracted interest from computer vision researchers for many years. Our technology focuses on the extraction of structured text – such as serial numbers, machine readings, product codes, etc. – so that it can center its attention on just the relevant textual elements. It is conceived to work in an end-to-end fashion, bypassing any explicit text segmentation stage. In this paper we present two different industrial use cases where we have applied our automatic structured text reading technology. In the first, we demonstrate outstanding performance when reading license plates compared to the current state of the art. In the second, we present results of our solution for reading utility meters. The technology is commercialized by a recently created spin-off company, and both solutions are at different stages of integration with final clients. |
|
|
Address |
Cardiff; UK; September 2019 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC-VAIE19 |
|
|
Notes |
DAG; 600.129 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RGL2019 |
Serial |
3283 |
|