Author: Ruben Tito; Dimosthenis Karatzas; Ernest Valveny
Title: Hierarchical multimodal transformers for Multi-Page DocVQA
Type: Journal Article
Year: 2023
Publication: Pattern Recognition
Abbreviated Journal: PR
Volume: 144
Pages: 109834
Abstract:
Document Visual Question Answering (DocVQA) refers to the task of answering questions from document images. Existing work on DocVQA considers only single-page documents. However, in real scenarios documents are mostly composed of multiple pages that should be processed together. In this work, we extend DocVQA to the multi-page scenario. To that end, we first create a new dataset, MP-DocVQA, in which questions are posed over multi-page documents instead of single pages. Second, we propose Hi-VT5, a new hierarchical method based on the T5 architecture that overcomes the limitations of current methods in processing long multi-page documents. The method uses a hierarchical transformer architecture in which the encoder summarizes the most relevant information of every page and the decoder then takes this summarized information to generate the final answer. Through extensive experimentation, we demonstrate that our method can, in a single stage, answer the question and identify the page that contains the information needed to find the answer, which can serve as a kind of explainability measure.

ISSN: 0031-3203
Notes: DAG; 600.155; 600.121
Approved: no
Call Number: Admin @ si @ TKV2023
Serial: 3825
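
The abstract above describes a summarize-then-decode pattern: the encoder compresses each page into a small set of summary embeddings, and the decoder attends only to the concatenated summaries. Below is a minimal sketch of that pattern, assuming learnable summary tokens prepended to each page's input; the class name, module sizes, and token count are illustrative assumptions, not the authors' released code (which builds on T5).

```python
# Illustrative sketch only: a generic summarize-then-decode transformer,
# not the authors' Hi-VT5 implementation.
import torch
import torch.nn as nn

class HierarchicalQASketch(nn.Module):
    def __init__(self, d_model=512, n_summary=10, vocab_size=32128):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.page_encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        # Learnable tokens whose encoded states serve as the per-page summary.
        self.summary_tokens = nn.Parameter(torch.randn(n_summary, d_model))
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, pages, answer_emb):
        # pages: (batch, n_pages, seq_len, d_model) question + page token embeddings
        # answer_emb: (batch, ans_len, d_model) shifted answer embeddings
        b, n_pages, _, _ = pages.shape
        k = self.summary_tokens.shape[0]
        summaries = []
        for p in range(n_pages):
            tok = self.summary_tokens.unsqueeze(0).expand(b, -1, -1)
            enc = self.page_encoder(torch.cat([tok, pages[:, p]], dim=1))
            summaries.append(enc[:, :k])          # keep only the summary slots
        memory = torch.cat(summaries, dim=1)      # (batch, n_pages * k, d_model)
        return self.lm_head(self.decoder(answer_emb, memory))
```

The key design point is that the decoder's memory grows with the number of pages times the (small) summary length, not with the full token count of the document, which is what makes long multi-page inputs tractable.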

Author: Souhail Bakkali; Zuheng Ming; Mickael Coustaty; Marçal Rusiñol; Oriol Ramos Terrades
Title: VLCDoC: Vision-Language Contrastive Pre-Training Model for Cross-Modal Document Classification
Type: Journal Article
Year: 2023
Publication: Pattern Recognition
Abbreviated Journal: PR
Volume: 139
Pages: 109419
Abstract:
Multimodal learning from document data has achieved great success recently, as it allows semantically meaningful features to be pre-trained as a prior for a learnable downstream approach. In this paper, we approach the document classification problem by learning cross-modal representations through language and vision cues, considering both intra- and inter-modality relationships. Instead of merging features from different modalities into a common representation space, the proposed method exploits high-level interactions and learns relevant semantic information from effective attention flows within and across modalities. The proposed learning objective combines intra- and inter-modality alignment tasks, where the similarity distribution per task is computed by contracting positive sample pairs while simultaneously contrasting negative ones in the common feature representation space. Extensive experiments on public document classification datasets demonstrate the effectiveness and generalization capacity of our model on both small-scale and large-scale datasets.

ISSN: 0031-3203
Notes: DAG; 600.140; 600.121
Approved: no
Call Number: Admin @ si @ BMC2023
Serial: 3826
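
The inter-modality alignment described above, contracting matched vision/text pairs while contrasting mismatched ones in a shared space, can be illustrated with a symmetric InfoNCE-style loss. This is a generic sketch under that reading, not the paper's exact objective; the function name and temperature value are assumptions.

```python
# Generic symmetric contrastive alignment loss (InfoNCE-style sketch),
# not the VLCDoC paper's exact formulation.
import torch
import torch.nn.functional as F

def inter_modal_contrastive_loss(vision_emb, text_emb, temperature=0.07):
    """Pull matched vision/text pairs together, push mismatched pairs apart.

    vision_emb, text_emb: (batch, dim) embeddings of the same documents,
    aligned by index (row i of each tensor describes the same document).
    """
    v = F.normalize(vision_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                       # pairwise similarities
    targets = torch.arange(v.size(0), device=v.device)   # diagonal = positives
    # Symmetric cross-entropy over both matching directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```

An analogous intra-modality term (augmented views of the same modality as positives) would give the intra/inter pair of alignment tasks the abstract mentions.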

Author: Ruben Tito; Dimosthenis Karatzas; Ernest Valveny
Title: Hierarchical multimodal transformers for Multipage DocVQA
Type: Journal Article
Year: 2023
Publication: Pattern Recognition
Abbreviated Journal: PR
Volume: 144
Pages: 109834
Abstract:
Existing work on DocVQA considers only single-page documents. However, in real applications documents are mostly composed of multiple pages that should be processed together. In this work, we propose a new multimodal hierarchical method, Hi-VT5, that overcomes the limitations of current methods in processing long multipage documents. In contrast to previous hierarchical methods that focus on different semantic granularity (He et al., 2021) or on different subtasks (Zhou et al., 2022) in image classification, our method is a hierarchical transformer architecture in which the encoder learns to summarize the most relevant information of every page and the decoder then uses this summarized representation to generate the final answer, following a bottom-up approach. Moreover, given the lack of multipage DocVQA datasets, we also introduce MP-DocVQA, an extension of SP-DocVQA in which questions are posed over multipage documents instead of single pages. Through extensive experimentation, we demonstrate that Hi-VT5 can, in a single stage, answer the question and identify the page that contains the answer, which can be used as a kind of explainability measure.

Notes: DAG
Approved: no
Call Number: Admin @ si @ TKV2023
Serial: 3836
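
Besides generating the answer, the abstract states that Hi-VT5 identifies the page containing it within the same stage. One plausible reading is a small head that scores each page's summary embeddings; the sketch below is a hypothetical illustration of that idea, not the paper's actual prediction head.

```python
# Hypothetical illustration only: score per-page summaries to pick the page
# most likely to contain the answer.
import torch
import torch.nn as nn

class AnswerPageHead(nn.Module):
    def __init__(self, d_model=512, n_summary=10):
        super().__init__()
        self.score = nn.Linear(d_model * n_summary, 1)

    def forward(self, page_summaries):
        # page_summaries: (batch, n_pages, n_summary, d_model), e.g. the
        # encoder outputs kept for the summary tokens of each page.
        b, n_pages, k, d = page_summaries.shape
        logits = self.score(page_summaries.reshape(b, n_pages, k * d)).squeeze(-1)
        return logits  # logits.argmax(dim=-1) gives the predicted answer page
```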

Author: Ayan Banerjee; Sanket Biswas; Josep Llados; Umapada Pal
Title: SemiDocSeg: Harnessing Semi-Supervised Learning for Document Layout Analysis
Type: Journal Article
Year: 2024
Publication: International Journal on Document Analysis and Recognition
Abbreviated Journal: IJDAR
Keywords: Document layout analysis; Semi-supervised learning; Co-occurrence matrix; Instance segmentation; Swin transformer
Abstract:
Document Layout Analysis (DLA) is the process of automatically identifying and categorizing the structural components (e.g., Text, Figure, Table) within a document in order to extract meaningful content and establish the page's layout structure. It is a crucial stage in document parsing, contributing to document comprehension. However, traditional DLA approaches often demand a significant volume of labeled training data, and the labor-intensive task of generating high-quality annotated training data poses a substantial challenge. To address this challenge, we propose a semi-supervised setting that learns from a limited set of annotated categories, eliminating exhaustive and expensive mask annotations. The proposed setting is expected to generalize to novel categories, as it learns the underlying positional information through a support set and class information through co-occurrence, both of which can be transferred from annotated categories to novel ones. We first extract features from the input image and the support set with a shared multi-scale feature acquisition backbone. The extracted feature representation is then fed to the transformer encoder as a query. Before the decoder, we use a semantic embedding network to capture the underlying semantic relationships and similarities between different instances, enabling the model to make accurate predictions or classifications with only a limited amount of labeled data. Extensive experimentation on competitive benchmarks such as PRIMA, DocLayNet, and Historical Japanese (HJ) demonstrates that this generalized setup achieves significant performance compared to the conventional supervised approach.

Address: June 2024
Notes: DAG
Approved: no
Call Number: Admin @ si @ BBL2024a
Serial: 4001
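
The abstract leans on class co-occurrence as a prior that transfers from annotated to novel categories. As a concrete illustration of the simplest version of that idea, the sketch below counts how often layout categories appear on the same page; the data layout and function name are assumptions, not the paper's code.

```python
# Illustrative sketch: a class co-occurrence matrix over page-level layout
# annotations, the kind of statistic that can act as a class prior.
import numpy as np

def class_cooccurrence(pages, n_classes):
    """pages: list of per-page label lists, e.g. [[0, 2, 2], [1, 0], ...],
    where each integer is a layout category (Text, Figure, Table, ...)."""
    co = np.zeros((n_classes, n_classes), dtype=np.int64)
    for labels in pages:
        present = np.unique(labels)
        for i in present:
            for j in present:
                if i != j:
                    co[i, j] += 1  # classes i and j share at least one page
    return co

# Example: 3 pages, categories 0=Text, 1=Figure, 2=Table
print(class_cooccurrence([[0, 2, 2], [1, 0], [0, 1, 2]], n_classes=3))
```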

Author: Ernest Valveny; Enric Marti
Title: Deformable Template Matching within a Bayesian Framework for Hand-Written Graphic Symbol Recognition
Type: Journal Article
Year: 2000
Publication: Graphics Recognition: Recent Advances
Volume: 1941
Pages: 193-208
Abstract:
We describe a method for hand-drawn symbol recognition based on deformable template matching that can handle the uncertainty and imprecision inherent in hand-drawing. Symbols are represented as a set of straight lines, and their deformations as geometric transformations of these lines. Matching, however, is done over the original binary image to avoid loss of information during line detection. It is defined as an energy minimization problem, using a Bayesian framework that allows us to combine fidelity to the ideal shape of the symbol with the flexibility to modify the symbol so as to best fit the binary input image. Prior to matching, we find the best global transformation of the symbol with which to start the recognition process, based on the distance between symbol lines and image lines. We have applied this method to the recognition of dimensions and symbols in architectural floor plans, and we show its flexibility in recognizing distorted symbols.

Corporate Author: Springer Verlag
Publisher: Springer Verlag
Notes: DAG; IAM
Approved: no
Call Number: IAM @ iam @ MVA2000
Serial: 1655
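
The Bayesian formulation the abstract alludes to can be written as maximum a posteriori estimation over the deformation parameters, which becomes an energy minimization problem after taking negative logarithms. The terms below are a generic sketch consistent with that framing, not the paper's exact energy definitions.

```latex
% Generic MAP view of deformable template matching (a sketch, not the
% paper's exact terms). T: template, d: deformation parameters, I: image.
\[
  \hat{d} \;=\; \arg\max_{d}\, P(d \mid I, T)
          \;=\; \arg\max_{d}\, P(I \mid T, d)\, P(d)
\]
% Taking negative logarithms turns MAP estimation into energy minimization,
% balancing fit to the image against deviation from the ideal shape:
\[
  E(d) \;=\; \underbrace{-\log P(I \mid T, d)}_{\text{image fidelity}}
       \;+\; \underbrace{-\log P(d)}_{\text{shape prior}},
  \qquad \hat{d} \;=\; \arg\min_{d} E(d)
\]
```

The likelihood term rewards deformed templates that fit the binary image, while the prior penalizes departures from the ideal symbol shape, which is exactly the fidelity/flexibility trade-off the abstract describes.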