%0 Generic
%T TransferDoc: A Self-Supervised Transferable Document Representation Learning Model Unifying Vision and Language
%A Souhail Bakkali
%A Sanket Biswas
%A Zuheng Ming
%A Mickael Coustaty
%A Marçal Rusiñol
%A Oriol Ramos Terrades
%A Josep Llados
%D 2023
%F Souhail Bakkali2023
%O DAG
%O exported from refbase (http://refbase.cvc.uab.es/show.php?record=3995), last updated on Wed, 31 Jan 2024 10:52:33 +0100
%X The field of visual document understanding has witnessed rapid growth in emerging challenges and powerful multi-modal strategies. However, these strategies rely on an extensive amount of document data to learn their pretext objectives in a "pre-train-then-fine-tune" paradigm and thus suffer a significant performance drop in real-world online industrial settings. One major reason is the over-reliance on OCR engines to extract local positional information within a document page, which hinders the model's generalizability, flexibility, and robustness because global information within the document image is not captured. We introduce TransferDoc, a cross-modal transformer-based architecture pre-trained in a self-supervised fashion using three novel pretext objectives. TransferDoc learns richer semantic concepts by unifying language and visual representations, which enables the production of more transferable models. In addition, two novel downstream tasks are introduced for a "closer-to-real" industrial evaluation scenario, in which TransferDoc outperforms other state-of-the-art approaches.
%9 miscellaneous
%U https://arxiv.org/abs/2309.05756
%U http://refbase.cvc.uab.es/files/BBM2023.pdf