PT Journal
AU Souhail Bakkali; Zuheng Ming; Mickael Coustaty; Marçal Rusiñol; Oriol Ramos Terrades
TI VLCDoC: Vision-Language Contrastive Pre-Training Model for Cross-Modal Document Classification
SO Pattern Recognition
JI PR
PY 2023
BP 109419
VL 139
DI 10.1016/j.patcog.2023.109419
AB Multimodal learning from document data has achieved great success lately, as it allows semantically meaningful features to be pre-trained as a prior for learnable downstream tasks. In this paper, we approach the document classification problem by learning cross-modal representations through language and vision cues, considering both intra- and inter-modality relationships. Instead of merging features from different modalities into a common representation space, the proposed method exploits high-level interactions and learns relevant semantic information from effective attention flows within and across modalities. The proposed learning objective is devised over intra- and inter-modality alignment tasks, where the similarity distribution per task is computed by contracting positive sample pairs while simultaneously contrasting negative ones in the common feature representation space. Extensive experiments on public document classification datasets demonstrate the effectiveness and generalization capacity of our model on both small-scale and large-scale datasets.
ER
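A minimal sketch of the kind of contrastive alignment objective the abstract describes, assuming an InfoNCE-style loss with a temperature parameter and index-matched positive pairs; the function and parameter names below are illustrative assumptions, not the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def info_nce(anchors, targets, temperature=0.07):
        """Pull index-matched (positive) pairs together, push all other (negative) pairs apart."""
        anchors = F.normalize(anchors, dim=-1)
        targets = F.normalize(targets, dim=-1)
        logits = anchors @ targets.t() / temperature                    # (N, N) cosine similarities
        labels = torch.arange(anchors.size(0), device=anchors.device)   # positives lie on the diagonal
        return F.cross_entropy(logits, labels)

    def cross_modal_alignment_loss(vision_feats, language_feats, temperature=0.07):
        # Symmetric inter-modality term (vision -> language and language -> vision);
        # intra-modality terms over augmented views of one modality could be added analogously.
        return 0.5 * (info_nce(vision_feats, language_feats, temperature)
                      + info_nce(language_feats, vision_feats, temperature))

Here vision_feats and language_feats would be (N, d) embeddings produced by the visual and textual encoders for the same batch of documents.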