%0 Journal Article
%T Asking questions on handwritten document collections
%A Minesh Mathew
%A Lluis Gomez
%A Dimosthenis Karatzas
%A C.V. Jawahar
%J International Journal on Document Analysis and Recognition
%D 2021
%V 24
%F Minesh Mathew2021
%O DAG; 600.121
%X This work addresses the problem of Question Answering (QA) on handwritten document collections. Unlike typical QA and Visual Question Answering (VQA) formulations where the answer is a short text, we aim to locate a document snippet where the answer lies. The proposed approach works without recognizing the text in the documents. We argue that the recognition-free approach is suitable for handwritten documents and historical collections where robust text recognition is often difficult. At the same time, for human users, document image snippets containing answers act as a valid alternative to textual answers. The proposed approach uses an off-the-shelf deep embedding network which can project both textual words and word images into a common sub-space. This embedding bridges the textual and visual domains and helps us retrieve document snippets that potentially answer a question. We evaluate the proposed approach on two new datasets: (i) HW-SQuAD, a synthetic, handwritten document image counterpart of the SQuAD1.0 dataset, and (ii) BenthamQA, a smaller set of QA pairs defined on documents from the popular Bentham manuscripts collection. We also present a thorough analysis comparing the proposed recognition-free approach with a recognition-based approach that uses text recognized from the images using OCR. The datasets presented in this work are available for download at docvqa.org.
%U https://link.springer.com/article/10.1007/s10032-021-00383-3
%U http://refbase.cvc.uab.es/files/MGK2021.pdf
%P 235-249