%0 Journal Article
%T Multimodal grid features and cell pointers for scene text visual question answering
%A Lluis Gomez
%A Ali Furkan Biten
%A Ruben Tito
%A Andres Mafla
%A Marçal Rusiñol
%A Ernest Valveny
%A Dimosthenis Karatzas
%J Pattern Recognition Letters
%D 2021
%V 150
%F Lluis Gomez2021
%O DAG; 600.084; 600.121
%X This paper presents a new model for the task of scene text visual question answering, in which questions about a given image can only be answered by reading and understanding the scene text. Current state-of-the-art models for this task use a dual attention mechanism in which one attention module attends to visual features while the other attends to textual features. A possible issue with this approach is that it makes it difficult for the model to reason jointly about both modalities. To address this problem, we propose a new model based on a single attention mechanism that attends to multi-modal features conditioned on the question. The output weights of this attention module over a grid of multi-modal spatial features are interpreted as the probability that a certain spatial location of the image contains the answer text to the given question. Our experiments demonstrate competitive performance on two standard datasets with a model that is faster than previous methods at inference time. Furthermore, we provide a novel analysis of the ST-VQA dataset based on a human performance study. Supplementary material, code, and data are made available through this link.
%U https://www.sciencedirect.com/science/article/pii/S0167865521002336?via%3Dihub
%U http://refbase.cvc.uab.es/files/GBT2021.pdf
%P 242-249