Andres Mafla, Sounak Dey, Ali Furkan Biten, Lluis Gomez and Dimosthenis Karatzas. 2020. Fine-grained Image Classification and Retrieval by Combining Visual and Locally Pooled Textual Features. IEEE Winter Conference on Applications of Computer Vision.
Abstract: Text contained in an image carries high-level semantics that can be exploited to achieve richer image understanding. In particular, the mere presence of text provides strong guiding content that should be employed to tackle a diversity of computer vision tasks such as image retrieval, fine-grained classification, and visual question answering. In this paper, we address the problems of fine-grained classification and image retrieval by leveraging textual information along with visual cues to comprehend the intrinsic relation between the two modalities. The novelty of the proposed model lies in the use of a PHOC descriptor to construct a bag of textual words, together with a Fisher Vector encoding that captures the morphology of text. This approach provides a stronger multimodal representation for this task and, as our experiments demonstrate, achieves state-of-the-art results on two different tasks, fine-grained classification and image retrieval.
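For readers unfamiliar with the descriptor, a minimal PHOC-style (Pyramidal Histogram Of Characters) sketch follows; the pyramid levels and alphabet here are illustrative assumptions, not the paper's exact configuration, and the Fisher Vector aggregation step is omitted:

```python
import string

def phoc(word, levels=(1, 2, 3), alphabet=string.ascii_lowercase):
    """PHOC-style descriptor: a binary vector marking which characters
    appear in each split of the word at each pyramid level."""
    word = word.lower()
    vec = []
    for level in levels:
        for region in range(level):
            lo, hi = region / level, (region + 1) / level
            bits = [0] * len(alphabet)
            for i, ch in enumerate(word):
                # occupancy of character i: its midpoint position in [0, 1)
                pos = (i + 0.5) / len(word)
                if lo <= pos < hi and ch in alphabet:
                    bits[alphabet.index(ch)] = 1
            vec.extend(bits)
    return vec
```

At level 1 the whole word is one region (a plain bag of characters); higher levels add coarse positional information, which is what lets the descriptor capture word morphology.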
Andres Mafla, Sounak Dey, Ali Furkan Biten, Lluis Gomez and Dimosthenis Karatzas. 2021. Multi-modal reasoning graph for scene-text based fine-grained image classification and retrieval. IEEE Winter Conference on Applications of Computer Vision. 4022–4032.
Andres Mafla, Rafael S. Rezende, Lluis Gomez, Diana Larlus and Dimosthenis Karatzas. 2021. StacMR: Scene-Text Aware Cross-Modal Retrieval. IEEE Winter Conference on Applications of Computer Vision. 2219–2229.
Minesh Mathew, Dimosthenis Karatzas and C.V. Jawahar. 2021. DocVQA: A Dataset for VQA on Document Images. IEEE Winter Conference on Applications of Computer Vision. 2200–2209.
Abstract: We present a new dataset for Visual Question Answering (VQA) on document images called DocVQA. The dataset consists of 50,000 questions defined on 12,000+ document images. A detailed analysis of the dataset in comparison with similar datasets for VQA and reading comprehension is presented. We report several baseline results by adopting existing VQA and reading comprehension models. Although the existing models perform reasonably well on certain types of questions, there is a large performance gap compared to human performance (94.36% accuracy). The models particularly need to improve on questions where understanding the structure of the document is crucial. The dataset, code, and leaderboard are available at docvqa.org.
Arka Ujjal Dey, Suman Ghosh and Ernest Valveny. 2018. Don't only Feel Read: Using Scene text to understand advertisements. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops.
Abstract: We propose a framework for automated classification of advertisement images, using not just visual features but also textual cues extracted from embedded text. Our approach takes inspiration from the assumption that ad images contain meaningful textual content that can provide a discriminative semantic interpretation, and can thus aid in classification tasks. To this end, we develop a framework using off-the-shelf components and demonstrate the effectiveness of textual cues in semantic classification tasks.
Dena Bazazian, Dimosthenis Karatzas and Andrew Bagdanov. 2018. Word Spotting in Scene Images based on Character Recognition. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 1872–1874.
Abstract: In this paper we address the problem of unconstrained Word Spotting in scene images. We train a Fully Convolutional Network to produce heatmaps of all the character classes. Then, we employ the Text Proposals approach and, via a rectangle classifier, detect the most likely rectangle for each query word based on the character attribute maps. We evaluate the proposed method on ICDAR2015 and show that it is capable of identifying and recognizing query words in natural scene images.
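The rectangle-scoring step can be illustrated with a toy stand-in: given per-character heatmaps, score a candidate box for a query word by the peak activation of each query character inside it. This is a simplified heuristic, not the paper's trained rectangle classifier, and the names and array shapes are illustrative:

```python
import numpy as np

def score_box(heatmaps, box, query, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Score a candidate rectangle for a query word from per-character
    heatmaps of shape (H, W, len(alphabet)): average, over the query's
    characters, of the peak activation of that character's channel
    inside the box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    crop = heatmaps[y0:y1, x0:x1, :]
    scores = [crop[:, :, alphabet.index(c)].max() for c in query.lower()]
    return float(np.mean(scores))
```

In a full pipeline, such a score would be computed for every text-proposal rectangle and the highest-scoring one retained for the query word.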
Josep Llados, Enric Marti and Jordi Regincos. 1993. Interpretación de diseños a mano alzada como técnica de entrada a un sistema CAD en un ámbito de arquitectura. III National Conference on Computer Graphics. Granada.
Abstract: In recent years, CAD systems have become widely used in architecture-related domains. These CAD systems are very useful to the architect for designing building floor plans. However, using a CAD system efficiently requires a learning period, especially in the creation and editing stages of a design. Moreover, once familiar with a CAD system, the architect must adapt to the symbology it supports, which in some cases can be inflexible. With this motivation, we propose an alternative input technique for CAD systems. The technique is based on designing the plan on paper as a freehand line drawing, sketched and then digitized with a scanner. Once this initial drawing has been interpreted and imported into the CAD system, the architect only needs to apply the final touches to the document. The proposed input system consists of two main modules. The first extracts features (characteristic points, straight lines, and arcs) from the scanned image; it mainly applies image-processing techniques and produces a representation of the input drawing based on attributed graphs. The goal of the second module is to find and recognize the entities that make up the document (doors, tables, etc.) against a symbol library defined in the CAD system; its implementation is based on graph-isomorphism techniques. The system offers an alternative that allows the user to enter the most significant information of the plan quickly, simply, and in a standardized way by means of freehand design.
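The symbol-recognition module relies on graph isomorphism over attributed graphs. A toy brute-force sketch follows; the node labels ('line', 'arc') and the graphs themselves are illustrative, and real systems use far more efficient matchers such as VF2:

```python
from itertools import permutations

def find_symbol(symbol_edges, symbol_labels, scene_edges, scene_labels):
    """Brute-force search for an occurrence of a small attributed symbol
    graph inside a scene graph: node labels must agree and every symbol
    edge must map onto a scene edge. Returns a symbol-node -> scene-node
    mapping, or None if the symbol is not found."""
    sym_nodes = sorted(symbol_labels)
    scene_nodes = sorted(scene_labels)
    scene_edge_set = {frozenset(e) for e in scene_edges}
    for cand in permutations(scene_nodes, len(sym_nodes)):
        mapping = dict(zip(sym_nodes, cand))
        # reject assignments whose primitive types do not match
        if any(symbol_labels[n] != scene_labels[mapping[n]] for n in sym_nodes):
            continue
        # accept only if every symbol edge exists between the mapped nodes
        if all(frozenset((mapping[u], mapping[v])) in scene_edge_set
               for u, v in symbol_edges):
            return mapping
    return None
```

For example, a door modeled as two connected lines plus an arc can be located among the primitives of a scanned floor plan by searching for that three-node pattern.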
Oriol Ramos Terrades and Ernest Valveny. 2003. Line Detection Using Ridgelets Transform for Graphic Symbol Representation.
Partha Pratim Roy, Umapada Pal and Josep Llados. 2009. Touching Text Character Localization in Graphical Documents using SIFT. In Proceedings of the 8th IAPR International Workshop on Graphics Recognition.
Abstract: Interpretation of graphical document images is a challenging task, as it requires a proper understanding of the text and graphics symbols present in such documents. Difficulties arise in graphical document recognition when text and symbols overlap or touch. Intersections of text and symbols with graphical lines and curves occur frequently in graphical documents, and hence separating such symbols is very difficult.
Several pattern recognition and classification techniques exist to recognize isolated text and symbols, but recognition of touching/overlapping text and symbols has not yet been dealt with successfully. An interesting technique, the Scale Invariant Feature Transform (SIFT), originally devised for object recognition, can handle such overlapping problems. Even though SIFT features have emerged as very powerful object descriptors, their use in the context of graphical documents has not been investigated much. In this paper we present an adaptation of the SIFT approach to text character localization (spotting) in graphical documents. We evaluate the applicability of this technique in such documents and discuss the scope for improvement by combining some state-of-the-art approaches.
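To illustrate the descriptor-matching step that SIFT-based spotting builds on, here is a minimal NumPy sketch of Lowe's ratio test; the 2-D descriptors in the example are toy stand-ins for real 128-D SIFT vectors:

```python
import numpy as np

def ratio_test_matches(query_desc, target_desc, ratio=0.75):
    """Lowe's ratio test for matching SIFT-style descriptors: a query
    descriptor is matched to its nearest target descriptor only when the
    nearest distance is clearly smaller than the second-nearest, which
    filters out ambiguous matches. Inputs are (n, d) arrays."""
    matches = []
    for i, q in enumerate(query_desc):
        dists = np.linalg.norm(target_desc - q, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

In a spotting setting, descriptors extracted from an isolated character template would play the role of the query, and descriptors from the document image the role of the target; clusters of surviving matches then indicate candidate character locations.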
Alicia Fornes, Josep Llados, Gemma Sanchez and Horst Bunke. 2009. Symbol-independent writer identification in old handwritten music scores. In Proceedings of the 8th IAPR International Workshop on Graphics Recognition. Springer Berlin Heidelberg, 186–197.