Miquel Ferrer, Ernest Valveny and F. Serratosa. 2007. Bounding the Size of the Median Graph. 3rd Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2007), J. Marti et al. (Eds.) LNCS 4478 (Part 2):491–498.
Alicia Fornes, Sergio Escalera, Josep Llados, Gemma Sanchez, Petia Radeva and Oriol Pujol. 2007. Handwritten Symbol Recognition by a Boosted Blurred Shape Model with Error Correction. 3rd Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2007), J. Marti et al. (Eds.) LNCS 4477:13–21.
Agnes Borras and Josep Llados. 2007. Similarity-Based Object Retrieval Using Appearance and Geometric Feature Combination. 3rd Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2007), J. Marti et al. (Eds.) LNCS 4477:113–120.
Abstract: This work presents a general-purpose content-based image retrieval system that deals with cluttered scenes containing a given query object. The system is flexible enough to work from a single image of the object despite rotation, translation and scale variations. The image content is divided into parts that are described with a combination of features based on geometric and color properties. The idea behind the feature combination is to benefit from a fuzzy similarity computation that provides robustness and tolerance to the retrieval process. The features can be computed independently, and the image parts can be easily indexed by using a table structure on every feature value. Finally, a process inspired by alignment strategies is used to check the coherence of the object parts found in a scene. Our work presents a system that is easy to implement, uses an open set of features, and can suit a wide variety of applications.
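The indexing scheme described in this abstract lends itself to a compact illustration: each image part is entered into one lookup table per feature, keyed by a quantized feature value, and a query votes across the tables. The sketch below is a minimal interpretation of that idea under assumed names and an assumed quantization step; it is not the authors' implementation.

    from collections import defaultdict

    def quantize(value, step=0.1):
        # Map a continuous feature value to a discrete table key.
        # The step size is an assumption for illustration.
        return round(value / step)

    class PartIndex:
        def __init__(self, feature_names):
            # One lookup table per feature, keyed by quantized value.
            self.tables = {name: defaultdict(set) for name in feature_names}

        def add_part(self, part_id, features):
            # features: dict mapping feature name -> scalar value.
            for name, value in features.items():
                self.tables[name][quantize(value)].add(part_id)

        def query(self, features):
            # Rank candidate parts by how many feature tables they match.
            votes = defaultdict(int)
            for name, value in features.items():
                for part_id in self.tables[name].get(quantize(value), ()):
                    votes[part_id] += 1
            return sorted(votes, key=votes.get, reverse=True)

    # Usage: index two hypothetical image parts, then query by features.
    index = PartIndex(["elongation", "hue"])
    index.add_part("scene1/part3", {"elongation": 0.82, "hue": 0.31})
    index.add_part("scene2/part7", {"elongation": 0.40, "hue": 0.33})
    print(index.query({"elongation": 0.80, "hue": 0.30}))  # part3 ranked first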
Jose Antonio Rodriguez, Gemma Sanchez and Josep Llados. 2006. Automatic Interpretation of Proofreading Sketches.
Joan Mas, B. Lamiroy, Gemma Sanchez and Josep Llados. 2006. Automatic Learning of Symbol Descriptions Avoiding Topological Ambiguities.
Oriol Vicente, Alicia Fornes and Ramon Valdes. 2017. La Xarxa d'Humanitats Digitals de la UAB: una estructura inteligente para la investigación y la transferencia en Humanidades [The UAB Digital Humanities Network: an intelligent structure for research and transfer in the Humanities]. 3rd Congreso Internacional de Humanidades Digitales Hispánicas, Sociedad Internacional, 281–383.
Minesh Mathew, Ruben Tito, Dimosthenis Karatzas, R.Manmatha and C.V. Jawahar. 2020. Document Visual Question Answering Challenge 2020. 33rd IEEE Conference on Computer Vision and Pattern Recognition – Short paper.
Abstract: This paper presents the results of the Document Visual Question Answering Challenge, organized as part of the "Text and Documents in the Deep Learning Era" workshop at CVPR 2020. The challenge introduces a new problem – Visual Question Answering on document images. The challenge comprised two tasks. The first task concerns asking questions about a single document image, while the second task is set up as a retrieval task where the question is posed over a collection of images. For Task 1, a new dataset is introduced comprising 50,000 question-answer pairs defined over 12,767 document images. For Task 2, another dataset has been created comprising 20 questions over 14,362 document images that share the same document template.
Marçal Rusiñol, David Aldavert, Dimosthenis Karatzas, Ricardo Toledo and Josep Llados. 2011. Interactive Trademark Image Retrieval by Fusing Semantic and Visual Content. In P. Clough et al. (Eds.), Advances in Information Retrieval: 33rd European Conference on Information Retrieval. Berlin, Springer, LNCS, 314–325.
Abstract: In this paper we propose an efficient query-by-example retrieval system that is able to retrieve trademark images by similarity from the digital libraries of patent and trademark offices. Logo images are described by both their semantic content, by means of the Vienna codes, and their visual content, using shape and color as visual cues. The trademark descriptors are then indexed by a locality-sensitive hashing data structure aiming to perform approximate k-NN search in high-dimensional spaces in sub-linear time. The resulting ranked lists are combined using the Condorcet method, and a relevance feedback step helps to iteratively revise the query and refine the obtained results. The experiments demonstrate the effectiveness and efficiency of this system on a realistic and large dataset.
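The rank-fusion step mentioned in this abstract, combining the semantic and visual ranked lists with the Condorcet method, can be sketched compactly. The function below scores items by their number of pairwise majority wins (a Copeland-style tally of the Condorcet criterion); all names are illustrative and not taken from the authors' code.

    from itertools import combinations

    def condorcet_fuse(ranked_lists):
        # Merge several ranked lists of ids by pairwise majority voting,
        # scoring each item by its number of pairwise wins.
        items = set().union(*ranked_lists)
        # Position of each item in each list; unranked items count as last.
        pos = [{item: (lst.index(item) if item in lst else len(lst))
                for item in items} for lst in ranked_lists]
        wins = dict.fromkeys(items, 0)
        for a, b in combinations(items, 2):
            a_votes = sum(1 for p in pos if p[a] < p[b])
            if a_votes * 2 > len(ranked_lists):
                wins[a] += 1      # a beats b in a majority of the lists
            elif a_votes * 2 < len(ranked_lists):
                wins[b] += 1
        return sorted(items, key=lambda item: wins[item], reverse=True)

    # Usage: fuse three hypothetical rankings of logo ids.
    print(condorcet_fuse([["logoA", "logoB", "logoC"],
                          ["logoB", "logoA", "logoC"],
                          ["logoA", "logoC", "logoB"]]))  # logoA ranked first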
Anjan Dutta and Zeynep Akata. 2019. Semantically Tied Paired Cycle Consistency for Zero-Shot Sketch-based Image Retrieval. 32nd IEEE Conference on Computer Vision and Pattern Recognition, 5089–5098.
Abstract: Zero-shot sketch-based image retrieval (SBIR) is an emerging task in computer vision, allowing the retrieval of natural images relevant to sketch queries that might not have been seen during training. Existing works either require aligned sketch-image pairs or an inefficient memory fusion layer for mapping the visual information to a semantic space. In this work, we propose a semantically aligned paired cycle-consistent generative (SEM-PCYC) model for zero-shot SBIR, where each branch maps the visual information to a common semantic space via adversarial training. Each of these branches maintains a cycle consistency that only requires supervision at the category level, avoiding the need for costly aligned sketch-image pairs. A classification criterion on the generators' outputs ensures that the visual-to-semantic mapping is discriminative. Furthermore, we propose to combine textual and hierarchical side information via a feature selection auto-encoder that selects discriminative side information within the same end-to-end model. Our results demonstrate a significant boost in zero-shot SBIR performance over the state of the art on the challenging Sketchy and TU-Berlin datasets.
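The cycle-consistency constraint at the core of SEM-PCYC can be illustrated with a toy example: one generator maps visual features into a semantic space, a second maps them back, and an L1 reconstruction penalty supervises the pair without aligned sketch-image pairs. The PyTorch sketch below shows only this term, with assumed dimensions and linear generators; the adversarial and classification losses of the full model are omitted.

    import torch
    import torch.nn as nn

    visual_dim, semantic_dim = 512, 300          # assumed feature sizes
    to_semantic = nn.Linear(visual_dim, semantic_dim)  # forward generator
    to_visual = nn.Linear(semantic_dim, visual_dim)    # backward generator

    x = torch.randn(8, visual_dim)   # a batch of sketch or image features
    s = to_semantic(x)               # map into the common semantic space
    x_rec = to_visual(s)             # map back to the visual space

    # Cycle-consistency loss: the reconstruction should match the input,
    # which supervises both generators at the category level without
    # requiring aligned sketch-image pairs.
    cycle_loss = nn.functional.l1_loss(x_rec, x)
    cycle_loss.backward()
    print(float(cycle_loss))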
Ali Furkan Biten, Lluis Gomez, Marçal Rusiñol and Dimosthenis Karatzas. 2019. Good News, Everyone! Context-driven entity-aware captioning for news images. 32nd IEEE Conference on Computer Vision and Pattern Recognition, 12458–12467.
Abstract: Current image captioning systems perform at a merely descriptive level, essentially enumerating the objects in the scene and their relations. Humans, on the contrary, interpret images by integrating several sources of prior knowledge of the world. In this work, we aim to take a step closer to producing captions that offer a plausible interpretation of the scene by integrating such contextual information into the captioning pipeline. For this we focus on the captioning of images used to illustrate news articles. We propose a novel captioning method that is able to leverage contextual information provided by the text of the news article associated with an image. Our model is able to selectively draw information from the article guided by visual cues, and to dynamically extend the output dictionary to out-of-vocabulary named entities that appear in the context source. Furthermore, we introduce "GoodNews", the largest news image captioning dataset in the literature, and demonstrate state-of-the-art results.
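The dynamic dictionary extension described in this abstract can be pictured as a two-stage process: the model emits a caption with entity placeholders, which are then resolved against named entities extracted from the associated article. The sketch below illustrates the filling stage only; the placeholder scheme and the entity lists are assumptions for illustration, not the paper's exact pipeline.

    def fill_placeholders(template_caption, article_entities):
        # Replace placeholder tokens (e.g. PERSON_) with entities taken
        # from the article, consuming one entity per occurrence in order.
        pools = {tag: list(ents) for tag, ents in article_entities.items()}
        out = []
        for token in template_caption.split():
            tag = token.rstrip(".,")
            if tag in pools and pools[tag]:
                out.append(token.replace(tag, pools[tag].pop(0)))
            else:
                out.append(token)
        return " ".join(out)

    # Usage with hypothetical placeholders and article entities.
    caption = "PERSON_ speaks at a ORG_ event in GPE_ ."
    entities = {"PERSON_": ["Jane Doe"], "ORG_": ["UN"], "GPE_": ["Geneva"]}
    print(fill_placeholders(caption, entities))
    # -> "Jane Doe speaks at a UN event in Geneva ."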