|
Francesc Tous, Agnes Borras, Robert Benavente, Ramon Baldrich, Maria Vanrell and Josep Llados. 2002. Textual Descriptors for browsing people by visual appearance. 5è Congrés Català d'Intel·ligència Artificial (CCIA).
Abstract: This paper presents a first approach to building colour and structural descriptors for information retrieval on a people database. Queries are formulated in terms of appearance, which makes it possible to search for people wearing specific clothes of a given colour name or texture. Descriptors are computed automatically in three essential steps: a colour naming labelling from pixel properties; a region segmentation step based on colour properties of pixels combined with edge information; and a high-level step that models region arrangements in order to build the clothes structure. Results are tested on a large set of images from real scenes taken at the entrance desk of a building.
Keywords: Image retrieval, textual descriptors, colour naming, colour normalization, graph matching.
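
To make the colour naming step concrete, here is a minimal Python sketch (not the authors' implementation) that labels each pixel with the nearest prototype from a small table of named colours; the prototype RGB values and the Euclidean-distance rule are illustrative assumptions only.

import numpy as np

# Illustrative RGB prototypes for a few basic colour names (placeholder
# values, not the colour naming model used in the paper).
COLOUR_PROTOTYPES = {
    "red":    (200, 30, 30),
    "green":  (40, 160, 60),
    "blue":   (40, 60, 190),
    "yellow": (220, 210, 50),
    "black":  (20, 20, 20),
    "white":  (235, 235, 235),
}

def name_pixels(image):
    """Assign each pixel of an (H, W, 3) RGB image the nearest colour name."""
    names = list(COLOUR_PROTOTYPES)
    protos = np.array([COLOUR_PROTOTYPES[n] for n in names], dtype=float)
    flat = image.reshape(-1, 3).astype(float)
    # Euclidean distance from every pixel to every prototype.
    dists = np.linalg.norm(flat[:, None, :] - protos[None, :, :], axis=2)
    idx = dists.argmin(axis=1)
    return np.array(names, dtype=object)[idx].reshape(image.shape[:2])

In the paper this per-pixel labelling is only the first step; it is then combined with edge-based region segmentation and a structural model of region arrangements.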
|
|
|
Ilke Demir, Dena Bazazian, Adriana Romero, Viktoriia Sharmanska and Lyne P. Tchapmi. 2018. WiCV 2018: The Fourth Women In Computer Vision Workshop. 4th Women in Computer Vision Workshop, 1941–1942.
Abstract: We present WiCV 2018 – the Women in Computer Vision Workshop, organized in conjunction with CVPR 2018 to increase the visibility and inclusion of women researchers in the computer vision field. Computer vision and machine learning have made incredible progress over the past years, yet the number of female researchers is still low both in academia and in industry. WiCV is organized to raise the visibility of female researchers, to increase collaboration, and to provide mentorship and opportunities to female-identifying junior researchers in the field. In its fourth year, we are proud to present the changes and improvements over the past years, a summary of statistics for presenters and attendees, and expectations for future generations.
Keywords: Conferences; Computer vision; Industries; Object recognition; Engineering profession; Collaboration; Machine learning
|
|
|
Mohamed Ali Souibgui, Yousri Kessentini and Alicia Fornes. 2020. A conditional GAN based approach for distorted camera captured documents recovery. 4th Mediterranean Conference on Pattern Recognition and Artificial Intelligence.
|
|
|
Pau Torras, Arnau Baro, Alicia Fornes and Lei Kang. 2022. Improving Handwritten Music Recognition through Language Model Integration. 4th International Workshop on Reading Music Systems (WoRMS 2022), 42–46.
Abstract: Handwritten Music Recognition, especially in the historical domain, is an inherently challenging endeavour; paper degradation artefacts and the ambiguous nature of handwriting make recognising such scores an error-prone process, even for the current state-of-the-art Sequence to Sequence models. In this work we propose a way of reducing the production of statistically implausible output sequences by fusing a Language Model into a recognition Sequence to Sequence model. The idea is to leverage visually-conditioned and context-conditioned output distributions in order to automatically find and correct mistakes that would otherwise break context significantly. We found this approach to improve recognition results to 25.15% SER from a previous best of 31.79% SER in the literature.
Keywords: optical music recognition; historical sources; diversity; music theory; digital humanities
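
The fusion idea can be illustrated with one common realization, shallow fusion, where the optical model's and the language model's next-token log-probabilities are combined at decoding time. The weighting scheme and the `lam` value below are assumptions for illustration, not necessarily the paper's exact fusion method.

import numpy as np

def shallow_fusion_step(visual_logprobs, lm_logprobs, lam=0.3):
    """Combine per-token log-probabilities from the optical model and a
    language model; higher `lam` trusts the LM more. `lam` is a hypothetical
    tuning weight, not a value from the paper."""
    return visual_logprobs + lam * lm_logprobs

def greedy_decode(step_visual, step_lm, eos_id, max_len=128):
    """Greedy decoding loop: `step_visual` and `step_lm` are user-supplied
    callables mapping a token prefix to next-token log-probability vectors."""
    prefix = []
    for _ in range(max_len):
        scores = shallow_fusion_step(step_visual(prefix), step_lm(prefix))
        token = int(np.argmax(scores))
        if token == eos_id:
            break
        prefix.append(token)
    return prefix

In practice the same fused score would typically drive a beam search rather than greedy decoding, so that the LM can rescue hypotheses the optical model ranks slightly lower.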
|
|
|
Jialuo Chen, Mohamed Ali Souibgui, Alicia Fornes and Beata Megyesi. 2021. Unsupervised Alphabet Matching in Historical Encrypted Manuscript Images. 4th International Conference on Historical Cryptology, 34–37.
Abstract: Historical ciphers contain a wide range of symbols from various symbol sets. Identifying the cipher alphabet is a prerequisite before decryption can take place and is a time-consuming process. In this work we explore the use of image processing for identifying the underlying alphabet in cipher images, and for comparing alphabets between ciphers. The experiments show that ciphers with similar alphabets can be successfully discovered through clustering.
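
A hedged sketch of the clustering idea: symbol images (given as feature vectors) are grouped into clusters, one per alphabet symbol, and two ciphers' alphabets are compared by optimally matching their cluster centroids. The choice of k-means and Hungarian matching here is an illustrative assumption, not necessarily the pipeline used in the paper.

import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def cluster_symbols(symbol_features, n_symbols):
    """Group symbol images (e.g. flattened, normalised crops) into
    `n_symbols` clusters, one per hypothesised alphabet symbol."""
    km = KMeans(n_clusters=n_symbols, n_init=10).fit(symbol_features)
    return km.cluster_centers_

def alphabet_similarity(centroids_a, centroids_b):
    """Match the two discovered alphabets one-to-one (Hungarian algorithm on
    centroid distances) and return the mean distance of the matching; lower
    values indicate more similar alphabets."""
    cost = np.linalg.norm(centroids_a[:, None] - centroids_b[None, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()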
|
|
|
Miquel Ferrer, Ernest Valveny and Francesc Serratosa. 2009. Median Graph Computation by means of a Genetic Approach Based on Minimum Common Supergraph and Maximum Common Subgraph. 4th Iberian Conference on Pattern Recognition and Image Analysis. Springer Berlin Heidelberg, 346–353. (LNCS).
Abstract: Given a set of graphs, the median graph has been theoretically presented as a useful concept to infer a representative of the set. However, the computation of the median graph is a highly complex task and its practical application has been very limited up to now. In this work we present a new genetic algorithm for median graph computation. A set of experiments on real data, where none of the existing algorithms for median graph computation could previously be applied due to their computational complexity, shows that we obtain good approximations of the median graph. Finally, we use the median graph in a real nearest-neighbour classification task, showing that it moves beyond a purely theoretical concept and demonstrating, from a practical point of view, that it can be a useful tool to represent a set of graphs.
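
As a rough illustration of the genetic approach, the sketch below evolves candidate graphs to minimise the sum of distances to the input set. The caller supplies the graph distance (e.g. an approximate graph edit distance); the edge-set encoding, operators and hyper-parameters are illustrative assumptions, not the paper's operators, which are built on the minimum common supergraph and maximum common subgraph.

import random

def genetic_median_graph(graphs, distance, n_nodes, generations=200,
                         pop_size=30, mutation_rate=0.05):
    """Approximate the median graph of `graphs`: the graph minimising the
    sum of `distance` to all graphs in the set. Graphs and candidates are
    undirected graphs over `n_nodes`, encoded as sets of edge tuples."""
    all_edges = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]

    def random_candidate():
        return {e for e in all_edges if random.random() < 0.5}

    def fitness(cand):  # sum of distances to the whole set (lower is better)
        return sum(distance(cand, g) for g in graphs)

    def crossover(a, b):  # inherit each possible edge from either parent
        return {e for e in all_edges if e in (a if random.random() < 0.5 else b)}

    def mutate(cand):  # flip a small random subset of edges
        return cand ^ {e for e in all_edges if random.random() < mutation_rate}

    pop = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)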
|
|
|
Albert Gordo and Ernest Valveny. 2009. The diagonal split: A pre-segmentation step for page layout analysis & classification. 4th Iberian Conference on Pattern Recognition and Image Analysis. Springer Berlin Heidelberg, 290–297. (LNCS).
Abstract: Document classification is an important task in all processes related to document storage and retrieval. In the case of complex documents, structural features are needed to achieve a correct classification. Unfortunately, physical layout analysis is error prone. In this paper we present a pre-segmentation step based on a divide & conquer strategy that can be used to improve page segmentation results, independently of the segmentation algorithm used. This pre-segmentation step is evaluated in classification and retrieval using the selective CRLA algorithm for layout segmentation together with a clustering based on the Voronoi area diagram, and tested on two different databases: MARG and the Girona Archives.
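
For flavour, here is a divide & conquer pre-segmentation sketch in the same spirit; note that it performs axis-aligned, XY-cut-style splits at blank rows or columns as a stand-in, not the paper's actual diagonal split, and the thresholds are illustrative.

import numpy as np

def _offset(boxes, dt, dl):
    return [(t + dt, l + dl, b + dt, r + dl) for t, l, b, r in boxes]

def recursive_split(page, min_size=50, blank_thresh=0.005):
    """Recursively split a binary page image (ink pixels set to 1) at its
    blankest interior row or column; returns (top, left, bottom, right)
    blocks. An XY-cut-style illustration of divide & conquer, not the
    paper's diagonal split."""
    h, w = page.shape
    if h <= 2 * min_size or w <= 2 * min_size:
        return [(0, 0, h, w)]
    row_ink = page.mean(axis=1)   # fraction of ink per row
    col_ink = page.mean(axis=0)   # fraction of ink per column
    r = int(np.argmin(row_ink[min_size:h - min_size])) + min_size
    c = int(np.argmin(col_ink[min_size:w - min_size])) + min_size
    if min(row_ink[r], col_ink[c]) > blank_thresh:
        return [(0, 0, h, w)]     # no sufficiently blank separator: stop
    if row_ink[r] <= col_ink[c]:  # horizontal cut is blanker
        return (recursive_split(page[:r], min_size, blank_thresh) +
                _offset(recursive_split(page[r:], min_size, blank_thresh), r, 0))
    return (recursive_split(page[:, :c], min_size, blank_thresh) +
            _offset(recursive_split(page[:, c:], min_size, blank_thresh), 0, c))

The point of such a pre-segmentation step is that the resulting blocks can be fed to any downstream layout segmentation algorithm, which is why it can improve results independently of the segmenter used.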
|
|
|
Lei Kang, Juan Ignacio Toledo, Pau Riba, Mauricio Villegas, Alicia Fornes and Marçal Rusiñol. 2018. Convolve, Attend and Spell: An Attention-based Sequence-to-Sequence Model for Handwritten Word Recognition. 40th German Conference on Pattern Recognition, 459–472.
Abstract: This paper proposes Convolve, Attend and Spell, an attention-based sequence-to-sequence model for handwritten word recognition. The proposed architecture has three main parts: an encoder, consisting of a CNN and a bidirectional GRU; an attention mechanism devoted to focusing on the pertinent features; and a decoder formed by a one-directional GRU, able to spell the corresponding word character by character. Compared with the recent state-of-the-art, our model achieves competitive results on the IAM dataset without needing any pre-processing step, predefined lexicon or language model. Code and additional results are available at https://github.com/omni-us/research-seq2seq-HTR.
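
A compact PyTorch sketch of this kind of architecture; layer sizes, the height-pooling choice and greedy decoding are illustrative assumptions, not the published configuration.

import torch
import torch.nn as nn

class ConvolveAttendSpell(nn.Module):
    """Minimal attention-based seq2seq recogniser in the spirit of the
    paper: CNN + bidirectional GRU encoder, additive-style attention, and a
    one-directional GRU decoder spelling one character per step."""
    def __init__(self, n_chars, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(            # encoder CNN over the word image
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.enc_rnn = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden + hidden, 1)   # attention scorer
        self.embed = nn.Embedding(n_chars, hidden)
        self.dec_rnn = nn.GRUCell(2 * hidden + hidden, hidden)
        self.out = nn.Linear(hidden, n_chars)

    def forward(self, images, max_len=20, sos_id=0):
        feats = self.cnn(images)                     # (B, 64, H', W')
        feats = feats.mean(dim=2).transpose(1, 2)    # pool height -> (B, W', 64)
        enc, _ = self.enc_rnn(feats)                 # (B, W', 2*hidden)
        B = images.size(0)
        h = enc.new_zeros(B, self.dec_rnn.hidden_size)
        tok = torch.full((B,), sos_id, dtype=torch.long, device=images.device)
        logits = []
        for _ in range(max_len):
            # score every encoder column against the current decoder state
            scores = self.attn(torch.cat(
                [enc, h.unsqueeze(1).expand(-1, enc.size(1), -1)], dim=2))
            alpha = scores.softmax(dim=1)            # attention weights
            context = (alpha * enc).sum(dim=1)       # (B, 2*hidden)
            h = self.dec_rnn(torch.cat([self.embed(tok), context], dim=1), h)
            step = self.out(h)
            logits.append(step)
            tok = step.argmax(dim=1)                 # greedy: feed best char back
        return torch.stack(logits, dim=1)            # (B, max_len, n_chars)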
|
|
|
Josep Llados. 2021. The 5G of Document Intelligence. 3rd Workshop on Future of Document Analysis and Recognition.
|
|
|
Ali Furkan Biten and 8 others. 2019. ICDAR 2019 Competition on Scene Text Visual Question Answering. 3rd Workshop on Closing the Loop Between Vision and Language, in conjunction with ICCV 2019.
Abstract: This paper presents the final results of the ICDAR 2019 Scene Text Visual Question Answering competition (ST-VQA). ST-VQA introduces an important aspect that is not addressed by any Visual Question Answering system to date, namely the incorporation of scene text to answer questions asked about an image. The competition introduces a new dataset comprising 23,038 images annotated with 31,791 question/answer pairs, where the answer is always grounded on text instances present in the image. The images are taken from 7 different public computer vision datasets, covering a wide range of scenarios. The competition was structured in three tasks of increasing difficulty that require reading the text in a scene and understanding it in the context of the scene in order to correctly answer a given question. A novel evaluation metric is presented, which assesses both key capabilities expected from an optimal model: text recognition and image understanding. A detailed analysis of results from the different participants is showcased, providing insight into the current capabilities of VQA systems that can read. We firmly believe the dataset proposed in this challenge will be an important milestone on the path towards more robust and general models that can exploit scene text to achieve holistic image understanding.
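
The evaluation metric introduced by ST-VQA is the Average Normalized Levenshtein Similarity (ANLS); a minimal sketch, assuming the standard formulation with threshold 0.5:

def levenshtein(a, b):
    """Classic edit distance by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def anls(predictions, gt_answers, tau=0.5):
    """Average Normalized Levenshtein Similarity: each prediction is scored
    against its closest ground-truth answer, and similarities below the
    threshold `tau` count as zero, so near-misses from imperfect text
    recognition earn partial credit while unrelated answers earn none."""
    total = 0.0
    for pred, answers in zip(predictions, gt_answers):
        best = 0.0
        for ans in answers:
            d = levenshtein(pred.lower(), ans.lower())
            best = max(best, 1 - d / max(len(pred), len(ans), 1))
        total += best if best >= tau else 0.0
    return total / len(predictions)

The soft matching is what lets the metric assess text recognition and image understanding jointly: a model that reads the right text instance but misrecognises a character is still rewarded, while one that answers with unrelated text is not.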
|
|