Victor Vaquero, German Ros, Francesc Moreno-Noguer, Antonio Lopez, & Alberto Sanfeliu. (2017). Joint coarse-and-fine reasoning for deep optical flow. In 24th International Conference on Image Processing (pp. 2558–2562).
Abstract: We propose a novel representation for dense pixel-wise estimation tasks using CNNs that boosts accuracy and reduces training time, by explicitly exploiting joint coarse-and-fine reasoning. The coarse reasoning is performed over a discrete classification space to obtain a general rough solution, while the fine details of the solution are obtained over a continuous regression space. In our approach both components are jointly estimated, which proved to be beneficial for improving estimation accuracy. Additionally, we propose a new network architecture, which combines coarse and fine components by treating the fine estimation as a refinement built on top of the coarse solution, and therefore adding details to the general prediction. We apply our approach to the challenging problem of optical flow estimation and empirically validate it against state-of-the-art CNN-based solutions trained from scratch and tested on large optical flow datasets.
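The coarse-and-fine combination the abstract describes can be sketched in a few lines: the coarse head picks a discrete flow bin, and the fine regression head adds a continuous residual on top of it. The bin centers and probabilities below are hypothetical toy values, not the paper's actual quantization.

```python
# Hypothetical bin centers discretizing horizontal flow (the coarse classification space)
BIN_CENTERS = [-8.0, -4.0, 0.0, 4.0, 8.0]

def combine_coarse_and_fine(coarse_probs, fine_residual):
    """Pick the most likely coarse bin, then refine its center with the
    continuous regression output (a small residual on top of the bin)."""
    best_bin = max(range(len(coarse_probs)), key=lambda i: coarse_probs[i])
    return BIN_CENTERS[best_bin] + fine_residual

# A pixel whose coarse head favors the 4.0 bin, refined by +0.37:
flow_u = combine_coarse_and_fine([0.05, 0.10, 0.15, 0.60, 0.10], 0.37)
```

In the paper both heads are estimated jointly by one network; the sketch only shows how a rough classified solution and a fine regressed residual compose into the final prediction.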
|
David Guillamet, & B. Moghaddam. (2002). Joint Distribution of Local Image Features for Appearance Modeling.
|
Manuel Carbonell, Mauricio Villegas, Alicia Fornes, & Josep Llados. (2018). Joint Recognition of Handwritten Text and Named Entities with a Neural End-to-end Model. In 13th IAPR International Workshop on Document Analysis Systems (pp. 399–404).
Abstract: When extracting information from handwritten documents, text transcription and named entity recognition are usually treated as separate, subsequent tasks. This has the disadvantage that errors in the first module heavily affect the performance of the second. In this work we propose to perform both tasks jointly, using a single neural network with a common architecture used for plain text recognition. Experimentally, the approach has been tested on a collection of historical marriage records. Results of experiments are presented to show the effect on performance of different configurations: different ways of encoding the information, applying or not transfer learning, and processing at the text-line or multi-line region level. The results are comparable to the state of the art reported in the ICDAR 2017 Information Extraction competition, even though the proposed technique does not use any dictionaries, language modeling or post-processing.
Keywords: Named entity recognition; Handwritten text recognition; Neural networks
|
Ferran Diego, Joan Serrat, & Antonio Lopez. (2013). Joint spatio-temporal alignment of sequences. TMM - IEEE Transactions on Multimedia, 15(6), 1377–1387.
Abstract: Video alignment is important in different areas of computer vision such as wide-baseline matching, action recognition, change detection, video copy detection and frame-dropping prevention. Current video alignment methods usually deal with the relatively simple case of fixed or rigidly attached cameras, or of simultaneous acquisition. In this paper we therefore propose a joint method for bringing two video sequences into spatio-temporal alignment. Specifically, the novelty of the paper is to formulate video alignment so that the spatial and temporal alignments fold into a single framework, which simultaneously satisfies frame-correspondence and frame-alignment similarity while exploiting the knowledge shared among neighboring frames through a standard pairwise Markov random field (MRF). This new formulation is able to handle the alignment of sequences recorded at different times by independently moving cameras that follow similar trajectories, and it also generalizes the particular cases of a fixed geometric transformation and/or a linear temporal mapping. We conduct experiments on different scenarios, such as sequences recorded simultaneously or by moving cameras, to validate the robustness of the proposed approach. The proposed method provides the highest video alignment accuracy compared to state-of-the-art methods on sequences recorded from vehicles driving along the same track at different times.
Keywords: video alignment
|
Mohammad Ali Bagheri, Qigang Gao, Sergio Escalera, Albert Clapes, Kamal Nasrollahi, Michael Holte, et al. (2015). Keep it Accurate and Diverse: Enhancing Action Recognition Performance by Ensemble Learning. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 22–29).
Abstract: The performance of different action recognition techniques has recently been studied by several computer vision researchers. However, the potential improvement in classification through classifier fusion by ensemble-based methods has remained unexplored. In this work, we evaluate the performance of an ensemble of action learning techniques, each performing the recognition task from a different perspective. The underlying idea is that instead of aiming at a single very sophisticated and powerful representation/learning technique, we can learn action categories using a set of relatively simple and diverse classifiers, each trained with a different feature set. In addition, combining the outputs of several learners can reduce the risk of an unfortunate selection of a learner for an unseen action recognition scenario. This leads to a more robust and generally applicable framework. In order to improve recognition performance, a powerful combination strategy is utilized based on Dempster–Shafer theory, which can effectively exploit the diversity of base learners trained on different sources of information. The recognition results of the individual classifiers are compared with those obtained by fusing the classifiers' outputs, showing the enhanced performance of the proposed methodology.
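The Dempster–Shafer combination the abstract relies on reduces, for mass functions concentrated on mutually exclusive singleton classes, to multiplying the agreeing masses and renormalizing away the conflict. The sketch below shows only this singleton special case (the general rule also handles composite hypothesis sets); the class names and masses are invented for illustration.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions defined on
    the same mutually exclusive singleton hypotheses.  Agreeing masses are
    multiplied; the conflicting mass K is discarded by renormalization."""
    agree = {h: m1[h] * m2[h] for h in m1}
    total = sum(agree.values())            # equals 1 - K
    if total == 0:
        raise ValueError("totally conflicting evidence")
    return {h: v / total for h, v in agree.items()}

# Two hypothetical base classifiers scoring the same action classes:
clf1 = {"run": 0.6, "walk": 0.3, "jump": 0.1}
clf2 = {"run": 0.5, "walk": 0.4, "jump": 0.1}
fused = dempster_combine(clf1, clf2)
```

Because both learners agree that "run" is most likely, the fused mass on "run" is sharper than either individual estimate, which is the behavior the ensemble exploits.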
|
Hongxing Gao, Marçal Rusiñol, Dimosthenis Karatzas, Josep Llados, Tomokazu Sato, Masakazu Iwamura, et al. (2013). Key-region detection for document images - applications to administrative document retrieval. In 12th International Conference on Document Analysis and Recognition (pp. 230–234).
Abstract: In this paper we argue that a key-region detector designed to take into account the special characteristics of document images can result in the detection of fewer and more meaningful key-regions. We propose a fast key-region detector able to capture aspects of the structural information of the document, and demonstrate its efficiency by comparing it against standard detectors in an administrative document retrieval scenario. We show that the proposed detector results in a smaller number of detected key-regions and higher performance, without any drop in speed, compared to standard state-of-the-art detectors.
|
Axel Barroso-Laguna, Edgar Riba, Daniel Ponsa, & Krystian Mikolajczyk. (2019). Key.Net: Keypoint Detection by Handcrafted and Learned CNN Filters. In 18th IEEE International Conference on Computer Vision (pp. 5835–5843).
Abstract: We introduce a novel approach to the keypoint detection task that combines handcrafted and learned CNN filters within a shallow multi-scale architecture. Handcrafted filters provide anchor structures for learned filters, which localize, score and rank repeatable features. A scale-space representation is used within the network to extract keypoints at different levels. We design a loss function to detect robust features that exist across a range of scales and to maximize the repeatability score. Our Key.Net model is trained on data synthetically created from ImageNet and evaluated on the HPatches benchmark. Results show that our approach outperforms state-of-the-art detectors in terms of repeatability, matching performance and complexity.
|
Volkmar Frinken, Andreas Fischer, Markus Baumgartner, & Horst Bunke. (2014). Keyword spotting for self-training of BLSTM NN based handwriting recognition systems. PR - Pattern Recognition, 47(3), 1073–1082.
Abstract: The automatic transcription of unconstrained continuous handwritten text requires well-trained recognition systems. The semi-supervised paradigm introduces the concept of using not only labeled data but also unlabeled data in the learning process. Unlabeled data can be gathered at little or no cost; hence it has the potential to reduce the need for labeling training data, a tedious and costly process. Given a weak initial recognizer trained on labeled data, self-training can be used to recognize unlabeled data and add words that were recognized with high confidence to the training set for re-training. This process is not trivial and requires great care as far as selecting the elements to be added to the training set is concerned. In this paper, we propose to use a bidirectional long short-term memory neural network handwriting recognition system for keyword spotting in order to select new elements. A set of experiments shows the high potential of self-training for bootstrapping handwriting recognition systems, for both modern and historical handwriting, and demonstrates the benefits of using keyword spotting over previously published self-training schemes.
Keywords: Document retrieval; Keyword spotting; Handwriting recognition; Neural networks; Semi-supervised learning
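The self-training paradigm the abstract builds on can be illustrated with a toy loop: fit a weak model on labeled data, pseudo-label the unlabeled pool, keep only high-confidence predictions, and retrain. The nearest-mean classifier and margin-based confidence below are stand-ins for illustration, not the paper's BLSTM/keyword-spotting system.

```python
def fit(pairs):
    """Toy nearest-mean classifier over 1-D features: one mean per class."""
    sums, counts = {}, {}
    for x, y in pairs:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(means, x):
    """Return (label, margin-based confidence in [0, 1])."""
    dists = sorted((abs(x - m), y) for y, m in means.items())
    label = dists[0][1]
    if len(dists) == 1:
        return label, 1.0
    d0, d1 = dists[0][0], dists[1][0]
    return label, (d1 - d0) / (d1 + d0 + 1e-9)

def self_train(labeled, unlabeled, threshold=0.8, rounds=2):
    """Repeatedly pseudo-label the pool, keeping only confident samples."""
    train, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        means = fit(train)
        keep = []
        for x in pool:
            y, conf = predict(means, x)
            if conf >= threshold:
                train.append((x, y))   # confident: add to training set
            else:
                keep.append(x)         # ambiguous: leave in the pool
        pool = keep
    return fit(train)

# Two labeled seeds, three unlabeled points; 5.2 stays ambiguous and is never added:
means = self_train([(0.0, "a"), (10.0, "b")], [0.5, 9.5, 5.2])
```

The key design point, as in the paper, is the selection step: only elements recognized with high confidence enter the training set, so an ambiguous sample (here 5.2) is held back rather than risk polluting the retraining.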
|
B. Gautam, Oriol Ramos Terrades, Joana Maria Pujadas-Mora, & Miquel Valls-Figols. (2020). Knowledge graph based methods for record linkage. PRL - Pattern Recognition Letters, 136, 127–133.
Abstract: Nowadays, the use of individual-level data is common in Historical Demography as a consequence of a predominantly life-course approach to understanding demographic behaviour, family transitions, mobility, etc. Advanced record linkage is key since it allows increasing the complexity and volume of the data to be analyzed. However, current methods are constrained to linking data from the same kind of sources. Knowledge graphs are flexible semantic representations that allow encoding data variability and semantic relations in a structured manner.
In this paper we propose the use of knowledge graph methods to tackle record linkage tasks. The proposed method, named WERL, takes advantage of the main knowledge graph properties and learns embedding vectors to encode census information. These embeddings are properly weighted to maximize record linkage performance. We have evaluated this method on benchmark data sets and compared it to related methods, with stimulating and satisfactory results.
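Knowledge-graph embedding methods of the kind WERL builds on score a triple (head, relation, tail) by how well the embeddings satisfy a geometric constraint. The sketch below uses the classic TransE-style score (h + r ≈ t), which is a generic building block and not WERL itself; the 2-D embeddings and record names are invented for illustration.

```python
def transe_score(h, r, t):
    """TransE-style plausibility: lower means the triple (h, r, t)
    better satisfies h + r ≈ t in embedding space."""
    return sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Toy 2-D embeddings (hypothetical):
head      = [1.0, 0.0]    # a census record of one individual
same_as   = [0.0, 1.0]    # a "same-person" relation vector
tail_good = [1.0, 1.0]    # the matching record in a later census
tail_bad  = [3.0, -1.0]   # an unrelated record

good = transe_score(head, same_as, tail_good)   # 0.0: perfect fit
bad = transe_score(head, same_as, tail_bad)
```

For record linkage, candidate pairs can then be ranked by this score, with the lowest-scoring tail taken as the linked record.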
|
Christophe Rigaud, Clement Guerin, Dimosthenis Karatzas, Jean-Christophe Burie, & Jean-Marc Ogier. (2015). Knowledge-driven understanding of images in comic books. IJDAR - International Journal on Document Analysis and Recognition, 18(3), 199–221.
Abstract: Document analysis is an active field of research which aims to attain a complete understanding of the semantics of a given document. One example of the document understanding process is enabling a computer to identify the key elements of a comic book story and arrange them according to a predefined domain knowledge. In this study, we propose a knowledge-driven system that combines bottom-up and top-down information to progressively understand the content of a document. We model the knowledge of the comic book and image-processing domains for information-consistency analysis. In addition, different image processing methods are improved or developed to extract panels, balloons, tails, texts, comic characters and their semantic relations in an unsupervised way.
Keywords: Document Understanding; comics analysis; expert system
|
Edgar Riba, D. Mishkin, Daniel Ponsa, E. Rublee, & G. Bradski. (2020). Kornia: an Open Source Differentiable Computer Vision Library for PyTorch. In IEEE Winter Conference on Applications of Computer Vision.
|
Jian Yang, Alejandro F. Frangi, Jing-Yu Yang, David Zhang, & Zhong Jin. (2005). KPCA Plus LDA: A Complete Kernel Fisher Discriminant Framework for Feature Extraction and Recognition. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(2), 230–244.
|
Robert Benavente, C. Alejandro Parraga, & Maria Vanrell. (2010). La influencia del contexto en la definición de las fronteras entre las categorías cromáticas. In 9th Congreso Nacional del Color (pp. 92–95).
Abstract: In this article we present the results of a color categorization experiment in which the samples were presented on a multicolored (Mondrian) background to simulate the effects of context. The results are compared with those of a previous experiment which, using a different paradigm, determined the boundaries without taking context into account. The analysis of the results shows that the boundaries obtained in the in-context experiment present less confusion than those obtained in the experiment without context.
Keywords: Color categorization; Color appearance; Influence of context; Mondrian patterns; Parametric models
|
Alicia Fornes, Josep Llados, Oriol Ramos Terrades, & Marçal Rusiñol. (2016). La Visió per Computador com a Eina per a la Interpretació Automàtica de Fonts Documentals. Lligall, Revista Catalana d'Arxivística, 20–46.
|
Oriol Vicente, Alicia Fornes, & Ramon Valdes. (2017). La Xarxa d'Humanitats Digitals de la UABCie: una estructura inteligente para la investigación y la transferencia en Humanidades. In 3rd Congreso Internacional de Humanidades Digitales Hispánicas. Sociedad Internacional (pp. 281–383).
|