|
Francisco Cruz. (2016). Probabilistic Graphical Models for Document Analysis (Oriol Ramos Terrades, Ed.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: The latest advances in digitization techniques have fostered interest in creating digital copies of collections of documents. Digitized documents permit easy maintenance, lossless storage, and efficient transmission and information retrieval. This situation has opened a new market niche for systems able to automatically extract and analyze the information contained in these collections, especially in the context of business activity.
Due to the great variety of document types, this is not a trivial task. For instance, the automatic extraction of numerical data from invoices differs substantially from text recognition in historical documents. However, in order to extract the information of interest, it is always necessary to identify the area of the document where it is located. In Document Analysis we refer to this process as layout analysis, which aims at identifying and categorizing the different entities that compose the document, such as text regions, pictures, text lines, or tables, among others. To perform this task it is usually necessary to incorporate prior knowledge about the task into the analysis process, which can be modeled by defining a set of contextual relations between the different entities of the document. The use of context has proven useful to reinforce the recognition process and improve results on many computer vision tasks. It raises two fundamental questions: what kind of contextual information is appropriate for a given task, and how should this information be incorporated into the models?
In this thesis we study several ways to incorporate contextual information into the task of document layout analysis, and into the particular case of handwritten text line segmentation. We focus on Probabilistic Graphical Models and other mechanisms for this purpose, and propose several solutions to these problems. First, we present a method for layout analysis based on Conditional Random Fields. With this model we encode local contextual relations between variables, such as pairwise constraints. In addition, we encode a set of structural relations between different classes of regions at the feature level. Second, we present a method based on 2D Probabilistic Context-Free Grammars to encode structural and hierarchical relations, and perform a comparative study between Probabilistic Graphical Models and this syntactic approach. Third, we propose a method for structured documents based on Bayesian Networks to represent the document structure, and an algorithm based on Expectation-Maximization (EM) to find the best configuration of the page. We thoroughly evaluate the proposed methods on two particular collections of documents: a historical collection composed of ancient structured documents, and a collection of contemporary documents. In addition, we present a general method for the task of handwritten text line segmentation. We define a probabilistic framework where we combine the EM algorithm with variational approaches for computing inference and learning the parameters of a Markov Random Field. We evaluate our method on several collections of documents, including a general dataset of annotated administrative documents. Results demonstrate the applicability of our methods to real problems, and the contribution of contextual information to this kind of problem.
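The EM alternation underlying the text-line framework can be illustrated with a heavily simplified sketch: fitting a two-component 1D Gaussian mixture to the vertical positions of ink pixels, as if two text lines were being separated. The fixed unit variance, equal priors, and all numbers here are illustrative assumptions, not the thesis implementation:

```python
import math

def em_gmm_1d(xs, mu, iters=50):
    """Tiny EM for a 2-component 1D Gaussian mixture with fixed unit
    variance and equal priors -- a toy stand-in for the E/M alternation
    used when grouping pixels into text lines."""
    mu = list(mu)
    for _ in range(iters):
        # E-step: responsibility of component 0 for each sample
        resp = []
        for x in xs:
            p0 = math.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = math.exp(-0.5 * (x - mu[1]) ** 2)
            resp.append(p0 / (p0 + p1))
        # M-step: re-estimate the means from the soft assignments
        w0 = sum(resp)
        w1 = len(xs) - w0
        mu[0] = sum(r * x for r, x in zip(resp, xs)) / w0
        mu[1] = sum((1 - r) * x for r, x in zip(resp, xs)) / w1
    return mu

# Two clusters of y-coordinates, as if sampled from two text lines
ys = [10.1, 9.8, 10.3, 30.2, 29.7, 30.1]
mu = em_gmm_1d(ys, mu=[0.0, 50.0])
```

The two estimated means converge near the centers of the two line clusters; the thesis combines this kind of alternation with variational inference on a full Markov Random Field rather than an independent mixture.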
|
|
|
David Berga, & Xavier Otazu. (2019). Computational modeling of visual attention: What do we know from physiology and psychophysics? In 8th Iberian Conference on Perception.
Abstract: The latest computer vision architectures use a chain of feedforward computations, mainly optimizing artificial neural networks for very specific tasks. Despite their impressive performance (e.g. in saliency) on real image datasets, these models do not follow several biological principles of the human visual system (e.g. feedback and horizontal connections in cortex) and are unable to predict several visual tasks simultaneously. In this study we present biologically plausible computations from the early stages of the human visual system (i.e. retina and lateral geniculate nucleus) and lateral connections in V1. Despite the simplicity of these processes, and without any kind of training or optimization, simulations of the firing-rate dynamics of V1 are able to predict bottom-up visual attention in distinct contexts (and were previously shown to also predict visual discomfort, brightness, and chromatic induction). We also show functional top-down selection mechanisms as feedback inhibition projections (i.e. prefrontal cortex for search/task-based attention and parietal areas for inhibition of return). Distinct saliency model predictions are tested against eye-tracking datasets in free-viewing and visual-search tasks, using real images and synthetically generated patterns. Results on predicting saliency and scanpaths show that artificial models do not outperform biologically inspired ones (specifically on datasets that lack the common endogenous biases found in eye-tracking experimentation), and that they do not correctly predict contrast sensitivities in pop-out stimulus patterns. This work highlights the importance of considering biological principles of the visual system when building models that reproduce these (and other) visual effects.
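The lateral-connection idea can be caricatured in a few lines: one inhibition step in which each unit is suppressed by its neighbours' mean activity, so a locally distinct (pop-out) item keeps the highest response. This toy 1D sketch, with a made-up inhibition coefficient, only conveys the principle, not the paper's V1 firing-rate model:

```python
def lateral_inhibition(signal, k=0.5):
    """One step of a toy firing-rate update with lateral inhibition:
    each unit is suppressed by the mean activity of its immediate
    neighbours, so locally distinct items keep the highest response."""
    n = len(signal)
    out = []
    for i, v in enumerate(signal):
        neigh = [signal[j] for j in (i - 1, i + 1) if 0 <= j < n]
        out.append(max(0.0, v - k * sum(neigh) / len(neigh)))
    return out

# A row of identical distractors (1.0) with one high-contrast target (2.0)
resp = lateral_inhibition([1.0, 1.0, 2.0, 1.0, 1.0])
```

After inhibition, the target at position 2 retains the largest response, a minimal analogue of pop-out saliency.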
|
|
|
Sergio Escalera, Eloi Puertas, Petia Radeva, & Oriol Pujol. (2009). Multimodal laughter recognition in video conversations. In 2nd IEEE Workshop on CVPR for Human Communicative Behavior Analysis (pp. 110–115).
Abstract: Laughter detection is an important area of interest in the Affective Computing and Human-Computer Interaction fields. In this paper, we propose a multi-modal methodology based on the fusion of audio and visual cues to deal with the laughter recognition problem in face-to-face conversations. The audio features are extracted from the spectrogram, and the video features are obtained by estimating the degree of mouth movement and using a smile-and-laughter classifier. Finally, the multi-modal cues are combined in a sequential classifier. Results on videos from the public discussion blog of the New York Times show that both types of features perform better when considered together by the classifier. Moreover, the sequential methodology significantly outperforms the results obtained by an AdaBoost classifier.
|
|
|
Josep Llados, & Enric Marti. (1995). Interpretació de dibuixos lineals mitjançant tècniques d'isomorfisme entre grafs. In Trobada de Joves Investigadors.
Abstract: Document analysis aims at the automatic interpretation of documents printed on paper, in order to obtain a symbolic description of them that allows their storage and subsequent computational processing. Techniques based on attributed relational graphs allow a compact representation of the information contained in line drawings and, through graph isomorphism mechanisms, the recognition of certain structures within them and thus the interpretation of the document. This work gives an overview of graph techniques applied to the visual recognition of objects in document analysis problems. These techniques are illustrated with an example of the recognition of hand-drawn plans. Finally, the use of Hough techniques is proposed as a mechanism to speed up the recognition process by applying some knowledge about the domain at hand.
|
|
|
Jaume Garcia, Debora Gil, Francesc Carreras, Sandra Pujades, & R. Leta. (2007). Modelització 4-Dimensional de la Funció Sistòlica del Ventricle Esquerre. In XIX Congrés de la Societat Catalana de Cardiologia de Barcelona (pp. 133–134). Barcelona (Spain).
Abstract: Technological advances in medical image processing make it possible, with the appropriate software, to reconstruct three-dimensional images of cardiovascular structures and endow them with motion. The resulting 4D images facilitate the study of the physiopathology of heart failure based on disorders of ventricular electromechanical activation, which may be of interest when selecting patients who are candidates for resynchronization therapies. We present preliminary results of the 4D reconstruction of the left ventricle (LV) from myocardial tagging sequences of the LV.
|
|
|
Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, et al. (2021). Avalanche: an End-to-End Library for Continual Learning. In 34th IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 3595–3605).
Abstract: Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning. Recently, we have witnessed a renewed and fast-growing interest in continual learning, especially within the deep learning community. However, algorithmic solutions are often difficult to re-implement, evaluate and port across different settings, where even results on standard benchmarks are hard to reproduce. In this work, we propose Avalanche, an open-source end-to-end library for continual learning research based on PyTorch. Avalanche is designed to provide a shared and collaborative codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms.
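As an illustration of the kind of building block such a library standardizes, here is a generic reservoir-sampling rehearsal buffer in plain Python. This is a hedged sketch of the experience-replay pattern only; the class and method names are invented for illustration and are not Avalanche's actual API:

```python
import random

class ReservoirBuffer:
    """Fixed-size rehearsal memory filled by reservoir sampling -- a
    common component of continual-learning pipelines (generic sketch,
    not the Avalanche API)."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        """Keep each of the `seen` stream samples with equal probability."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = sample

    def sample(self, k):
        """Draw a rehearsal minibatch from the stored memory."""
        return self.rng.sample(self.data, min(k, len(self.data)))

# Stream two "experiences" (tasks); the buffer keeps a bounded mixture
buf = ReservoirBuffer(capacity=4)
for x in range(100):
    buf.add(("task-0" if x < 50 else "task-1", x))
```

A continual strategy would interleave `buf.sample(k)` with each new minibatch during training, which is one of the recipes a library like Avalanche packages behind a uniform interface.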
|
|
|
Santiago Segui, Oriol Pujol, & Jordi Vitria. (2015). Learning to count with deep object features. In Deep Vision: Deep Learning in Computer Vision, CVPR 2015 Workshop (pp. 90–96).
Abstract: Learning to count is a learning strategy recently proposed in the literature for dealing with problems where estimating the number of object instances in a scene is the final objective. In this framework, the task of learning to detect and localize individual object instances is seen as a harder task that can be evaded by casting the problem as that of computing a regression value from hand-crafted image features. In this paper we explore the features that are learned when training a counting convolutional neural network in order to understand their underlying representation. To this end we define a counting problem for MNIST data and show that the internal representation of the network is able to classify digits in spite of the fact that no direct supervision was provided for them during training. We also present preliminary results on a deep network that is able to count the number of pedestrians in a scene.
|
|
|
Christophe Rigaud, & Clement Guerin. (2014). Localisation contextuelle des personnages de bandes dessinées. In Colloque International Francophone sur l'Écrit et le Document.
Abstract: The authors propose a method for locating characters in comic panels by relying on the characteristics of speech balloons. The evaluation shows a character localization rate of up to 65%.
|
|
|
Clement Guerin, Christophe Rigaud, Karell Bertet, Jean-Christophe Burie, Arnaud Revel, & Jean-Marc Ogier. (2014). Réduction de l’espace de recherche pour les personnages de bandes dessinées. In 19th National Congress Reconnaissance de Formes et l'Intelligence Artificielle.
Abstract: Comics represent an important cultural heritage in many countries, and their massive digitization offers the possibility of searching within the content of the images. To date, mainly page structure and textual content have been studied; little work addresses the graphical content. We propose to rely on elements that have already been studied, such as the position of panels and balloons, to reduce the search space and locate characters based on the tails of speech balloons. The evaluation of our contributions on the eBDtheque dataset shows a balloon-tail detection rate of 81.2%, character localization of up to 85%, and a search-space reduction of more than 50%.
Keywords: contextual search; document analysis; comics characters
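The search-space reduction idea can be sketched geometrically: given the tip and direction of a balloon tail, project a region of interest in front of it where the speaking character is expected. The function below is a toy version; the `reach` and `width` parameters are made-up illustrative values, not the paper's calibrated method:

```python
import math

def character_roi(tail_tip, tail_dir, reach=100, width=80):
    """Toy search-space reduction: return a square region (x, y, w, h)
    centred half of `reach` along the balloon-tail direction, where the
    speaking character is expected (all sizes are illustrative)."""
    x, y = tail_tip
    dx, dy = tail_dir
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm        # unit vector along the tail
    cx = x + ux * reach / 2              # region centre in front of the tip
    cy = y + uy * reach / 2
    return (cx - width / 2, cy - width / 2, width, width)

# Tail tip at the origin, pointing straight down the page
roi = character_roi(tail_tip=(0.0, 0.0), tail_dir=(0.0, 1.0))
```

Restricting character detection to such regions is what yields the reported reduction of the search space; the actual paper combines this with panel and balloon positions.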
|
|
|
Jaume Garcia, Debora Gil, Francesc Carreras, Sandra Pujades, R. Leta, Xavier Alomar, et al. (2008). Patrons de Normalitat Regional per la Valoració de la Funció del Ventricle Esquerre. In XX Congrés de la Societat Catalana de Cardiologia (p. 60). Barcelona.
Abstract: Cardiovascular diseases affect the contractile properties of the ventricular band and cause a variation in the function of the Left Ventricle (LV). Only local indicators (strains, the deformation of the tissue) are able to detect anomalies in specific territories of the LV. Regional normality patterns of these parameters would be useful when assessing its function. We present a Normalized Parametric Domain (DPN) that allows comparing data from different patients and defining Regional Normality Patterns (PNR).
|
|
|
Fernando Vilariño, Dimosthenis Karatzas, & Alberto Valcarce. (2018). Libraries as New Innovation Hubs: The Library Living Lab. In 30th ISPIM Innovation Conference.
Abstract: Libraries are in deep transformation both in the EU and around the world, and they are thriving within a great window of opportunity for innovation. In this paper, we show how the Library Living Lab in Barcelona participated in this changing scenario and contributed to creating the Bibliolab program, in which more than 200 public libraries give voice to their users in a global user-centric innovation initiative, using technology as an enabling factor. The Library Living Lab is a real 4-helix implementation where universities, research centers, public administration, companies, and neighbors join together to explore how technology transforms the cultural experience of people. This case is an example of scalability and provides reference tools for policy making, sustainability, user-engagement methodologies, and governance. We provide specific examples of new prototypes and services that help to understand how to redefine the role of the library as a real hub for social innovation.
|
|
|
Marc Bolaños, Maite Garolera, & Petia Radeva. (2015). Object Discovery using CNN Features in Egocentric Videos. In Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 (Vol. 9117, pp. 67–74). LNCS.
Abstract: Lifelogging devices based on photo/video are spreading faster every day. This growth can bring great benefits for developing methods to extract meaningful information about the user wearing the device and his/her environment. In this paper, we propose a semi-supervised strategy for easily discovering objects relevant to the person wearing a first-person camera. The method uses both the appearance of the egocentric video sequence acquired by the camera, extracted by means of a deep convolutional neural network, and an object-refill methodology that allows discovering objects even when they appear only rarely in the collection of images. We validate our method on a sequence of 1000 egocentric daily images and obtain an F-measure of 0.5, 0.17 higher than the state-of-the-art approach.
Keywords: Object discovery; Egocentric videos; Lifelogging; CNN
|
|
|
Kai Wang, Chenshen Wu, Andrew Bagdanov, Xialei Liu, Shiqi Yang, Shangling Jui, et al. (2022). Positive Pair Distillation Considered Harmful: Continual Meta Metric Learning for Lifelong Object Re-Identification. In 33rd British Machine Vision Conference.
Abstract: Lifelong object re-identification incrementally learns from a stream of re-identification tasks. The objective is to learn a representation that can be applied to all tasks and that generalizes to previously unseen re-identification tasks. The main challenge is that at inference time the representation must generalize to previously unseen identities. To address this problem, we apply continual meta metric learning to lifelong object re-identification. To prevent forgetting of previous tasks, we use knowledge distillation and explore the roles of positive and negative pairs. Based on our observation that the distillation and metric losses are antagonistic, we propose to remove positive pairs from distillation to robustify model updates. Our method, called Distillation without Positive Pairs (DwoPP), is evaluated on extensive intra-domain experiments on person and vehicle re-identification datasets, as well as inter-domain experiments on the LReID benchmark. Our experiments demonstrate that DwoPP significantly outperforms the state-of-the-art.
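The core observation can be sketched as a relational distillation loss that simply skips same-identity (positive) pairs, so the distillation term no longer fights the metric loss. The squared-error form and the toy similarity matrices below are illustrative assumptions, not the paper's exact DwoPP loss:

```python
def dwopp_style_loss(sim_student, sim_teacher, labels):
    """Sketch of 'distillation without positive pairs': match student
    and teacher pairwise similarities, but exclude pairs that share an
    identity label, since distilling those conflicts with the metric
    objective. (Illustrative squared-error form.)"""
    total, count = 0.0, 0
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] == labels[j]:
                continue  # positive pair: removed from distillation
            total += (sim_student[i][j] - sim_teacher[i][j]) ** 2
            count += 1
    return total / max(count, 1)

# Three samples, the first two sharing an identity
sim_s = [[1.0, 0.9, 0.2], [0.9, 1.0, 0.1], [0.2, 0.1, 1.0]]
sim_t = [[1.0, 0.5, 0.3], [0.5, 1.0, 0.2], [0.3, 0.2, 1.0]]
loss = dwopp_style_loss(sim_s, sim_t, labels=[0, 0, 1])
```

Only the two cross-identity pairs contribute to the loss; the positive pair (0, 1) is masked out, which is the robustification the paper argues for.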
|
|
|
Armin Mehri, Parichehr Behjati Ardakani, & Angel Sappa. (2021). MPRNet: Multi-Path Residual Network for Lightweight Image Super Resolution. In IEEE Winter Conference on Applications of Computer Vision (pp. 2703–2712).
Abstract: Lightweight super-resolution networks are extremely important for real-world applications. In recent years, several deep-learning SR approaches with outstanding performance have been introduced, at the price of memory and computational cost. To overcome this problem, we propose a novel lightweight super-resolution network that improves on the state of the art in lightweight SR and performs roughly on par with computationally expensive networks. The Multi-Path Residual Network is designed with a set of Residual Concatenation Blocks stacked with Adaptive Residual Blocks: (i) to adaptively extract informative features and learn more expressive spatial context information; (ii) to better leverage multi-level representations before the up-sampling stage; and (iii) to allow an efficient information and gradient flow within the network. The proposed architecture also contains a new attention mechanism, the Two-Fold Attention Module, to maximize the representation ability of the model. Extensive experiments show the superiority of our model over other state-of-the-art SR approaches.
|
|
|
Adria Ruiz, Joost Van de Weijer, & Xavier Binefa. (2015). From emotions to action units with hidden and semi-hidden-task learning. In 16th IEEE International Conference on Computer Vision (pp. 3703–3711).
Abstract: Limited annotated training data is a challenging problem in Action Unit recognition. In this paper, we investigate how the use of large databases labelled according to the 6 universal facial expressions can increase the generalization ability of Action Unit classifiers. For this purpose, we propose a novel learning framework: Hidden-Task Learning. HTL aims to learn a set of Hidden Tasks (Action Units) for which samples are not available but for which, in contrast, training data is easier to obtain for a set of related Visible Tasks (Facial Expressions). To that end, HTL is able to exploit prior knowledge about the relation between Hidden and Visible Tasks. In our case, we base this prior knowledge on empirical psychological studies providing statistical correlations between Action Units and universal facial expressions. Additionally, we extend HTL to Semi-Hidden-Task Learning (SHTL), assuming that Action Unit training samples are also provided. Performing exhaustive experiments over four different datasets, we show that HTL and SHTL improve the generalization ability of AU classifiers by training them with additional facial expression data. Additionally, we show that SHTL achieves competitive performance compared with state-of-the-art Transductive Learning approaches, which face the problem of limited training data by using unlabelled test samples during training.
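The Hidden-Task idea can be sketched in miniature: push predicted facial-expression probabilities through a prior matrix approximating P(AU | expression) to score Action Units that were never directly labelled. The prior values below are invented for illustration, not the empirical psychological tables the paper builds on:

```python
def expression_to_au(expr_probs, prior):
    """Score hidden tasks (Action Units) from visible-task predictions
    (facial expressions) via a prior linking the two -- the core of the
    Hidden-Task Learning idea, in a deliberately simplified linear form."""
    n_au = len(next(iter(prior.values())))
    scores = [0.0] * n_au
    for expr, p in expr_probs.items():
        for k in range(n_au):
            scores[k] += p * prior[expr][k]
    return scores

# Two expressions and three hypothetical AUs; prior rows are P(AU | expr)
prior = {"happy": [0.9, 0.1, 0.0], "sad": [0.0, 0.2, 0.8]}
scores = expression_to_au({"happy": 0.7, "sad": 0.3}, prior)
```

A classifier trained only on expressions can thus emit AU scores; SHTL additionally mixes in whatever directly labelled AU samples exist.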
|
|