|
Manuel Carbonell, Pau Riba, Mauricio Villegas, Alicia Fornes, & Josep Llados. (2020). Named Entity Recognition and Relation Extraction with Graph Neural Networks in Semi Structured Documents. In 25th International Conference on Pattern Recognition.
Abstract: The use of administrative documents to communicate and keep a record of business information requires methods able to automatically extract and understand the content of such documents in a robust and efficient way. In addition, the semi-structured nature of these reports is especially suited to graph-based representations, which are flexible enough to adapt to the deformations of the different document templates. Moreover, Graph Neural Networks provide a proper methodology to learn relations among the data elements in these documents. In this work we study the use of Graph Neural Network architectures to tackle the problem of entity recognition and relation extraction in semi-structured documents. Our approach achieves state-of-the-art results in the three tasks involved in the process. Additionally, experimentation with two datasets of different nature demonstrates the good generalization ability of our approach.
|
|
|
M. Li, Xialei Liu, Joost Van de Weijer, & Bogdan Raducanu. (2020). Learning to Rank for Active Learning: A Listwise Approach. In 25th International Conference on Pattern Recognition (pp. 5587–5594).
Abstract: Active learning emerged as an alternative to alleviate the effort of labeling huge amounts of data for data-hungry applications (such as image/video indexing and retrieval, autonomous driving, etc.). The goal of active learning is to automatically select a number of unlabeled samples for annotation (according to a budget), based on an acquisition function, which indicates how valuable a sample is for training the model. The learning loss method is a task-agnostic approach which attaches a module to learn to predict the target loss of unlabeled data, and selects the data with the highest loss for labeling. In this work, we follow this strategy but we define the acquisition function as a learning to rank problem and rethink the structure of the loss prediction module, using a simple but effective listwise approach. Experimental results on four datasets demonstrate that our method outperforms recent state-of-the-art active learning approaches for both image classification and regression tasks.
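The core idea above — scoring unlabeled samples so that their predicted ranking matches the ranking of their true training losses, then labeling the top of that ranking — can be sketched as follows. This is an illustrative toy, not the paper's implementation; the function names and the Plackett-Luce (ListMLE-style) listwise loss are assumptions standing in for the paper's actual loss-prediction module.

```python
import math

def listmle_loss(scores, target_losses):
    """Listwise ranking loss (ListMLE-style sketch): predicted scores should
    order the samples the same way as their true training losses."""
    # ground-truth permutation: sample with the highest target loss first
    order = sorted(range(len(scores)), key=lambda i: -target_losses[i])
    s = [scores[i] for i in order]
    # negative log-likelihood of that permutation under a Plackett-Luce model
    loss = 0.0
    for i in range(len(s)):
        loss += math.log(sum(math.exp(sj) for sj in s[i:])) - s[i]
    return loss

def select_for_labeling(scores, budget):
    """Acquisition step: send the `budget` highest-scored samples to annotation."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:budget]
```

Note that the loss depends only on the ordering of the scores, not their absolute values — which is exactly the point of replacing loss regression with a ranking objective.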
|
|
|
Ciprian Corneanu, Meysam Madadi, Sergio Escalera, & Aleix Martinez. (2020). Explainable Early Stopping for Action Unit Recognition. In Faces and Gestures in E-health and welfare workshop (pp. 693–699).
Abstract: A common technique to avoid overfitting when training deep neural networks (DNN) is to monitor the performance on a dedicated validation data partition and to stop training as soon as it saturates. This only focuses on what the model does, while completely ignoring what happens inside it. In this work, we open the “black-box” of DNN in order to perform early stopping. We propose to use a novel theoretical framework that analyses meso-scale patterns in the topology of the functional graph of a network while it trains. Based on it, we decide when the network transitions from learning towards overfitting in a more explainable way. We exemplify the benefits of this approach on a state-of-the-art custom DNN that jointly learns local representations and label structure employing an ensemble of dedicated subnetworks. We show that it is practically equivalent in performance to early stopping with patience, the standard early stopping algorithm in the literature. This proves beneficial for AU recognition performance and provides new insights into how learning of AUs occurs in DNNs.
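For reference, the baseline the paper compares against — early stopping with patience — amounts to halting training once validation performance has not improved for a fixed number of epochs. A minimal sketch (class and parameter names are illustrative, not from the paper):

```python
class EarlyStopping:
    """Standard patience-based early stopping: stop when the validation loss
    has not improved for `patience` consecutive epochs."""

    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("inf")   # best validation loss seen so far
        self.bad_epochs = 0        # epochs since the last improvement

    def step(self, val_loss):
        """Call once per epoch; returns True when training should stop."""
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

The topological method in the paper replaces this purely output-based criterion with one computed from the network's internal functional graph.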
|
|
|
Anna Esposito, Terry Amorese, Nelson Maldonato, Alessandro Vinciarelli, Maria Ines Torres, Sergio Escalera, et al. (2020). Seniors’ ability to decode differently aged facial emotional expressions. In Faces and Gestures in E-health and welfare workshop (pp. 716–722).
|
|
|
Anna Esposito, Italia Cirillo, Antonietta Esposito, Leopoldina Fortunati, Gian Luca Foresti, Sergio Escalera, et al. (2020). Impairments in decoding facial and vocal emotional expressions in high functioning autistic adults and adolescents. In Faces and Gestures in E-health and welfare workshop (pp. 667–674).
|
|
|
Josep Famadas, Meysam Madadi, Cristina Palmero, & Sergio Escalera. (2020). Generative Video Face Reenactment by AUs and Gaze Regularization. In 15th IEEE International Conference on Automatic Face and Gesture Recognition (pp. 444–451).
Abstract: In this work, we propose an encoder-decoder-like architecture to perform face reenactment in image sequences. Our goal is to transfer the training subject identity to a given test subject. We regularize face reenactment by facial action unit intensity and 3D gaze vector regression. This way, we enforce the network to transfer subtle facial expressions and eye dynamics, providing a more lifelike result. The proposed encoder-decoder receives as input the previous sequence frame stacked to the current frame image of facial landmarks. Thus, the generated frames benefit from appearance and geometry, while keeping temporal coherence for the generated sequence. At test stage, a new target subject with the facial performance of the source subject and the appearance of the training subject is reenacted. Principal component analysis is applied to project the test subject geometry to the closest training subject geometry before reenactment. Evaluation of our proposal shows faster convergence, and more accurate and realistic results in comparison to other architectures without action units and gaze regularization.
|
|
|
Carlos Martin-Isla, Maryam Asadi-Aghbolaghi, Polyxeni Gkontra, Victor M. Campello, Sergio Escalera, & Karim Lekadir. (2020). Stacked BCDU-net with semantic CMR synthesis: application to Myocardial Pathology Segmentation challenge. In MYOPS challenge and workshop.
|
|
|
Hugo Bertiche, Meysam Madadi, & Sergio Escalera. (2020). CLOTH3D: Clothed 3D Humans. In 16th European Conference on Computer Vision.
Abstract: This work presents CLOTH3D, the first large-scale synthetic dataset of 3D clothed human sequences. CLOTH3D contains large variability in garment type, topology, shape, size, tightness and fabric. Clothes are simulated on top of thousands of different pose sequences and body shapes, generating realistic cloth dynamics. We provide the dataset together with a generative model for cloth generation. We propose a Conditional Variational Auto-Encoder (CVAE) based on graph convolutions (GCVAE) to learn garment latent spaces. This allows realistic generation of 3D garments on top of the SMPL model for any pose and shape.
|
|
|
Reza Azad, Maryam Asadi-Aghbolaghi, Mahmood Fathy, & Sergio Escalera. (2020). Attention Deeplabv3+: Multi-level Context Attention Mechanism for Skin Lesion Segmentation. In Bioimage computation workshop.
|
|
|
Petia Radeva. (2020). Uncertainty Modeling within an End-to-end Framework for Food Image Analysis. In 1st DELTA.
|
|
|
Martin Menchon, Estefania Talavera, Jose M. Massa, & Petia Radeva. (2020). Behavioural Pattern Discovery from Collections of Egocentric Photo-Streams. In ECCV Workshops (Vol. 12538, pp. 469–484). LNCS.
Abstract: The automatic discovery of behaviour is of high importance when aiming to assess and improve the quality of life of people. Egocentric images offer a rich and objective description of the daily life of the camera wearer. This work proposes a new method to identify a person’s patterns of behaviour from collected egocentric photo-streams. Our model characterizes time-frames based on the context (place, activities and environment objects) that define the images composition. Based on the similarity among the time-frames that describe the collected days for a user, we propose a new unsupervised greedy method to discover the behavioural pattern set based on a novel semantic clustering approach. Moreover, we present a new score metric to evaluate the performance of the proposed algorithm. We validate our method on 104 days and more than 100k images extracted from 7 users. Results show that behavioural patterns can be discovered to characterize the routine of individuals and consequently their lifestyle.
|
|
|
Mariona Caros, Maite Garolera, Petia Radeva, & Xavier Giro. (2020). Automatic Reminiscence Therapy for Dementia. In 10th ACM International Conference on Multimedia Retrieval (pp. 383–387).
Abstract: With people living longer than ever, the number of cases of dementia such as Alzheimer's disease increases steadily. It affects more than 46 million people worldwide, and it is estimated that by 2050 more than 100 million will be affected. While there are no effective treatments for these terminal diseases, therapies such as reminiscence, which stimulate memories from the past, are recommended. Currently, reminiscence therapy takes place in care homes and is guided by a therapist or a carer. In this work, we present an AI-based solution to automate reminiscence therapy, which consists of a dialogue system that uses photos as input to generate questions. We ran a usability case study with patients diagnosed with mild cognitive impairment that shows they found the system very entertaining and challenging. Overall, this paper presents how reminiscence therapy can be automated by using machine learning, and deployed to smartphones and laptops, making the therapy more accessible to every person affected by dementia.
|
|
|
Idoia Ruiz, & Joan Serrat. (2020). Rank-based ordinal classification. In 25th International Conference on Pattern Recognition (pp. 8069–8076).
Abstract: Differently from the regular classification task, in ordinal classification there is an order among the classes. As a consequence, not all classification errors matter the same: predicting a class close to the ground-truth one is better than predicting a farther away class. To account for this, most previous works employ loss functions based on the absolute difference between the predicted and ground-truth class labels. We argue that there are many cases in ordinal classification where label values are arbitrary (for instance 1…C, where C is the number of classes) and thus such loss functions may not be the best choice. We instead propose a network architecture that produces not a single class prediction but an ordered vector, or ranking, of all the possible classes from most to least likely. This is thanks to a loss function that compares ground-truth and predicted rankings of these class labels, not the labels themselves. Another advantage of this new formulation is that we can enforce consistency in the predictions, namely, that predicted rankings come from some unimodal vector of scores with mode at the ground-truth class. We compare with state-of-the-art ordinal classification methods, showing that ours attains equal or better performance, as measured by common ordinal classification metrics, on three benchmark datasets. Furthermore, it is also suitable for a new task on image aesthetics assessment, i.e. most voted score prediction. Finally, we also apply it to building damage assessment from satellite images, providing an analysis of its performance depending on the degree of imbalance of the dataset.
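The key move above — comparing rankings of class labels rather than the labels themselves — can be illustrated with a toy sketch. The function names and the Spearman-footrule comparison are assumptions for illustration; the paper's actual architecture and ranking loss differ.

```python
def target_ranking(true_class, num_classes):
    """Ground-truth ranking for ordinal classification: classes ordered from
    most to least likely, i.e. by distance to the true class (stable ties)."""
    return sorted(range(num_classes), key=lambda c: abs(c - true_class))

def ranking_disagreement(class_scores, true_class):
    """Toy ranking loss (Spearman footrule): distance between the predicted
    ranking (classes sorted by score) and the ground-truth ranking."""
    C = len(class_scores)
    predicted = sorted(range(C), key=lambda c: -class_scores[c])
    gt = target_ranking(true_class, C)
    pred_pos = {c: r for r, c in enumerate(predicted)}
    gt_pos = {c: r for r, c in enumerate(gt)}
    return sum(abs(pred_pos[c] - gt_pos[c]) for c in range(C))
```

A unimodal score vector peaked at the true class yields the ground-truth ranking exactly, which is the consistency property the abstract mentions.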
|
|
|
Klara Janousckova, Jiri Matas, Lluis Gomez, & Dimosthenis Karatzas. (2020). Text Recognition – Real World Data and Where to Find Them. In 25th International Conference on Pattern Recognition (pp. 4489–4496).
Abstract: We present a method for exploiting weakly annotated images to improve text extraction pipelines. The approach uses an arbitrary end-to-end text recognition system to obtain text region proposals and their, possibly erroneous, transcriptions. The method includes matching of imprecise transcriptions to weak annotations and an edit-distance-guided neighbourhood search. It produces nearly error-free, localised instances of scene text, which we treat as “pseudo ground truth” (PGT). The method is applied to two weakly-annotated datasets. Training with the extracted PGT consistently improves the accuracy of a state-of-the-art recognition model, by 3.7% on average across different benchmark datasets (image domains), and by 24.5% on one of the weakly annotated datasets.
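The matching step above — accepting a transcription as pseudo ground truth when it lies within a small edit distance of some weak annotation — can be sketched as follows. Function names and the relative-distance threshold are illustrative assumptions, not the paper's exact procedure.

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic one-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete ca
                                     dp[j - 1] + 1,      # insert cb
                                     prev + (ca != cb))  # substitute
    return dp[-1]

def build_pgt(transcriptions, weak_annotations, max_rel_dist=0.2):
    """Match possibly-erroneous transcriptions to weak image-level text
    annotations; keep pairs within a relative edit-distance threshold
    as pseudo ground truth (PGT)."""
    pgt = []
    for text in transcriptions:
        best = min(weak_annotations,
                   key=lambda a: edit_distance(text.lower(), a.lower()))
        if edit_distance(text.lower(), best.lower()) <= max_rel_dist * max(len(best), 1):
            pgt.append((text, best))
    return pgt
```

Transcriptions with no sufficiently close weak annotation are simply discarded, which is what keeps the resulting PGT nearly error-free.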
|
|
|
Minesh Mathew, Ruben Tito, Dimosthenis Karatzas, R. Manmatha, & C.V. Jawahar. (2020). Document Visual Question Answering Challenge 2020. In 33rd IEEE Conference on Computer Vision and Pattern Recognition – Short paper.
Abstract: This paper presents the results of the Document Visual Question Answering Challenge, organized as part of the “Text and Documents in the Deep Learning Era” workshop at CVPR 2020. The challenge introduces a new problem – Visual Question Answering on document images. The challenge comprised two tasks. The first task concerns asking questions on a single document image, while the second task is set as a retrieval task where the question is posed over a collection of images. For task 1, a new dataset is introduced comprising 50,000 question-answer pairs defined over 12,767 document images. For task 2, another dataset has been created comprising 20 questions over 14,362 document images which share the same document template.
|
|