Manuel Carbonell, Pau Riba, Mauricio Villegas, Alicia Fornes, & Josep Llados. (2020). Named Entity Recognition and Relation Extraction with Graph Neural Networks in Semi Structured Documents. In 25th International Conference on Pattern Recognition.
Abstract: The use of administrative documents to communicate and record business information requires methods able to automatically extract and understand the content of such documents in a robust and efficient way. In addition, the semi-structured nature of these documents is especially suited to graph-based representations, which are flexible enough to adapt to the deformations across different document templates. Moreover, Graph Neural Networks provide a suitable methodology to learn relations among the data elements in these documents. In this work we study the use of Graph Neural Network architectures to tackle the problems of entity recognition and relation extraction in semi-structured documents. Our approach achieves state-of-the-art results in the three tasks involved in the process. Additionally, experimentation with two datasets of different nature demonstrates the good generalization ability of our approach.
M. Li, Xialei Liu, Joost Van de Weijer, & Bogdan Raducanu. (2020). Learning to Rank for Active Learning: A Listwise Approach. In 25th International Conference on Pattern Recognition (pp. 5587–5594).
Abstract: Active learning emerged as an alternative to alleviate the effort of labeling huge amounts of data for data-hungry applications (such as image/video indexing and retrieval, autonomous driving, etc.). The goal of active learning is to automatically select a number of unlabeled samples for annotation (according to a budget), based on an acquisition function, which indicates how valuable a sample is for training the model. The learning loss method is a task-agnostic approach which attaches a module to learn to predict the target loss of unlabeled data, and selects the data with the highest loss for labeling. In this work, we follow this strategy but define the acquisition function as a learning-to-rank problem and rethink the structure of the loss prediction module, using a simple but effective listwise approach. Experimental results on four datasets demonstrate that our method outperforms recent state-of-the-art active learning approaches for both image classification and regression tasks.
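The listwise acquisition strategy described in the abstract can be sketched with a generic ListNet-style top-one ranking loss between predicted and true target losses; the paper's exact loss-prediction module and loss formulation differ, so the function names and the cross-entropy form below are illustrative assumptions:

```python
import numpy as np

def listnet_loss(pred_scores, true_losses):
    """ListNet-style top-one cross-entropy between the ranking implied by
    the predicted scores and the ranking implied by the true target losses."""
    p_true = np.exp(true_losses - true_losses.max())
    p_true /= p_true.sum()
    p_pred = np.exp(pred_scores - pred_scores.max())
    p_pred /= p_pred.sum()
    return float(-(p_true * np.log(p_pred + 1e-12)).sum())

def select_for_labeling(pred_scores, budget):
    """Select the `budget` unlabeled samples with the highest predicted loss."""
    return np.argsort(pred_scores)[::-1][:budget].tolist()
```

A listwise loss of this kind penalizes a mis-ordered score list as a whole, rather than comparing sample pairs one at a time.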
Ciprian Corneanu, Meysam Madadi, Sergio Escalera, & Aleix Martinez. (2020). Explainable Early Stopping for Action Unit Recognition. In Faces and Gestures in E-health and welfare workshop (pp. 693–699).
Abstract: A common technique to avoid overfitting when training deep neural networks (DNNs) is to monitor performance on a dedicated validation data partition and to stop training as soon as it saturates. This focuses only on what the model does, while completely ignoring what happens inside it. In this work, we open the “black box” of a DNN in order to perform early stopping. We propose a novel theoretical framework that analyses meso-scale patterns in the topology of the functional graph of a network while it trains. Based on this framework, we decide when the network transitions from learning to overfitting in a more explainable way. We exemplify the benefits of this approach on a state-of-the-art custom DNN that jointly learns local representations and label structure, employing an ensemble of dedicated subnetworks. We show that it is practically equivalent in performance to early stopping with patience, the standard early stopping algorithm in the literature. This proves beneficial for AU recognition performance and provides new insights into how learning of AUs occurs in DNNs.
Anna Esposito, Terry Amorese, Nelson Maldonato, Alessandro Vinciarelli, Maria Ines Torres, Sergio Escalera, et al. (2020). Seniors’ ability to decode differently aged facial emotional expressions. In Faces and Gestures in E-health and welfare workshop (pp. 716–722).
Anna Esposito, Italia Cirillo, Antonietta Esposito, Leopoldina Fortunati, Gian Luca Foresti, Sergio Escalera, et al. (2020). Impairments in decoding facial and vocal emotional expressions in high functioning autistic adults and adolescents. In Faces and Gestures in E-health and welfare workshop (pp. 667–674).
Josep Famadas, Meysam Madadi, Cristina Palmero, & Sergio Escalera. (2020). Generative Video Face Reenactment by AUs and Gaze Regularization. In 15th IEEE International Conference on Automatic Face and Gesture Recognition (pp. 444–451).
Abstract: In this work, we propose an encoder-decoder-like architecture to perform face reenactment in image sequences. Our goal is to transfer the identity of the training subject to a given test subject. We regularize face reenactment with facial action unit intensity and 3D gaze vector regression. In this way, we enforce the network to transfer subtle facial expressions and eye dynamics, providing a more lifelike result. The proposed encoder-decoder receives as input the previous frame of the sequence stacked with the facial landmark image of the current frame. Thus, the generated frames benefit from both appearance and geometry, while keeping temporal coherence across the generated sequence. At test time, a new target subject is reenacted with the facial performance of the source subject and the appearance of the training subject. Principal component analysis is applied to project the test subject's geometry to the closest training subject geometry before reenactment. Evaluation of our proposal shows faster convergence, and more accurate and realistic results in comparison to architectures without action unit and gaze regularization.
Carlos Martin-Isla, Maryam Asadi-Aghbolaghi, Polyxeni Gkontra, Victor M. Campello, Sergio Escalera, & Karim Lekadir. (2020). Stacked BCDU-net with semantic CMR synthesis: application to Myocardial Pathology Segmentation challenge. In MYOPS challenge and workshop.
Hugo Bertiche, Meysam Madadi, & Sergio Escalera. (2020). CLOTH3D: Clothed 3D Humans. In 16th European Conference on Computer Vision.
Abstract: This work presents CLOTH3D, the first large-scale synthetic dataset of 3D clothed human sequences. CLOTH3D contains a large variability in garment type, topology, shape, size, tightness and fabric. Clothes are simulated on top of thousands of different pose sequences and body shapes, generating realistic cloth dynamics. We provide the dataset together with a generative model for cloth generation. We propose a Conditional Variational Auto-Encoder (CVAE) based on graph convolutions (GCVAE) to learn garment latent spaces. This allows the realistic generation of 3D garments on top of the SMPL model for any pose and shape.
Reza Azad, Maryam Asadi-Aghbolaghi, Mahmood Fathy, & Sergio Escalera. (2020). Attention Deeplabv3+: Multi-level Context Attention Mechanism for Skin Lesion Segmentation. In Bioimage computation workshop.
Petia Radeva. (2020). Uncertainty Modeling within an End-to-end Framework for Food Image Analysis. In 1st DELTA.
Mariona Caros, Maite Garolera, Petia Radeva, & Xavier Giro. (2020). Automatic Reminiscence Therapy for Dementia. In 10th ACM International Conference on Multimedia Retrieval (pp. 383–387).
Abstract: With people living longer than ever, the number of cases of dementia such as Alzheimer's disease increases steadily. It affects more than 46 million people worldwide, and it is estimated that by 2050 more than 100 million will be affected. While there are no effective treatments for these terminal diseases, therapies such as reminiscence, which stimulate memories from the past, are recommended. Currently, reminiscence therapy takes place in care homes and is guided by a therapist or a carer. In this work, we present an AI-based solution to automate reminiscence therapy, consisting of a dialogue system that uses photos as input to generate questions. We ran a usability case study with patients diagnosed with mild cognitive impairment, which showed they found the system very entertaining and challenging. Overall, this paper presents how reminiscence therapy can be automated using machine learning and deployed to smartphones and laptops, making the therapy more accessible to every person affected by dementia.
Soumick Chatterjee, Fatima Saad, Chompunuch Sarasaen, Suhita Ghosh, Rupali Khatun, Petia Radeva, et al. (2020). Exploration of Interpretability Techniques for Deep COVID-19 Classification using Chest X-ray Images. CoRR abs/2006.02570.
Abstract: The outbreak of COVID-19 has shocked the entire world with its fairly rapid spread and has challenged different sectors. One of the most effective ways to limit its spread is the early and accurate diagnosis of infected patients. Medical imaging such as X-ray and Computed Tomography (CT), combined with the potential of Artificial Intelligence (AI), plays an essential role in supporting medical staff in the diagnosis process. In this paper, five different deep learning models (ResNet18, ResNet34, InceptionV3, InceptionResNetV2, and DenseNet161) and their ensemble were used to classify COVID-19, pneumonia and healthy subjects using chest X-rays. Multi-label classification was performed to predict multiple pathologies for each patient, if present. Foremost, the interpretability of each of the networks was thoroughly studied using techniques like occlusion, saliency, input X gradient, guided backpropagation, integrated gradients, and DeepLIFT. The mean Micro-F1 score of the models for COVID-19 classification ranges from 0.66 to 0.875, and is 0.89 for the ensemble of the network models. The qualitative results depicted the ResNets to be the most interpretable models.
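As a minimal illustration of the occlusion technique mentioned among the interpretability methods, the following sketch slides a gray patch over an image and records how much the model's score drops; the `predict` callback, patch size, and fill value are assumptions, not the paper's setup:

```python
import numpy as np

def occlusion_map(image, predict, patch=8, stride=8):
    """Occlusion sensitivity: slide a gray patch over the image and record
    how much the model's score for the predicted class drops."""
    base = predict(image)
    H, W = image.shape[:2]
    heat = np.zeros(((H - patch) // stride + 1, (W - patch) // stride + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = image.copy()
            occluded[i*stride:i*stride+patch, j*stride:j*stride+patch] = 0.5
            # a large score drop means this region mattered for the prediction
            heat[i, j] = base - predict(occluded)
    return heat
```

Regions whose occlusion causes the largest score drop are the ones the model relied on most.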
Estefania Talavera, Andreea Glavan, Alina Matei, & Petia Radeva. (2020). Eating Habits Discovery in Egocentric Photo-streams. CoRR abs/2009.07646.
Abstract: Eating habits are learned throughout the early stages of our lives. However, it is not easy to be aware of how our food-related routines affect our healthy living. In this work, we address the unsupervised discovery of nutritional habits from egocentric photo-streams. We build a food-related behavioural pattern discovery model, which discloses nutritional routines from the activities performed throughout the day. To do so, we rely on Dynamic Time Warping to evaluate the similarity among the collected days. Within this framework, we present a simple but robust and fast novel classification pipeline that outperforms the state of the art on food-related image classification, with a weighted accuracy and F-score of 70% and 63%, respectively. Later, we identify days composed of nutritional activities that do not describe the habits of the person as anomalies in the daily life of the user, using the Isolation Forest method. Furthermore, we show an application for the identification of food-related scenes when the camera wearer eats in isolation. Results show the good performance of the proposed model and its relevance for visualizing the nutritional habits of individuals.
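A minimal sketch of the Dynamic Time Warping distance used here to compare days; plain scalar sequences stand in for the per-day activity descriptors, which is an assumption for illustration:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences:
    the minimum cumulative cost of aligning them with local stretching."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

Because the alignment can stretch locally, two days with the same routine at slightly shifted times still come out as similar.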
Shiqi Yang, Yaxing Wang, Joost Van de Weijer, & Luis Herranz. (2020). Unsupervised Domain Adaptation without Source Data by Casting a BAIT. arXiv:2010.12427.
Abstract: Unsupervised domain adaptation (UDA) aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain. Existing UDA methods require access to source data during adaptation, which may not be feasible in some real-world applications. In this paper, we address the source-free unsupervised domain adaptation (SFUDA) problem, where only the source model is available during adaptation. We propose a method named BAIT to address SFUDA. Specifically, given only the source model, with the source classifier head fixed, we introduce a new learnable classifier. When adapting to the target domain, the class prototypes of the newly added classifier act as a bait. They first approach the target features which deviate from the prototypes of the source classifier due to domain shift. Those target features are then pulled towards the corresponding prototypes of the source classifier, thus achieving feature alignment with the source classifier in the absence of source data. Experimental results show that the proposed method achieves state-of-the-art performance on several benchmark datasets compared with existing UDA and SFUDA methods.
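A toy sketch of the bait mechanism described above: bait prototypes chase the shifted target features, which are then pulled toward the fixed source prototypes. The nearest-prototype assignment, the mean-based update rule, and the learning rate are illustrative assumptions, not the paper's training objective:

```python
import numpy as np

def bait_step(feats, src_protos, bait_protos, lr=0.5):
    """One illustrative adaptation step. `src_protos` stays fixed; the bait
    prototypes move toward the target features assigned to them, and the
    features are pulled toward the source prototype of the same class."""
    # 1) assign each target feature to its nearest bait prototype
    assign = np.argmin(
        np.linalg.norm(feats[:, None] - bait_protos[None], axis=2), axis=1)
    # 2) bait prototypes chase the target features they attracted
    for k in range(len(bait_protos)):
        members = feats[assign == k]
        if len(members):
            bait_protos[k] += lr * (members.mean(0) - bait_protos[k])
    # 3) features are pulled toward the corresponding fixed source prototypes
    feats += lr * (src_protos[assign] - feats)
    return feats, bait_protos, assign
```

Iterating such a step aligns the target features with the source classifier without ever touching source data.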
Shiqi Yang, Kai Wang, Luis Herranz, & Joost Van de Weijer. (2020). Simple and effective localized attribute representations for zero-shot learning. arXiv:2006.05938.
Abstract: Zero-shot learning (ZSL) aims to discriminate images from unseen classes by exploiting relations to seen classes via their semantic descriptions. Some recent papers have shown the importance of localized features together with fine-tuning the feature extractor to obtain discriminative and transferable features. However, these methods require complex attention or part-detection modules to perform explicit localization in the visual space. In contrast, in this paper we propose localizing representations in the semantic/attribute space, with a simple but effective pipeline where localization is implicit. Focusing on attribute representations, we show that our method obtains state-of-the-art performance on the CUB and SUN datasets, and also achieves competitive results on the AWA2 dataset, outperforming generally more complex methods with explicit localization in the visual space. Our method can be implemented easily and can serve as a new baseline for zero-shot learning. In addition, our localized representations are highly interpretable as attribute-specific heatmaps.
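A minimal sketch of implicit localization in attribute space: a convolutional feature map is linearly projected to one spatial heatmap per attribute, and pooling each heatmap gives the image-level attribute score. The linear projection `W` and the pooling choice are illustrative assumptions:

```python
import numpy as np

def attribute_maps(feat_map, W):
    """Project a conv feature map of shape (H, W, C) into attribute space
    with a linear layer of shape (C, A), giving one spatial heatmap per
    attribute; averaging each map yields the image-level attribute score."""
    H, Wd, C = feat_map.shape
    maps = feat_map.reshape(-1, C) @ W      # (H*W, A)
    maps = maps.reshape(H, Wd, -1)          # per-attribute heatmaps
    scores = maps.mean(axis=(0, 1))         # image-level attribute scores
    return maps, scores
```

The heatmaps come for free from the same projection that produces the scores, which is why the localization is implicit and directly interpretable.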