Patricia Suarez, Angel Sappa, & Boris X. Vintimilla. (2021). Deep learning-based vegetation index estimation. In A. Solanki, A. Nayyar, & M. Naved (Eds.), Generative Adversarial Networks for Image-to-Image Translation (pp. 205–234). Elsevier.
|
Idoia Ruiz. (2022). Deep Metric Learning for re-identification, tracking and hierarchical novelty detection (Joan Serrat, Ed.). Ph.D. thesis.
Abstract: Metric learning refers to the problem in machine learning of learning a distance or similarity measure to compare data. In particular, deep metric learning involves learning a representation, also referred to as an embedding, such that data samples can be compared in the embedding space based on their distance, directly providing a similarity measure. This step is necessary for several tasks in computer vision: classification of images, regions or pixels, re-identification, out-of-distribution detection, object tracking in image sequences, and any other task whose solution requires computing a similarity score. This thesis addresses three specific problems that share this common requirement. The first one is person re-identification. Essentially, it is an image retrieval task that aims at finding instances of the same person according to a similarity measure. We first compare, in terms of accuracy and efficiency, classical metric learning to basic deep learning based methods for this problem. In this context, we also study network distillation as a strategy to optimize the trade-off between accuracy and speed at inference time. The second problem we contribute to is novelty detection in image classification. It consists in detecting samples of novel classes, i.e. classes never seen during training. However, standard novelty detection provides no information about the novel samples beyond the fact that they are unknown. Aiming at more informative outputs, we take advantage of the hierarchical taxonomies that are intrinsic to the classes. We propose a metric learning based approach that leverages the hierarchical relationships among classes during training and is able to predict the parent class of a novel sample in such a hierarchical taxonomy. Our third contribution is in multi-object tracking and segmentation. This joint task comprises classification, detection, instance segmentation and tracking.
Tracking can be formulated as a retrieval problem to be addressed with metric learning approaches. We tackle a persistent difficulty in academic research: the lack of annotated benchmarks for this task. To this end, we introduce the problem of weakly supervised multi-object tracking and segmentation, facing the challenge of not having ground truth available for instance segmentation. We propose a synergistic training strategy that benefits from the knowledge of the supervised tasks that are being learnt simultaneously.
|
Mohammad A. Haque, Ruben B. Bautista, Kamal Nasrollahi, Sergio Escalera, Christian B. Laursen, Ramin Irani, et al. (2018). Deep Multimodal Pain Recognition: A Database and Comparison of Spatio-Temporal Visual Modalities, Faces and Gestures. In 13th IEEE Conference on Automatic Face and Gesture Recognition (pp. 250–257).
Abstract: Pain is a symptom of many disorders associated with actual or potential tissue damage in the human body. Managing pain is not only a duty but also highly costly. The most primitive stage of pain management is the assessment of pain. Traditionally, it was accomplished by self-report or visual inspection by experts. However, automatic pain assessment systems based on facial videos are also rapidly evolving due to the need to manage pain in a robust and cost-effective way. Among the different challenges of automatic pain assessment from facial video data, two issues are increasingly prevalent: first, exploiting both spatial and temporal information of the face to assess the pain level, and second, incorporating multiple visual modalities to capture complementary face information related to pain. Most works in the literature focus on merely exploiting spatial information from chromatic (RGB) video data in shallow learning scenarios. However, employing deep learning techniques for spatio-temporal analysis considering Depth (D) and Thermal (T) along with RGB has high potential in this area. In this paper, we present the first state-of-the-art publicly available database, the 'Multimodal Intensity Pain (MIntPAIN)' database, for RGBDT pain level recognition in sequences. We provide first baseline results, including recognition of 5 pain levels, by analyzing the independent visual modalities and their fusion with CNN and LSTM models. From the experimental evaluation we observe that fusing modalities helps to enhance the recognition performance of pain levels in comparison to isolated ones. In particular, the combination of RGB, D, and T in an early fusion fashion achieved the best recognition rate.
|
Pau Rodriguez, Guillem Cucurull, Jordi Gonzalez, Josep M. Gonfaus, Kamal Nasrollahi, Thomas B. Moeslund, et al. (2017). Deep Pain: Exploiting Long Short-Term Memory Networks for Facial Expression Classification. Cyber - IEEE Transactions on Cybernetics, 1–11.
Abstract: Pain is an unpleasant feeling that has been shown to be an important factor in the recovery of patients. Since assessing it is costly in human resources and difficult to do objectively, there is a need for automatic systems to measure it. In this paper, contrary to current state-of-the-art techniques in pain assessment, which are based on facial features only, we suggest that performance can be enhanced by feeding the raw frames to deep learning models, outperforming the latest state-of-the-art results while also directly facing the problem of imbalanced data. As a baseline, our approach first uses convolutional neural networks (CNNs) to learn facial features from VGG_Faces, which are then linked to a long short-term memory network to exploit the temporal relation between video frames. We further compare the performance of the popular schema based on the canonically normalized appearance versus taking into account the whole image. As a result, we outperform the current state-of-the-art area-under-the-curve performance on the UNBC-McMaster Shoulder Pain Expression Archive Database. In addition, to evaluate the generalization properties of our proposed methodology on facial motion recognition, we also report competitive results on the Cohn-Kanade+ facial expression database.
|
Hugo Bertiche, Meysam Madadi, & Sergio Escalera. (2021). Deep Parametric Surfaces for 3D Outfit Reconstruction from Single View Image. In 16th IEEE International Conference on Automatic Face and Gesture Recognition (pp. 1–8).
Abstract: We present a methodology to retrieve analytical surfaces parametrized as a neural network. Previous works on 3D reconstruction yield point clouds, voxelized objects or meshes. Instead, our approach yields 2-manifolds in Euclidean space through deep learning. To this end, we implement a novel formulation of fully connected layers as parametrized manifolds that allows continuous predictions with differential geometry. Based on this property we propose a novel smoothness loss. Results on the CLOTH3D++ dataset show the possibility of inferring different topologies and the benefits of the smoothness term based on differential geometry.
|
Fahad Shahbaz Khan, Muhammad Anwer Rao, Joost Van de Weijer, Michael Felsberg, & J. Laaksonen. (2015). Deep semantic pyramids for human attributes and action recognition. In Image Analysis, Proceedings of the 19th Scandinavian Conference, SCIA 2015 (Vol. 9127, pp. 341–353). Springer International Publishing.
Abstract: Describing persons and their actions is a challenging problem due to variations in pose, scale and viewpoint in real-world images. Recently, the semantic pyramids approach [1] for pose normalization has been shown to provide excellent results for gender and action recognition. The performance of the semantic pyramids approach relies on robust image description and is therefore limited by the use of shallow local features. In the context of object recognition [2] and object detection [3], convolutional neural networks (CNNs), or deep features, have been shown to improve performance over conventional shallow features.
We propose deep semantic pyramids for human attributes and action recognition. The method works by constructing spatial pyramids based on CNN features of different part locations. These pyramids are then combined to obtain a single semantic representation. We validate our approach on the Berkeley and 27 Human Attributes datasets for attribute classification. For action recognition, we perform experiments on two challenging datasets: Willow and PASCAL VOC 2010. The proposed deep semantic pyramids provide significant gains of 17.2%, 13.9%, 24.3% and 22.6% over the standard shallow semantic pyramids on the Berkeley, 27 Human Attributes, Willow and PASCAL VOC 2010 datasets, respectively. Our results also show that deep semantic pyramids outperform conventional CNNs based on the full bounding box of the person. Finally, we compare our approach with state-of-the-art methods and show a gain in performance compared to the best methods in the literature.
Keywords: Action recognition; Human attributes; Semantic pyramids
|
Rada Deeb, Joost Van de Weijer, Damien Muselet, Mathieu Hebert, & Alain Tremeau. (2019). Deep spectral reflectance and illuminant estimation from self-interreflections. JOSA A - Journal of the Optical Society of America A, 36(1), 105–114.
Abstract: In this work, we propose a convolutional neural network-based approach to estimate the spectral reflectance of a surface and the spectral power distribution of the light from a single RGB image of a V-shaped surface. Interreflections occurring in a concave surface lead to gradients of RGB values over its area. These gradients carry a lot of information about the physical properties of the surface and the illuminant. Our network is trained with only simulated data constructed using a physics-based interreflection model. Coupling interreflection effects with deep learning helps to retrieve the spectral reflectance under an unknown light and to estimate the spectral power distribution of this light as well. In addition, the approach is more robust to the presence of image noise than classical ones. Our results show that the proposed approach outperforms state-of-the-art learning-based approaches on simulated data. In addition, it gives better results on real data compared to other interreflection-based approaches.
|
Ciprian Corneanu, Meysam Madadi, & Sergio Escalera. (2018). Deep Structure Inference Network for Facial Action Unit Recognition. In 15th European Conference on Computer Vision (LNCS, Vol. 11216, pp. 309–324).
Abstract: Facial expressions are combinations of basic components called Action Units (AUs). Recognizing AUs is key for general facial expression analysis. Recently, efforts in automatic AU recognition have been dedicated to learning combinations of local features and to exploiting correlations between AUs. We propose a deep neural architecture that tackles both problems by combining learned local and global features in its initial stages and replicating a message-passing algorithm between classes, similar to graphical model inference, in later stages. We show that by training the model end-to-end with increased supervision we improve the state-of-the-art by 5.3% and 8.2% on the BP4D and DISFA datasets, respectively.
Keywords: Computer Vision; Machine Learning; Deep Learning; Facial Expression Analysis; Facial Action Units; Structure Inference
|
Meysam Madadi, Hugo Bertiche, & Sergio Escalera. (2021). Deep unsupervised 3D human body reconstruction from a sparse set of landmarks. IJCV - International Journal of Computer Vision, 129, 2499–2512.
Abstract: In this paper we propose the first deep unsupervised approach in human body reconstruction to estimate the body surface from a sparse set of landmarks, called DeepMurf. We apply a denoising autoencoder to estimate missing landmarks. Then we apply an attention model to estimate body joints from the landmarks. Finally, a cascading network is applied to regress the parameters of a statistical generative model that reconstructs the body. Our set of proposed loss functions allows us to train the network in an unsupervised way. Results on four public datasets show that our approach accurately reconstructs the human body from real-world mocap data.
|
Yaxing Wang, Lu Yu, & Joost Van de Weijer. (2020). DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs. In 34th Conference on Neural Information Processing Systems.
Abstract: Image-to-image translation has recently achieved remarkable results. But despite current success, it suffers from inferior performance when translations between classes require large shape changes. We attribute this to the high-resolution bottlenecks which are used by current state-of-the-art image-to-image methods. Therefore, in this work, we propose a novel deep hierarchical Image-to-Image Translation method, called DeepI2I. We learn a model by leveraging hierarchical features: (a) structural information contained in the shallow layers and (b) semantic information extracted from the deep layers. To enable the training of deep I2I models on small datasets, we propose a novel transfer learning method that transfers knowledge from pre-trained GANs. Specifically, we leverage the discriminator of a pre-trained GAN (i.e. BigGAN or StyleGAN) to initialize both the encoder and the discriminator, and the pre-trained generator to initialize the generator of our model. Applying knowledge transfer leads to an alignment problem between the encoder and generator, which we address by introducing an adaptor network. On many-class image-to-image translation on three datasets (Animal faces, Birds, and Foods) we decrease mFID by at least 35% compared to the state-of-the-art. Furthermore, we qualitatively and quantitatively demonstrate that transfer learning significantly improves the performance of I2I systems, especially for small datasets. Finally, we are the first to perform I2I translations for domains with over 100 classes.
|
Margarita Torre, Beatriz Remeseiro, Petia Radeva, & Fernando Martinez. (2020). DeepNEM: Deep Network Energy-Minimization for Agricultural Field Segmentation. JSTAEOR - IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13, 726–737.
Abstract: One of the main characteristics of agricultural fields is that the appearance of different crops and their growth status in an aerial image is varied, with a wide range of radiometric values and a high level of variability. The extraction of these fields and their monitoring are activities that require a high level of human intervention. In this article, we propose a novel automatic algorithm, named deep network energy-minimization (DeepNEM), to extract agricultural fields in aerial images. The model-guided process selects the most relevant image clues extracted by a deep network, completes them, and finally generates regions that represent the agricultural fields under a minimization scheme. DeepNEM has been tested over a broad range of fields in terms of size, shape, and content. Different measures were used to compare DeepNEM with other methods, and to show that it represents an improved approach for achieving a high-quality segmentation of agricultural fields. Furthermore, this article also presents a new public dataset composed of 1200 images with their parcel boundary annotations.
|
Hugo Bertiche, Meysam Madadi, Emilio Tylson, & Sergio Escalera. (2021). DeePSD: Automatic Deep Skinning and Pose Space Deformation for 3D Garment Animation. In 19th IEEE International Conference on Computer Vision (pp. 5471–5480).
Abstract: We present a novel solution to the garment animation problem through deep learning. Our contribution allows animating any template outfit with arbitrary topology and geometric complexity. Recent works develop models for garment editing, resizing and animation at the same time by leveraging the support body model (encoding garments as body homotopies). This leads to complex engineering solutions that suffer from scalability, applicability and compatibility issues. By limiting our scope to garment animation only, we are able to propose a simple model that can animate any outfit, independently of its topology, vertex order or connectivity. Our proposed architecture maps outfits to animated 3D models in the standard format for 3D animation (blend weights and blend shape matrices), automatically providing compatibility with any graphics engine. We also propose a methodology to complement supervised learning with unsupervised, physics-based learning that implicitly solves collisions and enhances cloth quality.
|
C. Cortes. (2001). Definició d'un sensor de visió artificial per a l'ajust automàtic de tintes en la impressió de paper [Definition of an artificial vision sensor for the automatic adjustment of inks in paper printing].
|
Petia Radeva, A. Amini, J. Huang, & Enric Marti. (1996). Deformable B-Solids and Implicit Snakes for Localization and Tracking of SPAMM MRI-Data. In Workshop on Mathematical Methods in Biomedical Image Analysis (pp. 192–201). IEEE Computer Society.
Abstract: To date, MRI-SPAMM data from different image slices have been analyzed independently. In this paper, we propose an approach for 3D tag localization and tracking of SPAMM data by a novel deformable B-solid. The solid is defined in terms of a 3D tensor-product B-spline. The isoparametric curves of the B-spline solid have special importance. These are termed implicit snakes, as they deform under image forces from tag lines in different image slices. The localization and tracking of tag lines is performed under constraints of continuity and smoothness of the B-solid. The framework unifies the problems of localization and displacement fitting and interpolation into the same procedure, utilizing B-spline bases for interpolation. To track motion from boundaries and restrict image forces to the myocardium, a volumetric model is employed as a pair of coupled endocardial and epicardial B-spline surfaces. To recover deformations in the LV, an energy-minimization problem is posed where both tag and ...
|
Petia Radeva, Amir Amini, Jintao Huang, & Enric Marti. (1996). Deformable B-Solids: application for localization and tracking of MRI-SPAMM data. CVC (UAB).
Abstract: To date, MRI-SPAMM data from different image slices have been analyzed independently. In this paper, we propose an approach for 3D tag localization and tracking of SPAMM data by a novel deformable B-solid. The solid is defined in terms of a 3D tensor-product B-spline. The isoparametric curves of the B-spline solid have special importance. These are termed implicit snakes, as they deform under image forces from tag lines in different image slices. The localization and tracking of tag lines is performed under constraints of continuity and smoothness of the B-solid. The framework unifies the problems of localization and displacement fitting and interpolation into the same procedure, utilizing B-spline bases for interpolation. To track motion from boundaries and restrict image forces to the myocardium, a volumetric model is employed as a pair of coupled endocardial and epicardial B-spline surfaces. To recover deformations in the LV, an energy-minimization problem is posed where both tag and ...
|