|
Marta Diez-Ferrer, Debora Gil, Elena Carreño, Susana Padrones, Samantha Aso, Vanesa Vicens, et al. (2016). Positive Airway Pressure-Enhanced CT to Improve Virtual Bronchoscopic Navigation. Chest, 150(4), 1003A.
|
|
|
Kai Wang, Chenshen Wu, Andrew Bagdanov, Xialei Liu, Shiqi Yang, Shangling Jui, et al. (2022). Positive Pair Distillation Considered Harmful: Continual Meta Metric Learning for Lifelong Object Re-Identification. In 33rd British Machine Vision Conference.
Abstract: Lifelong object re-identification incrementally learns from a stream of re-identification tasks. The objective is to learn a representation that can be applied to all tasks and that generalizes to previously unseen re-identification tasks. The main challenge is that at inference time the representation must generalize to previously unseen identities. To address this problem, we apply continual meta metric learning to lifelong object re-identification. To prevent forgetting of previous tasks, we use knowledge distillation and explore the roles of positive and negative pairs. Based on our observation that the distillation and metric losses are antagonistic, we propose to remove positive pairs from distillation to robustify model updates. Our method, called Distillation without Positive Pairs (DwoPP), is evaluated on extensive intra-domain experiments on person and vehicle re-identification datasets, as well as inter-domain experiments on the LReID benchmark. Our experiments demonstrate that DwoPP significantly outperforms the state-of-the-art.
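The core idea of DwoPP, removing positive pairs from the distillation loss because distillation and metric losses pull positive pairs in opposite directions, can be sketched as follows. This is an illustrative numpy sketch of the concept, not the authors' implementation; the function names and the MSE-over-similarities formulation are assumptions.

```python
import numpy as np

def pairwise_cosine(emb):
    # Row-normalize, then cosine similarity is a plain dot product.
    norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return norm @ norm.T

def dwopp_distillation_loss(new_emb, old_emb, labels):
    """Distill only over negative pairs: match the new model's similarity
    structure to the old model's, but exclude same-identity (positive)
    pairs, since the metric loss pulls those together and distilling them
    would be antagonistic to that objective."""
    s_new = pairwise_cosine(new_emb)
    s_old = pairwise_cosine(old_emb)
    labels = np.asarray(labels)
    negative = labels[:, None] != labels[None, :]  # True for cross-identity pairs
    diff = (s_new - s_old)[negative]
    return float(np.mean(diff ** 2))
```

When old and new embeddings coincide the loss is zero; positive-pair entries of the similarity matrices never enter the loss, so the metric objective is free to restructure them.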
|
|
|
Miguel Reyes, Albert Clapes, Luis Felipe Mejia, Jose Ramirez, Juan R Revilla, & Sergio Escalera. (2012). Posture Analysis and Range of Movement Estimation using Depth Maps. In 21st International Conference on Pattern Recognition International Workshop on Depth Image Analysis (Vol. 7854, pp. 97–105). Springer Berlin Heidelberg.
Abstract: The World Health Organization estimates that 80% of the world population is affected by back pain at some point in their lives. Current practices to analyze back problems are expensive, subjective, and invasive. In this work, we propose a novel tool for posture and range of movement estimation based on the analysis of 3D information from depth maps. Given a set of keypoints defined by the user, RGB and depth data are aligned, the depth surface is reconstructed, keypoints are matched using a novel point-to-point fitting procedure, and accurate measurements of posture, spinal curvature, and range of movement are computed. The system shows high precision and reliable measurements, being useful for posture reeducation to prevent musculoskeletal disorders, such as back pain, as well as for tracking the posture evolution of patients in rehabilitation treatments.
|
|
|
Ignasi Rius, Javier Varona, Xavier Roca, & Jordi Gonzalez. (2006). Posture Constraints for Bayesian Human Motion Tracking. In IV Conference on Articulated Motion and Deformable Objects (AMDO'06), LNCS 4069, pp. 414–423.
|
|
|
F. Negin, Pau Rodriguez, M. Koperski, A. Kerboua, Jordi Gonzalez, J. Bourgeois, et al. (2018). PRAXIS: Towards automatic cognitive assessment using gesture recognition. Expert Systems with Applications, 106, 21–35.
Abstract: The Praxis test is a gesture-based diagnostic test which has been accepted as diagnostically indicative of cortical pathologies such as Alzheimer's disease. Despite being simple, this test is oftentimes skipped by clinicians. In this paper, we propose a novel framework to investigate static and dynamic upper-body gestures based on the Praxis test and their potential in a medical framework to automatize the test procedures for computer-assisted cognitive assessment of older adults.
In order to carry out gesture recognition as well as correctness assessment of the performances, we have collected a novel, challenging RGB-D gesture video dataset recorded with Kinect v2, which contains 29 specific gestures suggested by clinicians, recorded from both experts and patients performing the gesture set. Moreover, we propose a framework to learn the dynamics of upper-body gestures, considering the videos as sequences of short-term clips of gestures. Our approach first uses body part detection to extract image patches surrounding the hands and then, by means of a fine-tuned convolutional neural network (CNN) model, it learns deep hand features which are then fed to a long short-term memory (LSTM) network to capture the temporal dependencies between video frames.
We report the results of four developed methods using different modalities. The experiments show the effectiveness of our deep-learning-based approach in the gesture recognition and performance assessment tasks. Clinicians' satisfaction with the assessment reports indicates the framework's potential impact on diagnosis.
|
|
|
Karel Paleček, David Geronimo, & Frederic Lerasle. (2012). Pre-attention cues for person detection. In Cognitive Behavioural Systems, COST 2102 International Training School (pp. 225–235). LNCS. Springer Berlin Heidelberg.
Abstract: Current state-of-the-art person detectors have been proven reliable and achieve very good detection rates. However, their performance is often far from real time, which limits their use to low-resolution images only. In this paper, we deal with the candidate window generation problem for person detection, i.e. we want to reduce the computational complexity of a person detector by reducing the number of regions that have to be evaluated. We base our work on Alexe's paper [1], which introduced several pre-attention cues for generic object detection. We evaluate these cues in the context of person detection and show that their performance degrades rapidly for scenes containing multiple objects of interest, such as images of urban environments. We extend this set with new cues that better suit our class-specific task. The cues are designed to be simple and efficient, so that they can be used in the pre-attention phase of a more complex sliding-window-based person detector.
|
|
|
David Lloret, Antonio Lopez, & Joan Serrat. (1998). Precise registration of CT and MR volumes based on a new creaseness measure.
|
|
|
Jordi Roca, C. Alejandro Parraga, & Maria Vanrell. (2012). Predicting categorical colour perception in successive colour constancy. In Perception (Vol. 41, 138).
Abstract: Colour constancy is a perceptual mechanism that seeks to keep the colour of objects relatively stable under an illumination shift. Experiments have shown that its effects depend on the number of colours present in the scene. We studied categorical colour changes under different adaptation states, in particular, whether the colour categories seen under a chromatically neutral illuminant are the same after a shift in the chromaticity of the illumination. To do this, we developed the chromatic setting paradigm (2011, Journal of Vision, 11, 349), which is an extension of achromatic setting to colour categories. The paradigm exploits the ability of subjects to reliably reproduce the most representative examples of each category, adjusting multiple test patches embedded in a coloured Mondrian. Our experiments were run on a CRT monitor (inside a dark room) under various simulated illuminants, restricting the number of colours of the Mondrian background to three, thus weakening the adaptation effect. Our results show a change in the colour categories present before (under neutral illumination) and after adaptation (under coloured illuminants), with a tendency for adapted colours to be less saturated than before adaptation. This behaviour was predicted by a simple affine matrix model, adjusted to the chromatic setting results.
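A simple affine matrix model of the kind the abstract describes can be fitted by least squares: given matched pre- and post-adaptation colour coordinates, solve for a 3×3 matrix plus offset. This sketch is illustrative; the data layout (rows = colours, columns = channels) and function name are assumptions, not the paper's code.

```python
import numpy as np

def fit_affine_colour_model(pre, post):
    """Fit post ≈ pre @ M + b by ordinary least squares.
    pre, post: (n, 3) arrays of matched colour coordinates."""
    n = pre.shape[0]
    X = np.hstack([pre, np.ones((n, 1))])  # append a 1s column for the offset
    coef, *_ = np.linalg.lstsq(X, post, rcond=None)
    M, b = coef[:-1], coef[-1]             # 3x3 matrix and length-3 offset
    return M, b
```

With the fitted `(M, b)`, the category shift predicted for any pre-adaptation colour is just `colour @ M + b`.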
|
|
|
Mario Rojas, David Masip, & Jordi Vitria. (2011). Predicting Dominance Judgements Automatically: A Machine Learning Approach. In IEEE International Workshop on Social Behavior Analysis (pp. 939–944).
Abstract: The number of multimodal devices that surround us is growing every day. In this context, human interaction and communication have become a focus of attention and a hot topic of research. A crucial element in human relations is the evaluation of individuals with respect to facial traits, what is called a first impression. Appearance-based studies have suggested that personality can be expressed by appearance and that the observer may use such information to form judgments. In the context of rapid facial evaluation, certain personality traits seem to have a more pronounced effect on the relations and perceptions inside groups. The perception of dominance has been shown to be an active part of social roles at different stages of life, and even to play a part in mate selection. The aim of this paper is to study to what extent this information is learnable from the point of view of computer science. Specifically, we intend to determine whether judgments of dominance can be learned by machine learning techniques. We implement two different descriptors to assess this: the first is the histogram of oriented gradients (HOG), and the second is a probabilistic appearance descriptor based on the frequencies of grouped binary tests. State-of-the-art classification rules validate the performance of both descriptors with respect to the prediction task. Experimental results show that machine learning techniques can predict judgments of dominance rather accurately (accuracies up to 90%) and that the HOG descriptor appropriately characterizes the information necessary for this task.
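The HOG descriptor used above is a standard construction: per-cell histograms of gradient orientations, weighted by gradient magnitude. A minimal numpy sketch (illustrative only, not the authors' implementation; block normalization across neighbouring cells is omitted for brevity):

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Minimal histogram-of-oriented-gradients: for each cell x cell
    patch, accumulate a bins-bin histogram of unsigned gradient
    orientations in [0, 180), weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))      # gradients along y then x
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            hist = hist / (np.linalg.norm(hist) + 1e-6)  # per-cell L2 norm
            feats.append(hist)
    return np.concatenate(feats)
```

For a 16×16 patch with 8-pixel cells and 9 bins this yields a 36-dimensional vector, which would then be fed to a classifier.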
|
|
|
Carme Julia, Angel Sappa, Felipe Lumbreras, Joan Serrat, & Antonio Lopez. (2009). Predicting Missing Ratings in Recommender Systems: Adapted Factorization Approach. International Journal of Electronic Commerce, 14(1), 89–108.
Abstract: The paper presents a factorization-based approach to make predictions in recommender systems. These systems are widely used in electronic commerce to help customers find products according to their preferences. Taking into account the customer's ratings of some products available in the system, the recommender system tries to predict the ratings the customer would give to other products in the system. The proposed factorization-based approach uses all the information provided to compute the predicted ratings, in the same way as approaches based on Singular Value Decomposition (SVD). The main advantage of this technique versus SVD-based approaches is that it can deal with missing data. It also has a smaller computational cost. Experimental results with public data sets are provided to show that the proposed adapted factorization approach gives better predicted ratings than a widely used SVD-based approach.
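The key property claimed above, a factorization that fits only the observed ratings rather than requiring a complete matrix as plain SVD does, can be illustrated with alternating regularized least squares over a mask of observed entries. This is a generic sketch of the missing-data-aware factorization idea, not the paper's exact algorithm; names and the regularization weight are assumptions.

```python
import numpy as np

def factorize(R, mask, rank=2, iters=50, lam=0.1):
    """Low-rank factorization R ≈ U @ V.T fitted only where mask == 1,
    by alternating regularized least squares on rows of U and V."""
    m, n = R.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    reg = lam * np.eye(rank)
    for _ in range(iters):
        for i in range(m):                       # update each user factor
            obs = mask[i] == 1
            A = V[obs].T @ V[obs] + reg
            U[i] = np.linalg.solve(A, V[obs].T @ R[i, obs])
        for j in range(n):                       # update each item factor
            obs = mask[:, j] == 1
            A = U[obs].T @ U[obs] + reg
            V[j] = np.linalg.solve(A, U[obs].T @ R[obs, j])
    return U, V
```

Missing ratings are then predicted as the corresponding entries of `U @ V.T`; no imputation of the unobserved cells is ever needed during fitting.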
|
|
|
Naila Murray. (2012). Predicting Saliency and Aesthetics in Images: A Bottom-up Perspective (Xavier Otazu, & Maria Vanrell, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: In Part 1 of the thesis, we hypothesize that salient and non-salient image regions can be estimated to be the regions which are enhanced or assimilated in standard low-level color image representations. We prove this hypothesis by adapting a low-level model of color perception into a saliency estimation model. This model shares the three main steps found in many successful models for predicting attention in a scene: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. For such models, integrating spatial information and justifying the choice of various parameter values remain open problems. Our saliency model inherits a principled selection of parameters as well as an innate spatial pooling mechanism from the perception model on which it is based. This pooling mechanism has been fitted using psychophysical data acquired in color-luminance setting experiments. The proposed model outperforms the state-of-the-art at the task of predicting eye-fixations from two datasets. After demonstrating the effectiveness of our basic saliency model, we introduce an improved image representation, based on geometrical grouplets, that enhances complex low-level visual features such as corners and terminations, and suppresses relatively simpler features such as edges. With this improved image representation, the performance of our saliency model in predicting eye-fixations increases for both datasets.
In Part 2 of the thesis, we investigate the problem of aesthetic visual analysis. While a great deal of research has been conducted on hand-crafting image descriptors for aesthetics, little attention so far has been dedicated to the collection, annotation and distribution of ground truth data. Because image aesthetics is complex and subjective, existing datasets, which have few images and few annotations, have significant limitations. To address these limitations, we have introduced a new large-scale database for conducting Aesthetic Visual Analysis, which we call AVA. AVA contains more than 250,000 images, along with a rich variety of annotations. We investigate how the wealth of data in AVA can be used to tackle the challenge of understanding and assessing visual aesthetics by looking into several problems relevant for aesthetic analysis. We demonstrate that by leveraging the data in AVA, and using generic low-level features such as SIFT and color histograms, we can exceed state-of-the-art performance in aesthetic quality prediction tasks.
Finally, we entertain the hypothesis that low-level visual information in our saliency model can also be used to predict visual aesthetics by capturing local image characteristics such as feature contrast, grouping and isolation, characteristics thought to be related to universal aesthetic laws. We use the weighted center-surround responses that form the basis of our saliency model to create a feature vector that describes aesthetics. We also introduce a novel color space for fine-grained color representation. We then demonstrate that the resultant features achieve state-of-the-art performance on aesthetic quality classification.
As such, a promising contribution of this thesis is to show that several vision experiences – low-level color perception, visual saliency and visual aesthetics estimation – may be successfully modeled using a unified framework. This suggests a similar architecture in area V1 for both color perception and saliency and adds evidence to the hypothesis that visual aesthetics appreciation is driven in part by low-level cues.
|
|
|
Guillermo Torres, Jan Rodríguez Dueñas, Sonia Baeza, Antoni Rosell, Carles Sanchez, & Debora Gil. (2023). Prediction of Malignancy in Lung Cancer using several strategies for the fusion of Multi-Channel Pyradiomics Images. In 7th Workshop on Digital Image Processing for Medical and Automotive Industry in the framework of SYNASC 2023.
Abstract: This study shows the generation process and the subsequent study of the representation space obtained by extracting GLCM texture features from computed tomography (CT) scans of pulmonary nodules (PN). For this, data from 92 patients from the Germans Trias i Pujol University Hospital were used. The workflow focuses on feature extraction using Pyradiomics and the VGG16 Convolutional Neural Network (CNN). The aim of the study is to assess whether the data obtained have a positive impact on the diagnosis of lung cancer (LC). To design a machine learning (ML) model training method that allows generalization, we train SVM and neural network (NN) models, evaluating diagnosis performance using metrics defined at the slice and nodule levels.
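The GLCM texture features mentioned above count how often pairs of gray levels co-occur at a fixed pixel offset, then summarize that joint distribution. A minimal numpy sketch of one offset and two classic Haralick-style statistics (illustrative of the kind of features Pyradiomics computes; the quantization scheme and feature choice here are assumptions):

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for offset (dx, dy), plus
    contrast and homogeneity computed from its joint probabilities."""
    # Quantize intensities to `levels` gray levels.
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()  # joint probability of co-occurring gray levels
    i, j = np.indices((levels, levels))
    contrast = float(np.sum(glcm * (i - j) ** 2))          # local variation
    homogeneity = float(np.sum(glcm / (1.0 + np.abs(i - j))))  # closeness to diagonal
    return contrast, homogeneity
```

A perfectly uniform region gives contrast 0 and homogeneity 1; heterogeneous nodule textures push contrast up, which is why such features carry diagnostic signal.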
|
|
|
Cristina Cañero, Fernando Vilariño, & Petia Radeva. (2002). Predictive (un)distortion model and 3D Reconstruction by Biplane Snakes. IEEE Transactions on Medical Imaging.
|
|
|
Cristina Cañero, & Petia Radeva. (2002). Predictive (un)distortion model for 3D reconstruction purposes.
|
|
|
Matthias S. Keil, Agata Lapedriza, David Masip, & Jordi Vitria. (2008). Preferred Spatial Frequencies for Human Face Processing Are Associated with Optimal Class Discrimination in the Machine. PLoS ONE, 3(7), e2590. doi:10.1371/journal.pone.0002590.
|
|