Albert Rial-Farras, Meysam Madadi, & Sergio Escalera. (2021). UV-based reconstruction of 3D garments from a single RGB image. In 16th IEEE International Conference on Automatic Face and Gesture Recognition (pp. 1–8).
Abstract: Garments are highly detailed and dynamic objects made up of particles that interact with each other and with other objects, making the task of 2D to 3D garment reconstruction extremely challenging. Therefore, having a lightweight 3D representation capable of modelling fine details is of great importance. This work presents a deep learning framework based on Generative Adversarial Networks (GANs) to reconstruct 3D garment models from a single RGB image. It has the peculiarity of using UV maps to represent 3D data, a lightweight representation capable of dealing with high-resolution details and wrinkles. With this model and kind of 3D representation, we achieve state-of-the-art results on the CLOTH3D++ dataset, generating good quality and realistic garment reconstructions regardless of the garment topology and shape, human pose, occlusions and lighting.
|
Joan Arnedo-Moreno, D. Bañeres, Xavier Baro, S. Caballe, S. Guerrero, L. Porta, et al. (2014). ValID: A trust-based virtual assessment system. In 6th International Conference on Intelligent Networking and Collaborative Systems (pp. 328–335).
Abstract: Even though online education is a very important pillar of lifelong education, institutions are still reluctant to commit to a fully online educational model. In the end, they keep relying on on-site assessment systems, mainly because fully virtual alternatives do not have the deserved social recognition or credibility. Thus, the design of virtual assessment systems that are able to provide effective proof of student authenticity and authorship and of the integrity of the activities, in a scalable and cost-efficient manner, would be very helpful. This paper presents ValID, a virtual assessment approach based on a continuous trust level evaluation between students and the institution. The current trust level serves as the main mechanism to dynamically decide which kind of controls a given student should be subjected to, across different courses in a degree. The main goal is to provide a fair trade-off between security, scalability and cost, while maintaining the perceived quality of the educational model.
|
Jose Ramirez Moreno, Juan R Revilla, Miguel Reyes, & Sergio Escalera. (2016). Validación del Software ADIBAS asociado al sensor Kinect de Microsoft para la evaluación de la posición corporal. In 4th Congreso WCPT-SAR.
|
Ferran Poveda, Jaume Garcia, Enric Marti, & Debora Gil. (2010). Validation of the myocardial architecture in DT-MRI tractography. In Medical Image Computing in Catalunya: Graduate Student Workshop (pp. 29–30). Girona (Spain).
Abstract: Deep understanding of myocardial structure may help to link form and function of the heart, unraveling crucial knowledge for medical and surgical clinical procedures and studies. In this work we introduce two visualization techniques based on DT-MRI streamlining that are able to decipher interesting properties of the architectural organization of the heart.
|
Jaume Garcia, Debora Gil, Sandra Pujades, & Francesc Carreras. (2008). Valoracion de la Funcion del Ventriculo Izquierdo mediante Modelos Regionales Hiperparametricos. Revista Española de Cardiologia, 61(3), 79.
Abstract: Most cardiovascular diseases affect the contractile properties of the helical ventricular band. This is reflected in a deviation from the normal behavior of ventricular function. Local parameters such as strains, i.e. the deformation undergone by the tissue, are indicators capable of detecting functional anomalies in specific territories. These parameters are often considered separately. In this work we present a computational framework (the Normalized Parametric Domain, NPD) that allows them to be integrated into functional hyperparameters and their normality ranges to be studied. These ranges allow an objective assessment of the regional function of any new patient. To this end, we consider tagged magnetic resonance sequences at the basal, mid and apical levels. The hyperparameters are obtained from the intramural motion of the LV estimated with the Harmonic Phase Flow method. The NPD is defined from a parameterization of the Left Ventricle (LV) in its radial and circumferential coordinates based on anatomical criteria. Mapping the hyperparameters onto the NPD makes comparison between different patients possible. Normality ranges are defined by statistical analysis of values from healthy volunteers in 45 regions of the NPD over 9 systolic phases. A set of 19 healthy volunteers (14 men; age: 30.7±7.5) was used to create the normality patterns, which were validated using 2 healthy controls and 3 patients affected by reduced global contractility. For the controls, the regional results fell within normality, while for the patients abnormal values were obtained in the described areas, thus localizing and quantifying the empirical diagnosis.
|
Fei Yang, Luis Herranz, Joost Van de Weijer, Jose Antonio Iglesias, Antonio Lopez, & Mikhail Mozerov. (2020). Variable Rate Deep Image Compression with Modulated Autoencoder. SPL - IEEE Signal Processing Letters, 27, 331–335.
Abstract: Variable rate is a requirement for flexible and adaptable image and video compression. However, deep image compression methods (DIC) are optimized for a single fixed rate-distortion (R-D) tradeoff. While this can be addressed by training multiple models for different tradeoffs, the memory requirements increase proportionally to the number of models. Scaling the bottleneck representation of a shared autoencoder can provide variable rate compression with a single shared autoencoder. However, the R-D performance of this simple mechanism degrades at low bitrates, and the effective range of bitrates shrinks. To address these limitations, we formulate the problem of variable R-D optimization for DIC, and propose modulated autoencoders (MAEs), where the representations of a shared autoencoder are adapted to the specific R-D tradeoff via a modulation network. Jointly training this modulated autoencoder and the modulation network provides an effective way to navigate the R-D operational curve. Our experiments show that the proposed method achieves almost the same R-D performance as independent models with significantly fewer parameters.
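The modulation idea described in the abstract can be caricatured in a few lines. The sketch below is my own toy illustration, not the paper's learned architecture (which modulates internal autoencoder features with a trained modulation network); rounding stands in for the learned quantizer. It shows how one shared latent code can serve several R-D tradeoffs:

```python
import numpy as np

def modulated_code(z, m):
    """Adapt a shared latent code z to one R-D tradeoff by elementwise
    modulation, then quantize (np.round stands in for the learned
    quantizer/entropy model). Larger m preserves more precision."""
    return np.round(z * m)

def demodulate(q, m, eps=1e-8):
    """Undo the modulation before the shared decoder sees the code."""
    return q / (m + eps)

z = np.array([1.2, -0.7])
coarse = modulated_code(z, np.array([1.0, 1.0]))  # low rate, heavy quantization
fine = modulated_code(z, np.array([8.0, 8.0]))    # high rate, finer quantization
```

With a single pair of shared transforms, only the small modulation vectors need to be stored per tradeoff, which is where the parameter savings over independent models come from.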
|
Daniel Ponsa, & Antonio Lopez. (2009). Variance reduction techniques in particle-based visual contour tracking. PR - Pattern Recognition, 42(11), 2372–2391.
Abstract: This paper presents a comparative study of three different strategies to improve the performance of particle filters in the context of visual contour tracking: the unscented particle filter, the Rao-Blackwellized particle filter, and the partitioned sampling technique. The tracking problem analyzed is the joint estimation of the global and local transformation of the outline of a given target, represented following the active shape model approach. The main contributions of the paper are the novel adaptations of the considered techniques to this generic problem, and the quantitative assessment of their performance in the extensive experimental work carried out.
Keywords: Contour tracking; Active shape models; Kalman filter; Particle filter; Importance sampling; Unscented particle filter; Rao-Blackwellization; Partitioned sampling
|
Joan M. Nuñez. (2015). Vascular Pattern Characterization in Colonoscopy Images (Fernando Vilariño, Ed.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Colorectal cancer is the third most common cancer worldwide and the second most common malignant tumor in Europe. Screening tests have shown to be very effective in increasing survival rates since they allow an early detection of polyps. Among the different screening techniques, colonoscopy is considered the gold standard, although clinical studies mention several problems that have an impact on the quality of the procedure. The navigation through the rectum and colon track can be challenging for the physicians, which can increase polyp miss rates. The thorough visualization of the colon track must be ensured so that the chances of missing lesions are minimized. The visual analysis of colonoscopy images can provide important information to the physicians and support their navigation during the procedure.
Blood vessels and their branching patterns can provide descriptive power to potentially develop biometric markers. Anatomical markers based on blood vessel patterns could be used to identify a particular scene in colonoscopy videos and to support endoscope navigation by generating a sequence of ordered scenes through the different colon sections. By verifying the presence of vascular content in the endoluminal scene it is also possible to certify a proper inspection of the colon mucosa and to improve polyp localization. Considering the potential uses of blood vessel description, this contribution studies the characterization of the vascular content and the analysis of the descriptive power of its branching patterns.
Blood vessel characterization in colonoscopy images is shown to be a challenging task. The endoluminal scene is composed of several elements whose similar characteristics hinder the development of particular models for each of them. To overcome such difficulties we propose the use of the blood vessel branching characteristics as key features for pattern description. We present a model to characterize junctions in binary patterns. The implementation of the junction model allows us to develop a junction localization method. We created two data sets including manually labeled vessel information as well as manual ground truths of two types of keypoint landmarks: junctions and endpoints. The proposed method outperforms the available algorithms in the literature in experiments on both our newly created colon vessel data set and the DRIVE retinal fundus image data set. In the latter case, we created a manual ground truth of junction coordinates. Since we want to explore the descriptive potential of junctions and vessels, we propose a graph-based approach to create anatomical markers. In the context of polyp localization, we present a new method to inhibit the influence of blood vessels in the extraction of valley-profile information. The results show that our methodology decreases vessel influence, increases polyp information and leads to an improvement in state-of-the-art polyp localization performance. We also propose a polyp-specific segmentation method that outperforms other general and specific approaches.
|
Jaume Gibert. (2012). Vector Space Embedding of Graphs via Statistics of Labelling Information (Ernest Valveny, Ed.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Pattern recognition is the task that aims at distinguishing objects among different classes. When such a task is to be solved in an automatic way, a crucial step is how to formally represent the patterns to the computer. Based on the different representational formalisms, we may distinguish between statistical and structural pattern recognition. The former describes objects as a set of measurements arranged in the form of what is called a feature vector. The latter assumes that relations between parts of the underlying objects need to be explicitly represented, and thus it uses relational structures such as graphs for encoding their inherent information. Vector spaces are a very flexible mathematical structure that has enabled several efficient ways to analyse patterns in the form of feature vectors. Nevertheless, such a representation cannot explicitly cope with binary relations between parts of the objects, and it is restricted to measuring the exact same number of features for each pattern under study, regardless of their complexity. Graph-based representations present the opposite situation. They can easily adapt to the inherent complexity of the patterns, but introduce a problem of high computational complexity, hindering the design of efficient tools to process and analyse patterns.
Solving this paradox is the main goal of this thesis. The ideal situation for solving pattern recognition problems would be to represent the patterns using relational structures such as graphs, while still being able to use the rich repository of data processing tools from the statistical pattern recognition domain. An elegant solution to this problem is to transform the graph domain into a vector domain where any processing algorithm can be applied. In other words, by mapping each graph to a point in a vector space we automatically gain access to the rich set of algorithms from the statistical domain to be applied in the graph domain. This methodology is called graph embedding.
In this thesis we propose to associate feature vectors to graphs in a simple and very efficient way, by just paying attention to the labelling information that graphs store. In particular, we count frequencies of node labels and of edges between labels. Despite their local nature, these features are able to robustly represent structurally global properties of graphs when considered together in the form of a vector. We initially deal with the case of discrete attributed graphs, where features are easy to compute. The continuous case is tackled as a natural generalization of the discrete one, where rather than counting node and edge labelling instances, we count statistics of some representatives of them. We find that the proposed vectorial representations of graphs suffer from high dimensionality and correlation among components, and we address these problems with feature selection algorithms. We also explore how the diversity of different embedding representations can be exploited to boost the performance of base classifiers in a multiple classifier systems framework. An extensive experimental evaluation finally shows how the proposed methodology can be efficiently computed and competes with other graph matching and embedding methodologies.
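The counting scheme described above is simple enough to sketch directly. The function below is my own minimal illustration of the discrete attributed case (node-label frequencies plus label-pair edge frequencies), not code from the thesis:

```python
from itertools import combinations_with_replacement

def embed_graph(nodes, edges, labels):
    """Map a discretely attributed graph to a feature vector by counting
    node-label frequencies and label-pair edge frequencies.

    nodes:  dict mapping node id -> discrete label
    edges:  iterable of (u, v) undirected edges
    labels: ordered list of the possible labels (fixes vector layout)
    """
    # One component per label: how many nodes carry it.
    node_counts = [sum(1 for l in nodes.values() if l == lab) for lab in labels]
    # One component per unordered label pair: how many edges join them.
    pairs = list(combinations_with_replacement(labels, 2))
    edge_counts = [0] * len(pairs)
    for u, v in edges:
        key = tuple(sorted((nodes[u], nodes[v])))
        edge_counts[pairs.index(key)] += 1
    return node_counts + edge_counts

# A triangle-free toy graph a-a-b over labels {a, b}:
vec = embed_graph({1: 'a', 2: 'a', 3: 'b'}, [(1, 2), (2, 3)], ['a', 'b'])
```

Because the vector layout is fixed by the label set, any two graphs over the same labels map to points of the same dimension, which is what opens the door to standard statistical classifiers.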
|
Philippe Dosch, & Josep Llados. (2003). Vectorial Signatures for Symbol Discrimination.
|
Philippe Dosch, & Josep Llados. (2004). Vectorial Signatures for Symbol Discrimination.
|
Patricia Suarez, Angel Sappa, & Boris X. Vintimilla. (2018). Vegetation Index Estimation from Monospectral Images. In 15th International Conference on Image Analysis and Recognition (LNCS Vol. 10882, pp. 353–362).
Abstract: This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just the red channel of an RGB image. The NDVI index is defined as the ratio of the difference of the red and infrared radiances over their sum. In other words, information from the red channel of an RGB image and the corresponding infrared spectral band are required for its computation. In the current work the NDVI index is estimated just from the red channel by training a Conditional Generative Adversarial Network (CGAN). The architecture proposed for the generative network consists of a single level structure, which at the final layer combines the results of the convolutional operations with the given red channel plus Gaussian noise to enhance details, resulting in a sharp NDVI image. The discriminative model then estimates the probability that the generated NDVI index came from the training dataset rather than from the generator. Experimental results on a large set of real images are provided, showing that a Conditional GAN single-level model is an acceptable approach for estimating the NDVI index.
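The NDVI ratio quoted in the abstract is straightforward when both bands are available; the sketch below (function name and epsilon guard are my own, and it computes the standard definition rather than the paper's learned red-channel-only estimate) illustrates it:

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """NDVI = (NIR - Red) / (NIR + Red), per pixel, in [-1, 1].

    `eps` guards against division by zero on dark pixels.
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

# Healthy vegetation reflects strongly in the near-infrared and absorbs
# red light, so it produces values close to 1.
vegetation = ndvi([0.5], [0.1])   # ≈ 0.667
bare_soil = ndvi([0.3], [0.25])   # ≈ 0.09
```

The challenge the paper addresses is precisely that the NIR band in the numerator and denominator is unavailable in a plain RGB capture, which is why a generative model is trained to hallucinate the index from red-channel evidence alone.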
|
Ferran Diego, Daniel Ponsa, Joan Serrat, & Antonio Lopez. (2010). Vehicle geolocalization based on video synchronization. In 13th Annual International Conference on Intelligent Transportation Systems (pp. 1511–1516).
Abstract: This paper proposes a novel method for estimating the geospatial localization of a vehicle. It uses as input a georeferenced video sequence recorded by a forward-facing camera attached to the windscreen. The core of the proposed method is an on-line video synchronization which finds, in the georeferenced video sequence, the frame corresponding to the one recorded by the camera at each time instant during a second drive through the same track. Once the corresponding frame is found, we transfer its geospatial information. The key advantages of this method are: 1) the increase of the update rate and the geospatial accuracy with regard to a standard low-cost GPS, and 2) the ability to localize a vehicle even when a GPS is not available or not reliable enough, as in certain urban areas. Experimental results in urban environments are presented, showing an average relative accuracy of 1.5 meters.
Keywords: video alignment
|
Daniel Ponsa, & Antonio Lopez. (2007). Vehicle Trajectory Estimation based on Monocular Vision. In 3rd Iberian Conference on Pattern Recognition and Image Analysis, LNCS 4477 (pp. 587–594).
Keywords: vehicle detection
|
Muhammad Muzzamil Luqman, Thierry Brouard, Jean-Yves Ramel, & Josep Llados. (2010). Vers une approche floue d'encapsulation de graphes : application à la reconnaissance de symboles. In Colloque International Francophone sur l'Écrit et le Document (pp. 169–184).
Abstract: We present a new methodology for symbol recognition, employing a structural approach for representing visual associations in symbols and a statistical classifier for recognition. A graphic symbol is vectorized, its topological and geometrical details are encoded by an attributed relational graph, and a signature is computed for it. Data-adapted fuzzy intervals are introduced to address the sensitivity of structural representations to noise. The joint probability distribution of signatures is encoded by a Bayesian network, which serves as a mechanism for pruning irrelevant features and choosing a subset of interesting features from the structural signatures of the underlying symbol set, and is deployed in a supervised learning scenario for recognizing query symbols. Experimental results on pre-segmented 2D linear architectural and electronic symbols from the GREC databases are presented.
Keywords: Fuzzy interval; Graph embedding; Bayesian network; Symbol recognition
|