|
Senmao Li, Joost Van de Weijer, Yaxing Wang, Fahad Shahbaz Khan, Meiqin Liu, & Jian Yang. (2023). 3D-Aware Multi-Class Image-to-Image Translation with NeRFs. In 36th IEEE Conference on Computer Vision and Pattern Recognition (pp. 12652–12662).
Abstract: Recent advances in 3D-aware generative models (3D-aware GANs) combined with Neural Radiance Fields (NeRF) have achieved impressive results. However, no prior work investigates 3D-aware GANs for 3D-consistent multi-class image-to-image (3D-aware I2I) translation. Naively using 2D-I2I translation methods suffers from unrealistic shape/identity change. To perform 3D-aware multi-class I2I translation, we decouple this learning process into a multi-class 3D-aware GAN step and a 3D-aware I2I translation step. In the first step, we propose two novel techniques: a new conditional architecture and an effective training strategy. In the second step, based on the well-trained multi-class 3D-aware GAN architecture, which preserves view-consistency, we construct a 3D-aware I2I translation system. To further reduce the view-consistency problems, we propose several new techniques, including a U-net-like adaptor network design, a hierarchical representation constraint and a relative regularization loss. In extensive experiments on two datasets, quantitative and qualitative results demonstrate that we successfully perform 3D-aware I2I translation with multi-view consistency. Code is available in 3DI2I.
|
|
|
Hugo Bertiche, Niloy J Mitra, Kuldeep Kulkarni, Chun Hao Paul Huang, Tuanfeng Y Wang, Meysam Madadi, et al. (2023). Blowing in the Wind: CycleNet for Human Cinemagraphs from Still Images. In 36th IEEE Conference on Computer Vision and Pattern Recognition (pp. 459–468).
Abstract: Cinemagraphs are short looping videos created by adding subtle motions to a static image. This kind of media is popular and engaging. However, automatic generation of cinemagraphs is an underexplored area and current solutions require tedious low-level manual authoring by artists. In this paper, we present an automatic method that allows generating human cinemagraphs from single RGB images. We investigate the problem in the context of dressed humans under the wind. At the core of our method is a novel cyclic neural network that produces looping cinemagraphs for the target loop duration. To circumvent the problem of collecting real data, we demonstrate that it is possible, by working in the image normal space, to learn garment motion dynamics on synthetic data and generalize to real data. We evaluate our method on both synthetic and real data and demonstrate that it is possible to create compelling and plausible cinemagraphs from single RGB images.
|
|
|
Hassan Ahmed Sial, S. Sancho, Ramon Baldrich, Robert Benavente, & Maria Vanrell. (2018). Color-based data augmentation for Reflectance Estimation. In 26th Color Imaging Conference (pp. 284–289).
Abstract: Deep convolutional architectures have proven to be successful frameworks for solving generic computer vision problems. The estimation of intrinsic reflectance from a single image is not yet a solved problem. Encoder-decoder architectures are a natural approach for pixel-wise reflectance estimation, although they usually suffer from the lack of large datasets. Lack of data can be partially solved with data augmentation; however, the usual techniques focus on geometric changes, which do not help for reflectance estimation. In this paper we propose a color-based data augmentation technique that extends the training data by increasing the variability of chromaticity. Rotation on the red-green/blue-yellow plane of an opponent color space enables us to enlarge the training set in a coherent and sound way that improves the network's generalization capability for reflectance estimation. We perform experiments on the Sintel dataset showing that our color-based augmentation increases performance and overcomes one of the state-of-the-art methods.
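The chromatic rotation described in the abstract can be sketched as follows; this is a minimal numpy illustration, where the RGB-to-opponent matrix and the uniform angle sampling are assumptions for the sketch, not necessarily the paper's exact choices:

```python
import numpy as np

# One common RGB -> opponent-space transform (O1: red-green, O2: blue-yellow,
# O3: luminance). Illustrative choice, not necessarily the paper's matrix.
RGB2OPP = np.array([
    [1/np.sqrt(2), -1/np.sqrt(2),  0.0],
    [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
    [1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],
])

def rotate_chromaticity(rgb, theta):
    """Rotate the chromatic (O1, O2) plane by `theta` radians,
    leaving the luminance channel (O3) untouched."""
    opp = rgb @ RGB2OPP.T
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (opp @ rot.T) @ np.linalg.inv(RGB2OPP).T

# Augmentation: generate chromatic variants of one image by sampling angles.
img = np.random.rand(4, 4, 3)
variants = [rotate_chromaticity(img.reshape(-1, 3), t).reshape(img.shape)
            for t in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
```

Because only the chromatic plane is rotated, each variant keeps the original per-pixel luminance, which is what makes the augmentation "coherent" for reflectance estimation.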
|
|
|
Yael Tudela, Ana Garcia Rodriguez, Gloria Fernandez Esparrach, & Jorge Bernal. (2023). Towards Fine-Grained Polyp Segmentation and Classification. In Workshop on Clinical Image-Based Procedures (Vol. 14242, pp. 32–42). LNCS.
Abstract: Colorectal cancer is one of the main causes of cancer death worldwide. Colonoscopy is the gold-standard screening tool, as it allows lesion detection and removal during the same procedure. During the last decades, several efforts have been made to develop CAD systems to assist clinicians in lesion detection and classification. Regarding the latter, and in order to be used in the exploration room as part of resect-and-discard or leave-in-situ strategies, these systems must correctly identify all the different lesion types. This is a challenging task, as the data used to train these systems presents great inter-class similarity, high class imbalance, and low representation of clinically relevant histology classes such as sessile serrated adenomas.
In this paper, a new polyp segmentation and classification method, Swin-Expand, is introduced. Based on Swin-Transformer, it uses a simple and lightweight decoder. The performance of this method has been assessed on a novel dataset comprising 1126 high-definition images representing the three main histological classes. Results show a clear improvement in both segmentation and classification performance, also achieving competitive results when tested on public datasets. These results confirm that both the method and the data are important to obtain more accurate polyp representations.
Keywords: Medical image segmentation; Colorectal Cancer; Vision Transformer; Classification
|
|
|
Petia Radeva, Maya Dimitrova, Ch. Roumenin, David Rotger, D. Nikolov, & Juan J. Villanueva. (2004). Integration of Multiple Sensor Modalities in ActiveVessel Cardiology Workstation.
|
|
|
P. Andreeva, Maya Dimitrova, & Petia Radeva. (2004). Data Mining Learning Models and Algorithms for Medical Applications. In 18th Conference Systems for Automation of Engineering and Research (SEAR 2004).
|
|
|
Maya Dimitrova, I. Terziev, Petia Radeva, & Juan J. Villanueva. (2004). Java-Servlet Technology for Building New Web Document Classifiers.
|
|
|
Maya Dimitrova, Petia Radeva, David Rotger, D. Boyadjiev, & Juan J. Villanueva. (2004). Advanced Cardiological Diagnosis via Intelligent Image Analysis.
|
|
|
Sergio Escalera, Oriol Pujol, Eric Laciar, Jordi Vitria, Esther Pueyo, & Petia Radeva. (2008). Coronary Damage Classification of Patients with the Chagas Disease with Error-Correcting Output Codes. In Intelligent Systems, 4th International IEEE Conference, 6–8 September 2008 (Vol. 2, pp. 12–17).
Abstract: The Chagas' disease is endemic in all of Latin America, affecting millions of people on the continent. In order to diagnose and treat the Chagas' disease, it is important to detect and measure the coronary damage of the patient. In this paper, we analyze and categorize patients into different groups based on the coronary damage produced by the disease. Based on the features of the heart cycle extracted using high-resolution ECG, a multi-class scheme of error-correcting output codes (ECOC) is formulated and successfully applied. The results show that the proposed scheme obtains significant performance improvements compared to previous works and state-of-the-art ECOC designs.
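The ECOC decoding step mentioned in the abstract can be illustrated with a toy sketch; the 3-class code matrix and the Hamming-distance decoding below are illustrative assumptions, and the paper's actual coding design may differ:

```python
import numpy as np

# Toy ECOC setup: each class is assigned a binary codeword (one row of CODE);
# each column corresponds to one binary classifier (dichotomizer).
# This 3-class, 3-classifier matrix is illustrative only.
CODE = np.array([
    [ 1,  1,  1],   # class 0
    [-1,  1, -1],   # class 1
    [-1, -1,  1],   # class 2
])

def ecoc_decode(outputs, code=CODE):
    """Return the class whose codeword has minimum Hamming distance to the
    sign-binarized vector of dichotomizer outputs."""
    dists = (code != np.sign(outputs)).sum(axis=1)
    return int(np.argmin(dists))

# A sample whose three classifiers all respond positively decodes to class 0.
predicted = ecoc_decode(np.array([0.9, 0.7, 0.8]))
```

The appeal of ECOC here is redundancy: a well-separated code matrix lets the decoder recover the correct class even when some of the individual binary classifiers are wrong.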
|
|
|
Xose M. Pardo, Petia Radeva, & Juan J. Villanueva. (1999). Self-Training Statistic Snake for Image Segmentation and Tracking.
|
|
|
Miquel Ferrer, Dimosthenis Karatzas, Ernest Valveny, & Horst Bunke. (2009). A Recursive Embedding Approach to Median Graph Computation. In 7th IAPR TC-15 Workshop on Graph-Based Representations in Pattern Recognition (Vol. 5534, pp. 113–123). LNCS. Springer Berlin Heidelberg.
Abstract: The median graph has been shown to be a good choice to infer a representative of a set of graphs. It has been successfully applied to graph-based classification and clustering. Nevertheless, its computation is extremely complex. Several approaches have been presented up to now based on different strategies. In this paper we present a new approximate recursive algorithm for median graph computation based on graph embedding into vector spaces. Preliminary experiments on three databases show that this new approach is able to obtain better medians than the previous existing approaches.
|
|
|
Debora Gil, Jaume Garcia, Mariano Vazquez, Ruth Aris, & Guillaume Houzeaux. (2008). Patient-Sensitive Anatomic and Functional 3D Model of the Left Ventricle Function. In 8th World Congress on Computational Mechanics (WCCM8).
Abstract: Early diagnosis and accurate treatment of Left Ventricle (LV) dysfunction significantly increases patient survival. Impairment of LV contractility due to cardiovascular diseases is reflected in its motion patterns. Recent advances in medical imaging, such as Magnetic Resonance (MR), have encouraged research on 3D simulation and modelling of the LV dynamics. Most of the existing 3D models [1] consider just the gross anatomy of the LV and restore a truncated ellipse which deforms along the cardiac cycle. The contraction mechanics of any muscle strongly depends on the spatial orientation of its muscular fibers, since the motion that the muscle undergoes mainly takes place along the fibers. It follows that such simplified models do not allow evaluation of the heart electro-mechanical function and coupling, which has recently emerged as the key point for understanding the LV functionality [2]. In order to thoroughly understand the LV mechanics it is necessary to consider the complete anatomy of the LV given by the orientation of the myocardial fibers in 3D space as described by Torrent Guasp [3].
We propose developing a 3D patient-sensitive model of the LV integrating, for the first time, the ventricular band anatomy (fiber orientation), the LV gross anatomy and its functionality. Such a model will represent the LV function as a natural consequence of its own ventricular band anatomy. This might be decisive in restoring a proper LV contraction in patients undergoing pacemaker treatment.
The LV function is defined as soon as the propagation of the contractile electromechanical pulse has been modelled. In our experiments we have used the wave equation for the propagation of the electric pulse. The electromechanical wave moves on the myocardial surface and should have a conductivity tensor oriented along the muscular fibers. Thus, whatever mathematical model for electric pulse propagation [4] we consider, the complete anatomy of the LV should be extracted.
The LV gross anatomy is obtained by processing multi-slice MR images recorded for each patient. Information about the myocardial fiber distribution can only be extracted by Diffusion Tensor Imaging (DTI), which cannot provide in vivo information for each patient. As a first approach, we have computed an average model of fibers from several DTI studies of canine hearts. This rough anatomy is the input for our electro-mechanical propagation model simulating LV dynamics. The average fiber orientation is updated until the simulated LV motion agrees with the experimental evidence provided by the LV motion observed in tagged MR (TMR) sequences. Experimental LV motion is recovered by applying image processing, differential geometry and interpolation techniques to 2D TMR slices [5]. The pipeline in figure 1 outlines the interaction between simulations and experimental data leading to our patient-tailored model.
Figure 1: Scheme for the Left Ventricle Patient-Sensitive Model.
Keywords: Left Ventricle, Electromechanical Models, Image Processing, Magnetic Resonance.
|
|
|
Aitor Alvarez-Gila, Joost Van de Weijer, & Estibaliz Garrote. (2017). Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB. In 1st International Workshop on Physics Based Vision meets Deep Learning.
Abstract: Hyperspectral signal reconstruction aims at recovering the original spectral input that produced a certain trichromatic (RGB) response from a capturing device or observer. Given the heavily underconstrained, non-linear nature of the problem, traditional techniques leverage different statistical properties of the spectral signal in order to build informative priors from real-world object reflectances for constructing such an RGB-to-spectral signal mapping. However, most of them treat each sample independently, and thus do not benefit from the contextual information that the spatial dimensions can provide. We pose hyperspectral natural image reconstruction as an image-to-image mapping learning problem, and apply a conditional generative adversarial framework to help capture spatial semantics. This is the first time Convolutional Neural Networks, and particularly Generative Adversarial Networks, are used to solve this task. Quantitative evaluation shows a Root Mean Squared Error (RMSE) drop of 44.7% and a Relative RMSE drop of 47.0% on the ICVL natural hyperspectral image dataset.
|
|
|
Ivet Rafegas, & Maria Vanrell. (2017). Color representation in CNNs: parallelisms with biological vision. In ICCV Workshop on Mutual Benefits of Cognitive and Computer Vision.
Abstract: Convolutional Neural Networks (CNNs) trained for object recognition tasks present representational capabilities approaching those of primate visual systems [1]. This provides a computational framework to explore how image features are efficiently represented. Here, we dissect a trained CNN [2] to study how color is represented. We use a classical methodology from physiology: measuring the selectivity index of individual neurons to specific features. We use ImageNet dataset [20] images and synthetic versions of them to quantify the color tuning properties of artificial neurons and provide a classification of the network population. We conclude three main levels of color representation showing some parallelisms with biological visual systems: (a) a decomposition in a circular hue space to represent single color regions with a wider hue sampling beyond the first layer (V2), (b) the emergence of opponent low-dimensional spaces in early stages to represent color edges (V1); and (c) a strong entanglement between color and shape patterns representing object parts (e.g. the wheel of a car), object shapes (e.g. faces) or object-surround configurations (e.g. blue sky surrounding an object) in deeper layers (V4 or IT).
|
|
|
Leonardo Galteri, Dena Bazazian, Lorenzo Seidenari, Marco Bertini, Andrew Bagdanov, Anguelos Nicolaou, et al. (2017). Reading Text in the Wild from Compressed Images. In 1st International workshop on Egocentric Perception, Interaction and Computing.
Abstract: Reading text in the wild is gaining attention in the computer vision community. Images captured in the wild are almost always compressed to varying degrees, depending on application context, and this compression introduces artifacts that distort image content in the captured images. In this paper we investigate the impact these compression artifacts have on text localization and recognition in the wild. We also propose a deep Convolutional Neural Network (CNN) that can eliminate text-specific compression artifacts and which leads to an improvement in text recognition. Experimental results on the ICDAR-Challenge4 dataset demonstrate that compression artifacts have a significant impact on text localization and recognition, and that our approach yields an improvement in both, especially at high compression rates.
|
|