|
Yaxing Wang, L. Zhang, & Joost Van de Weijer. (2016). Ensembles of generative adversarial networks. In 30th Annual Conference on Neural Information Processing Systems Workshops.
Abstract: Ensembles are a popular way to improve the results of discriminative CNNs. The combination of several networks trained from different initializations improves results significantly. In this paper we investigate the use of ensembles of GANs. The specific nature of GANs opens up several new ways to construct ensembles. The first is based on the fact that, in the minimax game played to optimize the GAN objective, the generator network keeps changing even after it can be considered optimal. Ensembles of GANs can therefore be constructed from the same network initialization, simply by taking models after different numbers of training iterations. These so-called self-ensembles are much faster to train than traditional ensembles. The second method, called cascade GANs, redirects the part of the training data that is badly modeled by the first GAN to another GAN. In experiments on the CIFAR10 dataset we show that ensembles of GANs obtain model probability distributions which better match the data distribution. In addition, we show that these improved results can be obtained at little additional computational cost.
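As a hedged illustration of the self-ensemble idea (not the authors' code), the PyTorch sketch below draws samples uniformly from several checkpoints of a single GAN training run; `generator_cls`, `checkpoint_paths` and `latent_dim` are hypothetical placeholders.

```python
import torch

def sample_self_ensemble(generator_cls, checkpoint_paths, n_samples, latent_dim=100):
    """Draw samples uniformly across generator checkpoints of a single training run."""
    samples = []
    per_model = n_samples // len(checkpoint_paths)
    for path in checkpoint_paths:
        g = generator_cls()                                    # same architecture for every checkpoint
        g.load_state_dict(torch.load(path, map_location="cpu"))
        g.eval()
        with torch.no_grad():
            z = torch.randn(per_model, latent_dim)
            samples.append(g(z))
    # The self-ensemble's model distribution is the uniform mixture of the
    # per-checkpoint generator distributions.
    return torch.cat(samples, dim=0)
```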
|
|
|
Guim Perarnau, Joost Van de Weijer, Bogdan Raducanu, & Jose Manuel Alvarez. (2016). Invertible Conditional GANs for image editing. In 30th Annual Conference on Neural Information Processing Systems Workshops.
Abstract: Generative Adversarial Networks (GANs) have recently been shown to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information makes it possible to determine specific representations of the generated images. In this work, we evaluate encoders to invert the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation. This makes it possible, for example, to reconstruct and modify real images of faces conditioned on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables re-generating real images with deterministic complex modifications.
|
|
|
Fahad Shahbaz Khan, Joost Van de Weijer, Andrew Bagdanov, & Maria Vanrell. (2011). Portmanteau Vocabularies for Multi-Cue Image Representation. In 25th Annual Conference on Neural Information Processing Systems.
Abstract: We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent, joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau words is compact, has the cue binding property, and supports individual weighting of cues in the final image representation. State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation.
|
|
|
Abel Gonzalez-Garcia, Joost Van de Weijer, & Yoshua Bengio. (2018). Image-to-image translation for cross-domain disentanglement. In 32nd Annual Conference on Neural Information Processing Systems.
|
|
|
Chenshen Wu, Luis Herranz, Xialei Liu, Joost Van de Weijer, & Bogdan Raducanu. (2018). Memory Replay GANs: Learning to Generate New Categories without Forgetting. In 32nd Annual Conference on Neural Information Processing Systems (pp. 5966–5976).
Abstract: Previous works on sequential learning address the problem of forgetting in discriminative models. In this paper we consider the case of generative models. In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion. We first show that sequential fine-tuning renders the network unable to properly generate images from previous categories (i.e., forgetting). To address this problem, we propose Memory Replay GANs (MeRGANs), a conditional GAN framework that integrates a memory replay generator. We study two methods to prevent forgetting by leveraging these replays, namely joint training with replay and replay alignment. Qualitative and quantitative experimental results on the MNIST, SVHN and LSUN datasets show that our memory replay approach can generate competitive images while significantly mitigating the forgetting of previous categories.
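A minimal sketch of the "joint training with replay" idea described above, under assumed interfaces (it is not the MeRGAN implementation): a frozen copy of the conditional generator replays previous categories while the current model trains on the new one. `train_gan_step`, the conditional generator call signature and the label handling are hypothetical.

```python
import copy
import torch

def train_task_with_replay(generator, discriminator, new_loader, old_labels,
                           train_gan_step, latent_dim=128, replay_batch=64):
    """old_labels: 1-D tensor with the class labels of previously learned categories."""
    replay_gen = copy.deepcopy(generator).eval()   # frozen memory replay generator
    for real_x, real_y in new_loader:
        # Replay previous categories from the frozen generator.
        with torch.no_grad():
            z = torch.randn(replay_batch, latent_dim)
            y_old = old_labels[torch.randint(len(old_labels), (replay_batch,))]
            replay_x = replay_gen(z, y_old)
        # Jointly train on new real data and replayed old-category data.
        x = torch.cat([real_x, replay_x], dim=0)
        y = torch.cat([real_y, y_old], dim=0)
        train_gan_step(generator, discriminator, x, y)
```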
|
|
|
Shiqi Yang, Yaxing Wang, Joost Van de Weijer, Luis Herranz, & Shangling Jui. (2021). Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation. In Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021).
Abstract: Domain adaptation (DA) aims to alleviate the domain shift between a source domain and a target domain. Most DA methods require access to the source data, but often that is not possible (e.g., due to data privacy or intellectual property). In this paper, we address the challenging source-free domain adaptation (SFDA) problem, where the source pretrained model is adapted to the target domain in the absence of source data. Our method is based on the observation that target data, which might no longer align with the source domain classifier, still form clear clusters. We capture this intrinsic structure by defining local affinity of the target data, and encourage label consistency among data with high local affinity. We observe that higher affinity should be assigned to reciprocal neighbors, and propose a self-regularization loss to decrease the negative impact of noisy neighbors. Furthermore, to aggregate information with more context, we consider expanded neighborhoods with small affinity values. In the experimental results we verify that the inherent structure of the target features is an important source of information for domain adaptation. We demonstrate that this local structure can be efficiently captured by considering the local neighbors, the reciprocal neighbors, and the expanded neighborhood. Finally, we achieve state-of-the-art performance on several 2D image and 3D point cloud recognition datasets. Code is available at https://github.com/Albert0147/SFDA_neighbors.
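A rough, hedged sketch of the neighborhood-consistency idea above (assumed tensor layout and hyperparameters; not the released code): reciprocal nearest neighbours in a target feature bank receive full weight, non-reciprocal ones a reduced weight, and agreement between their predictions is rewarded.

```python
import torch
import torch.nn.functional as F

def neighborhood_loss(batch_idx, probs, bank_feats, bank_probs, k=5, r=0.1):
    """batch_idx: list of indices of the current batch inside the target memory bank."""
    feats_n = F.normalize(bank_feats, dim=1)
    sims = feats_n @ feats_n.T                       # pairwise cosine similarity over the bank
    _, knn = sims.topk(k + 1, dim=1)
    knn = knn[:, 1:]                                 # drop the trivial self-match
    loss = 0.0
    for b, i in enumerate(batch_idx):
        for j in knn[i]:
            reciprocal = i in knn[j]                 # is i also among j's nearest neighbours?
            w = 1.0 if reciprocal else r             # down-weight potentially noisy neighbours
            loss = loss - w * (probs[b] * bank_probs[j]).sum()  # encourage consistent predictions
    return loss / len(batch_idx)
```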
|
|
|
Riccardo Del Chiaro, Bartlomiej Twardowski, Andrew Bagdanov, & Joost Van de Weijer. (2020). Recurrent attention to transient tasks for continual image captioning. In 34th Conference on Neural Information Processing Systems.
Abstract: Research on continual learning has led to a variety of approaches to mitigating catastrophic forgetting in feed-forward classification networks. Until now, surprisingly little attention has been focused on continual learning of recurrent models applied to problems like image captioning. In this paper we take a systematic look at continual learning of LSTM-based models for image captioning. We propose an attention-based approach that explicitly accommodates the transient nature of vocabularies in continual image captioning tasks -- i.e., that task vocabularies are not disjoint. We call our method Recurrent Attention to Transient Tasks (RATT), and also show how to adapt continual learning approaches based on weight regularization and knowledge distillation to recurrent continual learning problems. We apply our approaches to the incremental image captioning problem on two new continual learning benchmarks we define using the MS-COCO and Flickr30k datasets. Our results demonstrate that RATT is able to sequentially learn five captioning tasks while incurring no forgetting of previously learned ones.
|
|
|
Yaxing Wang, Lu Yu, & Joost Van de Weijer. (2020). DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs. In 34th Conference on Neural Information Processing Systems.
Abstract: Image-to-image translation has recently achieved remarkable results. Despite this success, it suffers from inferior performance when translations between classes require large shape changes. We attribute this to the high-resolution bottlenecks used by current state-of-the-art image-to-image methods. Therefore, in this work, we propose a novel deep hierarchical image-to-image translation method, called DeepI2I. We learn a model by leveraging hierarchical features: (a) structural information contained in the shallow layers and (b) semantic information extracted from the deep layers. To enable the training of deep I2I models on small datasets, we propose a novel transfer learning method that transfers knowledge from pre-trained GANs. Specifically, we leverage the discriminator of a pre-trained GAN (i.e., BigGAN or StyleGAN) to initialize both the encoder and the discriminator, and the pre-trained generator to initialize the generator of our model. Applying knowledge transfer leads to an alignment problem between the encoder and generator, which we address with an adaptor network. On many-class image-to-image translation on three datasets (Animal faces, Birds, and Foods) we decrease mFID by at least 35% compared to the state-of-the-art. Furthermore, we qualitatively and quantitatively demonstrate that transfer learning significantly improves the performance of I2I systems, especially for small datasets. Finally, we are the first to perform I2I translations for domains with over 100 classes.
|
|
|
Shiqi Yang, Yaxing Wang, Kai Wang, Shangling Jui, & Joost Van de Weijer. (2022). Attracting and Dispersing: A Simple Approach for Source-free Domain Adaptation. In 36th Conference on Neural Information Processing Systems.
Abstract: We propose a simple but effective source-free domain adaptation (SFDA) method. Treating SFDA as an unsupervised clustering problem, and following the intuition that local neighbors in feature space should have more similar predictions than other features, we propose to optimize an objective of prediction consistency. This objective encourages features in a local neighborhood to have similar predictions, while features farther away in feature space have dissimilar predictions, leading to efficient feature clustering and cluster assignment simultaneously. For efficient training, we optimize an upper bound of the objective, which results in two simple terms. Furthermore, we relate popular existing methods in domain adaptation, source-free domain adaptation and contrastive learning from the perspective of discriminability and diversity. The experimental results demonstrate the superiority of our method, which can be adopted as a simple but strong baseline for future research in SFDA. Our method can also be adapted to source-free open-set and partial-set DA, which further shows its generalization ability.
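A hedged sketch of the attract/disperse objective summarized above (assumed shapes and memory-bank layout; not the paper's released code): predictions are pulled towards those of their nearest neighbours in a feature bank and pushed away from those of all other samples in the batch.

```python
import torch
import torch.nn.functional as F

def attract_disperse_loss(feats, probs, bank_feats, bank_probs, k=3):
    """probs: softmax outputs of the current batch; bank_*: memory bank of target data."""
    sims = F.normalize(feats, dim=1) @ F.normalize(bank_feats, dim=1).T
    _, knn = sims.topk(k, dim=1)                                      # nearest neighbours in feature space
    attract = -(probs.unsqueeze(1) * bank_probs[knn]).sum(-1).mean()  # pull towards neighbours' predictions
    mask = 1.0 - torch.eye(probs.size(0), device=probs.device)        # exclude self-pairs
    disperse = ((probs @ probs.T) * mask).mean()                      # push apart other samples' predictions
    return attract + disperse
```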
|
|
|
Dipam Goswami, Yuyang Liu, Bartlomiej Twardowski, & Joost Van de Weijer. (2023). FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning. In 37th Annual Conference on Neural Information Processing Systems.
|
|
|
Kai Wang, Fei Yang, Shiqi Yang, Muhammad Atif Butt, & Joost Van de Weijer. (2023). Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing. In 37th Annual Conference on Neural Information Processing Systems.
|
|
|
ChuanMing Fang, Kai Wang, & Joost Van de Weijer. (2023). IterInv: Iterative Inversion for Pixel-Level T2I Models. In 37th Annual Conference on Neural Information Processing Systems.
Abstract: Large-scale text-to-image diffusion models have been a ground-breaking development in generating convincing images from an input text prompt. The goal of image editing research is to give users control over the generated images by modifying the text prompt. Current image editing techniques commonly rely on DDIM inversion based on Latent Diffusion Models (LDMs). However, large pretrained T2I models that operate in a latent space, such as LDMs, suffer from a loss of detail due to the first compression stage with an autoencoder mechanism. Another mainstream T2I pipeline that operates at the pixel level, such as Imagen and DeepFloyd-IF, avoids this problem. Such pipelines are commonly composed of several stages, normally a text-to-image stage followed by several super-resolution stages. In this case, DDIM inversion is unable to find the initial noise that generates the original image, given that the super-resolution diffusion models are not compatible with the DDIM technique. According to our experimental findings, iteratively concatenating the noisy image as the condition is the root of this problem. Based on this observation, we develop an iterative inversion (IterInv) technique for this stream of T2I models and verify IterInv with the open-source DeepFloyd-IF model. By combining IterInv with a popular image editing method, we demonstrate its application prospects. The code will be released at \url{this https URL}.
|
|
|
David Berga, & Xavier Otazu. (2020). Computations of top-down attention by modulating V1 dynamics. In Computational and Mathematical Models in Vision.
|
|
|
Laura Lopez-Fuentes, Joost Van de Weijer, Marc Bolaños, & Harald Skinnemoen. (2017). Multi-modal Deep Learning Approach for Flood Detection. In MediaEval Benchmarking Initiative for Multimedia Evaluation.
Abstract: In this paper we propose a multi-modal deep learning approach to detect floods in social media posts. Social media posts normally contain metadata and/or visual information, and we use both to detect floods. The model is based on a Convolutional Neural Network, which extracts the visual features, and a bidirectional Long Short-Term Memory network, which extracts the semantic features from the textual metadata. We validate the method on images extracted from Flickr that contain both visual information and metadata, and compare the results when using both, visual information only, or metadata only. This work has been done in the context of the MediaEval Multimedia Satellite Task.
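The sketch below illustrates one plausible way to wire up the CNN + bidirectional LSTM fusion described above; the backbone choice, layer sizes and tokenization are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class FloodNet(nn.Module):
    """Fuses an image branch and a metadata-text branch for flood / no-flood classification."""
    def __init__(self, vocab_size, embed_dim=128, hidden=128):
        super().__init__()
        self.cnn = models.resnet18(weights=None)
        self.cnn.fc = nn.Identity()                      # 512-d visual features
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.cls = nn.Linear(512 + 2 * hidden, 2)        # flood vs. no flood

    def forward(self, image, tokens):
        v = self.cnn(image)                              # visual features
        _, (h, _) = self.lstm(self.embed(tokens))        # textual metadata features
        t = torch.cat([h[-2], h[-1]], dim=1)             # last forward/backward hidden states
        return self.cls(torch.cat([v, t], dim=1))
```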
|
|
|
Hassan Ahmed Sial, Ramon Baldrich, Maria Vanrell, & Dimitris Samaras. (2020). Light Direction and Color Estimation from Single Image with Deep Regression. In London Imaging Conference.
Abstract: We present a method to estimate the direction and color of the scene light source from a single image. Our method is based on two main ideas: (a) we use a new synthetic dataset with strong shadow effects and constraints similar to the SID dataset; (b) we define a deep architecture trained on this dataset to estimate the direction and color of the scene light source. Apart from showing good performance on synthetic images, we additionally propose a preliminary procedure to obtain light positions for the Multi-Illumination dataset, and in this way we also show that our trained model achieves good performance when applied to real scenes.
|
|
|
Aleksandr Setkov, Fabio Martinez Carillo, Michele Gouiffes, Christian Jacquemin, Maria Vanrell, & Ramon Baldrich. (2015). DAcImPro: A Novel Database of Acquired Image Projections and Its Application to Object Recognition. In Advances in Visual Computing. Proceedings of 11th International Symposium, ISVC 2015 Part II (Vol. 9475, pp. 463–473). LNCS. Springer International Publishing.
Abstract: Projector-camera systems are designed to improve projection quality by comparing original images with their captured projections, which is usually complicated due to high photometric and geometric variations. Many research works address this problem using their own test data, which makes it extremely difficult to compare different proposals. This paper has two main contributions. Firstly, we introduce a new database of acquired image projections (DAcImPro) that, covering photometric and geometric conditions and providing data for ground-truth computation, can serve to evaluate different algorithms in projector-camera systems. Secondly, a new object recognition scenario from acquired projections is presented, which could be of great interest in domains such as home video projections and public presentations. We show that the task is more challenging than the classical recognition problem and thus requires additional pre-processing, such as color compensation or projection area selection.
Keywords: Projector-camera systems; Feature descriptors; Object recognition
|
|
|
Antonio Lopez, J. Hilgenstock, A. Busse, Ramon Baldrich, Felipe Lumbreras, & Joan Serrat. (2008). Temporal Coherence Analysis for Intelligent Headlight Control.
Keywords: Intelligent Headlights
|
|
|
Robert Benavente, Gemma Sanchez, Ramon Baldrich, Maria Vanrell, & Josep Llados. (2000). Normalized colour segmentation for human appearance description. In 15th International Conference on Pattern Recognition (Vol. 3, pp. 637–641).
|
|