Shiqi Yang, Yaxing Wang, Kai Wang, Shangling Jui, & Joost Van de Weijer. (2022). Local Prediction Aggregation: A Frustratingly Easy Source-free Domain Adaptation Method.
Abstract: We propose a simple but effective source-free domain adaptation (SFDA) method. Treating SFDA as an unsupervised clustering problem and following the intuition that local neighbors in feature space should have more similar predictions than other features, we propose to optimize an objective of prediction consistency. This objective encourages local neighborhood features in feature space to have similar predictions while features farther away in feature space have dissimilar predictions, leading to efficient feature clustering and cluster assignment simultaneously. For efficient training, we seek to optimize an upper-bound of the objective resulting in two simple terms. Furthermore, we relate popular existing methods in domain adaptation, source-free domain adaptation and contrastive learning via the perspective of discriminability and diversity. The experimental results prove the superiority of our method, and our method can be adopted as a simple but strong baseline for future research in SFDA. Our method can also be adapted to source-free open-set and partial-set DA, which further shows the generalization ability of our method. Code is available in this https URL.
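The two-term objective can be pictured with a short sketch. The PyTorch-style snippet below is only an illustration under assumed names (a memory bank of target features and predictions, cosine similarity for the neighborhood); it is not the authors' released code, and details such as excluding the neighbors from the dispersion term are omitted.

```python
import torch

def prediction_consistency_loss(probs, feats, bank_probs, bank_feats, k=3, lam=1.0):
    """Illustrative sketch of a neighbor-consistency objective.

    probs:      (B, C) softmax predictions of the current batch
    feats:      (B, D) L2-normalized features of the current batch
    bank_probs: (N, C) stored predictions for the whole target set
    bank_feats: (N, D) stored L2-normalized target features
    """
    # find the k nearest neighbors of each batch feature in the bank
    sim = feats @ bank_feats.t()                    # (B, N) cosine similarities
    _, knn_idx = sim.topk(k, dim=1)                 # (B, k) neighbor indices
    knn_probs = bank_probs[knn_idx]                 # (B, k, C)

    # attraction: local neighbors should have similar predictions
    pos = (probs.unsqueeze(1) * knn_probs).sum(-1).mean()

    # dispersion: predictions should differ from those of all other features
    neg = (probs @ bank_probs.t()).mean()

    return -pos + lam * neg
```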
|
|
Shiqi Yang, Yaxing Wang, Kai Wang, Shangling Jui, & Joost Van de Weijer. (2022). One Ring to Bring Them All: Towards Open-Set Recognition under Domain Shift.
Abstract: In this paper, we investigate model adaptation under domain and category shift, where the final goal is to achieve source-free universal domain adaptation (SF-UNDA), which addresses the situation where there exist both domain and category shifts between source and target domains. Under the SF-UNDA setting, the model cannot access source data anymore during target adaptation, which aims to address data privacy concerns. We propose a novel training scheme to learn an (n+1)-way classifier to predict the n source classes and the unknown class, where samples of only known source categories are available for training. Furthermore, for target adaptation, we simply adopt a weighted entropy minimization to adapt the source-pretrained model to the unlabeled target domain without source data. In experiments, we show: (1) after source training, the resulting source model can get excellent performance for open-set recognition; (2) after target adaptation, our method surpasses current UNDA approaches which demand source data during adaptation. The versatility to several different tasks strongly proves the efficacy and generalization ability of our method. (3) When augmented with a closed-set domain adaptation approach during target adaptation, our source-free method further outperforms the current state-of-the-art UNDA method by 2.5%, 7.2% and 13% on Office-31, Office-Home and VisDA respectively.
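As a rough sketch of the target-adaptation step described above, the snippet below minimizes a weighted entropy over the (n+1)-way outputs. The single scalar weight on the unknown class is a simplifying assumption made here for illustration; the paper's actual weighting scheme differs.

```python
import torch
import torch.nn.functional as F

def weighted_entropy_loss(logits, unknown_weight=1.0):
    """Sketch: weighted entropy minimization on (n+1)-way logits,
    where the last column is the 'unknown' class."""
    probs = F.softmax(logits, dim=1)                # (B, n+1)
    ent_terms = -probs * torch.log(probs + 1e-8)    # per-class entropy terms
    weights = torch.ones(logits.size(1), device=logits.device)
    weights[-1] = unknown_weight                    # re-weight the unknown class
    return (ent_terms * weights).sum(dim=1).mean()
```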
|
|
Marco Cotogni, Fei Yang, Claudio Cusano, Andrew Bagdanov, & Joost Van de Weijer. (2022). Gated Class-Attention with Cascaded Feature Drift Compensation for Exemplar-free Continual Learning of Vision Transformers.
Abstract: We propose a new method for exemplar-free class incremental training of ViTs. The main challenge of exemplar-free continual learning is maintaining plasticity of the learner without causing catastrophic forgetting of previously learned tasks. This is often achieved via exemplar replay, which can help recalibrate previous task classifiers to the feature drift which occurs when learning new tasks. Exemplar replay, however, comes at the cost of retaining samples from previous tasks, which for many applications may not be possible. To address the problem of continual ViT training, we first propose gated class-attention to minimize the drift in the final ViT transformer block. This mask-based gating is applied to the class-attention mechanism of the last transformer block and strongly regulates the weights crucial for previous tasks. Importantly, gated class-attention does not require the task-ID during inference, which distinguishes it from other parameter isolation methods. Secondly, we propose a new method of feature drift compensation that accommodates feature drift in the backbone when learning new tasks. The combination of gated class-attention and cascaded feature drift compensation allows for plasticity towards new tasks while limiting forgetting of previous ones. Extensive experiments performed on CIFAR-100, Tiny-ImageNet and ImageNet100 demonstrate that our exemplar-free method obtains competitive results when compared to rehearsal-based ViT methods.
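A simplified, PyTorch-style sketch of the gating idea follows. It is not the authors' implementation: the paper gates the class-attention weights of the last block, whereas the per-channel sigmoid gate on the attention output below is only meant to convey the mechanism.

```python
import torch
import torch.nn as nn

class GatedClassAttention(nn.Module):
    """Sketch: a learnable gate modulates the class-token attention update,
    so channels important for earlier tasks stay close to their pre-trained
    behaviour while the remaining capacity adapts to the new task."""
    def __init__(self, dim=768, num_heads=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(dim))   # one gate value per channel

    def forward(self, cls_token, patch_tokens):
        # class-attention: the class token attends to the patch tokens
        out, _ = self.attn(cls_token, patch_tokens, patch_tokens)
        g = torch.sigmoid(self.gate)                 # gating mask in [0, 1]
        return cls_token + g * out                   # gated residual update
```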
|
|
Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu, & Joost Van de Weijer. (2023). Projected Latent Distillation for Data-Agnostic Consolidation in Distributed Continual Learning.
Abstract: Distributed learning on the edge often comprises self-centered devices (SCDs) which learn local tasks independently and are unwilling to contribute to the performance of other SCDs. How do we achieve forward transfer at zero cost for the single SCDs? We formalize this problem as a Distributed Continual Learning scenario, where SCDs adapt to local tasks and a CL model consolidates the knowledge from the resulting stream of models without looking at the SCDs' private data. Unfortunately, current CL methods are not directly applicable to this scenario. We propose Data-Agnostic Consolidation (DAC), a novel double knowledge distillation method that consolidates the stream of SCD models without using the original data. DAC performs distillation in the latent space via a novel Projected Latent Distillation loss. Experimental results show that DAC enables forward transfer between SCDs and reaches state-of-the-art accuracy on Split CIFAR100, CORe50 and Split TinyImageNet, both in rehearsal-free and distributed CL scenarios. Somewhat surprisingly, even a single out-of-distribution image is sufficient as the only source of data during consolidation.
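A minimal sketch of what a projected latent distillation loss could look like is given below; the projection form, the normalization and the way the double distillation is combined are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectedLatentDistillation(nn.Module):
    """Sketch: the student's latents are linearly projected into the
    teacher's latent space before matching, so the student keeps its own
    representation while still absorbing the teacher's knowledge."""
    def __init__(self, student_dim, teacher_dim):
        super().__init__()
        self.proj = nn.Linear(student_dim, teacher_dim, bias=False)

    def forward(self, student_latents, teacher_latents):
        projected = self.proj(student_latents)       # map into teacher space
        return F.mse_loss(projected, teacher_latents)
```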
|
|
Marco Cotogni, Fei Yang, Claudio Cusano, Andrew Bagdanov, & Joost Van de Weijer. (2023). Exemplar-free Continual Learning of Vision Transformers via Gated Class-Attention and Cascaded Feature Drift Compensation.
Abstract: We propose a new method for exemplar-free class incremental training of ViTs. The main challenge of exemplar-free continual learning is maintaining plasticity of the learner without causing catastrophic forgetting of previously learned tasks. This is often achieved via exemplar replay, which can help recalibrate previous task classifiers to the feature drift which occurs when learning new tasks. Exemplar replay, however, comes at the cost of retaining samples from previous tasks, which for many applications may not be possible. To address the problem of continual ViT training, we first propose gated class-attention to minimize the drift in the final ViT transformer block. This mask-based gating is applied to the class-attention mechanism of the last transformer block and strongly regulates the weights crucial for previous tasks. Importantly, gated class-attention does not require the task-ID during inference, which distinguishes it from other parameter isolation methods. Secondly, we propose a new method of feature drift compensation that accommodates feature drift in the backbone when learning new tasks. The combination of gated class-attention and cascaded feature drift compensation allows for plasticity towards new tasks while limiting forgetting of previous ones. Extensive experiments performed on CIFAR-100, Tiny-ImageNet and ImageNet100 demonstrate that our exemplar-free method obtains competitive results when compared to rehearsal-based ViT methods.
|
|
Justine Giroux, Mohammad Reza Karimi Dastjerdi, Yannick Hold-Geoffroy, Javier Vazquez, & Jean-François Lalonde. (2024). Towards a Perceptual Evaluation Framework for Lighting Estimation. In arXiv.
Abstract: Progress in lighting estimation is tracked by computing existing image quality assessment (IQA) metrics on images from standard datasets. While this may appear to be a reasonable approach, we demonstrate that doing so does not correlate to human preference when the estimated lighting is used to relight a virtual scene into a real photograph. To study this, we design a controlled psychophysical experiment where human observers must choose their preference amongst rendered scenes lit using a set of lighting estimation algorithms selected from the recent literature, and use it to analyse how these algorithms perform according to human perception. Then, we demonstrate that none of the most popular IQA metrics from the literature, taken individually, correctly represent human perception. Finally, we show that by learning a combination of existing IQA metrics, we can more accurately represent human preference. This provides a new perceptual framework to help evaluate future lighting estimation algorithms.
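The final step, learning a combination of IQA metrics that better matches human preference, can be sketched as a simple pairwise preference model. The snippet below uses random placeholder scores and a plain logistic regression purely for illustration; the metric set, data and fitting procedure in the paper differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: per-metric scores for the two renders of each pair,
# and a binary label saying which render human observers preferred.
rng = np.random.default_rng(0)
n_pairs, n_metrics = 200, 5
scores_a = rng.normal(size=(n_pairs, n_metrics))   # e.g. PSNR, SSIM, LPIPS, ...
scores_b = rng.normal(size=(n_pairs, n_metrics))
human_pref = rng.integers(0, 2, size=n_pairs)      # 1 if render A was preferred

# Fit one weight per metric on the score differences of each pair; the
# learned weights indicate how much each metric contributes to predicted
# human preference.
model = LogisticRegression()
model.fit(scores_a - scores_b, human_pref)
print("learned metric weights:", model.coef_)
```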
|
|
O. Fors, A. Richichi, Xavier Otazu, & J. Nuñez. (2008). A new wavelet-based approach for the automated treatment of large sets of lunar occultation data. Astronomy and Astrophysics, 297–304.
|
|
A. Richichi, O. Fors, M.T. Merino, Xavier Otazu, J. Nuñez, A. Prades, et al. (2006). The Calar Alto lunar occultation program: update and new results. Astronomy and Astrophysics (Section 'Stellar structure and evolution'), 445:1081–1088.
|
|
Xavier Otazu, Olivier Penacchio, & Xim Cerda-Company. (2015). An excitatory-inhibitory firing rate model accounts for brightness induction, colour induction and visual discomfort. In Barcelona Computational, Cognitive and Systems Neuroscience.
|
|
C. Alejandro Parraga. (2015). Perceptual Psychophysics. In G. Cristobal, M. Keil, & L. Perrinet (Eds.), Biologically-Inspired Computer Vision: Fundamentals and Applications.
|
|
Robert Benavente, & Maria Vanrell. (2004). Fuzzy Colour Naming Based on Sigmoid Membership Functions.
|
|
Xavier Otazu, & Maria Vanrell. (2004). Building Perceived Colour Images.
|
|
Francesc Tous, Maria Vanrell, & Ramon Baldrich. (2004). Exploring Colour Constancy Solutions.
|
|
Xialei Liu, Chenshen Wu, Mikel Menta, Luis Herranz, Bogdan Raducanu, Andrew Bagdanov, et al. (2020). Generative Feature Replay for Class-Incremental Learning. In CLVISION – Workshop on Continual Learning in Computer Vision.
Abstract: Humans are capable of learning new tasks without forgetting previous ones, while neural networks fail due to catastrophic forgetting between new and previously-learned tasks. We consider a class-incremental setting which means that the task-ID is unknown at inference time. The imbalance between old and new classes typically results in a bias of the network towards the newest ones. This imbalance problem can either be addressed by storing exemplars from previous tasks, or by using image replay methods. However, the latter can only be applied to toy datasets since image generation for complex datasets is a hard problem.
We propose a solution to the imbalance problem based on generative feature replay which does not require any exemplars. To do this, we split the network into two parts: a feature extractor and a classifier. To prevent forgetting, we combine generative feature replay in the classifier with feature distillation in the feature extractor. Through feature generation, our method reduces the complexity of generative replay and prevents the imbalance problem. Our approach is computationally efficient and scalable to large datasets. Experiments confirm that our approach achieves state-of-the-art results on CIFAR-100 and ImageNet, while requiring only a fraction of the storage needed for exemplar-based continual learning.
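A compact sketch of how these ingredients could be combined in one training step is given below; the function and generator names are hypothetical, and this is an illustration of the idea rather than the released code.

```python
import torch
import torch.nn.functional as F

def incremental_step_loss(x, y, extractor, old_extractor, classifier,
                          feat_generator, old_class_labels, alpha=1.0):
    """Sketch: cross-entropy on new data, generative feature replay for old
    classes at the classifier, and feature distillation on the frozen
    previous feature extractor."""
    # classification loss on current-task data
    feats = extractor(x)
    loss_new = F.cross_entropy(classifier(feats), y)

    # generative feature replay: sample synthetic features for old classes
    replay_feats, replay_y = feat_generator.sample(old_class_labels)
    loss_old = F.cross_entropy(classifier(replay_feats), replay_y)

    # feature distillation: keep the new extractor close to the old one
    with torch.no_grad():
        old_feats = old_extractor(x)
    loss_distill = F.mse_loss(feats, old_feats)

    return loss_new + loss_old + alpha * loss_distill
```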
|
|
Joost Van de Weijer, Robert Benavente, Maria Vanrell, Cordelia Schmid, Ramon Baldrich, Jacob Verbeek, et al. (2012). Color Naming. In Theo Gevers, Arjan Gijsenij, Joost Van de Weijer, & Jan-Mark Geusebroek (Eds.), Color in Computer Vision: Fundamentals and Applications (pp. 287–317). John Wiley & Sons, Ltd.
|
|
Theo Gevers, Arjan Gijsenij, Joost Van de Weijer, & Jan-Mark Geusebroek. (2012). Color in Computer Vision: Fundamentals and Applications. The Wiley-IS&T Series in Imaging Science and Technology. John Wiley & Sons, Ltd.
|
|
Robert Benavente, Maria Vanrell, & Ramon Baldrich. (2006). A data set for fuzzy colour naming. Color Research & Application, 31(1):48–56.
|
|
Robert Benavente, Maria Vanrell, & Ramon Baldrich. (2004). Estimation of Fuzzy Sets for Computational Colour Categorization. Color Research & Application, 29(5):342–353 (IF: 0.739).
|
|