TY - JOUR
AU - Kai Wang
AU - Joost Van de Weijer
AU - Luis Herranz
PY - 2021//
TI - ACAE-REMIND for online continual learning with compressed feature replay
T2 - Pattern Recognition Letters
JO - PRL
SP - 122
EP - 129
VL - 150
KW - online continual learning
KW - autoencoders
KW - vector quantization
N2 - Online continual learning aims to learn from a non-IID stream of data drawn from a number of different tasks, where the learner may consider each sample only once. Methods are typically allowed to use a limited buffer to store some of the images in the stream. Recently, it was found that feature replay, where an intermediate-layer representation of the image is stored (or generated), leads to superior results compared to image replay, while requiring less memory. Quantized exemplars can further reduce memory usage. However, a drawback of these methods is that they use a fixed (or very intransigent) backbone network, which significantly limits the learning of representations that can discriminate between all tasks. To address this problem, we propose an auxiliary classifier auto-encoder (ACAE) module for feature replay at intermediate layers with high compression rates. The reduced memory footprint per image allows us to save more exemplars for replay. In our experiments, we conduct task-agnostic evaluation under the online continual learning setting and achieve state-of-the-art performance on the ImageNet-Subset, CIFAR100, and CIFAR10 datasets.
UR - https://doi.org/10.1016/j.patrec.2021.06.025
L1 - http://refbase.cvc.uab.es/files/WWH2021.pdf
N1 - LAMP; 600.147; 601.379; 600.120; 600.141
ID - Kai Wang2021
ER -