Pseudo-Labeling for Class Incremental Learning

Abstract

Class Incremental Learning (CIL) consists of training a model iteratively with a limited amount of data from a few classes that will never be seen again, resulting in catastrophic forgetting and a lack of diversity. In this paper, we address these phenomena by assuming that additional unlabeled data are continually available during incremental learning, and propose a Pseudo-Labeling approach for class incremental learning (PLCiL) that makes use of a new, adapted loss. We demonstrate that our method achieves better performance than supervised or other semi-supervised methods on standard class incremental benchmarks (CIFAR-100 and ImageNet-100), even when self-supervised pre-training on a large set of data is used as initialization. We also illustrate the advantages of our method in a more complex setting with fewer labels.
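The abstract describes learning from confident predictions on a pool of unlabeled data that accompanies each incremental step. As background, the sketch below shows a generic confidence-thresholded pseudo-labeling loss of the kind such an approach builds on; the names `model`, `x_unlabeled`, and the threshold `tau` are illustrative assumptions, and this is not the paper's adapted PLCiL loss.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_unlabeled, tau=0.95):
    """Generic pseudo-labeling sketch: train on the model's own confident
    predictions over an unlabeled batch (assumed shapes and names)."""
    # Predict on the unlabeled batch without tracking gradients.
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        conf, pseudo_targets = probs.max(dim=1)
        mask = conf >= tau  # keep only confident predictions as pseudo-labels

    if mask.sum() == 0:
        return torch.zeros((), device=x_unlabeled.device)

    # Fit the model to its own confident pseudo-labels.
    logits = model(x_unlabeled[mask])
    return F.cross_entropy(logits, pseudo_targets[mask])
```

In a semi-supervised incremental setting, a term like this would typically be added to the supervised loss on the few labeled samples of the current classes; the paper's specific loss formulation is described in the full text.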

Publication
BMVC 2021: The British Machine Vision Conference