Online Boundary-Free Continual Learning with Anytime Inference
- Abstract
- The typical continual learning setup assumes that the dataset is split into multiple discrete tasks. We argue that this is unrealistic, as streamed data in the real world carries no notion of task boundaries. Here, we take a step toward more realistic online continual learning: learning a continuously changing data distribution without explicit task boundaries, which we call the boundary-free setup. Without boundaries, it is not obvious when, and which, past information should be preserved to resolve the stability-plasticity dilemma. To this end, we propose a scheduled transfer of previously learned knowledge. In addition, we propose a data-driven balance between past and present knowledge in the learning objective. Moreover, since previously proposed forgetting measures cannot be applied without task boundaries, we propose novel forgetting and knowledge-gain measures based on information theory. We empirically evaluate our method on a Gaussian data stream and its periodic extension, both frequently observed in real-life data, as well as on the conventional disjoint task split. Our method outperforms prior art by large margins in these setups on four benchmark datasets from the continual learning literature: CIFAR-10, CIFAR-100, TinyImageNet, and ImageNet.
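
To illustrate the boundary-free setup evaluated above, the following minimal sketch simulates a stream in which each class's samples arrive according to a Gaussian schedule over time, so the class mixture drifts continuously with no discrete task switch. This is an assumed toy construction for illustration only; the class count, stream length, and sampling parameters are hypothetical and not the exact protocol from the thesis.

```python
import numpy as np

def gaussian_stream_schedule(num_classes=10, stream_len=10_000,
                             sigma=0.1, seed=0):
    """Simulate a boundary-free stream: each class c gets a Gaussian
    arrival distribution centered at a class-specific time mu_c, so
    the class mixture shifts gradually instead of switching at task
    boundaries. (Illustrative sketch; all parameters hypothetical.)"""
    rng = np.random.default_rng(seed)
    # Spread class centers evenly over normalized time [0, 1].
    mus = (np.arange(num_classes) + 0.5) / num_classes
    t = np.linspace(0.0, 1.0, stream_len)          # normalized timestamps
    # Unnormalized Gaussian density of each class at each time step.
    density = np.exp(-0.5 * ((t[None, :] - mus[:, None]) / sigma) ** 2)
    probs = density / density.sum(axis=0)          # class mixture per step
    # Draw one class label per incoming sample.
    labels = np.array([rng.choice(num_classes, p=probs[:, i])
                       for i in range(stream_len)])
    return labels

labels = gaussian_stream_schedule()
# Nearby steps share similar class mixtures; distant steps differ,
# yet there is never an explicit task boundary in the stream.
print(labels[:20], labels[-20:])
```

A periodic variant of the kind mentioned in the abstract could be obtained by repeating the class centers over multiple cycles of the normalized time axis, so earlier class distributions recur later in the stream.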
- Author(s)
- Hyunseo Koh
- Issued Date
- 2023
- Type
- Thesis
- URI
- https://scholar.gist.ac.kr/handle/local/19539
- Access & License
- File List