Abstract
Bayesian methods have emerged as an effective approach to mitigating catastrophic forgetting in continual learning (CL). One prominent example is Variational Continual Learning (VCL), which demonstrates remarkable performance in task-incremental learning (task-IL). However, class-incremental learning (class-IL) remains challenging for VCL, and the reasons behind this limitation are unclear. Owing to sophisticated neural mechanisms, particularly memory consolidation during sleep, the human brain has inherent advantages in both task-IL and class-IL scenarios, which provides insight for a brain-inspired VCL. To identify why VCL falls short in class-IL, we first conduct a comprehensive theoretical analysis of VCL. On this basis, we propose a novel Bayesian framework named Learning within Sleeping (LwS) that leverages memory consolidation. By simulating the distribution integration and generalization observed during memory consolidation in sleep, LwS realizes the VCL principle of prior knowledge guiding the learning of posterior knowledge. In addition, by emulating the brain's memory reactivation process, LwS imposes a feature-invariance constraint to mitigate the forgetting of learned knowledge. Experimental results demonstrate that LwS outperforms both Bayesian and non-Bayesian methods in task-IL and class-IL scenarios, further indicating the effectiveness of incorporating brain mechanisms into the design of novel CL approaches.
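For context, the principle of prior knowledge guiding posterior learning that the abstract attributes to VCL is conventionally expressed as a recursive variational objective, in which the approximate posterior from task $t-1$ serves as the prior for task $t$. A minimal sketch of this standard VCL objective (illustrative only; it is not the LwS loss proposed in this paper):

$$\mathcal{L}_{\mathrm{VCL}}^{(t)}(q_t) = \mathbb{E}_{q_t(\theta)}\big[\log p(\mathcal{D}_t \mid \theta)\big] - \mathrm{KL}\big(q_t(\theta)\,\Vert\,q_{t-1}(\theta)\big),$$

where $q_t(\theta)$ is the variational posterior over network parameters $\theta$ after observing task data $\mathcal{D}_t$, and the KL term anchors new posterior knowledge to the previously learned distribution.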
Funding
Supported by the National Natural Science Foundation of China under Grant 62236001 and Grant 62325601.