Balancing Learning Plasticity and Memory Stability: A parameter space strategy for class-incremental learning

Neural Netw. 2025 Jun 20:190:107755. doi: 10.1016/j.neunet.2025.107755. Online ahead of print.

Abstract

The objective of Continual Learning (CL) is to maintain both the learning plasticity and the memory stability of a model, allowing it to continuously acquire new knowledge over time while robustly retaining previously learned knowledge. However, existing CL methods focus primarily on memory stability, i.e., preventing catastrophic forgetting (CF) of knowledge from earlier tasks, while overlooking efficient learning of new tasks. In this paper, we propose a parameter-space decomposition method to Balance Learning Plasticity and Memory Stability (BLPMS), which divides the model into per-task sub-networks via parameter isolation and further decomposes each sub-network into task-general and task-specific parameter spaces. During training, BLPMS balances the update rates between these parameter spaces to promote class-incremental learning. Additionally, at the inference stage, we adopt a Mixture-of-Experts (MoE) module based on a Prototypical Network to dynamically select the appropriate parameter space. Experimental results demonstrate that BLPMS outperforms existing methods across multiple benchmark datasets, achieving state-of-the-art performance.

Keywords: Class-incremental learning; Parameter isolation; Regularization; Replay techniques.
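To make the abstract's core idea concrete, the sketch below illustrates one plausible reading of the balanced-update strategy: each task owns an isolated sub-network whose parameters are split into a task-general space (shared backbone) and a task-specific space (per-task head), and the two spaces are updated at different rates. This is a minimal PyTorch sketch under our own assumptions, not the authors' implementation; all names (TaskSubNetwork, the learning-rate values, the dummy data) are hypothetical.

```python
# Minimal sketch (not the authors' code): a per-task sub-network whose
# parameters are split into a task-general space and a task-specific space,
# trained with different (balanced) update rates.
import torch
import torch.nn as nn

class TaskSubNetwork(nn.Module):
    """One isolated sub-network: a shared backbone (task-general parameter
    space) plus a per-task classification head (task-specific space)."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                       # task-general parameters
        self.head = nn.Linear(feat_dim, num_classes)   # task-specific parameters

    def forward(self, x):
        return self.head(self.backbone(x))

# Hypothetical setup: a small MLP backbone shared across tasks.
backbone = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
)
net = TaskSubNetwork(backbone, feat_dim=128, num_classes=10)

# Balance stability vs. plasticity by giving the two parameter spaces
# different update rates: the task-general space changes slowly (stability),
# the task-specific space changes faster (plasticity). The 1:10 ratio is an
# illustrative hyperparameter, not a value taken from the paper.
optimizer = torch.optim.SGD([
    {"params": net.backbone.parameters(), "lr": 1e-3},  # task-general: slow
    {"params": net.head.parameters(),     "lr": 1e-2},  # task-specific: fast
], momentum=0.9)

# One illustrative training step on dummy data.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
loss = nn.CrossEntropyLoss()(net(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At inference, the abstract's Prototypical-Network-based MoE module would then route each input to the sub-network (parameter space) whose class prototypes are closest to the input's features; the routing mechanism itself is not detailed in the abstract and is therefore not sketched here.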