Continual learning (CL) has emerged as a crucial paradigm for learning from sequential data while retaining previous knowledge. Continual graph learning (CGL), characterized by dynamically evolving graphs from streaming data, presents distinct challenges that demand efficient algorithms to prevent catastrophic forgetting. The first challenge stems from the interdependencies between different graph data, in which previous graphs influence new data distributions. The second challenge is handling large graphs in an efficient manner. To address these challenges, we propose an efficient continual graph learner (E-CGL) in this paper. We address the interdependence issue by demonstrating the effectiveness of replay strategies and introducing a combined sampling approach that considers both node importance and diversity. To improve efficiency, E-CGL leverages a simple yet effective multilayer perceptron (MLP) model that shares weights with a graph neural network (GNN) during training, thereby accelerating computation by circumventing the expensive message-passing process. Our method achieves state-of-the-art results on four CGL datasets under two settings, while significantly lowering the catastrophic forgetting value to an average of -1.1%. Additionally, E-CGL achieves training and inference speedups of 15.83x and 4.89x, respectively, on average across the four datasets. These results indicate that E-CGL not only effectively manages correlations between different graph data during continual training but also enhances efficiency in large-scale CGL.
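The abstract's efficiency claim rests on an MLP that shares weights with a GNN layer, so that the expensive sparse message-passing step can be skipped when only per-node transformations are needed. The sketch below is a minimal illustration of that weight-sharing idea, not the authors' released implementation; the class name, the GCN-style normalization, and the toy data are all assumptions for demonstration.

```python
# Minimal sketch (illustrative, not the paper's code) of one layer whose linear
# transform is shared between a GNN path (with message passing) and an MLP path
# (without it), as described in the abstract.
import torch
import torch.nn as nn


class SharedWeightLayer(nn.Module):
    """One graph layer whose linear weights also serve a plain MLP forward pass."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)  # weights shared by both paths

    def forward_gnn(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # Full path: neighborhood aggregation (sparse A_hat @ X), then the shared transform.
        return self.linear(torch.sparse.mm(adj_norm, x))

    def forward_mlp(self, x: torch.Tensor) -> torch.Tensor:
        # Fast path: reuse the same weights but skip message passing entirely.
        return self.linear(x)


# Toy usage: train with the GNN path on the current graph, then use the MLP path
# where speed matters (e.g., scoring nodes for replay or fast inference).
layer = SharedWeightLayer(in_dim=128, out_dim=64)
x = torch.randn(1000, 128)                        # node features
edges = torch.randint(0, 1000, (2, 5000))         # random toy edge index
vals = torch.full((5000,), 1.0 / 1000)            # crude normalization weights
adj_norm = torch.sparse_coo_tensor(edges, vals, (1000, 1000))
h_train = layer.forward_gnn(x, adj_norm)          # training-time representation
h_fast = layer.forward_mlp(x)                     # message-passing-free shortcut
```

The abstract also mentions a replay buffer filled by a sampling scheme that balances node importance with diversity. A hedged toy version is shown below; the concrete scores (degree for importance, greedy farthest-point selection in feature space for diversity) are placeholder choices, not the paper's exact criteria.

```python
# Illustrative importance + diversity sampling for a replay buffer (assumed scoring).
import torch


def sample_replay_nodes(x: torch.Tensor, degrees: torch.Tensor, budget: int) -> torch.Tensor:
    """Pick `budget` node indices: half by highest degree, the rest spread over feature space."""
    chosen = torch.topk(degrees, budget // 2).indices.tolist()   # importance part
    remaining = [i for i in range(x.size(0)) if i not in set(chosen)]
    while len(chosen) < budget and remaining:
        # Diversity part: greedily add the node farthest from everything already chosen.
        dists = torch.cdist(x[remaining], x[chosen]).min(dim=1).values
        chosen.append(remaining.pop(int(torch.argmax(dists))))
    return torch.tensor(chosen)
```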
Funding: Project supported by the National Natural Science Foundation of China (No. 62272411), the Key R&D Projects in Zhejiang Province (Nos. 2024C01106 and 2025C01030), and the Zhejiang Natural Science Foundation (No. LRG25F020001).