Journal Articles
4 articles found
1. Squeezing More Past Knowledge for Online Class-Incremental Continual Learning (Cited: 1)
Authors: Da Yu, Mingyi Zhang, Mantian Li, Fusheng Zha, Junge Zhang, Lining Sun, Kaiqi Huang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, No. 3, pp. 722-736 (15 pages).
Continual learning (CL) studies the problem of learning to accumulate knowledge over time from a stream of data. A crucial challenge is that neural networks suffer from performance degradation on previously seen data, known as catastrophic forgetting, due to parameter sharing. In this work, we consider a more practical online class-incremental CL setting, where the model learns new samples in an online manner and may continuously encounter new classes. Moreover, prior knowledge is unavailable during training and evaluation. Existing works usually exploit samples along a single dimension, which ignores a lot of valuable supervisory information. To better tackle this setting, we propose a novel replay-based CL method that leverages the multi-level representations produced while training samples for replay and strengthens supervision to consolidate previous knowledge. Specifically, besides the raw samples themselves, we store the corresponding logits and features in memory. Furthermore, to imitate the predictions of the past model, we construct extra constraints by leveraging the multi-level information stored in memory. With the same number of samples for replay, our method can use more past knowledge to prevent interference. We conduct extensive evaluations on several popular CL datasets, and experiments show that our method consistently outperforms state-of-the-art methods with various sizes of episodic memory. We further provide a detailed analysis of these results and demonstrate that our method is more viable in practical scenarios.
Keywords: catastrophic forgetting; class-incremental learning; continual learning (CL); experience replay
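As a rough illustration of the replay scheme this abstract describes (a sketch, not the authors' code), the snippet below stores each sample's logits and features next to the raw sample in a reservoir buffer and replays all three levels of information. The buffer API, the assumption that the model returns a (logits, features) pair, and the weights alpha, beta, and temperature T are all illustrative choices.

```python
# Sketch of multi-level replay: the memory keeps (x, y, logits, features)
# so replayed samples carry extra supervisory signals from the past model.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

class MultiLevelBuffer:
    """Reservoir-sampled memory of (sample, label, logits, features)."""
    def __init__(self, capacity: int):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x, y, logits, feats):
        for item in zip(x, y, logits.detach(), feats.detach()):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append(item)
            else:  # reservoir sampling keeps a uniform sample of the stream
                j = torch.randint(0, self.seen, (1,)).item()
                if j < self.capacity:
                    self.data[j] = item

    def sample(self, n: int):
        idx = torch.randperm(len(self.data))[:n].tolist()
        x, y, lg, ft = zip(*[self.data[i] for i in idx])
        return torch.stack(x), torch.stack(y), torch.stack(lg), torch.stack(ft)

def replay_loss(model, buffer, n=32, alpha=0.5, beta=0.5, T=2.0):
    """Cross-entropy on replayed labels, plus logit distillation and
    feature matching against what the past model produced.
    Assumes `model(x)` returns a (logits, features) pair."""
    x, y, old_logits, old_feats = buffer.sample(n)
    logits, feats = model(x)
    ce = F.cross_entropy(logits, y)
    kd = F.kl_div(F.log_softmax(logits / T, dim=1),
                  F.softmax(old_logits / T, dim=1),
                  reduction="batchmean") * T * T
    fm = F.mse_loss(feats, old_feats)
    return ce + alpha * kd + beta * fm
```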
2. Federated Class-Incremental Learning with New-Class Augmented Self-Distillation
Authors: Zhi-Yuan Wu, Tian-Liu He, Sheng Sun, Yu-Wei Wang, Min Liu, Bo Gao, Xue-Feng Jiang. Journal of Computer Science & Technology, 2025, No. 5, pp. 1427-1437 (11 pages).
Federated learning (FL) enables collaborative model training among participants while guaranteeing the privacy of raw data. Mainstream FL methodologies overlook the dynamic nature of real-world data, particularly its tendency to grow in volume and diversify in classes over time. This oversight means FL methods suffer from catastrophic forgetting, where trained models inadvertently discard previously learned information upon assimilating new data. In response to this challenge, we propose a novel federated class-incremental learning (FCIL) method, named Federated Class-Incremental Learning with New-Class Augmented Self-Distillation (FedCLASS). The core of FedCLASS is to enrich the class scores of historical models with new-class scores predicted by current models and use the combined knowledge for self-distillation, enabling more sufficient and precise knowledge transfer from historical models to current models. Theoretical analyses demonstrate that FedCLASS stands on reliable foundations, treating the scores of old classes predicted by historical models as conditional probabilities in the absence of new classes, and the scores of new classes predicted by current models as conditional probabilities of class scores derived from historical models. Empirical experiments demonstrate the superiority of FedCLASS over four baseline algorithms in reducing the average forgetting rate and boosting global accuracy.
Keywords: federated learning; class-incremental learning; knowledge distillation
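Read from the abstract alone, the core augmentation step admits a short sketch; the function name, the KL form of the distillation, and the temperature are assumptions on my part, not the paper's code. The frozen historical model only scores the old classes, so its score vector is completed with the current model's new-class scores before serving as the self-distillation target.

```python
# Hedged sketch of new-class augmented self-distillation: complete the
# historical model's old-class scores with the current model's new-class
# scores, then distill the current model toward the combined target.
import torch
import torch.nn.functional as F

def new_class_augmented_distillation(cur_logits: torch.Tensor,
                                     old_logits: torch.Tensor,
                                     num_old: int, T: float = 2.0):
    """cur_logits: (B, num_old + num_new) from the current local model.
    old_logits:  (B, num_old) from the frozen historical model."""
    # Old-class scores come from the historical model; new-class scores
    # come from the current model (detached, so gradients only flow
    # through the distillation toward old knowledge).
    target = torch.cat([old_logits, cur_logits[:, num_old:].detach()], dim=1)
    return F.kl_div(F.log_softmax(cur_logits / T, dim=1),
                    F.softmax(target / T, dim=1),
                    reduction="batchmean") * T * T
```

In training, a term like this would be added to the usual cross-entropy on local data, with the weighting between the two left as a tunable coefficient.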
3. Decoupled Two-Phase Framework for Class-Incremental Few-Shot Named Entity Recognition (Cited: 1)
Authors: Yifan Chen, Zhen Huang, Minghao Hu, Dongsheng Li, Changjian Wang, Feng Liu, Xicheng Lu. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2023, No. 5, pp. 976-987 (12 pages).
Class-Incremental Few-Shot Named Entity Recognition (CIFNER) aims to identify entity categories that appear with only a few newly added (novel) class examples. However, existing class-incremental methods typically introduce new parameters to adapt to new classes and treat all information equally, resulting in poor generalization. Meanwhile, few-shot methods require samples for all observed classes, making them difficult to transfer to a class-incremental setting. Thus, a decoupled two-phase framework for the CIFNER task is proposed to address these issues. The whole task is converted into two separate tasks, named Entity Span Detection (ESD) and Entity Class Discrimination (ECD), which leverage parameter cloning and label fusion to learn different levels of knowledge separately, such as class-generic knowledge and class-specific knowledge. Moreover, different variants, such as Conditional Random Field-based (CRF-based) and word-pair-based methods in the ESD module, and add-based, Natural Language Inference-based (NLI-based), and prompt-based methods in the ECD module, are investigated to demonstrate the generalizability of the decoupled framework. Extensive experiments on three Named Entity Recognition (NER) datasets reveal that our method achieves state-of-the-art performance in the CIFNER setting.
Keywords: named entity recognition; deep learning; class-incremental learning; few-shot learning
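The decoupling itself is easy to picture as a two-stage pipeline. The sketch below is a hypothetical skeleton, not the paper's code: the two callables stand in for whichever ESD variant (CRF-based or word-pair-based) and ECD variant (add-based, NLI-based, or prompt-based) is plugged in.

```python
# Skeleton of the decoupled two-phase flow: class-agnostic span
# detection first, per-span class discrimination second.
from typing import Callable, List, Tuple

Span = Tuple[int, int]  # (start, end) token offsets, end exclusive

def two_phase_ner(tokens: List[str],
                  detect_spans: Callable[[List[str]], List[Span]],
                  classify_span: Callable[[List[str], Span], str],
                  ) -> List[Tuple[Span, str]]:
    spans = detect_spans(tokens)            # phase 1: ESD (class-generic)
    return [(s, classify_span(tokens, s))   # phase 2: ECD (class-specific)
            for s in spans]

# Toy usage with trivial stand-in modules:
detect = lambda toks: [(0, 2)]              # pretend "New York" is detected
classify = lambda toks, span: "LOC"
print(two_phase_ner(["New", "York", "is", "big"], detect, classify))
# -> [((0, 2), 'LOC')]
```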
4. A memory-friendly class-incremental learning method for hand gesture recognition using HD-sEMG
Authors: Yu Bai, Le Wu, Shengcai Duan, Xun Chen. Medicine in Novel Technology and Devices, 2024, No. 2, pp. 124-132 (9 pages).
Hand gesture recognition (HGR) plays a vital role in human-computer interaction. The integration of high-density surface electromyography (HD-sEMG) and deep neural networks (DNNs) has significantly improved the robustness and accuracy of HGR systems. These methods are typically effective for a fixed set of trained gestures. However, the need for new gesture classes over time poses a challenge. Introducing new classes to DNNs can lead to a substantial decrease in accuracy on previously learned tasks, a phenomenon known as "catastrophic forgetting," especially when the training data for earlier tasks is not retained and retrained. This issue is exacerbated on embedded devices with limited storage, which struggle to hold large-scale HD-sEMG data. Class-incremental learning (CIL) is an effective way to reduce catastrophic forgetting. However, existing CIL methods for HGR rarely focus on reducing memory load. To address this, we propose a memory-friendly CIL method for HGR using HD-sEMG. Our approach includes a lightweight convolutional neural network, named SeparaNet, for feature representation learning, coupled with a nearest-mean-of-exemplars classifier for classification. We introduce a priority exemplar selection algorithm inspired by the herding effect to maintain a manageable set of exemplars during training. Furthermore, a task-equal-weight exemplar sampling strategy is proposed to effectively reduce memory load while preserving high recognition performance. Experimental results on two datasets demonstrate that our method reduces the number of retained exemplars to only a quarter of that required by other CIL methods, accounting for less than 5% of the total samples, while still achieving comparable average accuracy.
Keywords: myoelectric pattern recognition; memory-friendly class-incremental learning
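Two of the memory-side pieces named here, exemplar selection "inspired by the herding effect" and the nearest-mean-of-exemplars classifier, have well-known generic forms. The sketch below follows the classic herding construction (as used in iCaRL) rather than the paper's exact priority algorithm, and abstracts the SeparaNet feature extractor into plain feature arrays.

```python
# Generic herding-style exemplar selection and nearest-mean-of-exemplars
# classification; a sketch under the assumptions stated above.
import numpy as np

def herding_select(feats: np.ndarray, m: int) -> list:
    """Greedily pick m exemplar indices (per class) whose running mean
    stays as close as possible to the full class mean in feature space."""
    m = min(m, len(feats))
    mu = feats.mean(axis=0)
    chosen, acc = [], np.zeros_like(mu)
    for k in range(1, m + 1):
        # distance to the class mean if each candidate were added next
        dists = np.linalg.norm(mu - (acc + feats) / k, axis=1)
        dists[chosen] = np.inf          # never re-pick an exemplar
        i = int(dists.argmin())
        chosen.append(i)
        acc += feats[i]
    return chosen

def nme_predict(x_feat: np.ndarray, class_means: np.ndarray) -> int:
    """Assign the class whose exemplar mean is nearest to the feature."""
    return int(np.linalg.norm(class_means - x_feat, axis=1).argmin())
```

A task-equal-weight sampling strategy, as the abstract describes, would then cap how many of these selected exemplars each past task contributes to the retained memory.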