Funding: Supported by the Gansu Provincial Natural Science Foundation (grant number 25JRRA074), the Gansu Provincial Key R&D Science and Technology Program (grant number 24YFGA060), and the National Natural Science Foundation of China (grant number 62161019).
Abstract: Modern intelligent systems, such as autonomous vehicles and face recognition, must continuously adapt to new scenarios while preserving their ability to handle previously encountered situations. However, when neural networks learn new classes sequentially, they suffer from catastrophic forgetting: the tendency to lose knowledge of earlier classes. This challenge, which lies at the core of class-incremental learning, severely limits the deployment of continual learning systems in real-world applications with streaming data. Existing approaches, including rehearsal-based methods and knowledge distillation techniques, have attempted to address this issue but often struggle to preserve decision boundaries and discriminative features under limited memory constraints. To overcome these limitations, we propose a support vector-guided framework for class-incremental learning. The framework integrates an enhanced feature extractor with a Support Vector Machine classifier, which generates boundary-critical support vectors to guide both replay and distillation. Building on this architecture, we design a joint feature retention strategy that combines boundary proximity with feature diversity, and a Support Vector Distillation Loss that enforces dual alignment in decision and semantic spaces. In addition, triple attention modules are incorporated into the feature extractor to enhance representation power. Extensive experiments on CIFAR-100 and Tiny-ImageNet demonstrate consistent improvements. With 5 tasks, our method achieves 71.68% and 58.61% average accuracy on CIFAR-100 and Tiny-ImageNet, respectively, outperforming strong baselines by 3.34% and 2.05%. These advantages hold across different task splits, highlighting the robustness and generalization of the proposed approach. Beyond benchmark evaluations, the framework also shows potential in few-shot and resource-constrained applications such as edge computing and mobile robotics.
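To make the boundary-guided replay concrete, here is a minimal sketch of how an SVM fitted on extracted features could drive exemplar selection by scoring candidates on boundary proximity and feature diversity. The greedy scoring rule, the `alpha` weighting, and the use of scikit-learn's `LinearSVC` are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.svm import LinearSVC

def select_boundary_diverse_exemplars(features, labels, budget, alpha=0.5):
    """Greedy exemplar selection combining boundary proximity (distance
    to the SVM decision boundary) with feature diversity (distance to
    exemplars already chosen). `alpha` is a hypothetical weighting."""
    svm = LinearSVC().fit(features, labels)
    margins = np.abs(svm.decision_function(features))
    if margins.ndim > 1:                # multi-class: nearest boundary
        margins = margins.min(axis=1)
    proximity = 1.0 / (1.0 + margins)   # high for samples near a boundary

    selected = []
    for _ in range(budget):
        if selected:
            # Diversity: distance to the nearest already-selected exemplar.
            dists = np.linalg.norm(
                features[:, None, :] - features[selected][None, :, :], axis=-1
            ).min(axis=1)
            diversity = dists / (dists.max() + 1e-8)
        else:
            diversity = np.ones(len(features))
        score = alpha * proximity + (1.0 - alpha) * diversity
        score[selected] = -np.inf       # never re-pick a chosen sample
        selected.append(int(score.argmax()))
    return selected
```

The same selected indices can then feed both the replay buffer and the distillation targets, which is what makes the support vectors do double duty in the framework described above.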
Funding: Supported by the National Natural Science Foundation of China (62371465) and the Taishan Scholar Project of Shandong Province (ts201511020).
Abstract: In wireless sensor networks, ensuring communication security via specific emitter identification (SEI) is crucial. However, existing SEI methods are limited to closed-set scenarios and lack the ability to detect unknown devices and perform class-incremental training. This study proposes a class-incremental open-set SEI approach. The open-set SEI model computes radio-frequency fingerprint (RFF) prototypes for known signals and employs a self-attention mechanism to enhance their discriminability. Detection thresholds are set through Gaussian fitting for each class. For class-incremental learning, the algorithm freezes the parameters of the previously trained model to initialize the new model. It designs two dedicated losses, an RFF extraction distribution difference loss and a prototype transformation distribution difference loss, which force the new model to retain old knowledge while learning new knowledge. The training loss enables learning of new-class RFFs. Experimental results demonstrate that the open-set SEI model achieves state-of-the-art performance and strong noise robustness. Moreover, the class-incremental learning algorithm effectively enables the model to retain old-device RFF knowledge, acquire new-device RFF knowledge, and detect unknown devices simultaneously.
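As an illustration of the per-class Gaussian thresholding described above, the sketch below fits a Gaussian to the distances between each class's embeddings and its prototype, then rejects test signals that fall outside the fitted threshold as unknown. The Euclidean distance metric and the sensitivity factor `k` are assumptions for demonstration, not values from the paper.

```python
import numpy as np

def fit_class_thresholds(embeddings, labels, k=3.0):
    """Per-class open-set thresholds via Gaussian fitting: model the
    distance from each training embedding to its class prototype as a
    Gaussian and reject beyond mean + k*std. `k` is a hypothetical knob."""
    prototypes, thresholds = {}, {}
    for c in np.unique(labels):
        feats = embeddings[labels == c]
        proto = feats.mean(axis=0)
        dists = np.linalg.norm(feats - proto, axis=1)
        prototypes[c] = proto
        thresholds[c] = dists.mean() + k * dists.std()
    return prototypes, thresholds

def classify_open_set(x, prototypes, thresholds):
    """Assign x to the nearest prototype, or 'unknown' if its distance
    exceeds that class's fitted threshold."""
    dists = {c: np.linalg.norm(x - p) for c, p in prototypes.items()}
    c = min(dists, key=dists.get)
    return c if dists[c] <= thresholds[c] else "unknown"
```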
Funding: Supported in part by the National Natural Science Foundation of China (U2013602, 61876181, 51521003), the National Key R&D Program of China (2020YFB13134), the Shenzhen Science and Technology Research and Development Foundation (JCYJ20190813171009236), the Beijing Nova Program of Science and Technology (Z191100001119043), and the Youth Innovation Promotion Association, Chinese Academy of Sciences.
Abstract: Continual learning (CL) studies the problem of accumulating knowledge over time from a stream of data. A crucial challenge is that neural networks suffer from performance degradation on previously seen data, known as catastrophic forgetting, because parameters are shared across tasks. In this work, we consider a more practical online class-incremental CL setting, where the model learns new samples in an online manner and may continuously encounter new classes. Moreover, prior knowledge is unavailable during training and evaluation. Existing works usually exploit samples along a single dimension, which ignores a great deal of valuable supervisory information. To better tackle this setting, we propose a novel replay-based CL method that leverages the multi-level representations produced during training for replay and strengthens supervision to consolidate previous knowledge. Specifically, besides the previous raw samples, we store the corresponding logits and features in memory. Furthermore, to imitate the predictions of the past model, we construct extra constraints by leveraging the multi-level information stored in memory. With the same number of samples for replay, our method can use more past knowledge to prevent interference. We conduct extensive evaluations on several popular CL datasets, and experiments show that our method consistently outperforms state-of-the-art methods across various sizes of episodic memory. We further provide a detailed analysis of these results and demonstrate that our method is more viable in practical scenarios.
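A minimal sketch of the multi-level replay idea, assuming a PyTorch model that can return intermediate features: the stored labels, logits, and features each contribute one supervision term. The loss weights and the `return_features=True` interface are hypothetical, and the paper's exact constraint forms may differ.

```python
import torch
import torch.nn.functional as F

def multilevel_replay_loss(model, batch, lam_logit=1.0, lam_feat=1.0):
    """Replay loss combining three supervision levels kept in the
    episodic memory: labels (cross-entropy), past logits (distillation),
    and past features (matching)."""
    x, y, old_logits, old_feats = batch        # sampled from memory
    logits, feats = model(x, return_features=True)

    ce = F.cross_entropy(logits, y)            # label-level supervision
    # Logit-level: match the stored past predictions (soft targets).
    distill = F.kl_div(
        F.log_softmax(logits, dim=1),
        F.softmax(old_logits, dim=1),
        reduction="batchmean",
    )
    feat_match = F.mse_loss(feats, old_feats)  # feature-level supervision
    return ce + lam_logit * distill + lam_feat * feat_match
```

Because the extra targets are recorded when a sample first arrives, the memory cost per exemplar grows only by one logit vector and one feature vector, which is how the method extracts more past knowledge from the same number of replayed samples.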
Funding: Supported by the National Key Research and Development Program of China under Grant No. 2023YFB2703700 and the National Natural Science Foundation of China under Grant No. 62472410.
Abstract: Federated learning (FL) enables collaborative model training among participants while guaranteeing the privacy of raw data. Mainstream FL methodologies overlook the dynamic nature of real-world data, particularly its tendency to grow in volume and diversify in classes over time. This oversight results in FL methods suffering from catastrophic forgetting, where the trained models inadvertently discard previously learned information upon assimilating new data. In response to this challenge, we propose a novel federated class-incremental learning (FCIL) method, named Federated Class-incremental Learning with New-Class Augmented Self-Distillation (FedCLASS). The core of FedCLASS is to enrich the class scores of historical models with the new-class scores predicted by current models and use the combined knowledge for self-distillation, enabling more sufficient and precise knowledge transfer from historical models to current models. Theoretical analyses show that FedCLASS rests on reliable foundations: the scores of old classes predicted by historical models are treated as conditional probabilities in the absence of new classes, and the scores of new classes predicted by current models as conditional probabilities of the class scores derived from historical models. Empirical experiments demonstrate the superiority of FedCLASS over four baseline algorithms in reducing the average forgetting rate and boosting global accuracy.
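The self-distillation step can be sketched as follows: the teacher target splices the historical model's old-class scores together with the current model's own new-class scores, then applies standard temperature-scaled distillation. The temperature value and the detaching of the new-class scores are illustrative choices, not details confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def class_augmented_self_distillation(cur_logits, old_logits, n_old, T=2.0):
    """New-class augmented self-distillation in the spirit of FedCLASS.
    cur_logits: (B, n_old + n_new) from the current model.
    old_logits: (B, n_old) from the frozen historical model."""
    # Teacher = historical old-class scores + current new-class scores,
    # so old knowledge is preserved without suppressing new classes.
    new_scores = cur_logits[:, n_old:].detach()
    teacher = torch.cat([old_logits, new_scores], dim=1)
    return F.kl_div(
        F.log_softmax(cur_logits / T, dim=1),
        F.softmax(teacher / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
```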
Funding: Supported by the National Natural Science Foundation of China (No. 62006243).
Abstract: Class-Incremental Few-Shot Named Entity Recognition (CIFNER) aims to identify entity categories for which only a few newly added (novel) class examples are available. However, existing class-incremental methods typically introduce new parameters to adapt to new classes and treat all information equally, resulting in poor generalization. Meanwhile, few-shot methods require samples for all observed classes, making them difficult to transfer to a class-incremental setting. Thus, a decoupled two-phase framework for the CIFNER task is proposed to address these issues. The whole task is converted into two separate tasks, Entity Span Detection (ESD) and Entity Class Discrimination (ECD), which leverage parameter-cloning and label-fusion to learn different levels of knowledge separately, such as class-generic knowledge and class-specific knowledge. Moreover, different variants, such as Conditional Random Field-based (CRF-based) and word-pair-based methods in the ESD module, and add-based, Natural Language Inference-based (NLI-based), and prompt-based methods in the ECD module, are investigated to demonstrate the generalizability of the decoupled framework. Extensive experiments on three Named Entity Recognition (NER) datasets reveal that our method achieves state-of-the-art performance in the CIFNER setting.
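A minimal sketch of the decoupled two-phase pipeline, with hypothetical `span_detector` and `class_discriminator` callables standing in for the ESD and ECD variants the paper investigates; the interfaces below are assumptions made only to show where the decoupling sits.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Span:
    start: int                    # token index, inclusive
    end: int                      # token index, exclusive
    label: Optional[str] = None

def decoupled_cifner(tokens: List[str],
                     span_detector: Callable[[List[str]], List[Span]],
                     class_discriminator: Callable[[List[str], Span], str]) -> List[Span]:
    # Phase 1 (ESD): class-generic span proposal; these parameters can
    # be cloned and frozen across increments, since what counts as an
    # entity span changes little when new classes arrive.
    spans = span_detector(tokens)
    # Phase 2 (ECD): class-specific labeling; only this phase needs to
    # adapt when a few examples of a novel class are added.
    for span in spans:
        span.label = class_discriminator(tokens, span)
    return spans
```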
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2021YFF1200600, and in part by the National Natural Science Foundation of China under Grant 62301523.
Abstract: Hand gesture recognition (HGR) plays a vital role in human-computer interaction. The integration of high-density surface electromyography (HD-sEMG) and deep neural networks (DNNs) has significantly improved the robustness and accuracy of HGR systems. These methods are typically effective for a fixed set of trained gestures. However, the need for new gesture classes over time poses a challenge. Introducing new classes to DNNs can lead to a substantial decrease in accuracy on previously learned tasks, a phenomenon known as "catastrophic forgetting," especially when the training data for earlier tasks is not retained and retrained on. This issue is exacerbated on embedded devices with limited storage, which struggle to hold large-scale HD-sEMG data. Class-incremental learning (CIL) is an effective way to reduce catastrophic forgetting. However, existing CIL methods for HGR rarely focus on reducing memory load. To address this, we propose a memory-friendly CIL method for HGR using HD-sEMG. Our approach includes a lightweight convolutional neural network, named SeparaNet, for feature representation learning, coupled with a nearest-mean-of-exemplars classifier for classification. We introduce a priority exemplar selection algorithm inspired by the herding effect to maintain a manageable set of exemplars during training. Furthermore, a task-equal-weight exemplar sampling strategy is proposed to effectively reduce memory load while preserving high recognition performance. Experimental results on two datasets demonstrate that our method reduces the number of retained exemplars to only a quarter of that required by other CIL methods, accounting for less than 5% of the total samples, while still achieving comparable average accuracy.
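Since the method pairs a nearest-mean-of-exemplars classifier with herding-inspired selection, the classic herding procedure (as popularized by iCaRL) is a reasonable reference point; the sketch below is that standard baseline, and whether SeparaNet's priority selection matches it exactly is an assumption.

```python
import numpy as np

def herding_selection(features, m):
    """Herding-style exemplar selection for a nearest-mean-of-exemplars
    classifier: greedily pick samples whose running mean best
    approximates the true class mean in feature space."""
    mu = features.mean(axis=0)                 # target class mean
    selected, running_sum = [], np.zeros_like(mu)
    for k in range(1, m + 1):
        # Choose the sample that moves the exemplar mean closest to mu.
        gaps = np.linalg.norm(mu - (running_sum + features) / k, axis=1)
        gaps[selected] = np.inf               # no repeats
        i = int(gaps.argmin())
        selected.append(i)
        running_sum += features[i]
    return selected
```

Because the exemplar mean tracks the class mean closely even for small `m`, a nearest-mean classifier built on such a set degrades gracefully as the per-class budget shrinks, which is consistent with the memory savings reported above.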