Funding: supported by the Gansu Provincial Natural Science Foundation (grant No. 25JRRA074), the Gansu Provincial Key R&D Science and Technology Program (grant No. 24YFGA060), and the National Natural Science Foundation of China (grant No. 62161019).
Abstract: Modern intelligent systems, such as autonomous vehicles and face recognition, must continuously adapt to new scenarios while preserving their ability to handle previously encountered situations. However, when neural networks learn new classes sequentially, they suffer from catastrophic forgetting: the tendency to lose knowledge of earlier classes. This challenge, which lies at the core of class-incremental learning, severely limits the deployment of continual learning systems in real-world applications with streaming data. Existing approaches, including rehearsal-based methods and knowledge distillation techniques, have attempted to address this issue but often struggle to preserve decision boundaries and discriminative features under limited memory constraints. To overcome these limitations, we propose a support vector-guided framework for class-incremental learning. The framework integrates an enhanced feature extractor with a Support Vector Machine (SVM) classifier, which identifies boundary-critical support vectors to guide both replay and distillation. Building on this architecture, we design a joint feature retention strategy that combines boundary proximity with feature diversity, and a Support Vector Distillation Loss that enforces dual alignment in the decision and semantic spaces. In addition, triple attention modules are incorporated into the feature extractor to strengthen its representations. Extensive experiments on CIFAR-100 and Tiny-ImageNet demonstrate consistent improvements: with 5 tasks, our method achieves 71.68% and 58.61% average accuracy on the two benchmarks, outperforming strong baselines by 3.34% and 2.05%, respectively. These gains hold across different task splits, highlighting the robustness and generalization of the proposed approach. Beyond benchmark evaluations, the framework also shows potential in few-shot and resource-constrained applications such as edge computing and mobile robotics.
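As a rough illustration of the idea behind support vector-guided replay (a minimal sketch, not the authors' implementation: the function name `select_exemplars`, the per-class budget, the use of scikit-learn's `SVC`, and the random top-up for diversity are all assumptions here), boundary-critical samples can be identified as the SVM's support vectors and combined with randomly drawn non-boundary samples:

```python
import numpy as np
from sklearn.svm import SVC


def select_exemplars(features, labels, budget_per_class=5):
    """Pick replay exemplars per class, favoring SVM support vectors.

    Support vectors lie closest to the decision boundary; we keep up to
    `budget_per_class` of them per class, then top up with randomly chosen
    non-support samples so the memory also covers the class interior.
    """
    svm = SVC(kernel="linear").fit(features, labels)
    sv_mask = np.zeros(len(labels), dtype=bool)
    sv_mask[svm.support_] = True  # indices of the fitted support vectors

    rng = np.random.default_rng(0)
    keep = []
    for cls in np.unique(labels):
        cls_idx = np.flatnonzero(labels == cls)
        sv_idx = cls_idx[sv_mask[cls_idx]][:budget_per_class]
        rest = cls_idx[~sv_mask[cls_idx]]
        n_fill = budget_per_class - len(sv_idx)
        if n_fill > 0 and len(rest) > 0:
            fill = rng.choice(rest, size=min(n_fill, len(rest)), replace=False)
        else:
            fill = np.array([], dtype=int)
        keep.extend(sv_idx.tolist() + fill.tolist())
    return np.array(keep)
```

In a real class-incremental pipeline the `features` would come from the (frozen or current) feature extractor, and the selected exemplars would feed both the replay buffer and the distillation terms.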
Funding: supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (grant No. XDB0800303), the National Key R&D Program of China (grant No. 2022YFA1603100), and the National Natural Science Foundation of China (NSFC, grant No. 12203086).
Abstract: We propose that the core mass function (CMF) can be driven by filament fragmentation. To model a star-forming system of filaments and fibers, we develop a fractal and turbulent tree with a fractal dimension of 2 and a Larson's law exponent (β) of 0.5. The fragmentation driven by convergent flows along the splines of the fractal tree yields a Kroupa-IMF-like CMF that can be divided into three power-law segments with exponents α = -0.5, -1.5, and -2, respectively. The turnover masses of the derived CMF are approximately four times those of the Kroupa IMF, corresponding to a star formation efficiency of 0.25. Adopting β = 1/3, which leads to fractional Brownian motion along the filament, may explain a steeper CMF at the high-mass end, with α = -3.33, close to that of the Salpeter IMF. We suggest that the fibers of the tree are basic building blocks of star formation, with similar properties across different clouds, establishing a common density threshold for star formation and leading to a universal CMF.
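For concreteness, the three-segment CMF described above can be written as a continuous piecewise power law. The break masses used below (0.32 and 2.0 M☉) are an assumption, obtained by scaling the Kroupa-IMF turnover masses (0.08 and 0.5 M☉) by the quoted factor of four; the normalization is arbitrary:

```python
def cmf(m, breaks=(0.32, 2.0), alphas=(-0.5, -1.5, -2.0)):
    """Three-segment power-law CMF, dN/dM proportional to m**alpha per segment.

    Segments are matched at the break masses so the function is continuous.
    The breaks assume the Kroupa turnovers (0.08, 0.5 Msun) scaled by 4,
    i.e. a core-to-star efficiency of 0.25. Normalization is arbitrary.
    """
    b1, b2 = breaks
    a1, a2, a3 = alphas
    if m < b1:
        return m ** a1
    # continuity constants chosen so adjacent segments agree at b1 and b2
    c2 = b1 ** (a1 - a2)
    if m < b2:
        return c2 * m ** a2
    c3 = c2 * b2 ** (a2 - a3)
    return c3 * m ** a3
```

Under the β = 1/3 variant mentioned in the abstract, the same structure would apply with the high-mass slope replaced by -3.33.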