Journal Articles
2 articles found
1. DLP: towards active defense against backdoor attacks with decoupled learning process
Authors: Zonghao Ying, Bin Wu. Cybersecurity (EI, CSCD), 2024, No. 1, pp. 122-134 (13 pages)
Deep learning models are well known to be susceptible to backdoor attacks, where the attacker only needs to provide a tampered dataset into which triggers have been injected. Models trained on such a dataset passively implant the backdoor, and triggers on the input can mislead the models during testing. Our study shows that the model exhibits different learning behaviors on the clean and poisoned subsets during training. Based on this observation, we propose a general training pipeline to actively defend against backdoor attacks. Benign models can be trained from the unreliable dataset by decoupling the learning process into three stages: supervised learning, active unlearning, and active semi-supervised fine-tuning. The effectiveness of our approach has been demonstrated in numerous experiments across various backdoor attacks and datasets.
Keywords: Deep learning; Backdoor attack; Active defense
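The abstract names three decoupled stages but does not give the concrete procedure. Below is a minimal, purely illustrative sketch of the kind of loss-based sample split such a pipeline might start from; `split_by_loss`, the `suspect_ratio` parameter, and the heuristic itself are assumptions, not the paper's method.

```python
# Illustrative sketch only: the abstract states that clean and poisoned
# subsets show different learning behaviors during training. A common
# (hedged) reading is that poisoned, trigger-carrying samples are fit
# faster, so unusually low early-training loss can flag a sample as
# suspicious. The paper's actual criterion is not given in the abstract.

def split_by_loss(losses, suspect_ratio=0.5):
    """Partition sample indices into (trusted, suspicious) sets.

    losses: per-sample loss values recorded early in training.
    suspect_ratio: hypothetical fraction to flag as suspicious.
    """
    ranked = sorted(range(len(losses)), key=lambda i: losses[i])
    cut = int(len(losses) * suspect_ratio)
    suspicious = set(ranked[:cut])  # fit "too easily" -> possibly poisoned
    trusted = set(ranked[cut:])     # higher loss -> more likely clean
    return trusted, suspicious
```

Under this sketch, the three stages would then consume the split: supervised learning on the full dataset, active unlearning on the suspicious subset, and semi-supervised fine-tuning that treats trusted samples as labeled and suspicious ones as unlabeled.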
2. NBA: defensive distillation for backdoor removal via neural behavior alignment
Authors: Zonghao Ying, Bin Wu. Cybersecurity (EI, CSCD), 2023, No. 4, pp. 76-87 (12 pages)
Recently, deep neural networks have been shown to be vulnerable to backdoor attacks. A backdoor is inserted into a neural network via this attack paradigm, compromising the integrity of the network. As soon as an attacker presents a trigger during the testing phase, the backdoor in the model is activated, allowing the network to make specific wrong predictions. Defending against backdoor attacks is extremely important because they are stealthy and dangerous. In this paper, we propose a novel defense mechanism, Neural Behavior Alignment (NBA), for backdoor removal. NBA optimizes the distillation process in terms of knowledge form and distillation samples to improve defense performance according to the characteristics of backdoor defense. NBA builds high-level representations of neural behavior within networks to facilitate the transfer of knowledge. Additionally, NBA crafts pseudo samples to induce student models to exhibit backdoor neural behavior. By aligning the backdoor neural behavior of the student network with the benign neural behavior of the teacher network, NBA enables the proactive removal of backdoors. Extensive experiments show that NBA can effectively defend against six different backdoor attacks and outperforms five state-of-the-art defenses.
Keywords: Deep neural network; Backdoor removal; Knowledge distillation
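The abstract describes aligning the student's backdoor behavior with the teacher's benign behavior, but gives no formulas. The sketch below is a hypothetical stand-in for that alignment objective: the mean-squared behavior distance, the callable model interface, and the function names are all assumptions rather than the paper's actual formulation.

```python
# Hedged sketch of a distillation-style alignment objective: pull the
# student's "neural behavior" toward the benign teacher's on both clean
# inputs and crafted pseudo samples meant to elicit the student's
# backdoor behavior. How NBA actually represents behavior and crafts
# pseudo samples is not specified in the abstract.

def behavior_distance(student_out, teacher_out):
    """Mean squared distance between two behavior vectors."""
    n = len(student_out)
    return sum((s - t) ** 2 for s, t in zip(student_out, teacher_out)) / n

def alignment_loss(student, teacher, clean_batch, pseudo_batch):
    """Average behavior mismatch over clean and backdoor-inducing inputs.

    student, teacher: callables mapping an input to a behavior vector.
    """
    batch = clean_batch + pseudo_batch
    return sum(behavior_distance(student(x), teacher(x)) for x in batch) / len(batch)
```

Minimizing such a loss over both batches is the sense in which distillation would transfer benign behavior while overwriting the triggered responses.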