Journal Articles
2 articles found
ProbsCut: Enhancing Adversarial Robustness via Global Probability Constraints
Authors: Keji Han, Yao Ge, Yun Li. Frontiers of Computer Science, 2026, Issue 4, pp. 163-165 (3 pages)
1 Introduction. Although deep neural networks (DNNs) have made groundbreaking progress in various machine learning domains, their practical implementation is still significantly impeded by adversarial vulnerability [1]. Adversarial training, the primary approach to enhancing the adversarial robustness of DNNs, augments the training set with adversarial examples and applies an adversarial regularization loss to improve robustness [2]. However, finding models that achieve a reasonable trade-off between accuracy and robustness remains an unresolved challenge. In this paper, we propose the adoption of global probability constraints to stabilize model decision-making. Our contributions can be summarized as follows.
Keywords: training set; deep neural networks (DNNs); adversarial regularization loss; adversarial vulnerability; adversarial training; adversarial robustness; machine learning; global probability constraints
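The adversarial-training recipe the abstract describes (augment the training batch with adversarial examples, then minimize the loss over the combined batch) can be sketched as follows. This is a minimal illustration using a linear logistic classifier, one-step FGSM perturbations, and toy two-class data; the model, data, and epsilon are all illustrative assumptions, not the paper's actual setup or its global probability constraints.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM: shift x along the sign of the loss gradient w.r.t. x."""
    p = sigmoid(x @ w + b)
    # For logistic loss on a linear model, d(loss)/dx = (p - y) * w.
    grad_x = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.2, lr=0.5, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=x.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Augment the batch with adversarial examples crafted
        # against the current parameters, then take a gradient step
        # on the combined (clean + adversarial) batch.
        x_adv = fgsm_perturb(x, y, w, b, eps)
        x_all = np.vstack([x, x_adv])
        y_all = np.concatenate([y, y])
        p = sigmoid(x_all @ w + b)
        grad_w = (p - y_all) @ x_all / len(y_all)
        grad_b = (p - y_all).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy, well-separated two-class data (illustrative only).
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
               rng.normal(loc=+2.0, size=(50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b = adversarial_train(x, y)
clean_acc = ((sigmoid(x @ w + b) > 0.5) == y).mean()
x_adv = fgsm_perturb(x, y, w, b, eps=0.2)
robust_acc = ((sigmoid(x_adv @ w + b) > 0.5) == y).mean()
print(f"clean acc: {clean_acc:.2f}, robust acc: {robust_acc:.2f}")
```

On separable toy data like this, both clean and robust accuracy stay high; the accuracy/robustness tension the abstract mentions shows up on harder, higher-dimensional data.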
A Cascade Model-Aware Generative Adversarial Example Detection Method
Authors: Keji Han, Yun Li, Bin Xia. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2021, Issue 6, pp. 800-812 (13 pages)
Deep Neural Networks (DNNs) are demonstrated to be vulnerable to adversarial examples, which are elaborately crafted to fool learning models. Since the accuracy and robustness of DNNs are at odds under the adversarial training method, adversarial example detection algorithms instead check whether a specific example is adversarial, a promising route to addressing the adversarial-example issue. However, among existing methods, model-aware detection methods do not generalize well, while the detection accuracies of generative-based methods are lower than those of model-aware methods. In this paper, we propose a cascade model-aware generative adversarial example detection method, namely CMAG. CMAG consists of two first-order reconstructors and a second-order reconstructor, which can illustrate what the model sees to the human by reconstructing the logit and the feature maps of the last convolution layer. Experimental results demonstrate that our method is effective and more interpretable than some state-of-the-art methods.
Keywords: information security; deep neural network (DNN); adversarial example detection
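The reconstruction-based detection idea behind methods like CMAG can be sketched with a much simpler stand-in: fit a low-dimensional reconstructor on clean data (here plain PCA, not the paper's learned first- and second-order reconstructors over logits and feature maps), then flag inputs whose reconstruction error exceeds a quantile of the clean-data errors. The data, the 95% threshold, and the PCA stand-in are all illustrative assumptions.

```python
import numpy as np

def fit_pca(x_clean, k):
    """Fit a k-dimensional PCA reconstructor on clean data."""
    mu = x_clean.mean(axis=0)
    # Principal directions from the SVD of the centered data.
    _, _, vt = np.linalg.svd(x_clean - mu, full_matrices=False)
    return mu, vt[:k]

def recon_error(x, mu, comps):
    """Per-example distance between x and its PCA reconstruction."""
    xc = x - mu
    x_hat = xc @ comps.T @ comps  # project onto the subspace and back
    return np.linalg.norm(xc - x_hat, axis=1)

rng = np.random.default_rng(0)
# Clean data lies near a 2-D subspace of a 10-D space (toy "data manifold").
basis = rng.normal(size=(2, 10))
clean = rng.normal(size=(200, 2)) @ basis + 0.05 * rng.normal(size=(200, 10))
# Stand-in "adversarial" points: clean points pushed off the manifold.
adv = clean[:50] + rng.normal(size=(50, 10))

mu, comps = fit_pca(clean, k=2)
# Threshold at the 95th percentile of clean errors (about 5% false positives).
thresh = np.quantile(recon_error(clean, mu, comps), 0.95)
detected = (recon_error(adv, mu, comps) > thresh).mean()
print(f"detection rate: {detected:.2f}")
```

A detector like this flags off-manifold inputs well, but as the abstract notes, purely generative detectors tend to trail model-aware ones in accuracy, which is the gap CMAG's cascade design targets.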