Journal Articles
2 articles found
1. Exploratory Research on Defense against Natural Adversarial Examples in Image Classification
Authors: Yaoxuan Zhu, Hua Yang, Bin Zhu. Computers, Materials & Continua, 2025, No. 2, pp. 1947-1968 (22 pages)
The emergence of adversarial examples has revealed the inadequacies in the robustness of image classification models based on Convolutional Neural Networks (CNNs). Particularly in recent years, the discovery of natural adversarial examples has posed significant challenges, as traditional defense methods against adversarial attacks have proven to be largely ineffective against these natural adversarial examples. This paper explores defenses against natural adversarial examples from three perspectives: adversarial examples, model architecture, and dataset. First, it employs Class Activation Mapping (CAM) to visualize how models classify natural adversarial examples, identifying several typical attack patterns. Next, various common CNN models are analyzed to evaluate their susceptibility to these attacks, revealing that different architectures exhibit varying defensive capabilities. The study finds that as the depth of a network increases, its defenses against natural adversarial examples strengthen. Finally, the impact of dataset class distribution on the defense capability of models is examined, focusing on two aspects: the number of classes in the training set and the number of predicted classes, and how these factors influence the model's ability to defend against natural adversarial examples. Results indicate that reducing the number of training classes enhances the model's defense against natural adversarial examples. Additionally, under a fixed number of training classes, some CNN models show an optimal range of predicted classes for achieving the best defense performance against these adversarial examples.
Keywords: image classification; convolutional neural network; natural adversarial example; data set; defense against adversarial examples
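The CAM-based analysis described in this abstract can be illustrated with a short visualization sketch. The snippet below is a minimal Grad-CAM-style example, assuming a torchvision ResNet-50, its last residual block as the target layer, and a hypothetical input file natural_adversarial.jpg; the paper's exact CAM variant, model choices, and data are not specified here.

```python
# Hedged sketch: Grad-CAM-style class activation map for inspecting where a CNN
# attends on a (natural) adversarial example. Model, layer, and image are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
target_layer = model.layer4[-1]  # assumed target: last residual block

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical natural adversarial image (e.g., an ImageNet-A-style sample).
img = preprocess(Image.open("natural_adversarial.jpg").convert("RGB")).unsqueeze(0)
logits = model(img)
logits[0, logits[0].argmax()].backward()  # gradient of the predicted-class score

# Channel importance = global-average-pooled gradients; weight and sum the activations.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=img.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # [0, 1] heat map for overlay
```

Overlaying the resulting heat map on the input image shows which regions drive the (mis)classification, which is the kind of evidence the abstract uses to identify typical attack patterns.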
2. A divide-and-conquer reconstruction method for defending against adversarial example attacks
Authors: Xiyao Liu, Jiaxin Hu, Qingying Yang, Ming Jiang, Jianbiao He, Hui Fang. Visual Intelligence, 2024, No. 1, pp. 360-376 (17 pages)
In recent years, defending against adversarial examples has gained significant importance, leading to a growing body of research in this area. Among these studies, pre-processing defense approaches have emerged as a prominent research direction. However, existing adversarial example pre-processing techniques often employ a single pre-processing model to counter different types of adversarial attacks. Such a strategy may miss the nuances between different types of attacks, limiting the comprehensiveness and effectiveness of the defense strategy. To address this issue, we propose a divide-and-conquer reconstruction pre-processing algorithm via multi-classification and multi-network training to more effectively defend against different types of mainstream adversarial attacks. The premise and challenge of the divide-and-conquer reconstruction defense is to distinguish between multiple types of adversarial attacks. Our method designs an adversarial attack classification module that exploits the high-frequency information differences between different types of adversarial examples for their multi-classification, which can hardly be achieved by existing adversarial example detection methods. In addition, we construct a divide-and-conquer reconstruction module that utilizes different trained image reconstruction models for each type of adversarial attack, ensuring optimal defense effectiveness. Extensive experiments show that our proposed divide-and-conquer defense algorithm exhibits superior performance compared to state-of-the-art pre-processing methods.
Keywords: adversarial example defense; divide-and-conquer strategy; adversarial attack multi-classification; reconstruction network
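The pipeline described in this abstract (multi-classify the attack type from high-frequency information, then route the input to an attack-specific reconstruction network) can be sketched roughly as follows. This is a minimal illustration assuming an FFT high-pass residual as the frequency feature, three attack families, and externally supplied reconstruction networks; the module names and design details are assumptions, not the authors' implementation.

```python
# Hedged sketch of a divide-and-conquer pre-processing defense: classify the attack
# type from high-frequency residuals, then reconstruct with a per-attack specialist.
import torch
import torch.nn as nn

def high_frequency_residual(x: torch.Tensor, radius: int = 8) -> torch.Tensor:
    # Zero out a low-frequency disc around the spectrum centre and invert the FFT,
    # keeping the high-frequency residual assumed to separate attack types.
    freq = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    _, _, h, w = freq.shape
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    mask = (((yy - h // 2) ** 2 + (xx - w // 2) ** 2) > radius ** 2).float().to(freq.device)
    return torch.fft.ifft2(torch.fft.ifftshift(freq * mask, dim=(-2, -1))).real

class AttackClassifier(nn.Module):
    # Multi-class classifier over high-frequency residuals (three attack families assumed).
    def __init__(self, num_attack_types: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_attack_types),
        )

    def forward(self, x):
        return self.net(high_frequency_residual(x))

class DivideAndConquerDefense(nn.Module):
    # Route each input to the reconstruction network trained for its predicted attack type.
    def __init__(self, classifier: nn.Module, reconstructors: nn.ModuleList):
        super().__init__()
        self.classifier = classifier
        self.reconstructors = reconstructors

    def forward(self, x):
        attack_type = self.classifier(x).argmax(dim=1)
        cleaned = [self.reconstructors[int(t)](xi.unsqueeze(0)).squeeze(0)
                   for xi, t in zip(x, attack_type)]
        return torch.stack(cleaned)
```

In this sketch, reconstructors would hold one image-to-image reconstruction network per attack family (for example, separately trained denoisers), matching the abstract's idea of a dedicated reconstruction model for each attack type.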