Journal Articles
51 articles found
Exploratory Research on Defense against Natural Adversarial Examples in Image Classification
1
Authors: Yaoxuan Zhu, Hua Yang, Bin Zhu. Computers, Materials & Continua, 2025, Issue 2, pp. 1947-1968 (22 pages)
The emergence of adversarial examples has revealed the inadequacies in the robustness of image classification models based on Convolutional Neural Networks (CNNs). Particularly in recent years, the discovery of natural adversarial examples has posed significant challenges, as traditional defense methods against adversarial attacks have proven to be largely ineffective against these natural adversarial examples. This paper explores defenses against natural adversarial examples from three perspectives: adversarial examples, model architecture, and dataset. First, it employs Class Activation Mapping (CAM) to visualize how models classify natural adversarial examples, identifying several typical attack patterns. Next, various common CNN models are analyzed to evaluate their susceptibility to these attacks, revealing that different architectures exhibit varying defensive capabilities. The study finds that as the depth of a network increases, its defenses against natural adversarial examples strengthen. Finally, the impact of dataset class distribution on the defense capability of models is examined, focusing on two aspects: the number of classes in the training set and the number of predicted classes, and how these factors influence the model's ability to defend against such examples. Results indicate that reducing the number of training classes enhances the model's defense against natural adversarial examples. Additionally, under a fixed number of training classes, some CNN models show an optimal range of predicted classes for achieving the best defense performance against these adversarial examples.
Keywords: image classification, convolutional neural network, natural adversarial example, dataset, defense against adversarial examples
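The abstract's first step relies on Class Activation Mapping to see which image regions drive a CNN's classification of a natural adversarial example. Below is a minimal CAM sketch, assuming a torchvision ResNet-50 and its `layer4`/`fc` naming; it illustrates the technique only and is not the paper's code.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

features = {}
def hook(_module, _inputs, output):
    features["maps"] = output          # last conv feature maps, shape (1, 2048, 7, 7)

model.layer4.register_forward_hook(hook)

def cam(image):
    """image: (1, 3, 224, 224) normalized tensor; returns (class id, heatmap)."""
    with torch.no_grad():
        cls = model(image).argmax(dim=1).item()
        fc_w = model.fc.weight[cls]                   # (2048,) classifier weights
        fmap = features["maps"][0]                    # (2048, 7, 7)
        heat = torch.einsum("c,chw->hw", fc_w, fmap)  # CAM = weighted channel sum
        heat = F.relu(heat)
        heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
        heat = F.interpolate(heat[None, None], size=image.shape[-2:],
                             mode="bilinear", align_corners=False)[0, 0]
    return cls, heat
```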
A Survey of Adversarial Examples in Computer Vision:Attack,Defense,and Beyond
2
Authors: XU Keyizhi, LU Yajuan, WANG Zhongyuan, LIANG Chao. Wuhan University Journal of Natural Sciences, 2025, Issue 1, pp. 1-20 (20 pages)
Recent years have witnessed the ever-increasing performance of Deep Neural Networks (DNNs) in computer vision tasks. However, researchers have identified a potential vulnerability: carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modifications to the input data. In this survey, we focus on (1) adversarial attack algorithms to generate adversarial examples, (2) adversarial defense techniques to secure DNNs against adversarial examples, and (3) important problems in the realm of adversarial examples beyond attack and defense, including theoretical explanations, trade-off issues, and benign attacks. Additionally, we draw a brief comparison between recently published surveys on adversarial examples and identify future directions for the research of adversarial examples, such as the generalization of methods and the understanding of transferability, that might be solutions to the open problems in this field.
Keywords: computer vision, adversarial examples, adversarial attack, adversarial defense
An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection
3
Authors: Younghoon Ban, Myeonghyun Kim, Haehyun Cho. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 3535-3563 (29 pages)
Antivirus vendors and the research community employ Machine Learning (ML) or Deep Learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on ambiguities present in the PE format, as previously employed in evasion attack research. By directly applying the perturbation techniques to PE binaries, our attack method eliminates the need to grapple with the problem-feature space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. Also, AEs generated by the attack method retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and can reach a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs demonstrated the capability to bypass detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Through several experiments on the approach, we verified that our defense model can effectively counter AEs generated by the perturbation techniques. In conclusion, our defense model alleviates the limitation of the most promising defense method, adversarial training, which is only effective against the AEs that are included in the training classifiers.
Keywords: malware classification, machine learning, adversarial examples, evasion attack, cybersecurity
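Of the seven perturbations the abstract names, Overlay Append is the simplest to illustrate: bytes appended past the mapped sections of a PE file (the overlay) are ignored by the Windows loader, so executability is preserved while byte-level features change. The sketch below is an illustration under that assumption, not the paper's implementation; the file paths and filler payload are hypothetical.

```python
import os
import random

def overlay_append(src_path, dst_path, n_bytes=4096, seed=0):
    """Copy the PE at src_path to dst_path and append n_bytes of filler (overlay)."""
    rng = random.Random(seed)
    payload = bytes(rng.randrange(256) for _ in range(n_bytes))
    with open(src_path, "rb") as f:
        data = f.read()
    with open(dst_path, "wb") as f:
        f.write(data + payload)   # appended after all sections -> ignored by the loader
    return os.path.getsize(dst_path)

# hypothetical usage:
# overlay_append("sample.exe", "sample_adv.exe", n_bytes=8192)
```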
Omni-Detection of Adversarial Examples with Diverse Magnitudes
4
Authors: Ke Jianpeng, Wang Wenqi, Yang Kang, Wang Lina, Ye Aoshuang, Wang Run. China Communications (SCIE, CSCD), 2024, Issue 12, pp. 139-151 (13 pages)
Deep neural networks (DNNs) are potentially susceptible to adversarial examples that are maliciously manipulated by adding imperceptible perturbations to legitimate inputs, leading to abnormal behavior of models. Plenty of methods have been proposed to defend against adversarial examples. However, the majority of them suffer from the following weaknesses: 1) lack of generalization and practicality; 2) failure to deal with unknown attacks. To address these issues, we design the adversarial nature eraser (ANE) and the feature map detector (FMD) to detect fragile and high-intensity adversarial examples, respectively. Then, we apply an ensemble learning method to compose our detector, dealing with adversarial examples of diverse magnitudes in a divide-and-conquer manner. Experimental results show that our approach achieves 99.30% and 99.62% Area under Curve (AUC) scores on average when tested with various Lp norm-based attacks on CIFAR-10 and ImageNet, respectively. Furthermore, our approach also shows its potential in detecting unknown attacks.
Keywords: adversarial example detection, ensemble learning, feature maps, fragile and high-intensity adversarial examples
A Survey on Adversarial Examples in Deep Learning (Cited by 3)
5
Authors: Kai Chen, Haoqi Zhu, Leiming Yan, Jinwei Wang. Journal on Big Data, 2020, Issue 2, pp. 71-84 (14 pages)
Adversarial examples are a hot topic in the field of deep learning security. Their features, generation methods, and attack and defense methods are the focus of current research. This article explains the key technologies and theories of adversarial examples, covering the concept of adversarial examples, how they arise, and the attack methods that produce them, and it lists their possible causes. It also analyzes several typical generation methods in detail: Limited-memory BFGS (L-BFGS), the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), the Iterative Least-likely Class Method (LLC), etc. Furthermore, from the perspective of the attack methods and the causes of adversarial examples, the main defense techniques are listed: preprocessing, regularization and adversarial training, distillation, etc., and the application scenarios and deficiencies of the different defense measures are pointed out. The article further discusses the applications of adversarial examples, which are currently used mainly in adversarial evaluation and adversarial training. Finally, the overall research direction is surveyed with a view to completely solving the adversarial attack problem. Many practical and theoretical problems remain to be solved: identifying the characteristics of adversarial examples, giving a mathematical description of their practical application prospects, and exploring universal generation methods and the generation mechanism of adversarial examples are the main future research directions.
Keywords: adversarial examples, generation methods, defense methods
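Among the generation methods the survey lists, FGSM is the canonical single-step attack: x_adv = x + ε · sign(∇_x L(x, y)), clipped to the valid pixel range. A minimal PyTorch sketch, assuming a standard image classifier, is shown below.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """x: (N, C, H, W) images in [0, 1]; y: integer labels; returns adversarial x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()      # single gradient-sign step
    return x_adv.clamp(0, 1).detach()    # keep pixels in the valid range
```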
A new method of constructing adversarial examples for quantum variational circuits
6
Authors: 颜金歌, 闫丽丽, 张仕斌. Chinese Physics B (SCIE, EI, CAS, CSCD), 2023, Issue 7, pp. 268-272 (5 pages)
A quantum variational circuit is a quantum machine learning model similar to a neural network. A crafted adversarial example can lead to incorrect results from the model, and training the model with adversarial examples greatly improves its robustness. The existing method obtains a gradient by automatic differentiation or finite differences and uses it to construct adversarial examples. This paper proposes an innovative method for constructing adversarial examples of quantum variational circuits, in which the gradient is obtained by measuring the expected value of a quantum bit in each circuit of a series of quantum circuits. The method can be used to construct adversarial examples for a quantum variational circuit classifier. The implementation results prove the effectiveness of the proposed method: compared with the existing method, it requires fewer resources and is more efficient.
Keywords: quantum variational circuit, adversarial examples, quantum machine learning, quantum circuit
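The measurement-based gradient described in the abstract is reminiscent of the standard parameter-shift rule for gates generated by a Pauli operator; the formula below is shown only as a familiar example of obtaining gradients from expectation-value measurements and may differ from the paper's exact estimator.

```latex
% Standard parameter-shift rule for a gate generated by a Pauli operator
% (illustration only; possibly not the paper's exact estimator):
\[
\frac{\partial \langle O \rangle}{\partial \theta_j}
  = \frac{1}{2}\Bigl(\langle O \rangle\bigl(\theta_j + \tfrac{\pi}{2}\bigr)
                   - \langle O \rangle\bigl(\theta_j - \tfrac{\pi}{2}\bigr)\Bigr),
\]
% i.e., the gradient follows from expectation values measured on two shifted circuits.
```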
Defending Adversarial Examples by a Clipped Residual U-Net Model
7
Authors: Kazim Ali, Adnan N. Qureshi, Muhammad Shahid Bhatti, Abid Sohail, Mohammad Hijji. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 2, pp. 2237-2256 (20 pages)
Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are at risk in the presence of adversarial attacks. These attacks can quickly spoil deep learning models, e.g., different convolutional neural networks (CNNs) used in various computer vision tasks from image classification to object detection. Adversarial examples are carefully designed by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet Defense, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising Sparse Autoencoder Defense, and Conditional Generative Adversarial Network Defense. We have experimentally proved that our approach is better than previous defensive techniques. The proposed CRU-Net model maps adversarial image examples to clean images by eliminating the adversarial perturbation. The defensive approach is based on residual and U-Net learning. Many experiments on the MNIST and CIFAR10 datasets show that the proposed CRU-Net defense model prevents adversarial example attacks in white-box and black-box settings and improves the robustness of deep learning algorithms, especially in the computer vision field. We also report the similarity (SSIM and PSNR) between the original and the restored clean image examples produced by the proposed CRU-Net defense model.
Keywords: adversarial examples, adversarial attacks, defense method, residual learning, U-Net, cGAN, CRU-Net model
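The defense described above rests on residual learning: the network predicts the adversarial perturbation and subtracts it from the input to recover a clean image. The toy PyTorch module below sketches that idea only; it is not the paper's CRU-Net architecture, and the layer widths are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Toy residual denoiser: predict the perturbation, subtract it from the input."""
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x_adv):
        noise = self.body(x_adv)             # estimate of the adversarial perturbation
        return (x_adv - noise).clamp(0, 1)   # restored (clean) image estimate

# training target (sketch): minimize ||denoiser(x_adv) - x_clean||_1
```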
An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments
8
Authors: Weizheng Wang, Xiangqi Wang, Xianmin Pan, Xingxing Gong, Jian Liang, Pradip Kumar Sharma, Osama Alfarraj, Wael Said. Computers, Materials & Continua (SCIE, EI), 2023, Issue 9, pp. 3859-3876 (18 pages)
Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations. The remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges in AE defense through image denoising, this paper proposes a novel AE detection technique that combines multiple traditional image-denoising algorithms with Convolutional Neural Network (CNN) structures. The detector integrates the classification results of different models as its input and computes its final output with a machine-learning voting algorithm. By analyzing the discrepancy between the predictions made by the model on original examples and on denoised examples, AEs are detected effectively. The technique reduces computational overhead without modifying the model structure or parameters, effectively avoiding the error amplification caused by denoising. The proposed approach demonstrates excellent detection performance against mainstream AE attacks. Experimental results show outstanding detection performance against well-known AE attacks, including the Fast Gradient Sign Method (FGSM), the Basic Iteration Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate in FGSM detection while reducing the accuracy on clean examples by only 4%.
Keywords: deep neural networks, adversarial example, image denoising, adversarial example detection, machine learning, adversarial attack
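The core of the detection idea above is a prediction discrepancy: an input is classified directly and after denoising, and disagreement between the resulting votes flags an adversarial example. The sketch below assumes Gaussian-blur denoisers and a simple disagreement rule; the paper's actual denoisers and voting algorithm are not reproduced.

```python
import torch
from torchvision.transforms import functional as TF

def detect(model, x):
    """x: (1, C, H, W) image in [0, 1]; returns (is_adversarial, predictions)."""
    denoised = [TF.gaussian_blur(x, kernel_size=3),
                TF.gaussian_blur(x, kernel_size=5)]
    with torch.no_grad():
        preds = [model(v).argmax(dim=1).item() for v in [x] + denoised]
    # disagreement between raw and denoised predictions signals an adversarial example
    return len(set(preds)) > 1, preds
```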
Adversarial Examples Protect Your Privacy on Speech Enhancement System
9
Authors: Mingyu Dong, Diqun Yan, Rangding Wang. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 7, pp. 1-12 (12 pages)
Speech is easily leaked imperceptibly: when people use their phones, the personal voice assistant is constantly listening and waiting to be activated, and private content in speech may be maliciously extracted through automatic speech recognition (ASR) technology by some applications on the device. To guarantee that the recognized speech content is accurate, speech enhancement technology is used to denoise the input speech. Speech enhancement has developed rapidly along with deep neural networks (DNNs), but adversarial examples can cause DNNs to fail, and this vulnerability can be used to protect the privacy of speech. In this work, we propose an adversarial method to degrade speech enhancement systems, which can prevent the malicious extraction of private information from speech. Experimental results show that, after enhancement, most of the content of the target speech in the generated adversarial examples is removed or replaced with target speech content. The word error rate (WER) between the recognition results of the enhanced original example and the enhanced adversarial example can reach 89.0%, while the WER of the targeted attack between the enhanced adversarial example and the target example is as low as 33.75%. The adversarial perturbation in the adversarial example brings much more change than the perturbation itself: the rate of difference between the two enhanced examples and the adversarial perturbation can exceed 1.4430. Meanwhile, the transferability between different speech enhancement models is also investigated; the low transferability of the method ensures that the content in the adversarial example is not damaged and that the useful information can still be extracted by a friendly ASR. This work can prevent the malicious extraction of speech content.
Keywords: adversarial example, speech enhancement, privacy protection, deep neural network
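The abstract reports its results as word error rate (WER) between recognition outputs. WER is the word-level edit distance divided by the reference length; a minimal implementation is sketched below (the usage example is hypothetical).

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)

# hypothetical usage: wer("turn off the lights", "turn on the light") -> 0.5
```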
Enhancing Adversarial Example Transferability via Regularized Constrained Feature Layer
10
Authors: Xiaoyin Yi, Long Chen, Jiacheng Huang, Ning Yu, Qian Huang. Computers, Materials & Continua, 2025, Issue 4, pp. 157-175 (19 pages)
Transfer-based Adversarial Attacks (TAAs) can deceive a victim model even without prior knowledge. This is achieved by leveraging a property of adversarial examples: when generated from a surrogate model, they retain their effect on other models due to their good transferability. However, adversarial examples often exhibit overfitting, as they are tailored to exploit the particular architecture and feature representation of source models. Consequently, when attempting black-box transfer attacks on different target models, their effectiveness is decreased. To solve this problem, this study proposes an approach based on a Regularized Constrained Feature Layer (RCFL). The proposed method first uses regularization constraints to attenuate the low-frequency components of the initial examples. Perturbations are then added to a pre-specified layer of the source model using back-propagation, in order to modify the original adversarial examples. Afterward, a regularized loss function is used to enhance black-box transferability across different target models. The proposed method is finally tested on the ImageNet, CIFAR-100, and Stanford Cars datasets with various target models. The obtained results demonstrate that it achieves a significantly higher transfer-based adversarial attack success rate compared with baseline techniques.
Keywords: adversarial examples, black-box transferability, regularized constrained, transfer-based adversarial attacks
Low-rank matrix recovery with total generalized variation for defending adversarial examples
11
Authors: Wen LI, Hengyou WANG, Lianzhi HUO, Qiang HE, Linlin CHEN, Zhiquan HE, Wing W. Y. Ng. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, Issue 3, pp. 432-445 (14 pages)
Low-rank matrix decomposition with first-order total variation (TV) regularization exhibits excellent performance in the exploration of image structure. Taking advantage of its excellent performance in image denoising, we apply it to improve the robustness of deep neural networks. However, although TV regularization can improve the robustness of the model, it reduces the accuracy on normal samples due to its over-smoothing. In our work, we develop a new low-rank matrix recovery model, called LRTGV, which incorporates total generalized variation (TGV) regularization into the reweighted low-rank matrix recovery model. In the proposed model, TGV is used to better reconstruct texture information without over-smoothing, while the reweighted nuclear norm and L1-norm enhance the global structure information. Thus, the proposed LRTGV can destroy the structure of adversarial noise while re-enhancing the global structure and local texture of the image. To solve the challenging optimization model, we propose an algorithm based on the alternating direction method of multipliers. Experimental results show that the proposed algorithm has a certain defense capability against black-box attacks and outperforms state-of-the-art low-rank matrix recovery methods in image restoration.
Keywords: total generalized variation, low-rank matrix, alternating direction method of multipliers, adversarial example
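As a rough illustration of the model family described above, a reweighted low-rank recovery problem with TGV regularization and a sparse residual can be written schematically as follows; this is an assumed generic form consistent with the abstract, not the paper's exact LRTGV formulation.

```latex
% Schematic reweighted low-rank + TGV recovery of an image X into a restored
% component L and a sparse residual S (assumed generic form, not the exact model):
\[
\min_{L,\,S}\; \sum_i w_i\,\sigma_i(L)
  \;+\; \lambda\,\mathrm{TGV}_{\alpha}^{2}(L)
  \;+\; \mu\,\lVert S \rVert_1
\quad \text{s.t.}\quad X = L + S,
\]
% where \sigma_i(L) are singular values with reweighting weights w_i; the problem
% is solved with the alternating direction method of multipliers (ADMM).
```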
Incomplete Physical Adversarial Attack on Face Recognition
12
Authors: HU Weitao, XU Wujun. Journal of Donghua University (English Edition), 2025, Issue 4, pp. 442-448 (7 pages)
In recent work, adversarial stickers are widely used to attack face recognition (FR) systems in the physical world. However, it is difficult to evaluate the performance of physical attacks because of the lack of volunteers in the experiment. In this paper, a simple attack method called incomplete physical adversarial attack (IPAA) is proposed to simulate physical attacks. Different from the process of physical attacks, when an IPAA is conducted, a photo of the adversarial sticker is embedded into a facial image as the input to attack FR systems, which can obtain results similar to those of physical attacks without inviting any volunteers. The results show that IPAA has a higher similarity with physical attacks than digital attacks, indicating that IPAA is able to evaluate the performance of physical attacks. IPAA is also effective in quantitatively measuring the impact of the sticker location on the results of attacks.
Keywords: physical attack, digital attack, face recognition, interferential variable, adversarial example
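The IPAA procedure above amounts to compositing a photo of the adversarial sticker into the facial image before querying the FR system. The NumPy sketch below shows such an embedding; the array shapes, placement arguments, and optional alpha mask are illustrative assumptions.

```python
import numpy as np

def embed_sticker(face, sticker, top, left, mask=None):
    """face: (H, W, 3) uint8; sticker: (h, w, 3) uint8; mask: (h, w) in [0, 1] or None."""
    out = face.astype(np.float32).copy()
    h, w = sticker.shape[:2]
    if mask is None:
        mask = np.ones((h, w), dtype=np.float32)     # opaque rectangular sticker
    region = out[top:top + h, left:left + w]
    region[:] = mask[..., None] * sticker + (1.0 - mask[..., None]) * region
    return out.astype(np.uint8)   # feed this composited image to the FR model
```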
Black-box adversarial attacks with imperceptible fake user profiles for recommender systems
13
Authors: Qian Fulan, Liu Jinggang, Chen Hai, Chen Wenbin, Zhao Shu, Zhang Yanping. Journal of Nanjing University (Natural Science) (CSCD, PKU Core), 2024, Issue 6, pp. 881-899 (19 pages)
Attackers inject designed adversarial samples into the target recommendation system to achieve illegal goals, seriously affecting the security and reliability of the recommendation system. It is difficult for attackers to obtain detailed knowledge of the target model in actual scenarios, so using gradient optimization to generate adversarial samples on a local surrogate model has become an effective black-box attack strategy. However, these methods suffer from gradients falling into local minima, limiting the transferability of the adversarial samples. This reduces the attack's effectiveness, and the imperceptibility of the generated adversarial samples is often ignored. To address these challenges, we propose a novel attack algorithm called PGMRS-KL that combines a pre-gradient-guided momentum gradient optimization strategy with fake user generation constrained by Kullback-Leibler divergence. Specifically, the algorithm combines the accumulated gradient direction with the previous step's gradient direction to iteratively update the adversarial samples, and it uses a KL loss to minimize the distribution distance between fake and real user data, achieving high transferability and imperceptibility of the adversarial samples. Experimental results demonstrate the superiority of our approach over state-of-the-art gradient-based attack algorithms in terms of attack transferability and the generation of imperceptible fake user data.
Keywords: recommendation systems, adversarial examples, transferability, imperceptible
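Two ingredients carry the method described above: a momentum update that mixes the accumulated direction with the previous step's gradient, and a KL-divergence term that pulls the fake-user rating distribution toward the real one. The sketch below illustrates both under assumed variable names and mixing weights; it is not the exact PGMRS-KL update.

```python
import torch

def kl_to_real(fake_ratings, real_ratings, eps=1e-8):
    """KL(fake || real) between normalized rating histograms (1-D tensors)."""
    p = fake_ratings / (fake_ratings.sum() + eps)
    q = real_ratings / (real_ratings.sum() + eps)
    return (p * ((p + eps) / (q + eps)).log()).sum()

def momentum_step(x, grad, prev_grad, momentum, mu=1.0, beta=0.5, alpha=0.01):
    """One iterative update of the fake-user profile x (assumed update form)."""
    g = grad / (grad.abs().sum() + 1e-12)              # L1-normalized current gradient
    g_prev = prev_grad / (prev_grad.abs().sum() + 1e-12)
    momentum = mu * momentum + (1 - beta) * g + beta * g_prev
    return x + alpha * momentum.sign(), momentum
```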
LSGAN-AT: enhancing malware detector robustness against adversarial examples (Cited by 1)
14
Authors: Jianhua Wang, Xiaolin Chang, Yixiang Wang, Ricardo J. Rodríguez, Jianan Zhang. Cybersecurity (EI, CSCD), 2021, Issue 1, pp. 594-608 (15 pages)
Adversarial Malware Example (AME)-based adversarial training can effectively enhance the robustness of Machine Learning (ML)-based malware detectors against AMEs, and AME quality is a key factor in the robustness enhancement. The Generative Adversarial Network (GAN) is one kind of AME generation method, but existing GAN-based AME generation methods suffer from inadequate optimization, mode collapse, and training instability. In this paper, we propose a novel approach (denoted LSGAN-AT) to enhance ML-based malware detector robustness against adversarial examples, which includes an LSGAN module and an AT module. The LSGAN module can generate more effective and smoother AMEs by utilizing brand-new network structures and a Least Square (LS) loss to optimize boundary samples. The AT module performs adversarial training with the AMEs generated by LSGAN to produce an ML-based Robust Malware Detector (RMD). Extensive experimental results validate the better transferability of the AMEs in terms of attacking six ML detectors and the transferability of the RMD in terms of resisting the MalGAN black-box attack. The results also verify the performance of the generated RMD in the recognition rate of AMEs.
Keywords: adversarial malware example, generative adversarial network, machine learning, malware detector, transferability
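The LSGAN module builds on the least-squares GAN objective, which replaces the cross-entropy GAN loss with squared errors toward real/fake targets. A minimal PyTorch sketch of that loss is given below; the paper's full AME-generation networks are not reproduced.

```python
import torch
import torch.nn.functional as F

def d_loss(d_real, d_fake):
    """Least-squares discriminator loss: real outputs -> 1, generated outputs -> 0."""
    return 0.5 * (F.mse_loss(d_real, torch.ones_like(d_real))
                  + F.mse_loss(d_fake, torch.zeros_like(d_fake)))

def g_loss(d_fake):
    """Least-squares generator loss: push generated outputs toward 1."""
    return 0.5 * F.mse_loss(d_fake, torch.ones_like(d_fake))
```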
LSGAN-AT: enhancing malware detector robustness against adversarial examples
15
Authors: Jianhua Wang, Xiaolin Chang, Yixiang Wang, Ricardo J. Rodriguez, Jianan Zhang. Cybersecurity (EI, CSCD), 2022, Issue 1, pp. 94-108 (15 pages)
Adversarial Malware Example (AME)-based adversarial training can effectively enhance the robustness of Machine Learning (ML)-based malware detectors against AMEs, and AME quality is a key factor in the robustness enhancement. The Generative Adversarial Network (GAN) is one kind of AME generation method, but existing GAN-based AME generation methods suffer from inadequate optimization, mode collapse, and training instability. In this paper, we propose a novel approach (denoted LSGAN-AT) to enhance ML-based malware detector robustness against adversarial examples, which includes an LSGAN module and an AT module. The LSGAN module can generate more effective and smoother AMEs by utilizing brand-new network structures and a Least Square (LS) loss to optimize boundary samples. The AT module performs adversarial training with the AMEs generated by LSGAN to produce an ML-based Robust Malware Detector (RMD). Extensive experimental results validate the better transferability of the AMEs in terms of attacking six ML detectors and the transferability of the RMD in terms of resisting the MalGAN black-box attack. The results also verify the performance of the generated RMD in the recognition rate of AMEs.
Keywords: adversarial malware example, generative adversarial network, machine learning, malware detector, transferability
Improving Transferability Reversible Adversarial Examples Based on Flipping Transformation
16
Authors: Youqing Fang, Jingwen Jia, Yuhai Yang, Wanli Lyu. 国际计算机前沿大会会议论文集 (EI), 2023, Issue 1, pp. 417-432 (16 pages)
Adding subtle perturbations to an image can cause a classification model to misclassify it; such images are called adversarial examples. Adversarial examples threaten the safe use of deep neural networks, but when combined with reversible data hiding (RDH) technology, they can protect images from being correctly identified by unauthorized models while allowing lossless recovery of the image under authorized models. On this basis, the reversible adversarial example (RAE) is rising. However, existing RAE technology focuses on feasibility, attack success rate, and image quality, but ignores transferability and time complexity. In this paper, we optimize the data hiding structure and combine it with a data augmentation technique that flips the input image with a certain probability to avoid overfitting on the dataset. While maintaining a high white-box attack success rate and the image's visual quality, the proposed method improves the transferability of reversible adversarial examples by approximately 16% and reduces the computational cost by approximately 43% compared to the state-of-the-art method. In addition, an appropriate flip probability can be selected for different application scenarios.
Keywords: reversible adversarial example, black-box attack, transferability, complexity
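The data augmentation described above flips the input image with some probability before each gradient computation, which reduces overfitting to the surrogate model. A minimal sketch, with the flip probability as an assumed parameter, is shown below.

```python
import torch

def random_flip(x, p=0.5):
    """x: (N, C, H, W); horizontally flip the batch with probability p."""
    if torch.rand(1).item() < p:
        return torch.flip(x, dims=[-1])   # left-right flip
    return x
```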
Attention-Guided Sparse Adversarial Attacks with Gradient Dropout
17
Authors: ZHAO Hongzhi, HAO Lingguang, HAO Kuangrong, WEI Bing, LIU Xiaoyan. Journal of Donghua University (English Edition) (CAS), 2024, Issue 5, pp. 545-556 (12 pages)
Deep neural networks are extremely vulnerable to intentionally generated adversarial examples, which are produced by overlaying tiny noise on clean images. However, most existing transfer-based attack methods add perturbations of the same weight to every pixel of the original image, resulting in redundant noise in the adversarial examples that makes them easier to detect. Given this deliberation, a novel attention-guided sparse adversarial attack strategy with gradient dropout, which can be readily incorporated into existing gradient-based methods, is introduced to minimize both the intensity and the scale of perturbations while preserving the effectiveness of the adversarial examples. Specifically, in the gradient dropout phase, some relatively unimportant gradient information is randomly discarded to limit the intensity of the perturbation. In the attention-guided phase, the influence of each pixel on the model output is evaluated with a soft mask-refined attention mechanism, and the perturbation of pixels with smaller influence is limited to restrict the scale of the perturbation. Thorough experiments on the NeurIPS 2017 adversarial dataset and the ILSVRC 2012 validation dataset show that the proposed strategy significantly diminishes the superfluous noise in adversarial examples while keeping their attack efficacy intact. For instance, in attacks on adversarially trained models, integrating the strategy reduces the average level of noise injected into images by 8.32%, while the average attack success rate decreases by only 0.34%. Furthermore, the attack success rate can be substantially elevated by introducing only a slight degree of perturbation.
Keywords: deep neural network, adversarial attack, sparse adversarial attack, adversarial transferability, adversarial example
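The gradient-dropout phase described above can be pictured as zeroing a random share of the gradient entries before the perturbation step, which caps how much noise each iteration injects. The sketch below shows only that step, with an assumed drop rate; the attention-guided mask is not reproduced.

```python
import torch

def gradient_dropout(grad, drop_rate=0.3):
    """Zero a random share of gradient entries before the perturbation step."""
    keep_mask = (torch.rand_like(grad) >= drop_rate).float()
    return grad * keep_mask   # discarded entries contribute no update this iteration
```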
A Study on Filter-Based Adversarial Image Classification Models
18
Author: Zhongcheng Zhao. Journal of Contemporary Educational Research, 2024, Issue 11, pp. 245-256 (12 pages)
In view of the fact that adversarial examples can lead to high-confidence erroneous outputs of deep neural networks, this study aims to improve the safety of deep neural networks by distinguishing adversarial examples. A classification model based on a filter residual network structure is used to accurately classify adversarial examples. The filter-based classification model includes residual-network feature extraction and classification modules, which are iteratively optimized with an adversarial training strategy. Three mainstream adversarial attack methods are improved, and adversarial samples are generated on the Mini-ImageNet dataset. Subsequently, these samples are used to attack EfficientNet and the filter-based classification model, respectively, and the attack effects are compared. Experimental results show that the filter-based classification model has high classification accuracy when dealing with Mini-ImageNet adversarial examples, and that adversarial training can effectively enhance the robustness of deep neural network models.
Keywords: adversarial example, image classification, adversarial attack
SAMI-FGSM: Towards Transferable Attacks with Stochastic Gradient Accumulation
19
Authors: Haolang Feng, Yuling Chen, Yang Huang, Xuewei Wang, Haiwei Sang. Computers, Materials & Continua, 2025, Issue 9, pp. 4469-4490 (22 pages)
Deep neural networks remain susceptible to adversarial examples, where the goal of an adversarial attack is to introduce small perturbations into the original examples that confuse the model without being easily detected. Although many adversarial attack methods produce adversarial examples that achieve great results in the white-box setting, they exhibit low transferability in the black-box setting. To improve transferability along the line of gradient-based attack techniques, we present a novel Stochastic Gradient Accumulation Momentum Iterative Attack (SAMI-FGSM). In particular, during each iteration the gradient information is calculated with a normal sampling approach that randomly samples around the current example, which has the highest probability of capturing adversarial features. Meanwhile, the accumulated sampled-gradient information from the previous iteration is further used to modify the current update, changing the original attack direction so that the updated gradient direction is more stable. Comprehensive experiments on the ImageNet dataset show that our method outperforms existing state-of-the-art gradient-based attack techniques, achieving an average improvement of 10.2% in transferability.
Keywords: adversarial examples, normal sampling, gradient accumulation, adversarial transferability
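The sampling step described above averages gradients over points drawn from a normal distribution around the current example and folds them into a momentum accumulator before the sign update. The PyTorch sketch below illustrates this under assumed sample count, sigma, and decay values; it is not the paper's exact SAMI-FGSM.

```python
import torch
import torch.nn.functional as F

def sampled_gradient(model, x, y, n_samples=10, sigma=0.05):
    """Average the loss gradient over normally sampled points around x."""
    g = torch.zeros_like(x)
    for _ in range(n_samples):
        x_s = (x.detach() + sigma * torch.randn_like(x)).clamp(0, 1).requires_grad_(True)
        loss = F.cross_entropy(model(x_s), y)
        g += torch.autograd.grad(loss, x_s)[0]
    return g / n_samples

def attack_step(model, x, y, momentum, eps_step=2 / 255, decay=1.0):
    """One momentum-iterative step using the sampled gradient."""
    g = sampled_gradient(model, x, y)
    momentum = decay * momentum + g / (g.abs().mean() + 1e-12)
    return (x + eps_step * momentum.sign()).clamp(0, 1).detach(), momentum
```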
Deep Image Restoration Model: A Defense Method Against Adversarial Attacks (Cited by 1)
20
Authors: Kazim Ali, Adnan N. Quershi, Ahmad Alauddin Bin Arifin, Muhammad Shahid Bhatti, Abid Sohail, Rohail Hassan. Computers, Materials & Continua (SCIE, EI), 2022, Issue 5, pp. 2209-2224 (16 pages)
These days, deep learning and computer vision are fast-growing fields in the modern world of information technology. Deep learning algorithms and computer vision have achieved great success in different applications such as image classification, speech recognition, self-driving vehicles, disease diagnostics, and many more. Despite this success, these learning algorithms face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, that are intentionally slightly changed or perturbed. These changes are humanly imperceptible, yet the inputs are misclassified by a model with high probability, which severely affects its performance and predictions. In this scenario, we present a deep image restoration model that restores adversarial examples so that the target model classifies them correctly again. We show that our defense method against adversarial attacks, based on a deep image restoration model, is simple and state-of-the-art by providing strong experimental evidence. We use the MNIST and CIFAR10 datasets for experiments and analysis of our defense method. Finally, we compare our method to other state-of-the-art defense methods and show that our results are better than those of rival methods.
Keywords: computer vision, deep learning, convolutional neural networks, adversarial examples, adversarial attacks, adversarial defenses