Journal Articles
434 articles found
1. The algorithm AE_(11) of learning from examples
Authors: ZHANG Hai-yi, BI Jian-dong. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2006, Issue 2, pp. 226-232.
We first put forward the idea of a positive extension matrix (PEM) in this paper. Then, an algorithm, AE_11, was built with the aid of the PEM. Finally, we compared our experimental results, and the final result was fairly satisfactory.
Keywords: learning from examples; concept acquisition; inductive learning; knowledge acquisition
2. Adversarial Attacks and Defenses in Deep Learning (Cited by: 24)
Authors: Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu. Engineering (SCIE, EI), 2020, Issue 3, pp. 346-360.
With the rapid developments of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of the deployed algorithms. Recently, the security vulnerability of DL algorithms to adversarial samples has been widely recognized. The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans. Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality. Hence, adversarial attack and defense techniques have attracted increasing attention from both the machine learning and security communities and have become a hot research topic in recent years. In this paper, we first introduce the theoretical foundations, algorithms, and applications of adversarial attack techniques. We then describe a few research efforts on the defense techniques, which cover the broad frontier in the field. Several open problems and challenges are subsequently discussed, which we hope will provoke further research efforts in this critical area.
Keywords: machine learning; deep neural network; adversarial example; adversarial attack; adversarial defense
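Of the attack algorithms a survey like this covers, the Fast Gradient Sign Method (FGSM) is the simplest. A minimal NumPy sketch of it on a logistic-regression "model" (the weights, data point, and step size below are illustrative assumptions, not from the paper):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM against a logistic-regression model.

    The loss is binary cross-entropy; its gradient w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w, so the attack adds eps * sign(gradient).
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model confidence for class 1
    grad_x = (p - y) * w                            # dLoss/dx
    return x + eps * np.sign(grad_x)                # adversarial example

# Toy example: a point correctly classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])                 # logit w.x + b = 1.5 > 0 -> class 1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=1.0)

print(np.dot(w, x) + b)                  # 1.5 (clean logit, class 1)
print(np.dot(w, x_adv) + b)              # negative logit -> flipped to class 0
```

One signed gradient step of size eps moves each input coordinate by at most eps yet flips the classification, which is exactly the imperceptible-perturbation threat the abstract describes.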
3. Deep learning for geometric and semantic tasks in photogrammetry and remote sensing (Cited by: 4)
Authors: Christian Heipke, Franz Rottensteiner. Geo-Spatial Information Science (SCIE, CSCD), 2020, Issue 1, pp. 10-19.
During the last few years, artificial intelligence based on deep learning, and particularly on convolutional neural networks, has acted as a game changer in just about all tasks related to photogrammetry and remote sensing. Results have shown partly significant improvements in many projects all across the photogrammetric processing chain, from image orientation to surface reconstruction, scene classification, change detection, and object extraction, tracking, and recognition in image sequences. This paper summarizes the foundations of deep learning for photogrammetry and remote sensing before illustrating, by way of example, different projects being carried out at the Institute of Photogrammetry and GeoInformation (IPI), Leibniz University Hannover, in this exciting and fast-moving field of research and development.
Keywords: deep learning; machine learning; convolutional neural networks (CNN); example projects from IPI
4. Learning the Spatiotemporal Evolution Law of Wave Field Based on Convolutional Neural Network (Cited by: 2)
Authors: LIU Xing, GAO Zhiyi, HOU Fang, SUN Jinggao. Journal of Ocean University of China (SCIE, CAS, CSCD), 2022, Issue 5, pp. 1109-1117.
Research on the wave field evolution law is highly significant to the fields of offshore engineering and marine resource development. Numerical simulations have been conducted for high-precision wave field evolution, thus providing short-term wave field prediction. However, the evolution occurs over a long period of time, and its accuracy is difficult to improve. In recent years, the use of machine learning methods to study the evolution of the wave field has received increasing attention from researchers. This paper proposes a wave field evolution method based on deep convolutional neural networks. The method effectively correlates the spatiotemporal characteristics of wave data via convolution operations and directly obtains offshore forecast results for the Bohai Sea and the Yellow Sea. An attention mechanism, a multi-scale path design, and a hard example mining training strategy are introduced to suppress the interference caused by Weibull-distributed wave field data and improve the accuracy of the proposed wave field evolution. The 72- and 480-h evolution experiment results in the Bohai Sea and the Yellow Sea show that the proposed method has excellent forecast accuracy and timeliness.
Keywords: wave evolution; machine learning; convolutional neural network; hard example mining
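The hard example mining strategy named above is commonly implemented by averaging only the highest-loss samples in each batch, so the gradient signal concentrates on what the model currently gets wrong. A minimal NumPy sketch (the batch values and keep ratio are illustrative assumptions; the paper's exact variant may differ):

```python
import numpy as np

def hard_example_loss(losses, keep_ratio=0.5):
    """Online hard example mining: average only the top-k per-sample losses.

    Easy samples (low loss) are dropped from the update, so training
    focuses on the hardest examples in the batch.
    """
    k = max(1, int(len(losses) * keep_ratio))
    hard = np.sort(losses)[::-1][:k]     # the k largest per-sample losses
    return hard.mean()

batch_losses = np.array([0.05, 2.3, 0.10, 1.7, 0.02, 0.9])
print(hard_example_loss(batch_losses, keep_ratio=0.5))  # mean of [2.3, 1.7, 0.9]
```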
5. Deep Learning Approach for COVID-19 Detection in Computed Tomography Images (Cited by: 2)
Authors: Mohamad Mahmoud Al Rahhal, Yakoub Bazi, Rami M. Jomaa, Mansour Zuair, Naif Al Ajlan. Computers, Materials & Continua (SCIE, EI), 2021, Issue 5, pp. 2093-2110.
With the rapid spread of the coronavirus disease 2019 (COVID-19) worldwide, establishing an accurate and fast process to diagnose the disease is important. The routine real-time reverse transcription-polymerase chain reaction (rRT-PCR) test currently in use does not provide such high accuracy or speed in the screening process. Deep learning techniques are among the good choices for an accurate and fast screening test. In this study, a new convolutional neural network (CNN) framework for COVID-19 detection using computed tomography (CT) images is proposed. The EfficientNet architecture is applied as the backbone of the proposed network, in which feature maps at different scales are extracted from the input CT scan images. In addition, atrous convolution at different rates is applied to these multi-scale feature maps to generate denser features, which facilitates obtaining COVID-19 findings in CT scan images. The proposed framework is evaluated on a public CT dataset containing 2482 CT scan images from patients of both classes (i.e., COVID-19 and non-COVID-19). To augment the dataset with additional training examples, adversarial example generation is performed. The proposed system demonstrates its superiority over state-of-the-art methods with values exceeding 99.10% on several metrics, such as accuracy, precision, recall, and F1. The proposed system also exhibits good robustness when trained on a small portion of the data (20%), with an accuracy of 96.16%.
Keywords: COVID-19; deep learning; computed tomography; multi-scale features; atrous convolution; adversarial examples
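Atrous (dilated) convolution, mentioned above, spaces the kernel taps apart so the receptive field grows without adding parameters. A 1-D NumPy sketch of the idea (the signal and kernel are illustrative; the paper applies the 2-D version inside an EfficientNet backbone):

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """'Valid' 1-D correlation whose taps are spaced `rate` apart.

    With rate=1 this is ordinary correlation; larger rates widen the
    receptive field to (len(kernel) - 1) * rate + 1 samples.
    """
    span = (len(kernel) - 1) * rate + 1
    out = []
    for i in range(len(x) - span + 1):
        taps = x[i : i + span : rate]    # input samples `rate` apart
        out.append(float(np.dot(taps, kernel)))
    return np.array(out)

x = np.arange(8, dtype=float)            # [0, 1, ..., 7]
k = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, k, rate=1))      # [ 3.  6.  9. 12. 15. 18.]
print(dilated_conv1d(x, k, rate=2))      # taps 0,2,4 etc. -> [ 6.  9. 12. 15.]
```

Both calls use the same three weights, but at rate 2 each output sees a window of five input samples instead of three, which is how the denser multi-scale features described above are obtained without extra parameters.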
6. An LSTM-Based Malware Detection Using Transfer Learning (Cited by: 1)
Authors: Zhangjie Fu, Yongjie Ding, Musaazi Godfrey. Journal of Cyber Security, 2021, Issue 1, pp. 11-28.
Mobile malware accounts for a considerable proportion of cyberattacks. With updates to mobile device operating systems and the development of software technology, more and more new malware keeps appearing, making the identification accuracy of existing methods lower and lower; there is an urgent need for more effective malware detection models. In this paper, we propose a new approach to mobile malware detection that is able to detect newly emerged malware instances. First, we build and train an LSTM-based model on original benign and malware samples investigated by both static and dynamic analysis techniques. Then, we build a generative adversarial network to generate augmented examples, which emulate the characteristics of newly emerged malware. Finally, we use the augmented examples to retrain the 4th and 5th layers of the LSTM network and the last fully connected layer, so that the model can discriminate newly emerged malware. Experiments show that our malware detection achieved a classification accuracy of 99.94% when tested on augmented samples, and 86.5% on real data with samples of newly emerged malware.
Keywords: malware detection; long short-term memory networks; generative adversarial networks; transfer learning; augmented examples
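The retraining step described above updates only the later layers while leaving the earlier ones frozen. A NumPy sketch of that freeze-and-retrain pattern (the layer names, gradients, and learning rate are illustrative assumptions, not the paper's actual network):

```python
import numpy as np

# Five "LSTM layers" plus a final fully connected layer, each reduced to a
# toy weight vector for illustration.
params = {f"lstm{i}": np.ones(3) for i in range(1, 6)}
params["fc"] = np.ones(3)

# Only these layers are retrained on the GAN-augmented examples.
trainable = {"lstm4", "lstm5", "fc"}

grads = {name: np.full(3, 0.1) for name in params}  # pretend gradients
lr = 1.0
for name, grad in grads.items():
    if name in trainable:
        params[name] = params[name] - lr * grad
    # frozen layers (lstm1..lstm3) are left untouched

print(params["lstm1"])  # unchanged (frozen)
print(params["lstm5"])  # updated by the retraining step
```

Freezing the early layers keeps the general sequence features learned from the original samples, while the retrained top layers adapt to the augmented examples.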
7. An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments
Authors: Weizheng Wang, Xiangqi Wang, Xianmin Pan, Xingxing Gong, Jian Liang, Pradip Kumar Sharma, Osama Alfarraj, Wael Said. Computers, Materials & Continua (SCIE, EI), 2023, Issue 9, pp. 3859-3876.
Image-denoising techniques are widely used to defend against adversarial examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations; the remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy on original examples. To address these challenges, this paper proposes a novel AE detection technique that combines multiple traditional image-denoising algorithms with convolutional neural network (CNN) structures. The detector takes the classification results of the different models as its input and computes its final output with a machine-learning voting algorithm. By analyzing the discrepancy between the predictions a model makes on original examples and on denoised examples, AEs are detected effectively. The technique reduces computational overhead without modifying the model structure or parameters, effectively avoiding the error amplification caused by denoising. Experimental results show outstanding detection performance against mainstream AE attacks, including the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate in FGSM detection while reducing the accuracy on clean examples by only 4%.
Keywords: deep neural networks; adversarial example; image denoising; adversarial example detection; machine learning; adversarial attack
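The core detection idea above, flagging an input when predictions change after denoising, can be sketched as follows. The toy threshold classifiers, the median-fill "denoiser", and the agreement threshold are all illustrative assumptions, not the paper's CNN ensemble:

```python
import numpy as np

def detect_adversarial(predict_fns, x, denoise, agree_threshold=0.5):
    """Flag x as adversarial when the models' predictions change after
    denoising.  predict_fns: label-returning callables; denoise: an
    image-denoising function."""
    x_dn = denoise(x)
    agree = [f(x) == f(x_dn) for f in predict_fns]
    # "Voting": the fraction of models whose label survives denoising.
    stability = sum(agree) / len(agree)
    return stability < agree_threshold   # low agreement -> adversarial

def mean_clf(t):
    # Toy classifier: label 1 when the mean pixel value exceeds t.
    return lambda img: int(img.mean() > t)

models = [mean_clf(0.3), mean_clf(0.5), mean_clf(0.7)]

def median_fill(img):
    # Trivial "denoiser": replace every pixel with the median, which
    # suppresses the outlier pixels a crafted perturbation relies on.
    return np.full_like(img, np.median(img))

clean = np.array([0.6, 0.6, 0.6])
spiked = np.array([0.1, 0.1, 2.0])       # a crafted outlier pushes the mean up
print(detect_adversarial(models, clean, median_fill))   # False: labels stable
print(detect_adversarial(models, spiked, median_fill))  # True: labels flip
```

The clean input keeps its labels under denoising, while the spiked input's labels flip, so only the latter is flagged, mirroring the discrepancy analysis the abstract describes.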
8. Omni-Detection of Adversarial Examples with Diverse Magnitudes
Authors: Ke Jianpeng, Wang Wenqi, Yang Kang, Wang Lina, Ye Aoshuang, Wang Run. China Communications (SCIE, CSCD), 2024, Issue 12, pp. 139-151.
Deep neural networks (DNNs) are potentially susceptible to adversarial examples that are maliciously manipulated by adding imperceptible perturbations to legitimate inputs, leading to abnormal model behavior. Plenty of methods have been proposed to defend against adversarial examples, but the majority of them suffer from the following weaknesses: 1) lack of generalization and practicality; 2) failure to deal with unknown attacks. To address these issues, we design the adversarial nature eraser (ANE) and the feature map detector (FMD) to detect fragile and high-intensity adversarial examples, respectively. We then apply ensemble learning to compose our detector, dealing with adversarial examples of diverse magnitudes in a divide-and-conquer manner. Experimental results show that our approach achieves 99.30% and 99.62% Area Under Curve (AUC) scores on average when tested with various Lp-norm-based attacks on CIFAR-10 and ImageNet, respectively. Furthermore, our approach also shows its potential in detecting unknown attacks.
Keywords: adversarial example detection; ensemble learning; feature maps; fragile and high-intensity adversarial examples
9. N-gram MalGAN: Evading machine learning detection via feature n-gram
Authors: Enmin Zhu, Jianjie Zhang, Jijie Yan, Kongyang Chen, Chongzhi Gao. Digital Communications and Networks (SCIE, CSCD), 2022, Issue 4, pp. 485-491.
In recent years, many adversarial malware examples with different feature strategies, especially GAN and its variants, have been introduced to handle security threats, e.g., evading the detection of machine learning detectors. However, these solutions still suffer from complicated deployment or long running times. In this paper, we propose an n-gram MalGAN method to solve these problems. We borrow the idea of n-grams from the natural language processing (NLP) area to expand the feature sources for adversarial malware examples in MalGAN. The n-gram MalGAN obtains the feature vector directly from the hexadecimal bytecodes of the executable file. It can be implemented easily and conveniently in a simple programming language (e.g., C++), with no need for any prior knowledge of the executable file or any professional feature-extraction tools. These features are functionally independent and thus can be added to the non-functional area of the malicious program to maintain its original executability. In this way, the n-gram approach makes the adversarial attack easier and more convenient. Experimental results show that the evasion rate of the n-gram MalGAN is at least 88.58% against different machine learning algorithms under an appropriate group rate, growing to 100% for the Random Forest algorithm.
Keywords: machine learning; n-gram; MalGAN; adversarial examples
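The feature extraction described above, n-gram counts taken directly from an executable's raw bytes, can be sketched in a few lines. The byte blob and the top-k cutoff are illustrative assumptions; the paper feeds such counts into MalGAN rather than using them directly:

```python
from collections import Counter

def byte_ngram_features(raw_bytes, n=2, top_k=4):
    """Count byte n-grams straight from a file's raw bytes, with no
    parsing of the executable's structure.  Returns the top_k most
    common n-grams as (hex string, count) pairs."""
    grams = Counter(
        raw_bytes[i : i + n].hex() for i in range(len(raw_bytes) - n + 1)
    )
    return grams.most_common(top_k)

# Illustrative stand-in for a file's contents (not a real binary).
blob = bytes([0x4D, 0x5A, 0x90, 0x00, 0x4D, 0x5A, 0x90, 0x00, 0x4D, 0x5A])
print(byte_ngram_features(blob, n=2))
# [('4d5a', 3), ('5a90', 2), ('9000', 2), ('004d', 2)]
```

Because the counts depend only on the byte stream, extra bytes appended to a non-functional area shift the feature vector without touching the program's behavior, which is what makes this feature space attractive for evasion.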
10. An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection
Authors: Younghoon Ban, Myeonghyun Kim, Haehyun Cho. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 3535-3563.
Antivirus vendors and the research community employ machine learning (ML) or deep learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that adversarial examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on ambiguities in the PE format, as previously employed in evasion attack research. By applying the perturbation techniques directly to PE binaries, our attack method eliminates the need to grapple with the problem-space/feature-space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. Moreover, AEs generated by the attack method retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and can reach a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs bypassed detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Through several experiments, we verified that our defense model can effectively counter AEs generated by the perturbation techniques. In conclusion, our defense model alleviates the limitation of the most promising defense method, adversarial training, which is only effective against the AEs that are included in the training classifiers.
Keywords: malware classification; machine learning; adversarial examples; evasion attack; cybersecurity
11. Adversarial Example Generation Method Based on Sensitive Features
Authors: WEN Zerui, SHEN Zhidong, SUN Hui, QI Baiwen. Wuhan University Journal of Natural Sciences (CAS, CSCD), 2023, Issue 1, pp. 35-44.
As deep learning models have made remarkable strides in numerous fields, a variety of adversarial attack methods have emerged to interfere with them. Adversarial examples apply a minute perturbation to the original image that is imperceptible to humans but produces a massive error in the deep learning model. Existing attack methods have achieved good results when the network structure is known. However, when the network structure is unknown, the effectiveness of the attacks still needs to be improved. Transfer-based attacks are therefore popular for their convenience and practicality, allowing adversarial samples generated on known models to be used in attacks on unknown models. In this paper, we extract sensitive features with Grad-CAM and propose two single-step attack methods and a multi-step attack method to corrupt those sensitive features. Of the two single-step attacks, one corrupts features extracted from a single model and the other corrupts features extracted from multiple models. In the multi-step attack, our method improves on an existing attack method, enhancing the transferability of the adversarial samples to achieve better results on unknown models. Our method is validated on CIFAR-10 and MNIST, and achieves a 1%-3% improvement in transferability.
Keywords: deep learning model; adversarial example; transferability; sensitive characteristics; AI security
12. A new method of constructing adversarial examples for quantum variational circuits
Authors: 颜金歌, 闫丽丽, 张仕斌. Chinese Physics B (SCIE, EI, CAS, CSCD), 2023, Issue 7, pp. 268-272.
A quantum variational circuit is a quantum machine learning model similar to a neural network. A crafted adversarial example can lead to incorrect results from the model, and training the model with adversarial examples greatly improves its robustness. Existing methods use automatic differentiation or finite differences to obtain a gradient and use it to construct adversarial examples. This paper proposes an innovative method for constructing adversarial examples of quantum variational circuits, in which the gradient is obtained by measuring the expected value of a quantum bit in each circuit of a series of quantum circuits. The method can be used to construct adversarial examples for a quantum variational circuit classifier. The implementation results prove the effectiveness of the proposed method: compared with the existing methods, ours requires fewer resources and is more efficient.
Keywords: quantum variational circuit; adversarial examples; quantum machine learning; quantum circuit
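Obtaining a gradient from expectation measurements, as the abstract describes, is conventionally done for rotation gates with the parameter-shift rule. A NumPy sketch of that standard rule for a single RY rotation (this illustrates the general expectation-measurement idea, not the paper's specific series-circuit construction):

```python
import numpy as np

def expectation_z(theta):
    """<Z> after RY(theta) applied to |0>: the state is
    cos(theta/2)|0> + sin(theta/2)|1>, so <Z> = cos(theta)."""
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Exact gradient of a rotation-gate expectation obtained from two
    shifted expectation measurements (no finite differences needed)."""
    return (f(theta + shift) - f(theta - shift)) / 2.0

theta = 0.7
print(parameter_shift_grad(expectation_z, theta))  # equals -sin(0.7) exactly
print(-np.sin(theta))
```

Unlike a finite-difference estimate, the two shifted measurements give the derivative exactly for this gate family, which is why expectation-based gradients are attractive on quantum hardware.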
13. Two-Stream Architecture as a Defense against Adversarial Example
Authors: Hao Ge, Xiao-Guang Tu, Mei Xie, Zheng Ma. Journal of Electronic Science and Technology (CAS, CSCD), 2022, Issue 1, pp. 81-91.
The performance of deep learning on many tasks has been impressive. However, recent studies have shown that deep learning systems are vulnerable to small, specifically crafted perturbations that are imperceptible to humans. Images with such perturbations are called adversarial examples. They have been proven to be an indisputable threat to applications based on deep neural networks (DNNs), but DNNs have yet to be fully elucidated, which prevents the development of efficient defenses against adversarial examples. This study proposes a two-stream architecture to protect convolutional neural networks (CNNs) from adversarial-example attacks. Our model applies the idea of "two streams" used in the security field, and it successfully defends against different kinds of attack methods because of the differences between the "high-resolution" and "low-resolution" networks in feature extraction. This study experimentally demonstrates that our two-stream architecture is difficult to defeat with state-of-the-art attacks, and that it is robust to adversarial examples built by currently known attacking algorithms.
Keywords: adversarial example; deep learning; neural network
14. Defending Adversarial Examples by a Clipped Residual U-Net Model
Authors: Kazim Ali, Adnan N. Qureshi, Muhammad Shahid Bhatti, Abid Sohail, Mohammad Hijji. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 2, pp. 2237-2256.
Deep learning-based systems have succeeded in many computer vision tasks. However, the latest studies indicate that these systems are in danger in the presence of adversarial attacks, which can quickly spoil deep learning models, e.g., the different convolutional neural networks (CNNs) used in computer vision tasks from image classification to object detection. Adversarial examples are carefully designed by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising Sparse Autoencoder Defense, and Conditional Generative Adversarial Network Defense. We have experimentally shown that our approach outperforms these previous defensive techniques. The proposed CRU-Net model maps adversarial image examples onto clean images by eliminating the adversarial perturbation; the defensive approach is based on residual and U-Net learning. Many experiments on the MNIST and CIFAR-10 datasets show that the proposed CRU-Net defense model prevents adversarial-example attacks in white-box and black-box settings and improves the robustness of deep learning algorithms, especially in the computer vision field. We also report the similarity (SSIM and PSNR) between the original and the restored clean image examples produced by the proposed CRU-Net defense model.
Keywords: adversarial examples; adversarial attacks; defense method; residual learning; U-Net; cGAN; CRU-Net model
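Of the two similarity metrics reported above, PSNR is the simpler: it is just a log-scaled mean squared error between the clean and restored images. A NumPy sketch (the 2x2 "images" are illustrative):

```python
import numpy as np

def psnr(original, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between an original clean image
    and its restored version; higher means a closer restoration."""
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.array([[100, 120], [140, 160]], dtype=float)
restored = clean + 1.0                      # every pixel off by 1 -> MSE = 1
print(psnr(clean, restored))                # 10*log10(255^2) ~= 48.13 dB
```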
15. DroidEnemy: Battling adversarial example attacks for Android malware detection
Authors: Neha Bala, Aemun Ahmar, Wenjia Li, Fernanda Tovar, Arpit Battu, Prachi Bambarkar. Digital Communications and Networks (SCIE, CSCD), 2022, Issue 6, pp. 1040-1047.
In recent years, we have witnessed a surge in mobile devices such as smartphones, tablets, smart watches, etc., most of which are based on the Android operating system. However, because these Android-based mobile devices are becoming increasingly popular, they are now the primary target of mobile malware, which can lead to both privacy leakage and property loss. To address the rapidly deteriorating security issues caused by mobile malware, various research efforts have been made to develop novel and effective detection mechanisms to identify and combat it. Nevertheless, in order to avoid being caught by these malware detection mechanisms, malware authors are inclined to initiate adversarial-example attacks by tampering with mobile applications. In this paper, several types of adversarial-example attacks are investigated and a feasible approach is proposed to fight against them. First, we look at adversarial-example attacks on the Android system and prior solutions that have been proposed to address them. Then, we specifically focus on the data-poisoning and evasion attack models, which may mutate various application features, such as API calls, permissions, and the class label, to produce adversarial examples. Next, we propose and design a malware detection approach that is resistant to adversarial examples. To observe and investigate how the malware detection system is influenced by adversarial-example attacks, we conduct experiments on real Android application datasets composed of both malware and benign applications. Experimental results clearly indicate that the performance of Android malware detection is severely degraded when facing adversarial-example attacks.
Keywords: security; malware detection; adversarial example attack; data poisoning attack; evasion attack; machine learning; Android
16. Control Task for Reinforcement Learning with Known Optimal Solution for Discrete and Continuous Actions
Authors: Michael C. ROTTGER, Andreas W. LIEHR. Journal of Intelligent Learning Systems and Applications, 2009, Issue 1, pp. 28-41.
The overall research in reinforcement learning (RL) concentrates on discrete sets of actions, but for certain real-world problems it is important to have methods that are able to find good strategies using actions drawn from continuous sets. This paper describes a simple control task called the direction finder and its known optimal solution for both discrete and continuous actions. It allows RL solution methods to be compared on the basis of their value functions. In order to solve the control task for continuous actions, a simple idea for generalising them by means of feature vectors is presented. The resulting algorithm is applied using different choices of feature calculations. For comparing their performance a simple measure is …
Keywords: comparison; continuous actions; example problem; reinforcement learning; performance
17. A Survey on Adversarial Example
Authors: Jiawei Zhang, Jinwei Wang. Journal of Information Hiding and Privacy Protection, 2020, Issue 1, pp. 47-57.
In recent years, deep learning has become a hotspot and a core method in the field of machine learning. In the field of machine vision, deep learning has excellent performance in feature extraction and feature representation, making it widely used in directions such as self-driving cars and face recognition. Although deep learning can solve large-scale complex problems very well, the latest research shows that deep learning network models are very vulnerable to adversarial attack: adding a weak perturbation to the original input leads to wrong output from the neural network, yet to the human eye the difference between the original and the disturbed images is hardly noticeable. In this paper, we summarize the research on adversarial examples in the field of image processing. We first introduce the background and representative models of deep learning, then introduce the main methods for generating adversarial examples and for defending against adversarial attacks, and finally we put forward some thoughts and future prospects for adversarial examples.
Keywords: neural network; deep learning; adversarial example; survey
18. An Adversarial Traffic Detection Method Based on Ensemble Learning and Anomaly Detection
Authors: 董方和, 石琼, 师智斌. 《计算机工程》 (Computer Engineering) (PKU Core), 2026, Issue 2, pp. 275-286.
In recent years, deep learning techniques have been increasingly applied to malicious traffic detection. However, adversarial example attacks pose a great challenge to deep-learning-based malicious traffic detection. To address this problem, an adversarial traffic detection method based on ensemble learning and anomaly detection is proposed for discovering adversarial example attacks against malicious traffic detection systems. First, a binary ensemble learner is trained for each category of malicious traffic. Each base model of an ensemble learner is trained on a different data subset and feature subset, enlarging the differences between base models and making it harder for an adversarial example to cross the decision boundaries of all models. Second, the proportion of base models in each binary ensemble learner that predict the input sample as normal is taken as that ensemble's confidence score, and the confidence scores of the different binary ensemble learners are fed into an isolation forest model, which performs anomaly detection to obtain an anomaly score. Finally, the obtained anomaly score is compared with a threshold derived from the anomaly scores of normal samples to judge whether the sample is adversarial. Experimental results show that the method achieves Area Under the receiver operating characteristic Curve (AUC) values of up to 0.9869, 0.9896, 0.9991, and 0.9998 in the feature space and the constrained space of the NSL-KDD and CICIDS2017 datasets, respectively, outperforming the compared methods.
Keywords: adversarial examples; network intrusion detection; adversarial detection; ensemble learning; anomaly detection
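The scoring pipeline above can be sketched as follows. For brevity the isolation-forest stage is replaced by a simple quantile threshold on scores seen for normal samples; all vote data and the quantile value are illustrative assumptions:

```python
import numpy as np

def confidence_scores(base_model_votes):
    """Per-ensemble confidence: the fraction of base models that predict
    the input as normal (a 0 vote), as described in the abstract.
    base_model_votes shape: (ensembles, base models)."""
    votes = np.asarray(base_model_votes)
    return (votes == 0).mean(axis=1)

def is_adversarial(scores, normal_scores, quantile=0.05):
    """Stand-in for the isolation-forest stage: flag the input when its
    mean confidence falls below a low quantile of the scores observed
    on normal samples.  (The paper uses an isolation forest here; the
    quantile threshold is a simplification for illustration.)"""
    threshold = np.quantile(normal_scores, quantile)
    return scores.mean() < threshold

# Illustrative history of confidence scores on known-normal traffic.
normal_history = np.array([0.9, 0.95, 1.0, 0.85, 0.9])

# Votes for one suspicious input: 3 per-class ensembles x 4 base models,
# where 1 = "malicious of this class" and 0 = "normal".
adv_votes = [[0, 1, 1, 1], [1, 1, 0, 1], [1, 1, 1, 1]]
scores = confidence_scores(adv_votes)
print(scores)                                  # [0.25 0.25 0.  ]
print(is_adversarial(scores, normal_history))  # True: far below normal scores
```

Because the base models are trained on different data and feature subsets, a perturbation rarely fools all of them at once, so the confidence scores of adversarial traffic drop well below those of normal samples.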
19. A Targeted Adversarial Attack Method Based on Dual Guidance
Authors: 孙月, 张兴兰. 《浙江大学学报(工学版)》 (Journal of Zhejiang University (Engineering Science)) (PKU Core), 2026, Issue 1, pp. 81-89.
To improve the transferability of targeted adversarial examples, a generative adversarial attack method guided jointly by target-class impressions and regularized adversarial examples is proposed. The skip connections of a UNet model are used to generate adversarial perturbations from shallow features, strengthening the attack capability of the adversarial examples. Class impression images and labels of the target class are taken as input to guide the generator to produce adversarial perturbations that contain target-class features, improving the targeted attack success rate. During training, Dropout is applied to the generated adversarial perturbations to reduce the generator's dependence on the surrogate model and improve the generalization of the adversarial examples. Experimental results show that on the MNIST, CIFAR-10, and SVHN datasets, the adversarial examples generated by the proposed method achieve good targeted transfer-attack performance on classification models such as ResNet18 and DenseNet; the average black-box targeted attack success rate is more than 1.6% higher than that of the baseline attack method MIM, showing that the adversarial examples generated by the proposed method can evaluate the robustness of deep models more effectively.
Keywords: deep learning; adversarial attack; adversarial examples; black-box attack; targeted attack
20. A Survey of Adversarial Attacks in Computer Vision
Authors: 秦颖鑫, 张可佳, 潘海为, 巨亚昊. 《计算机工程》 (Computer Engineering) (PKU Core), 2026, Issue 2, pp. 46-68.
Deep learning has led the boom in artificial intelligence and is widely used in computer vision, where it has achieved breakthrough progress and remarkable results on complex tasks such as image recognition, object detection, object tracking, and face recognition, demonstrating outstanding recognition and prediction capabilities. However, the fragility and vulnerabilities of deep learning models have gradually been exposed: deep learning techniques, represented by convolutional neural networks, are extremely sensitive to carefully crafted adversarial examples, which can easily compromise the security and privacy of the models. This survey first summarizes the concept of adversarial attacks, the causes of adversarial examples, and related terminology, and outlines several classic adversarial attack strategies in the digital and physical domains, analyzing their advantages and disadvantages. Focusing on computer vision, it then reviews, for both the digital and physical domains, the latest research progress on adversarial attacks in object detection, face recognition, object tracking, monocular depth estimation, and optical flow estimation, together with the datasets commonly used in this research; it briefly introduces current defense and detection methods for adversarial examples, summarizes their advantages and disadvantages, and describes application examples of adversarial-example defense in different vision tasks. Finally, based on this summary of attack methods, the shortcomings of and challenges to existing adversarial attacks in computer vision are explored and analyzed.
Keywords: deep learning; computer vision; adversarial attack; digital domain; physical domain; adversarial examples