Journal Articles
189 articles found
1. Adversarial Examples Protect Your Privacy on Speech Enhancement System
Authors: Mingyu Dong, Diqun Yan, Rangding Wang. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 7, pp. 1-12.
Speech is easily leaked imperceptibly. When people use their phones, the personal voice assistant is constantly listening and waiting to be activated, and private content in speech may be maliciously extracted by applications on the device through automatic speech recognition (ASR) technology. To keep the recognized speech content accurate, speech enhancement technology is used to denoise the input speech. Speech enhancement has developed rapidly along with deep neural networks (DNNs), but adversarial examples can cause DNNs to fail, and this vulnerability can be exploited to protect privacy in speech. In this work, we propose an adversarial method for degrading speech enhancement systems, which can prevent the malicious extraction of private information from speech. Experimental results show that, after enhancement, most of the content of the target speech in the generated adversarial examples is removed or replaced with target speech content. The word error rate (WER) between the recognition results of the enhanced original example and the enhanced adversarial example can reach 89.0%, while for targeted attacks the WER between the enhanced adversarial example and the target example is as low as 33.75%. The adversarial perturbation induces far more change than its own magnitude: the ratio of the difference between the two enhanced examples to the adversarial perturbation can exceed 1.4430. Transferability between different speech enhancement models is also investigated; the method's low transferability ensures that the content of the adversarial example is not damaged for a friendly ASR, which can still extract the useful information. This work can prevent the malicious extraction of speech.
Keywords: adversarial example; speech enhancement; privacy protection; deep neural network
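The WER figures quoted in this abstract are word-level edit distances normalized by the length of the reference transcript. A minimal sketch of the standard metric (not the paper's own evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)
```

A WER of 89.0% between two recognition results, as reported above, means the two transcripts share very little content after enhancement.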
2. A Survey of Adversarial Examples in Computer Vision: Attack, Defense, and Beyond (cited: 2)
Authors: XU Keyizhi, LU Yajuan, WANG Zhongyuan, LIANG Chao. Wuhan University Journal of Natural Sciences, 2025, No. 1, pp. 1-20.
Recent years have witnessed the ever-increasing performance of Deep Neural Networks (DNNs) in computer vision tasks. However, researchers have identified a potential vulnerability: carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modifications to the input data. In this survey, we focus on (1) adversarial attack algorithms to generate adversarial examples, (2) adversarial defense techniques to secure DNNs against adversarial examples, and (3) important problems in the realm of adversarial examples beyond attack and defense, including theoretical explanations, trade-off issues, and benign attacks. Additionally, we draw a brief comparison between recently published surveys on adversarial examples and identify future directions for the research of adversarial examples, such as the generalization of methods and the understanding of transferability, that might be solutions to the open problems in this field.
Keywords: computer vision; adversarial examples; adversarial attack; adversarial defense
3. Exploratory Research on Defense against Natural Adversarial Examples in Image Classification
Authors: Yaoxuan Zhu, Hua Yang, Bin Zhu. Computers, Materials & Continua, 2025, No. 2, pp. 1947-1968.
The emergence of adversarial examples has revealed inadequacies in the robustness of image classification models based on Convolutional Neural Networks (CNNs). In recent years in particular, the discovery of natural adversarial examples has posed significant challenges, as traditional defense methods against adversarial attacks have proven largely ineffective against them. This paper explores defenses against natural adversarial examples from three perspectives: adversarial examples, model architecture, and dataset. First, it employs Class Activation Mapping (CAM) to visualize how models classify natural adversarial examples, identifying several typical attack patterns. Next, various common CNN models are analyzed to evaluate their susceptibility to these attacks, revealing that different architectures exhibit varying defensive capabilities; the study finds that as the depth of a network increases, its defense against natural adversarial examples strengthens. Finally, the impact of dataset class distribution on model defense capability is examined from two aspects: the number of classes in the training set and the number of predicted classes. Results indicate that reducing the number of training classes enhances the model's defense against natural adversarial examples. Additionally, under a fixed number of training classes, some CNN models show an optimal range of predicted classes for achieving the best defense performance against these adversarial examples.
Keywords: image classification; convolutional neural network; natural adversarial example; dataset; defense against adversarial examples
4. DEMGAN: A Machine Learning-Based Intrusion Detection System Evasion Scheme
Authors: Dawei Xu, Yue Lv, Min Wang, Baokun Zheng, Jian Zhao, Jiaxuan Yu. Computers, Materials & Continua, 2025, No. 7, pp. 1731-1746.
Network intrusion detection systems (IDS) are a prevalent method for safeguarding network traffic against attacks. However, existing IDS primarily depend on machine learning (ML) models, which are vulnerable to evasion through adversarial examples. In recent years, the Wasserstein Generative Adversarial Network (WGAN), based on the Wasserstein distance, has been extensively utilized to generate adversarial examples. Nevertheless, several challenges persist: (1) WGAN suffers from mode collapse when generating multi-category network traffic data, leading to subpar quality and insufficient diversity in the generated data; (2) due to unstable training, the authenticity of the data produced by WGAN is often low. This study improves WGAN to address these issues and proposes a new adversarial example generation algorithm called the Distortion Enhanced Multi-Generator Generative Adversarial Network (DEMGAN). DEMGAN effectively evades ML-based IDS by proficiently obfuscating network traffic data samples. We assess the efficacy of our attack method against five ML-based IDS using two public datasets. The results demonstrate that our method can successfully bypass IDS, achieving average evasion rates of 97.42% and 87.51%, respectively. Furthermore, retraining the IDS with the generated adversarial samples significantly bolsters its capability to detect adversarial samples, resulting in an average recognition rate increase of 86.78%. This approach not only enhances the performance of the IDS but also strengthens the network's resilience against potential threats.
Keywords: adversarial attacks; intrusion detection; adversarial traffic examples; DEMGAN
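The WGAN mentioned in this abstract trains a critic to estimate the Wasserstein-1 distance between real and generated data distributions. For two equal-size one-dimensional samples that distance has a closed form, which makes the underlying idea concrete (an illustrative sketch of the metric only, not part of DEMGAN):

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two equal-size 1-D empirical samples:
    the mean absolute difference of the sorted samples."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    if a.shape != b.shape:
        raise ValueError("samples must have equal size")
    return float(np.mean(np.abs(a - b)))
```

For example, `wasserstein_1d([0, 1, 2], [1, 2, 3])` is 1.0: every unit of probability mass moves a distance of 1. In a GAN the distributions are high-dimensional, so a critic network approximates this quantity instead of computing it directly.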
5. Enhancing Adversarial Example Transferability via Regularized Constrained Feature Layer
Authors: Xiaoyin Yi, Long Chen, Jiacheng Huang, Ning Yu, Qian Huang. Computers, Materials & Continua, 2025, No. 4, pp. 157-175.
Transfer-based Adversarial Attacks (TAAs) can deceive a victim model even without prior knowledge. This is achieved by leveraging a property of adversarial examples: when generated from a surrogate model, they retain their effect on other models due to their good transferability. However, adversarial examples often exhibit overfitting, as they are tailored to exploit the particular architecture and feature representation of the source model; consequently, their effectiveness decreases in black-box transfer attacks on different target models. To solve this problem, this study proposes an approach based on a Regularized Constrained Feature Layer (RCFL). The proposed method first uses regularization constraints to attenuate the low-frequency components of the initial examples. Perturbations are then added to a pre-specified layer of the source model using back-propagation, in order to modify the original adversarial examples. Afterward, a regularized loss function is used to enhance black-box transferability across different target models. The proposed method is tested on the ImageNet, CIFAR-100, and Stanford Cars datasets with various target models; the obtained results demonstrate that it achieves a significantly higher transfer-based adversarial attack success rate than baseline techniques.
Keywords: adversarial examples; black-box transferability; regularized constrained feature layer; transfer-based adversarial attacks
6. Usage of "such as", "for example", "e.g.", "i.e.", "etc.", and "et al." in English-language papers (cited: 3)
Authors: 黄龙旺, 龚汉忠. 编辑学报 (CSSCI, Peking University Core), 2008, No. 2, p. 124.
Keywords: such as; for example; e.g.; i.e.; etc.; et al.; usage analysis
7. Selection of Example Varieties Used in the DUS Test Guideline of Tagetes L. (cited: 1)
Authors: 刘艳芳, 张建华, 王烨, 陈海荣, 徐云, 杨晓洪, 张惠, 管俊娇, 王江民. Agricultural Science & Technology (CAS), 2012, No. 10, pp. 2110-2111, 2116.
[Objective] Taking the flower diameter of Tagetes L. as an example characteristic, this study aimed to select example varieties for use in the DUS Test Guideline of Tagetes L. [Method] Two consecutive years of flower-diameter measurements of 25 varieties were collected and analyzed using box plots to illustrate the uniformity and stability of the flower diameter of each variety. [Result] Based on the variability, distribution symmetry, and outliers of the flower-diameter measurements shown by the box plots, varieties 16, 2, and 4 were selected as the example varieties for the three expression states, with respective flower diameters of 3.0-4.4, 6.0-7.4, and 9.0-10.4 cm. [Conclusion] The box plot is an efficient method for the general analysis of varieties, providing information on the actual and possible expression range, median, and outliers of each variety's measurements. It also provides a reference for selecting example varieties for other quantitative characteristics and for evaluating variety quality.
Keywords: DUS Test Guideline of Tagetes L.; quantitative characteristic; example variety; box plot
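The box-plot screening described in this abstract reduces to a handful of order statistics per variety: the median, the quartiles, and any points outside the Tukey fences. A sketch with NumPy (illustrative only; the flower-diameter values below are made up, not the paper's data):

```python
import numpy as np

def box_plot_stats(values):
    """Statistics a box plot displays: quartiles, median, and outliers
    lying outside the Tukey fences (1.5 * IQR beyond the quartiles)."""
    v = np.asarray(values, dtype=float)
    q1, median, q3 = np.percentile(v, [25, 50, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = v[(v < lower) | (v > upper)]
    return {"q1": float(q1), "median": float(median),
            "q3": float(q3), "outliers": outliers.tolist()}

# Hypothetical flower diameters (cm) measured for one candidate variety:
stats = box_plot_stats([3.0, 3.2, 3.4, 3.6, 9.9])
```

A variety whose measurements cluster tightly with few outliers, i.e. a short box with short whiskers across both years, is a good example variety for its expression state.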
8. A corpus-based contrastive study of Chinese learners' use of the exemplifier "for example" (cited: 1)
Authors: 郭书彩, 李娜, 徐瑞华. 河北大学学报(哲学社会科学版) (CSSCI, Peking University Core), 2015, No. 1, pp. 103-108, 160.
Drawing on a corpus of British and American university students' essays (LOCNESS) and the Chinese Learner English Corpus (CLEC), this study investigates how Chinese EFL learners use the exemplifier "for example" in their writing. It finds that Chinese learners differ markedly from native speakers in both frequency and usage.
Keywords: corpus; exemplifiers; for example; LOCNESS; CLEC
9. Auto-expanded multi-query examples technology in content-based image retrieval (cited: 1)
Authors: 王小玲, 谢康林. Journal of Southeast University (English Edition) (EI, CAS), 2005, No. 3, pp. 287-292.
In order to narrow the semantic gap in content-based image retrieval (CBIR), a novel retrieval technology called auto-expanded multi-query examples (AMQE) is proposed. It expands the single query image used in traditional image retrieval into multiple query examples so as to include more image features related to semantics. By retrieving images for each of the query examples and integrating the retrieval results, more relevant images can be obtained. The property of the recall-precision curve of a general retrieval algorithm and the K-means clustering method are used to realize the expansion, according to the feature distances of the initially retrieved images. The experimental results demonstrate that the AMQE technology can greatly improve the recall and precision of the original algorithms.
Keywords: content-based image retrieval; semantics; multi-query examples; K-means clustering
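The expansion step described above relies on K-means clustering of the initially retrieved images' feature vectors. A minimal Lloyd's-algorithm sketch in NumPy (illustrative; the real system clusters high-dimensional image features rather than 2-D points):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: repeatedly assign points to the nearest
    centre, then move each centre to the mean of its assigned points."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize centres at k distinct data points.
    centres = pts[rng.choice(len(pts), size=k, replace=False)].copy()
    labels = np.zeros(len(pts), dtype=int)
    for _ in range(iters):
        # Euclidean distance from every point to every centre.
        dists = np.linalg.norm(pts[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centres[c] = pts[labels == c].mean(axis=0)
    return centres, labels

centres, labels = kmeans([[0, 0], [0, 1], [10, 10], [10, 11]], k=2)
```

Each resulting cluster centre can then serve as one of the expanded query examples, covering a different semantic neighborhood of the original query.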
10. Generate Corresponding Image from Text Description Using Modified GAN-CLS Algorithm
Authors: GONG Fuzhou, XIA Zigeng. Journal of Systems Science & Complexity, 2026, No. 1, pp. 410-431.
Automatically synthesizing images or text has become a useful research area in artificial intelligence. Generative adversarial networks (GANs), proposed by Goodfellow et al. in 2014, make this task more efficient by using deep neural networks (DNNs). The authors consider generating corresponding images from a single-sentence input text description using a GAN. Specifically, they analyze the GAN-CLS algorithm, an advanced GAN method proposed by Reed et al. in 2016. This paper shows a theoretical problem with the algorithm and corrects it by modifying the objective function of the model. Experiments are performed on the Oxford-102 and CUB datasets to support the theoretical results. Since the proposed modification can be seen as an idea for improving all GAN models of this kind, the authors try two models, GAN-CLS and AttnGAN_(GPT). In both models, the modified algorithm is more stable and generates more plausible images than the original. Some of the generated images also match the input texts better, and the modified algorithm performs better on quantitative indicators including FID and Inception Score. Finally, the authors discuss future application prospects of the modification idea, especially in the area of large language models.
Keywords: deep learning; generative adversarial networks; negative examples; text-to-image synthesis
11. A New Example of Retrograde Solubility Model for Carbonate Rocks (cited: 2)
Authors: LIU Lihong, WANG Chunlian, WANG Daming, WANG Haida. Acta Geologica Sinica (English Edition) (SCIE, CAS, CSCD), 2017, No. 3, pp. 1145-1146.
Objective: The dissolution and precipitation of carbonate during burial diagenesis controls reservoir properties in deeply buried strata, and the related geological processes have become a research focus in recent years. The most important dissolution fluids for carbonates are probably H2S and CO2, byproducts of sulfate reduction in deep-buried settings with sulfate minerals; however, carbonates are more soluble at relatively low temperature, the so-called retrograde solubility. Several geological processes can result in a decrease of temperature, including the upward migration of thermal fluids and tectonic uplift.
Keywords: retrograde solubility; carbonate rocks
12. Digital image inpainting by example-based image synthesis method (cited: 1)
Authors: 聂栋栋, Ma Lizhuang, Xiao Shuangjiu. High Technology Letters (EI, CAS), 2006, No. 3, pp. 276-282.
A simple and effective image inpainting method is proposed in this paper, which proves suitable for different kinds of target regions, with shapes ranging from little scraps to large unseemly objects, in a wide range of images. It is an important improvement upon traditional image inpainting techniques. By introducing a new bijective-mapping term into the matching cost function, the artificial repetition problem in the final inpainted image is practically solved. In addition, by adopting an inpainting error map, the target pixels are refined gradually during the inpainting process, and the overlapped target patches are combined more seamlessly than in previous methods. Finally, the inpainting time is dramatically decreased by using a new acceleration method in the matching process.
Keywords: inpainting; image synthesis; texture synthesis; priority; matching cost function; example patch; isophote; diffusion
13. Patterns of Clay Minerals Transformation in Clay Gouge, with Examples from Reverse Fault Rocks in the Devonian Niqiuhe Formation in the Dayangshu Basin (cited: 2)
Authors: MENG Jie, LI Benxian, ZHANG Juncheng, LIU Xiaoyang. Acta Geologica Sinica (English Edition) (SCIE, CAS, CSCD), 2017, Supplement 1, pp. 59-60.
The role of authigenic clay growth in clay gouge is increasingly recognized as a key to understanding the mechanics of brittle faulting and fault zone processes, including creep and seismogenesis, and to providing new insights into the ongoing debate about the frictional strength of brittle faults (Haines and van der Pluijm, 2012). However, neither the conditions nor the processes which …
Keywords: clay minerals transformation; clay gouge; reverse fault rocks; Devonian Niqiuhe Formation; Dayangshu Basin
14. Characteristics of Authigenic Pyrite and its Sulfur Isotopes Influenced by Methane Seep: Taking Core A at Site 79 of the Middle Okinawa Trough as an Example (cited: 1)
Authors: WANG Meng, LI Qing, CAI Feng, LIANG Jie, YAN Guijing, DONG Gang, WANG Feng, SHAO Hebin, LUO Di, CAO Yimin. Acta Geologica Sinica (English Edition) (SCIE, CAS, CSCD), 2017, No. 1, pp. 365-366.
Objective: Authigenic pyrite often develops extensively in marine sediments as an important product of sulfate reduction in anoxic environments. It has a specific appearance and complicated sulfur isotopic properties, and acts as important evidence of methane seeps in marine sediments. Strong AOM (anaerobic oxidation of methane) activity has developed in the Okinawa Trough.
Keywords: AOM; authigenic pyrite; sulfur isotopes; methane seep; Okinawa Trough
15. Adversarial Attacks on License Plate Recognition Systems (cited: 1)
Authors: Zhaoquan Gu, Yu Su, Chenwei Liu, Yinyu Lyu, Yunxiang Jian, Hao Li, Zhen Cao, Le Wang. Computers, Materials & Continua (SCIE, EI), 2020, No. 11, pp. 1437-1452.
The license plate recognition system (LPRS) has been widely adopted in daily life due to its efficiency and high accuracy. Deep neural networks are commonly used in the LPRS to improve recognition accuracy. However, researchers have found that deep neural networks have security problems of their own that may lead to unexpected results. Specifically, they can be easily attacked by adversarial examples, generated by adding small perturbations to the original images, resulting in incorrect license plate recognition. There are classic methods to generate adversarial examples, but they cannot be adopted for the LPRS directly. In this paper, we modify some classic methods to generate adversarial examples that mislead the LPRS. We conduct extensive evaluations on the HyperLPR system, and the results show that it can be easily attacked by such adversarial examples. In addition, we show that the generated images can also attack black-box systems: on some examples the Baidu LPR system likewise makes incorrect recognitions. We hope this paper helps improve the LPRS by raising awareness of such adversarial attacks.
Keywords: license plate recognition system; adversarial examples; deep neural networks
16. A Survey on Adversarial Examples in Deep Learning (cited: 3)
Authors: Kai Chen, Haoqi Zhu, Leiming Yan, Jinwei Wang. Journal on Big Data, 2020, No. 2, pp. 71-84.
Adversarial examples are a hot topic in the field of security in deep learning. Their features, generation methods, and attack and defense methods are the focus of current research. This article explains the key technologies and theories of adversarial examples, from the concept and occurrence of adversarial examples to the attacking methods, and lists possible reasons for their existence. It also analyzes several typical generation methods in detail: Limited-memory BFGS (L-BFGS), the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), the Iterative Least-likely Class method (LLC), etc. Furthermore, from the perspective of attack methods and causes, the main defense techniques are listed: preprocessing, regularization and adversarial training, distillation, etc., and the application scenarios and deficiencies of the different defense measures are pointed out. The article further discusses the application of adversarial examples, currently mainly in adversarial evaluation and adversarial training. Finally, overall research directions are surveyed with a view to completely solving the adversarial attack problem; many practical and theoretical problems remain. Finding the characteristics of adversarial examples, giving a mathematical description of their practical application prospects, and exploring universal generation methods and the generation mechanism of adversarial examples are the main research directions for the future.
Keywords: adversarial examples; generation methods; defense methods
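FGSM, one of the generation methods listed in this abstract, perturbs the input a single step in the direction of the sign of the loss gradient. A self-contained sketch on a toy logistic classifier (illustrative numbers; real attacks target deep networks, where the gradient comes from back-propagation):

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One Fast Gradient Sign Method step against a logistic model
    p = sigmoid(w . x + b) with cross-entropy loss."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    grad_x = (p - y) * w              # d(loss)/dx for cross-entropy
    return x + eps * np.sign(grad_x)  # perturbation bounded: ||delta||_inf <= eps

w, b = np.array([1.0, -2.0]), 0.0
x, y = np.array([0.5, 0.0]), 1.0      # score w.x + b = 0.5 > 0: class 1
x_adv = fgsm(x, w, b, y, eps=0.4)     # score drops to -0.7: prediction flips
```

The same one-step recipe, applied to a DNN's input gradient, yields the imperceptible perturbations discussed throughout these surveys; BIM simply iterates it with a smaller step size.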
17. "for example" or "such as"? (cited: 1)
Author: 戴卫红. 英语辅导(高中年级), 2001, No. 8, p. 11.
When writing English compositions, senior high school students often need to list things, as in: …
Keywords: senior high school; English; writing guidance; "for example"; "such as"; usage
18. An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments
Authors: Weizheng Wang, Xiangqi Wang, Xianmin Pan, Xingxing Gong, Jian Liang, Pradip Kumar Sharma, Osama Alfarraj, Wael Said. Computers, Materials & Continua (SCIE, EI), 2023, No. 9, pp. 3859-3876.
Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations; the remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges, this paper proposes a novel AE detection technique that combines multiple traditional image-denoising algorithms and Convolutional Neural Network (CNN) structures. The detector model integrates the classification results of different models as its input and calculates its final output with a machine-learning voting algorithm. By analyzing the discrepancy between the predictions the model makes on original examples and on denoised examples, AEs are detected effectively. The technique reduces computational overhead without modifying the model structure or parameters, effectively avoiding the error amplification caused by denoising. The approach demonstrates excellent detection performance against mainstream AE attacks, including the Fast Gradient Sign Method (FGSM), the Basic Iteration Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate on FGSM detection while reducing the accuracy on clean examples by only 4%.
Keywords: deep neural networks; adversarial example; image denoising; adversarial example detection; machine learning; adversarial attack
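One plausible reading of the detection scheme above: run an ensemble on both the raw input and its denoised version, and flag the input when the two majority votes disagree. A toy sketch (the class labels and the simple majority vote are hypothetical stand-ins; the paper's actual detector uses a learned voting model):

```python
from collections import Counter

def majority(preds):
    """Most common label among an ensemble's predictions."""
    return Counter(preds).most_common(1)[0][0]

def is_adversarial(preds_raw, preds_denoised):
    """Flag the input when denoising changes the ensemble's majority vote:
    clean inputs should classify the same way before and after denoising."""
    return majority(preds_raw) != majority(preds_denoised)

# Clean input: denoising does not change the vote.
clean = is_adversarial(["cat", "cat", "dog"], ["cat", "cat", "cat"])
# Adversarial input: the perturbation's effect disappears after denoising.
attacked = is_adversarial(["dog", "dog", "cat"], ["cat", "cat", "cat"])
```

The appeal of such a discrepancy test is that it needs no change to the classifier's structure or parameters, which matches the computational-overhead claim in the abstract.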