Journal Articles
45,773 articles found.
1. A Novel Unsupervised Structural Attack and Defense for Graph Classification
Authors: Yadong Wang, Zhiwei Zhang, Pengpeng Qiao, Ye Yuan, Guoren Wang. Computers, Materials & Continua, 2026, No. 1, pp. 1761-1782 (22 pages).
Graph Neural Networks (GNNs) have proven highly effective for graph classification across diverse fields such as social networks, bioinformatics, and finance, due to their capability to learn complex graph structures. However, despite their success, GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy. Existing adversarial attack strategies primarily rely on label information to guide the attacks, which limits their applicability in scenarios where such information is scarce or unavailable. This paper introduces an innovative unsupervised attack method for graph classification, which operates without relying on label information, thereby enhancing its applicability in a broad range of scenarios. Specifically, our method first leverages a graph contrastive learning loss to learn high-quality graph embeddings by comparing different stochastic augmented views of the graphs. To effectively perturb the graphs, we then introduce an implicit estimator that measures the impact of various modifications on graph structures. The proposed strategy identifies and flips edges with the top-K highest scores, determined by the estimator, to maximize the degradation of the model's performance. In addition, to defend against such attacks, we propose a lightweight regularization-based defense mechanism that is specifically tailored to mitigate the structural perturbations introduced by our attack strategy. It enhances model robustness by enforcing embedding consistency and edge-level smoothness during training. We conduct experiments on six public TU graph classification datasets: NCI1, NCI109, Mutagenicity, ENZYMES, COLLAB, and DBLP_v1, to evaluate the effectiveness of our attack and defense strategies. Under an attack budget of 3, the maximum reduction in model accuracy reaches 6.67% on the Graph Convolutional Network (GCN) and 11.67% on the Graph Attention Network (GAT) across different datasets, indicating that our unsupervised method induces degradation comparable to state-of-the-art supervised attacks. Meanwhile, our defense achieves the highest accuracy recovery of 3.89% (GCN) and 5.00% (GAT), demonstrating improved robustness against structural perturbations.
Keywords: Graph classification, graph neural networks, adversarial attack
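An illustrative sketch of the top-K edge-flipping step described in the abstract, assuming the perturbation scores have already been produced by some estimator; the adjacency matrix, the edge_scores values, and the budget below are hypothetical stand-ins, not the paper's contrastive-learning pipeline.

    import numpy as np

    def flip_top_k_edges(adj, edge_scores, k=3):
        """Toggle the k candidate edges with the highest estimated impact.

        adj         : (n, n) symmetric 0/1 adjacency matrix
        edge_scores : dict mapping (i, j), i < j, to an estimated impact score
        k           : attack budget (number of edge flips)
        """
        perturbed = adj.copy()
        # Rank candidate flips by score, highest impact first, and flip the top k.
        ranked = sorted(edge_scores.items(), key=lambda kv: kv[1], reverse=True)
        for (i, j), _score in ranked[:k]:
            new_val = 1 - perturbed[i, j]            # add a missing edge or delete an existing one
            perturbed[i, j] = perturbed[j, i] = new_val
        return perturbed

    # Toy usage: a 4-node path graph and made-up scores standing in for the estimator.
    adj = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]])
    scores = {(0, 2): 0.9, (1, 3): 0.7, (0, 3): 0.2}
    print(flip_top_k_edges(adj, scores, k=2))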
2. A Dynamic Deceptive Defense Framework for Zero-Day Attacks in IIoT: Integrating Stackelberg Game and Multi-Agent Distributed Deep Deterministic Policy Gradient
Authors: Shigen Shen, Xiaojun Ji, Yimeng Liu. Computers, Materials & Continua, 2025, No. 11, pp. 3997-4021 (25 pages).
The Industrial Internet of Things (IIoT) is increasingly vulnerable to sophisticated cyber threats, particularly zero-day attacks that exploit unknown vulnerabilities and evade traditional security measures. To address this critical challenge, this paper proposes a dynamic defense framework named Zero-day-aware Stackelberg Game-based Multi-Agent Distributed Deep Deterministic Policy Gradient (ZSG-MAD3PG). The framework integrates Stackelberg game modeling with the Multi-Agent Distributed Deep Deterministic Policy Gradient (MAD3PG) algorithm and incorporates defensive deception (DD) strategies to achieve adaptive and efficient protection. While conventional methods typically incur considerable resource overhead and exhibit higher latency due to static or rigid defensive mechanisms, the proposed ZSG-MAD3PG framework mitigates these limitations through multi-stage game modeling and adaptive learning, enabling more efficient resource utilization and faster response times. The Stackelberg-based architecture allows defenders to dynamically optimize packet sampling strategies, while attackers adjust their tactics to reach rapid equilibrium. Furthermore, dynamic deception techniques reduce the time required for the concealment of attacks and the overall system burden. A lightweight behavioral fingerprinting detection mechanism further enhances real-time zero-day attack identification within industrial device clusters. ZSG-MAD3PG demonstrates higher true positive rates (TPR) and lower false alarm rates (FAR) compared to existing methods, while also achieving improved latency, resource efficiency, and stealth adaptability in IIoT zero-day defense scenarios.
Keywords: Industrial Internet of Things, zero-day attacks, Stackelberg game, distributed deep deterministic policy gradient, defensive spoofing, dynamic defense
3. A Survey of Adversarial Examples in Computer Vision: Attack, Defense, and Beyond
Authors: XU Keyizhi, LU Yajuan, WANG Zhongyuan, LIANG Chao. Wuhan University Journal of Natural Sciences, 2025, No. 1, pp. 1-20 (20 pages).
Recent years have witnessed the ever-increasing performance of Deep Neural Networks (DNNs) in computer vision tasks. However, researchers have identified a potential vulnerability: carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modification to the input data. In this survey, we focus on (1) adversarial attack algorithms to generate adversarial examples, (2) adversarial defense techniques to secure DNNs against adversarial examples, and (3) important problems in the realm of adversarial examples beyond attack and defense, including the theoretical explanations, trade-off issues and benign attacks in adversarial examples. Additionally, we draw a brief comparison between recently published surveys on adversarial examples, and identify the future directions for the research of adversarial examples, such as the generalization of methods and the understanding of transferability, that might be solutions to the open problems in this field.
Keywords: computer vision, adversarial examples, adversarial attack, adversarial defense
4. Optimal Secure Control of Networked Control Systems Under False Data Injection Attacks: A Multi-Stage Attack-Defense Game Approach
Authors: Dajun Du, Yi Zhang, Baoyue Xu, Minrui Fei. IEEE/CAA Journal of Automatica Sinica, 2025, No. 4, pp. 821-823 (3 pages).
Dear Editor, the attacker always attempts to covertly intrude into networked control systems (NCSs) by dynamically changing the false data injection attack (FDIA) strategy, while the defender tries their best to resist attacks by designing a defense strategy on the basis of identifying the attack strategy, maintaining stable operation of the NCSs. To solve this attack-defense game problem, this letter investigates optimal secure control of NCSs under FDIAs. First, to account for the energy alterations caused by false data, a novel attack-defense game model is constructed, which considers the changes in energy caused by the actions of the defender and attacker in the forward and feedback channels.
Keywords: networked control systems (NCSs), false data injection attacks (FDIAs), optimal secure control, defense strategy design, attack strategy identification
5. A survey of backdoor attacks and defenses: From deep neural networks to large language models
Authors: Ling-Xin Jin, Wei Jiang, Xiang-Yu Wen, Mei-Yu Lin, Jin-Yu Zhan, Xing-Zhi Zhou, Maregu Assefa Habtie, Naoufel Werghi. Journal of Electronic Science and Technology, 2025, No. 3, pp. 13-35 (23 pages).
Deep neural networks (DNNs) have found extensive applications in safety-critical artificial intelligence systems, such as autonomous driving and facial recognition systems. However, recent research has revealed their susceptibility to backdoors maliciously injected by adversaries. This vulnerability arises due to the intricate architecture and opacity of DNNs, resulting in numerous redundant neurons embedded within the models. Adversaries exploit these vulnerabilities to conceal malicious backdoor information within DNNs, thereby causing erroneous outputs and posing substantial threats to the efficacy of DNN-based applications. This article presents a comprehensive survey of backdoor attacks against DNNs and the countermeasure methods employed to mitigate them. Initially, we trace the evolution of the concept from traditional backdoor attacks to backdoor attacks against DNNs, highlighting the feasibility and practicality of generating backdoor attacks against DNNs. Subsequently, we provide an overview of notable works encompassing various attack and defense strategies, facilitating a comparative analysis of their approaches. Through these discussions, we offer constructive insights aimed at refining these techniques. Finally, we extend our research perspective to the domain of large language models (LLMs) and synthesize the characteristics and developmental trends of backdoor attacks and defense methods targeting LLMs. Through a systematic review of existing studies on backdoor vulnerabilities in LLMs, we identify critical open challenges in this field and propose actionable directions for future research.
Keywords: Backdoor attacks, backdoor defenses, deep neural networks, large language model
6. Adversarial Attacks and Defenses in Deep Learning (cited 23 times)
Authors: Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu. Engineering (SCIE, EI), 2020, No. 3, pp. 346-360 (15 pages).
With the rapid developments of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of the deployed algorithms. Recently, the security vulnerability of DL algorithms to adversarial samples has been widely recognized. The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans. Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality. Hence, adversarial attack and defense techniques have attracted increasing attention from both machine learning and security communities and have become a hot research topic in recent years. In this paper, we first introduce the theoretical foundations, algorithms, and applications of adversarial attack techniques. We then describe a few research efforts on the defense techniques, which cover the broad frontier in the field. Several open problems and challenges are subsequently discussed, which we hope will provoke further research efforts in this critical area.
Keywords: Machine learning, deep neural network, adversarial example, adversarial attack, adversarial defense
7. Primary User Adversarial Attacks on Deep Learning-Based Spectrum Sensing and the Defense Method (cited 4 times)
Authors: Shilian Zheng, Linhui Ye, Xuanye Wang, Jinyin Chen, Huaji Zhou, Caiyi Lou, Zhijin Zhao, Xiaoniu Yang. China Communications (SCIE, CSCD), 2021, No. 12, pp. 94-107 (14 pages).
The spectrum sensing model based on deep learning has achieved satisfactory detection performance, but its robustness has not been verified. In this paper, we propose the primary user adversarial attack (PUAA) to verify the robustness of the deep learning based spectrum sensing model. PUAA adds a carefully manufactured perturbation to the benign primary user signal, which greatly reduces the probability of detection of the spectrum sensing model. We design three PUAA methods in a black-box scenario. In order to defend against PUAA, we propose a defense method based on an autoencoder, named DeepFilter. We apply the long short-term memory network and the convolutional neural network together in DeepFilter, so that it can extract the temporal and local features of the input signal at the same time to achieve an effective defense. Extensive experiments are conducted to evaluate the attack effect of the designed PUAA methods and the defense effect of DeepFilter. Results show that the three PUAA methods designed can greatly reduce the probability of detection of the deep learning-based spectrum sensing model. In addition, the experimental results of the defense effect of DeepFilter show that DeepFilter can effectively defend against PUAA without affecting the detection performance of the model.
Keywords: spectrum sensing, cognitive radio, deep learning, adversarial attack, autoencoder, defense
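The DeepFilter defense described above pairs convolutional and recurrent layers inside an autoencoder. A minimal Keras sketch of one plausible Conv1D + LSTM denoising autoencoder for signal frames is shown below; the sequence length, channel count, layer sizes and training configuration are assumptions for illustration, not the paper's actual architecture.

    from tensorflow.keras import layers, models

    def build_denoising_autoencoder(seq_len=1024, channels=2):
        """Map a (possibly perturbed) signal frame back to a cleaned version of itself."""
        inputs = layers.Input(shape=(seq_len, channels))
        x = layers.Conv1D(32, kernel_size=7, padding="same", activation="relu")(inputs)  # local features
        x = layers.LSTM(64, return_sequences=True)(x)                                    # temporal features
        outputs = layers.Conv1D(channels, kernel_size=7, padding="same")(x)              # reconstructed signal
        model = models.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="mse")  # train on (perturbed, clean) signal pairs
        return model

    model = build_denoising_autoencoder()
    model.summary()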
8. Adversarial Attacks and Defenses in Images, Graphs and Text: A Review (cited 30 times)
Authors: Han Xu, Yao Ma, Hao-Chen Liu, Debayan Deb, Hui Liu, Ji-Liang Tang, Anil K. Jain. International Journal of Automation and Computing (EI, CSCD), 2020, No. 2, pp. 151-178 (28 pages).
Deep neural networks (DNNs) have achieved unprecedented success in numerous machine learning tasks in various domains. However, the existence of adversarial examples raises concerns about adopting deep learning in safety-critical applications. As a result, we have witnessed increasing interest in studying attack and defense mechanisms for DNN models on different data types, such as images, graphs and text. Thus, it is necessary to provide a systematic and comprehensive overview of the main threats of attacks and the success of the corresponding countermeasures. In this survey, we review the state-of-the-art algorithms for generating adversarial examples and the countermeasures against adversarial examples, for the three most popular data types: images, graphs and text.
Keywords: Adversarial example, model safety, robustness, defenses, deep learning
9. Adversarial attacks and defenses for digital communication signals identification (cited 2 times)
Authors: Qiao Tian, Sicheng Zhang, Shiwen Mao, Yun Lin. Digital Communications and Networks (SCIE, CSCD), 2024, No. 3, pp. 756-764 (9 pages).
As modern communication technology advances apace, digital communication signals identification plays an important role in cognitive radio networks and communication monitoring and management systems. AI has become a promising solution to this problem due to its powerful modeling capability, which has become a consensus in academia and industry. However, because of the data dependence and inexplicability of AI models and the openness of electromagnetic space, the physical-layer digital communication signals identification model is threatened by adversarial attacks. Adversarial examples pose a common threat to AI models, where well-designed and slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signals identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on the end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next, we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration verification system is developed to show that the adversarial attack is a real threat to the digital communication signals identification model, which should receive more attention in future research.
Keywords: Digital communication signals identification, AI model, adversarial attacks, adversarial defenses, adversarial indicators
10. Cooperative differential games guidance laws for multiple attackers against an active defense target (cited 16 times)
Authors: Fei LIU, Xiwang DONG, Qingdong LI, Zhang REN. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2022, No. 5, pp. 374-389 (16 pages).
This paper is concerned with a scenario of multiple attackers trying to intercept a target with active defense. Three types of agents are considered in the guidance: the multiple attackers, the target and the defender, where the attackers aim to pursue the target from different directions and evade the defender simultaneously. The guidance engagement is formulated in the framework of a zero-sum two-person differential game between the two opposing teams, such that measurements of the target's maneuver or estimations of the defender's defending strategy can be absent. Cooperation of the attackers resides in two aspects: redundant interception under the threat of the defender and the relative intercept geometry with the target. The miss distances, the relative intercept angle errors and the costs of the agents are combined into a single performance index of the game. Such a formulation enables a unitary approach to the design of guidance laws for the agents. To minimize the control efforts and miss distances for the attackers, an optimization method is proposed to find the best anticipated miss distances to the defender under the constraint that the defender is endowed with a capture radius. Numerical simulations with two cases are conducted to illustrate the effectiveness of the proposed cooperative guidance law.
Keywords: Active defense, cooperative guidance, differential games, minimum effort optimization, relative intercept angle
11. Chained Dual-Generative Adversarial Network: A Generalized Defense Against Adversarial Attacks (cited 1 time)
Authors: Amitoj Bir Singh, Lalit Kumar Awasthi, Urvashi, Mohammad Shorfuzzaman, Abdulmajeed Alsufyani, Mueen Uddin. Computers, Materials & Continua (SCIE, EI), 2023, No. 2, pp. 2541-2555 (15 pages).
Neural networks play a significant role in the field of image classification. When an input image is modified by adversarial attacks, the changes are imperceptible to the human eye, but they still lead to misclassification of the image. Researchers have demonstrated these attacks by making production self-driving cars misclassify Stop road signs as 45 Miles Per Hour (MPH) road signs and a turtle be misclassified as an AK47. Three primary types of defense approaches exist which can safeguard against such attacks, i.e., Gradient Masking, Robust Optimization, and Adversarial Example Detection. Very few approaches use Generative Adversarial Networks (GANs) for defense against adversarial attacks. In this paper, we create a new approach to defend against adversarial attacks, dubbed Chained Dual-Generative Adversarial Network (CD-GAN), that tackles the defense against adversarial attacks by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling with GANs. CD-GAN is created using two GANs, i.e., CD-GAN's Sub-Resolution GAN and CD-GAN's Super-Resolution GAN. The first, CD-GAN's Sub-Resolution GAN, takes the original-resolution input image and oversamples it to generate a lower-resolution neutralized image. The second, CD-GAN's Super-Resolution GAN, takes the output of the Sub-Resolution GAN and undersamples it to generate the higher-resolution image, which removes any remaining perturbations. The Chained Dual GAN is formed by chaining these two GANs together. Both of these GANs are trained independently. CD-GAN's Sub-Resolution GAN is trained using higher-resolution adversarial images as inputs and lower-resolution neutralized images as output image examples. Hence, this GAN downscales the image while removing adversarial attack noise. CD-GAN's Super-Resolution GAN is trained using lower-resolution adversarial images as inputs and higher-resolution neutralized images as output images. Because of this, it acts as an upscaling GAN while removing the adversarial attack noise. Furthermore, CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort, and can defend any classifier model against adversarial attacks. In this way, it is a generalized defense against adversarial attacks, capable of defending any classifier model against any attack. This enables the user to directly integrate CD-GAN with an existing production-deployed classifier smoothly. CD-GAN iteratively removes the adversarial noise using a multi-step, modular approach. It performs comparably to the state of the art with a mean accuracy of 33.67 while using minimal compute resources in training.
Keywords: Adversarial attacks, GAN-based adversarial defense, image classification models, adversarial defense
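Because CD-GAN is prefixed to an unmodified classifier, inference reduces to chaining the two independently trained generators. A minimal sketch, assuming sub_resolution_gan and super_resolution_gan are already-trained callables (the names and the single-round default are hypothetical):

    def cd_gan_defense(image, sub_resolution_gan, super_resolution_gan, n_rounds=1):
        """Chain the down-scaling and up-scaling generators to strip adversarial noise."""
        x = image
        for _ in range(n_rounds):
            x = sub_resolution_gan(x)    # oversample to a lower-resolution neutralized image
            x = super_resolution_gan(x)  # restore resolution, removing leftover perturbations
        return x

    # The cleaned image is then fed, unchanged, to any existing classifier:
    # prediction = classifier(cd_gan_defense(adversarial_image, g_sub, g_super))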
12. Multi-stage attack weapon target allocation method based on defense area analysis (cited 13 times)
Authors: JIA Zhengrong, LU Faxing, WANG Hangyu. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2020, No. 3, pp. 539-550 (12 pages).
To better reflect the interactive defense between targets in practical combat scenarios, the basic weapon-target allocation (WTA) framework needs to be improved. A multi-stage attack WTA method is proposed. First, a defense area analysis is presented according to the targets' positions and the radii of the defense areas to analyze the interactive coverage and protection between the targets' defense areas. Second, with the coverage status and coverage layer number, a multi-stage attack planning method is proposed and the multi-stage attack objective function model is established. Simulation is conducted with interactive defense combat scenarios, the traditional WTA method and the multi-stage WTA method are compared, and the objective function model is validated with the Monte-Carlo method. The results suggest that if the combat scenario involves interactive coverage of the targets' defense areas, it is imperative to analyze the defense areas and apply the multi-stage attack method to weaken the target defense progressively for better combat effectiveness.
Keywords: weapon-target allocation (WTA), defense area analysis, combat effectiveness analysis
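The defense-area analysis step can be illustrated by counting, for each target, how many other targets' defense areas cover it (its coverage layer number). The positions, radii and data layout below are illustrative only, not the paper's model.

    import math

    def coverage_layers(targets):
        """Return, for each target, the number of *other* targets whose defense area covers it."""
        layers = []
        for i, t in enumerate(targets):
            count = 0
            for j, other in enumerate(targets):
                if i == j:
                    continue
                if math.dist(t['pos'], other['pos']) <= other['radius']:
                    count += 1   # target i sits inside target j's defense area
            layers.append(count)
        return layers

    # Toy scenario: the middle target is protected by both neighbours.
    targets = [{'pos': (0, 0), 'radius': 6},
               {'pos': (4, 0), 'radius': 5},
               {'pos': (8, 0), 'radius': 6}]
    print(coverage_layers(targets))   # -> [1, 2, 1]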
13. Deep Image Restoration Model: A Defense Method Against Adversarial Attacks (cited 1 time)
Authors: Kazim Ali, Adnan N. Quershi, Ahmad Alauddin Bin Arifin, Muhammad Shahid Bhatti, Abid Sohail, Rohail Hassan. Computers, Materials & Continua (SCIE, EI), 2022, No. 5, pp. 2209-2224 (16 pages).
These days, deep learning and computer vision are fast-growing fields in the modern world of information technology. Deep learning algorithms and computer vision have achieved great success in different applications like image classification, speech recognition, self-driving vehicles, disease diagnostics, and many more. Despite this success in various applications, these learning algorithms have been found to face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, which are intentionally and slightly changed or perturbed. These changes are humanly imperceptible, yet they are misclassified by a model with high probability and severely affect its performance or predictions. In this scenario, we present a deep image restoration model that restores adversarial examples so that they are classified correctly again by the target model. We demonstrate that our defense method against adversarial attacks, based on a deep image restoration model, is simple and state-of-the-art by providing strong experimental evidence. We have used the MNIST and CIFAR10 datasets for experiments and analysis of our defense method. In the end, we compare our method to other state-of-the-art defense methods and show that our results are better than those of the rival methods.
Keywords: Computer vision, deep learning, convolutional neural networks, adversarial examples, adversarial attacks, adversarial defenses
14. Exogenous melatonin enhances heat stress tolerance in sweetpotato by modulating antioxidant defense system, osmotic homeostasis and stomatal traits (cited 1 time)
Authors: Sunjeet Kumar, Rui Yu, Yang Liu, Yi Liu, Mohammad Nauman Khan, Yonghua Liu, Mengzhao Wang, Guopeng Zhu. Horticultural Plant Journal, 2025, No. 1, pp. 431-445 (15 pages).
Heat stress hinders the growth and productivity of sweetpotato plants, predominantly through oxidative damage to cellular membranes. Therefore, the development of efficient approaches for mitigating heat-related impairments is essential for the long-term production of sweetpotatoes. Melatonin has been recognised for its capacity to assist plants in dealing with abiotic stress conditions. This research aimed to investigate how different doses of exogenous melatonin influence heat damage in sweetpotato plants. Heat stress drastically reduced shoot and root fresh weight by 31.8% and 44.5%, respectively. This reduction resulted in oxidative stress characterised by increased formation of hydrogen peroxide (H2O2) by 804.4%, superoxide ion (O2·−) by 211.5% and malondialdehyde (MDA) by 234.2%. Heat stress also reduced chlorophyll concentration, photosystem II efficiency (Fv/Fm) by 15.3% and gaseous exchange. However, pre-treatment with 100 μmol L−1 melatonin increased growth and reduced oxidative damage in sweetpotato plants under heat stress. In particular, melatonin decreased H2O2, O2·− and MDA by 64.8%, 42.7% and 38.2%, respectively. Melatonin also mitigated the decline in chlorophyll levels and improved stomatal traits, gaseous exchange and Fv/Fm (13%). The results suggested that the favorable outcomes of melatonin treatment can be associated with elevated antioxidant enzyme activity and an increase in non-enzymatic antioxidants and osmo-protectants. Overall, these findings indicate that exogenous melatonin can improve heat stress tolerance in sweetpotatoes. This study will assist researchers in further investigating how melatonin makes sweetpotatoes more resistant to heat stress.
Keywords: Sweetpotato, heat stress, melatonin, oxidative damage, antioxidant defense system, stomatal traits
15. Attacks Against Cross-Chain Systems and Defense Approaches: A Contemporary Survey (cited 7 times)
Authors: Li Duan, Yangyang Sun, Wei Ni, Weiping Ding, Jiqiang Liu, Wei Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, No. 8, pp. 1647-1667 (21 pages).
The blockchain cross-chain is a significant technology for inter-chain interconnection and value transfer among different blockchain networks. Cross-chain overcomes the “information island” problem of the closed blockchain network and is increasingly applied to multiple critical areas such as finance and the Internet of Things (IoT). Blockchain can be divided into three main categories of blockchain networks: public blockchains, private blockchains, and consortium blockchains. However, there are differences in block structures, consensus mechanisms, and complex working mechanisms among heterogeneous blockchains. The fragility of the cross-chain system itself makes the cross-chain system face some potential security and privacy threats. This paper discusses security defects in the cross-chain implementation mechanism and the impact of the structural features of blockchain networks on cross-chain security. In terms of cross-chain intercommunication, a cross-chain attack can be divided into a multi-chain combination attack, native chain attack, and inter-chain attack diffusion. Then the various security threats and attack paths faced by the cross-chain system are analyzed. At last, the corresponding security defense methods against cross-chain security threats and future research directions for cross-chain applications are put forward.
Keywords: Blockchain, cross-chain, defense, distributed private key control, hash-locking, notary, security threats, sidechain/relay
16. Moving target defense of routing randomization with deep reinforcement learning against eavesdropping attack (cited 5 times)
Authors: Xiaoyu Xu, Hao Hu, Yuling Liu, Jinglei Tan, Hongqi Zhang, Haotian Song. Digital Communications and Networks (SCIE, CSCD), 2022, No. 3, pp. 373-387 (15 pages).
Eavesdropping attacks have become one of the most common attacks on networks because of their easy implementation. Eavesdropping attacks not only lead to transmission data leakage but also develop into other more harmful attacks. Routing randomization is a relevant research direction for moving target defense, which has been proven to be an effective method to resist eavesdropping attacks. To counter eavesdropping attacks, in this study, we analyzed the existing routing randomization methods and found that their security and usability need to be further improved. According to the characteristics of eavesdropping attacks, which are “latent and transferable”, a routing randomization defense method based on deep reinforcement learning is proposed. The proposed method realizes routing randomization on packet-level granularity using programmable switches. To improve the security and quality of service of legitimate services in networks, we use the deep deterministic policy gradient to generate random routing schemes with support from powerful network state awareness. In-band network telemetry provides real-time, accurate, and comprehensive network state awareness for the proposed method. Various experiments show that compared with other typical routing randomization defense methods, the proposed method has obvious advantages in security and usability against eavesdropping attacks.
Keywords: Routing randomization, moving target defense, deep reinforcement learning, deep deterministic policy gradient
17. Replay Attack and Defense of Electric Vehicle Charging on GB/T 27930-2015 Communication Protocol (cited 2 times)
Authors: Yafei Li, Yong Wang, Min Wu, Haiming Li. Journal of Computer and Communications, 2019, No. 12, pp. 20-30 (11 pages).
The GB/T 27930-2015 protocol is the state-stipulated communication protocol between the off-board charger and the battery management system (BMS). However, as the protocol adopts broadcast communication and transmits data in plaintext, and the data frame contains neither a source address nor a destination address, the Electric Vehicle (EV) is vulnerable to replay attacks during the charging process. To verify the security problems of the protocol, this paper uses 27,655 messages from a complete charging process provided by the Shanghai Thaisen electric company, and analyzes these actual data frames one by one with a program written in C++. To enhance the security of the protocol, a Rivest-Shamir-Adleman (RSA) digital signature and the addition of random numbers are proposed to resist replay attacks. In the Eclipse experimental environment, normal charging of electric vehicles, the RSA digital signature, and the random number defense are simulated. Experimental results show that the RSA digital signature cannot resist replay attacks, while adding random numbers can effectively enhance the ability of the EV to resist replay attacks during charging.
Keywords: EV charging, GB/T 27930-2015, replay attack, RSA digital signature, adding random numbers
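The random-number defense amounts to a per-frame freshness check. The sketch below illustrates the idea with a random nonce plus a keyed MAC (an HMAC stands in for the paper's RSA signature to keep the example short); the frame layout and field names are hypothetical and are not taken from GB/T 27930-2015.

    import os, hmac, hashlib

    SEEN_NONCES = set()

    def make_frame(payload: bytes, key: bytes) -> dict:
        """Charger side: attach a fresh random nonce and a tag computed over payload + nonce."""
        nonce = os.urandom(8)
        tag = hmac.new(key, payload + nonce, hashlib.sha256).digest()
        return {"payload": payload, "nonce": nonce, "tag": tag}

    def accept_frame(frame: dict, key: bytes) -> bool:
        """BMS side: reject tampered frames and frames whose nonce was already seen (replay)."""
        expected = hmac.new(key, frame["payload"] + frame["nonce"], hashlib.sha256).digest()
        if not hmac.compare_digest(expected, frame["tag"]):
            return False                      # tampered frame
        if frame["nonce"] in SEEN_NONCES:
            return False                      # replayed frame
        SEEN_NONCES.add(frame["nonce"])
        return True

    key = os.urandom(32)
    frame = make_frame(b"charging request: 400 V / 50 A", key)
    print(accept_frame(frame, key))   # True on first delivery
    print(accept_frame(frame, key))   # False when the same frame is replayed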
18. Address Resolution Protocol (ARP): Spoofing Attack and Proposed Defense (cited 1 time)
Authors: Ghazi Al Sukkar, Ramzi Saifan, Sufian Khwaldeh, Mahmoud Maqableh, Iyad Jafar. Communications and Network, 2016, No. 3, pp. 118-130 (13 pages).
Networks have become an integral part of today's world. The ease of deployment, low cost and high data rates have contributed significantly to their popularity. There are many protocols that are tailored to ease the process of establishing these networks. Nevertheless, security-wise precautions were not taken in some of them. In this paper, we expose some of the vulnerabilities that exist in a commonly and widely used network protocol, the Address Resolution Protocol (ARP). Effectively, we implement a user-friendly and easy-to-use tool that exploits the weaknesses of this protocol to deceive a victim's machine and a router by creating a sort of Man-in-the-Middle (MITM) attack. In MITM, all of the data going out of or to the victim machine first passes through the attacker's machine. This enables the attacker to inspect the victim's data packets, extract valuable data (like passwords) that belong to the victim, and manipulate these data packets. We suggest and implement a defense mechanism and tool that counters this attack, warns the user, and exposes some information about the attacker to isolate him. GNU/Linux is chosen as the operating system to implement both the attack and the defense tools. The results show the success of the defense mechanism in detecting ARP-related attacks in a very simple and efficient way.
Keywords: Address Resolution Protocol (ARP), spoofing, security attack and defense, man-in-the-middle attack
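A defense tool of the kind described can be approximated by watching ARP replies and comparing each claimed MAC address against a trusted table. The sketch below uses scapy with a hard-coded gateway entry as an assumption; a real tool would build the table dynamically, and sniffing requires root privileges.

    from scapy.all import ARP, sniff   # third-party: pip install scapy

    TRUSTED = {"192.168.1.1": "aa:bb:cc:dd:ee:ff"}   # gateway's known MAC (example value)

    def check_arp(pkt):
        if pkt.haslayer(ARP) and pkt[ARP].op == 2:          # op == 2 -> ARP reply ("is-at")
            claimed_ip, claimed_mac = pkt[ARP].psrc, pkt[ARP].hwsrc
            known = TRUSTED.get(claimed_ip)
            if known and known.lower() != claimed_mac.lower():
                print(f"[ALERT] possible ARP spoofing: {claimed_ip} claimed by {claimed_mac} "
                      f"(expected {known}) - candidate attacker MAC for isolation")

    sniff(filter="arp", prn=check_arp, store=False)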
19. Hadoop Based Defense Solution to Handle Distributed Denial of Service (DDoS) Attacks (cited 2 times)
Authors: Shweta Tripathi, Brij Gupta, Ammar Almomani, Anupama Mishra, Suresh Veluru. Journal of Information Security, 2013, No. 3, pp. 150-164 (15 pages).
Distributed denial of service (DDoS) attacks continue to grow as a threat to organizations worldwide. From the first known attack in 1999 to the highly publicized Operation Ababil, DDoS attacks have a history of flooding the victim network with an enormous number of packets, thereby exhausting its resources and preventing legitimate users from accessing them. Even with standard DDoS defense mechanisms in place, attackers are still able to launch attacks. These inadequate defense mechanisms need to be improved and integrated with other solutions. The purpose of this paper is to study the characteristics of DDoS attacks and the various models involved in attacks, and to provide a timeline of defense mechanisms and their improvements to combat DDoS attacks. In addition, a novel scheme is proposed to detect DDoS attacks efficiently by using the MapReduce programming model.
Keywords: DDoS, DoS, defense mechanism, characteristics, Hadoop, MapReduce
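The MapReduce detection scheme boils down to counting packets per source address and flagging heavy hitters. A minimal Hadoop Streaming pair is sketched below (mapper and reducer shown in one listing as two separate scripts); the log format, with the source IP as the first whitespace-separated field, and the threshold are assumptions rather than the paper's parameters.

    # ---- mapper.py (Hadoop Streaming mapper) ----
    import sys
    for line in sys.stdin:
        fields = line.split()
        if fields:
            print(f"{fields[0]}\t1")           # emit: source_ip <TAB> 1 for every packet record

    # ---- reducer.py (Hadoop Streaming reducer) ----
    import sys
    THRESHOLD = 10000                           # packets per window above which a source is suspicious
    current_ip, count = None, 0

    def flush(ip, n):
        if ip is not None:
            print(f"{ip}\t{n}\t{'SUSPECT' if n > THRESHOLD else 'ok'}")

    for line in sys.stdin:
        ip, _, value = line.rstrip("\n").partition("\t")
        if ip != current_ip:                    # Hadoop delivers keys to the reducer in sorted order
            flush(current_ip, count)
            current_ip, count = ip, 0
        count += int(value or 1)
    flush(current_ip, count)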
20. Defense Against Poisoning Attack via Evaluating Training Samples Using Multiple Spectral Clustering Aggregation Method (cited 2 times)
Authors: Wentao Zhao, Pan Li, Chengzhang Zhu, Dan Liu, Xiao Liu. Computers, Materials & Continua (SCIE, EI), 2019, No. 6, pp. 817-832 (16 pages).
The defense techniques for machine learning are critical yet challenging because the number and types of attacks against widely applied machine learning algorithms are significantly increasing. Among these attacks, the poisoning attack, which disturbs machine learning algorithms by injecting poisoning samples, poses the greatest threat. In this paper, we focus on analyzing the characteristics of poisoning samples and propose a novel sample evaluation method to defend against the poisoning attack, catering for the characteristics of poisoning samples. To capture the intrinsic data characteristics from heterogeneous aspects, we first evaluate training data by multiple criteria, each of which is reformulated from a spectral clustering. Then, we integrate the multiple evaluation scores generated by the multiple criteria through the proposed multiple spectral clustering aggregation (MSCA) method. Finally, we use the unified score as the indicator of poisoning attack samples. Experimental results on intrusion detection datasets show that MSCA significantly outperforms K-means outlier detection in terms of data legality evaluation and poisoning attack detection.
Keywords: Poisoning attack, sample evaluation, spectral clustering, ensemble learning
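A rough approximation of the multiple-criteria idea: run spectral clustering under several affinity settings, score every sample by how small its cluster is under each criterion, and average the scores so that samples unlike the bulk of the training data stand out. This sketch uses scikit-learn and a simple cluster-size score as a stand-in for the paper's MSCA aggregation; the gamma values, cluster count and toy data are assumptions.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def suspicion_scores(X, gammas=(0.5, 1.0, 2.0), n_clusters=5):
        """Average, over several clustering criteria, a score that is high for samples in tiny clusters."""
        scores = np.zeros(len(X))
        for gamma in gammas:
            labels = SpectralClustering(n_clusters=n_clusters, affinity="rbf",
                                        gamma=gamma, random_state=0).fit_predict(X)
            sizes = np.bincount(labels, minlength=n_clusters)
            scores += 1.0 - sizes[labels] / len(X)   # small cluster -> score close to 1
        return scores / len(gammas)

    # Toy usage: 100 benign points plus 3 injected (poisoning-like) outliers.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, size=(100, 2)), rng.normal(8, 0.1, size=(3, 2))])
    print(suspicion_scores(X).round(2)[-5:])   # the last three points should score highest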