Journal Articles
5 articles found
1. Variable reward function-driven strategies for impulsive orbital attack-defense games under multiple constraints and victory conditions
Authors: Liran Zhao, Sihan Xu, Qinbo Sun, Zhaohui Dang · Defence Technology (防务技术), 2025, Issue 9, pp. 159-183 (25 pages)
This paper investigates impulsive orbital attack-defense (AD) games under multiple constraints and victory conditions, involving three spacecraft: attacker, target, and defender. In the AD scenario, the attacker aims to breach the defender's interception to rendezvous with the target, while the defender seeks to protect the target by blocking or actively pursuing the attacker. Four different maneuvering constraints and five potential game outcomes are incorporated to more accurately model AD game problems and increase complexity, thereby reducing the effectiveness of traditional methods such as differential games and game-tree searches. To address these challenges, this study proposes a multi-agent deep reinforcement learning solution with variable reward functions. Two attack strategies, Direct attack (DA) and Bypass attack (BA), are developed for the attacker, each focusing on different mission priorities. Similarly, two defense strategies, Direct interdiction (DI) and Collinear interdiction (CI), are designed for the defender, each optimizing specific defensive actions through tailored reward functions. Each reward function incorporates both process rewards (e.g., distance and angle) and outcome rewards, derived from physical principles and validated via geometric analysis. Extensive simulations of four strategy confrontations demonstrate average defensive success rates of 75% for DI vs. DA, 40% for DI vs. BA, 80% for CI vs. DA, and 70% for CI vs. BA. Results indicate that CI outperforms DI for defenders, while BA outperforms DA for attackers. Moreover, defenders achieve their objectives more effectively under identical maneuvering capabilities. Trajectory evolution analyses further illustrate the effectiveness of the proposed variable reward function-driven strategies. These strategies and analyses offer valuable guidance for practical orbital defense scenarios and lay a foundation for future multi-agent game research.
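The abstract describes "process + outcome" rewards but does not publish the formulas; the following is a minimal sketch of that idea for the defender, where the distance/angle weights, the outcome bonus, and all names are illustrative assumptions rather than the authors' values.

```python
import numpy as np

# Illustrative sketch of a "process + outcome" reward in the style described
# by the abstract. All weights, thresholds, and names are assumptions made
# for illustration; they are not taken from the paper.

def process_reward(defender_pos, attacker_pos, target_pos,
                   w_dist=1.0, w_angle=0.5):
    """Dense shaping term: reward closing on the attacker and staying near
    the attacker-target line of sight (a 'collinear' cue)."""
    d_da = np.linalg.norm(attacker_pos - defender_pos)   # defender-attacker range
    v_dt = target_pos - defender_pos
    v_at = target_pos - attacker_pos
    cos_angle = np.dot(v_dt, v_at) / (np.linalg.norm(v_dt) * np.linalg.norm(v_at) + 1e-9)
    return -w_dist * d_da + w_angle * cos_angle

def outcome_reward(outcome, bonus=100.0):
    """Sparse terminal term: one of the mutually exclusive game outcomes."""
    table = {"intercepted": +bonus,      # defender blocks the attacker
             "target_reached": -bonus,   # attacker rendezvous with the target
             "timeout": 0.0}
    return table.get(outcome, 0.0)

def defender_reward(defender_pos, attacker_pos, target_pos, outcome=None):
    r = process_reward(defender_pos, attacker_pos, target_pos)
    if outcome is not None:              # only added at episode termination
        r += outcome_reward(outcome)
    return r
```

A variable-reward scheme of this kind would swap in a different set of shaping terms (e.g., emphasizing angle over distance) for each strategy such as DI or CI.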
Keywords: Orbital attack-defense game; Impulsive maneuver; Multi-agent deep reinforcement learning; Reward function design
2. Evolution Analysis of Network Attack and Defense Situation Based on Game Theory
Authors: Haiyan Sun, Chenglong Shao, Jianwei Zhang, Kun Wang, Wanwei Huang · Computers, Materials & Continua, 2025, Issue 4, pp. 1475-1494 (20 pages)
To address the problem that existing studies, after constructing an attack-defense model, lack analysis of the relationship between attack-defense game behaviors and situation evolution from the game perspective, this paper proposes a network attack-defense game model (ADGM). Firstly, based on the assumption of incomplete information between the two sides of the game, the ADGM model is established, and methods of payoff quantification, equilibrium solution, and determination of strategy confrontation results are presented. Then, drawing on infectious disease dynamics, the network attack-defense situation is defined based on the density of nodes in various security states, and the transition paths of network node security states are analyzed. Finally, network zero-day virus attack-defense behaviors are analyzed, and comparative experiments on attack-defense evolution trends under different strategy combinations, interference methods, and initial node numbers are conducted using the NetLogo simulation tool. The experimental results indicate that the model can effectively explain the evolution of the macro-level network attack-defense situation in terms of micro-level attack-defense behaviors. For instance, in the strategy selection experiment, when the attack success rate decreases from 0.49 to 0.29, the network destruction rate drops by 11.3%; in the active defense experiment, when the interference coefficient is reduced from 1 to 0.7, the network destruction rate decreases by 7%; and in the initial node number experiment, when the number of initially infected nodes increases from 10 to 30, the network destruction rate rises by 3%.
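The abstract frames the situation as densities of nodes in different security states evolving under epidemic-style dynamics. A minimal sketch of that framing is shown below; the three states, the update rule, and the way the attack success rate and interference coefficient enter are assumptions for illustration, not the ADGM equations.

```python
# Illustrative epidemic-style sketch of tracking a network attack-defense
# "situation" as densities of nodes in security states. The states, rates,
# and update rule are assumptions, not the paper's ADGM formulation.

def evolve_situation(steps=100, dt=1.0,
                     attack_success_rate=0.49,   # chance an attack compromises a contacted node
                     interference=1.0,           # defender interference coefficient (lower = stronger defense)
                     recovery_rate=0.05,
                     s0=0.97, i0=0.03):
    """Return densities of Susceptible / Infected / Recovered nodes over time."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    history = [(s, i, r)]
    for _ in range(steps):
        new_infections = attack_success_rate * interference * s * i * dt
        new_recoveries = recovery_rate * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# e.g., compare peak infected density under two attack success rates
peak_high = max(i for _, i, _ in evolve_situation(attack_success_rate=0.49))
peak_low = max(i for _, i, _ in evolve_situation(attack_success_rate=0.29))
```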
Keywords: Network attack-defense; Situation evolution; Zero-day virus; NetLogo
3. Enhanced Chimp Optimization Algorithm Using Attack Defense Strategy and Golden Update Mechanism for Robust COVID-19 Medical Image Segmentation
Authors: Amir Hamza, Morad Grimes, Abdelkrim Boukabou, Samira Dib · Journal of Bionic Engineering (SCIE, EI, CSCD), 2024, Issue 4, pp. 2086-2109 (24 pages)
Medical image segmentation is a powerful and evolving technology in medical diagnosis. In fact, it has been identified as a very effective tool to support and accompany doctors in their fight against the spread of the coronavirus (COVID-19). Various techniques have been utilized for COVID-19 image segmentation, including Multilevel Thresholding (MLT)-based meta-heuristics, which are considered crucial in addressing this issue. However, despite their importance, meta-heuristics have significant limitations. Specifically, the imbalance between exploration and exploitation, as well as premature convergence, can cause the optimization process to become stuck in local optima, resulting in unsatisfactory segmentation results. In this paper, an enhanced War Strategy Chimp Optimization Algorithm (WSChOA) is proposed to address MLT problems. Two strategies are incorporated into the traditional Chimp Optimization Algorithm: a golden update mechanism that provides diversity in the population, and attack-defense strategies that improve exploration of the search space and help avoid local optima. Experiments were conducted by comparing WSChOA with recent and well-known algorithms using various evaluation metrics such as the Feature Similarity Index (FSIM), Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), Standard Deviation (STD), Friedman Test (FT), and Wilcoxon Signed-Rank Test (WSRT). The results obtained by WSChOA surpassed those of other optimization techniques in terms of robustness and accuracy, indicating that it is a powerful tool for image segmentation.
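To make the multilevel-thresholding setting concrete, the sketch below pairs an Otsu-style between-class-variance fitness with a simple golden-ratio-weighted position update inside a generic population loop. This is not the WSChOA update rule; the fitness choice, the golden perturbation, and all parameters are assumptions for illustration.

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def between_class_variance(hist, thresholds):
    """Otsu-style fitness for a set of thresholds over a 256-bin histogram."""
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()
    levels = np.arange(256)
    bounds = [0] + sorted(int(t) for t in thresholds) + [256]
    total_mean = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - total_mean) ** 2
    return var

def golden_update(position, best, rng):
    """Pull a candidate toward the current best with golden-ratio weighting."""
    step = rng.uniform(-1, 1, size=position.shape)
    return np.clip(position + (best - position) / PHI + step, 0, 255)

def mlt_search(hist, n_thresholds=3, pop_size=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0, 255, size=(pop_size, n_thresholds))
    fitness = np.array([between_class_variance(hist, x) for x in pop])
    for _ in range(iters):
        best = pop[fitness.argmax()].copy()
        for k in range(pop_size):
            cand = golden_update(pop[k], best, rng)
            f = between_class_variance(hist, cand)
            if f > fitness[k]:          # keep only improving moves
                pop[k], fitness[k] = cand, f
    return np.sort(pop[fitness.argmax()]), fitness.max()
```

In a full algorithm, the attack-defense strategies described in the abstract would replace or augment the single `golden_update` move with competing exploration and exploitation operators.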
Keywords: Image processing; Segmentation; Optimization; Chimp; Golden update mechanism; Attack-defense strategy; COVID-19
4. Network Defense Decision-Making Based on Deep Reinforcement Learning and Dynamic Game Theory
Authors: Huang Wanwei, Yuan Bo, Wang Sunan, Ding Yi, Li Yuhua · China Communications (SCIE, CSCD), 2024, Issue 9, pp. 262-275 (14 pages)
Existing research on cyber attack-defense analysis has typically adopted stochastic game theory to model the problem, but the assumption of complete rationality used in modeling ignores the information opacity of practical attack and defense scenarios, so the resulting models and methods lack accuracy. To address this problem, we investigate network defense policy methods under finite rationality constraints and propose a network defense policy selection algorithm based on deep reinforcement learning. Using graph-theoretical methods, we transform the decision-making problem into a path optimization problem and map the network state with a compression method based on service nodes. On this basis, we improve the A3C algorithm and design the Defense-A3C defense policy selection algorithm with online learning capability. The experimental results show that the proposed model and method stably converge to a better network state after training, and do so faster and more stably than the original A3C algorithm. Compared with existing typical approaches, the advantages of Defense-A3C are verified.
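For readers unfamiliar with A3C-style training, the sketch below shows a minimal advantage actor-critic update over a compressed per-service-node state vector. The state encoding, network sizes, and hyperparameters are illustrative assumptions, not the Defense-A3C design from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal advantage actor-critic sketch in the spirit of an A3C worker update.
# Dimensions and the flat "service-node feature" state encoding are assumptions.

class ActorCritic(nn.Module):
    def __init__(self, n_service_nodes, n_features, n_actions, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_service_nodes * n_features, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)   # defense action logits
        self.value_head = nn.Linear(hidden, 1)            # state value estimate

    def forward(self, compressed_state):
        h = self.encoder(compressed_state)
        return self.policy_head(h), self.value_head(h).squeeze(-1)

def a3c_loss(model, states, actions, returns, value_coef=0.5, entropy_coef=0.01):
    """Rollout loss: policy gradient + value regression - entropy bonus."""
    logits, values = model(states)
    dist = torch.distributions.Categorical(logits=logits)
    advantages = returns - values.detach()
    policy_loss = -(dist.log_prob(actions) * advantages).mean()
    value_loss = F.mse_loss(values, returns)
    entropy = dist.entropy().mean()
    return policy_loss + value_coef * value_loss - entropy_coef * entropy
```

In an asynchronous setup, several workers would compute this loss on their own rollouts and push gradients to a shared copy of the model.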
Keywords: A3C; Cyber attack-defense analysis; Deep reinforcement learning; Stochastic game theory
5. Calculation of the Behavior Utility of a Network System: Conception and Principle (Cited: 5)
Author: Changzhen Hu · Engineering, 2018, Issue 1, pp. 78-84 (7 pages)
The service and application of a network is a behavioral process that is oriented toward its operations and tasks, whose metrics and evaluation are still somewhat of a rough comparison. This paper describes scenes of network behavior as differential manifolds. Using the homeomorphic transformation of smooth differential manifolds, we provide a mathematical definition of network behavior and propose a mathematical description of the network behavior path and behavior utility. Based on the principle of differential geometry, this paper puts forward the function of network behavior and a calculation method to determine behavior utility, and establishes the calculation principle of network behavior utility. We also provide a calculation framework for assessment of the network's attack-defense confrontation on the strength of behavior utility. Therefore, this paper establishes a mathematical foundation for the objective measurement and precise evaluation of network behavior.
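The abstract stays at the level of conception. One common way to read "behavior utility along a behavior path" is as a line integral of a utility density along a curve on the behavior manifold; the block below is only an illustrative formalization under that assumption, not the paper's actual definition.

```latex
% Illustrative reading only (an assumption, not the paper's definitions):
% M      : the differential manifold of network behavior scenes
% g      : a Riemannian metric on M
% u      : a utility density defined on behavior states, u : M -> R
% \gamma : [0,1] -> M, a network behavior path
\begin{equation}
  U(\gamma) = \int_{0}^{1} u\!\left(\gamma(t)\right)\,
              \left\lVert \dot{\gamma}(t) \right\rVert_{g} \, dt
\end{equation}
% Under this reading, an attack-defense confrontation could be compared by
% the sign of U(\gamma_{\mathrm{attack}}) - U(\gamma_{\mathrm{defense}}).
```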
Keywords: Network metric evaluation; Differential manifold; Network behavior utility; Network attack-defense confrontation