Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42404017, 42122025, and 42174030).
Abstract: GNSS time series analysis provides an effective means of studying the Earth's surface deformation, and such analysis can be divided into two parts: deterministic models and stochastic models. The former can be described by several parameters, such as polynomial terms, periodic terms, offsets, and post-seismic models; the latter comprises stochastic noise, whose estimation is affected by how the deterministic parameters are detected. If too few parameters are assumed, modeling errors occur and adversely affect the analysis results. In this study, we propose a processing strategy in which the commonly used first-order polynomial term is replaced with polynomials of other orders to better fit GNSS time series from Crustal Movement Observation Network of China (CMONOC) stations. Initially, we use the Bayesian Information Criterion (BIC) to identify the best order within the range 1-4 when fitting with the white noise plus power-law noise (WN+PL) model. We then compare the effects of the first-order and optimal-order polynomials on the deterministic model of the GNSS time series, including the velocity and its uncertainty and the amplitudes and initial phases of the annual signals. The results indicate that the first-order polynomial is not always the appropriate choice for GNSS time series. The root mean square (RMS) reduction rates of almost all station components are positive, meaning that fitting with the optimal-order polynomial reduces the RMS of the residual series. Most stations keep the velocity difference (VD) within ±1 mm/yr, with percentages of 85.6%, 81.9%, and 63.4% in the North, East, and Up components, respectively. As for the annual signals, the numbers of stations whose amplitude difference (AD) remains within ±0.2 mm are 242, 239, and 200 in the three components, accounting for 99.6%, 98.4%, and 82.3%, respectively. This finding reminds us that detecting the optimal polynomial order is necessary when we aim to acquire an accurate understanding of crustal movement features.
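The BIC-based order selection described in the abstract can be sketched as follows. This is a minimal illustration on synthetic data with plain white noise (the paper fits under a WN+PL noise model, which is not reproduced here), and the trend coefficients and noise level are made-up values:

```python
# Sketch: choosing a polynomial trend order for a synthetic position series
# via the Bayesian Information Criterion, candidate orders 1-4 as in the
# abstract. White noise only; the WN+PL stochastic model is not modeled.
import numpy as np

def bic_for_order(t, y, order):
    """BIC of a least-squares polynomial fit of the given order."""
    n = len(y)
    coeffs = np.polyfit(t, y, order)
    resid = y - np.polyval(coeffs, t)
    rss = float(resid @ resid)
    k = order + 1  # number of fitted polynomial coefficients
    return n * np.log(rss / n) + k * np.log(n)

def best_order(t, y, orders=range(1, 5)):
    """Return the candidate order with the smallest BIC."""
    return min(orders, key=lambda m: bic_for_order(t, y, m))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
y = 1.0 + 2.0 * t + 0.3 * t**2 + rng.normal(0.0, 0.5, t.size)  # true order 2
print(best_order(t, y))
```

The BIC penalty `k * ln(n)` is what keeps the selection from drifting to the highest candidate order on noisy data.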
Funding: Supported in part by the Science and Technology Research and Development Foundation of China Academy of Railway Sciences Corporation Limited (Grant No. 2023YJ364), in part by the National Key R&D Program of China (Grant No. 2022YFC3803700), and in part by the project of the Beijing Laboratory of Advanced Information Networks.
Abstract: Time synchronization is a prerequisite for ensuring determinism in time-sensitive networking (TSN). While time synchronization errors cannot be overlooked, pursuing minimal time errors may incur unnecessary costs. Using complex network theory, this study proposes a hierarchy for TSN and introduces the concept of a bounded time error. A coupling model between traffic scheduling and time synchronization is established, and functional relationships among end-to-end delay, delay jitter, gate window, and time error are derived. These relationships show that time errors can trigger jumps in delay and delay jitter. To evaluate the impact of different time errors on traffic-scheduling performance, an end-to-end transmission experiment scheme is designed, and a TSN test platform implementing two representative cases is constructed. Case A is a closed TSN-domain scenario in which pure TSN switches emulate a closed factory-floor network. Case B depicts remote factory interconnection, where TSN domains are linked via non-TSN domains composed of OpenFlow switches. Results from Case A show that delay and delay jitter on a single node are most significantly affected by time errors, by up to one gating cycle. End-to-end delay jitter tends to increase with the number of hops. When the ratio of the time error bound to the window exceeds 10%, the number of schedulable traffic flows decreases rapidly. Case B reveals that when the time error falls below 1 μs, the number of schedulable traffic flows begins to increase significantly, approaching full schedulability at errors below 0.6 μs.
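The qualitative link between the time error bound and schedulability can be illustrated with a toy calculation: if every gate window must be padded by a guard band of twice the bounded time error, the usable share of the window shrinks and fewer fixed-size frames fit per gating cycle. This is not the paper's coupling model; the window and frame sizes below are invented for illustration:

```python
# Illustrative sketch (assumed guard-band model, not the paper's): pad each
# gate window by 2 * time-error bound and count how many fixed-size frames
# still fit. All sizes in microseconds and entirely made up.
def schedulable_flows(window_us, frame_us, time_error_us):
    """Frames that fit in one gate window after a 2*error guard band."""
    usable = window_us - 2.0 * time_error_us
    if usable <= 0.0:
        return 0
    return int(usable // frame_us)

for err in (0.0, 0.6, 1.0, 5.0):
    print(err, schedulable_flows(window_us=50.0, frame_us=4.0, time_error_us=err))
```

Once the error bound becomes a sizable fraction of the window, capacity drops sharply, matching the 10% threshold behavior reported in Case A.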
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42172318 and 42377186) and the National Key R&D Program of China (Grant No. 2023YFC3007201).
Abstract: The rise in construction activities within mountainous regions has significantly increased the frequency of rockfalls. Statistical models for rockfall hazard assessment often struggle to achieve high precision at a large scale. This limitation arises primarily from the scarcity of historical rockfall data and the inadequacy of conventional assessment indicators in capturing the physical and structural characteristics of rockfalls. This study proposes a physically based deterministic model designed to accurately quantify rockfall hazards at a large scale. The model accounts for multiple rockfall failure modes and incorporates the key physical and structural parameters of the rock mass. Rockfall hazard is defined as the product of three factors: the rockfall failure probability, the probability of reaching a specific position, and the corresponding impact intensity. The failure probability comprises the probabilities of formation and instability of rock blocks under different failure modes, modeled from the combination patterns of slope surfaces and rock discontinuities. The Monte Carlo method is employed to account for the randomness of mechanical and geometric parameters when quantifying instability probabilities. Additionally, the rockfall trajectories and impact energies simulated with the Flow-R software are combined with the rockfall failure probability to enable regional rockfall hazard zoning. A case study was conducted in Tiefeng, Chongqing, China, considering four types of rockfall failure modes. The hazard zoning results identified the steep, elevated terrain of the northern and southern anaclinal slopes as the areas of highest rockfall hazard. These findings align with observed conditions, providing detailed hazard zoning and validating the effectiveness and potential of the proposed model.
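The Monte Carlo treatment of parameter randomness can be sketched for the simplest case, planar sliding: sample an uncertain friction angle and count how often the textbook frictional factor of safety FS = tan(φ)/tan(θ) falls below 1. The paper's mechanical model covers four failure modes and more parameters; the distribution below is an assumed illustration:

```python
# Sketch of the Monte Carlo step: instability probability of a block on a
# plane of dip angle theta, with friction angle phi sampled from an assumed
# normal distribution. FS = tan(phi)/tan(theta) is the planar-sliding model.
import math
import random

def instability_probability(dip_deg, phi_mean_deg, phi_std_deg,
                            n=100_000, seed=1):
    """Fraction of sampled friction angles giving FS < 1."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        phi = rng.gauss(phi_mean_deg, phi_std_deg)
        fs = math.tan(math.radians(phi)) / math.tan(math.radians(dip_deg))
        if fs < 1.0:
            failures += 1
    return failures / n

# Dip equal to the mean friction angle: roughly half the samples fail.
print(instability_probability(dip_deg=35.0, phi_mean_deg=35.0, phi_std_deg=3.0))
```

The same sampling loop generalizes to any failure mode by swapping in the corresponding factor-of-safety expression.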
Funding: Funded in part by the Humanities and Social Sciences Planning Foundation of the Ministry of Education of China under Grant No. 24YJAZH123, in part by the National Undergraduate Innovation and Entrepreneurship Training Program of China under Grant No. 202510347069, and in part by the Huzhou Science and Technology Planning Foundation under Grant No. 2023GZ04.
Abstract: The Industrial Internet of Things (IIoT) is increasingly vulnerable to sophisticated cyber threats, particularly zero-day attacks that exploit unknown vulnerabilities and evade traditional security measures. To address this critical challenge, this paper proposes a dynamic defense framework named Zero-day-aware Stackelberg Game-based Multi-Agent Distributed Deep Deterministic Policy Gradient (ZSG-MAD3PG). The framework integrates Stackelberg game modeling with the Multi-Agent Distributed Deep Deterministic Policy Gradient (MAD3PG) algorithm and incorporates defensive deception (DD) strategies to achieve adaptive and efficient protection. While conventional methods typically incur considerable resource overhead and exhibit higher latency owing to static or rigid defensive mechanisms, the proposed ZSG-MAD3PG framework mitigates these limitations through multi-stage game modeling and adaptive learning, enabling more efficient resource utilization and faster response times. The Stackelberg-based architecture allows defenders to dynamically optimize packet-sampling strategies while attackers adjust their tactics, so that the game reaches equilibrium rapidly. Furthermore, dynamic deception techniques reduce both the time attacks can remain concealed and the overall system burden. A lightweight behavioral-fingerprinting detection mechanism further enhances real-time zero-day attack identification within industrial device clusters. ZSG-MAD3PG demonstrates higher true positive rates (TPR) and lower false alarm rates (FAR) than existing methods, while also achieving improved latency, resource efficiency, and stealth adaptability in IIoT zero-day defense scenarios.
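The leader-follower structure underlying the framework can be shown in miniature: the defender (leader) commits to a packet-sampling rate, the attacker (follower) best-responds with an attack intensity, and the leader picks the commitment that maximizes its payoff given that best response. The quadratic payoff shapes below are invented for illustration and have nothing to do with the paper's reward design or the MAD3PG learning loop:

```python
# Toy Stackelberg game on a discretized strategy grid: solve the follower's
# best response for each leader commitment, then optimize the leader.
def attacker_payoff(s, a):
    return a * (1.0 - s) - 0.5 * a * a  # gain shrinks as sampling s rises

def defender_payoff(s, a):
    return -a * (1.0 - s) - 0.2 * s     # attack damage plus sampling cost

def best_response(s, grid):
    """Attacker intensity maximizing its payoff against sampling rate s."""
    return max(grid, key=lambda a: attacker_payoff(s, a))

def stackelberg(grid):
    """Leader commitment maximizing payoff under the follower's response."""
    return max(grid, key=lambda s: defender_payoff(s, best_response(s, grid)))

grid = [i / 100.0 for i in range(101)]
s_star = stackelberg(grid)
a_star = best_response(s_star, grid)
print(s_star, a_star)
```

In the paper this bi-level optimization is not solved by enumeration but learned online by the multi-agent actor-critic algorithm; the equilibrium structure is the same.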
Abstract: To meet the deterministic-transmission requirements that delay-sensitive services impose on unmanned swarms in the air-based segment of the 6G space-air-ground integrated network architecture, this paper proposes LPCO (link-scheduling and path-planning based on cross-layer optimization), a deterministic networking protocol for unmanned swarms based on cross-layer optimization. Under a centralized architecture controlled by a central node, the protocol integrates key modules for link-state awareness, service-demand awareness, explicit path planning, and periodic time-slot scheduling, realizing joint scheduling of network-layer path selection and MAC-layer time-slot allocation and building an end-to-end cross-layer cooperative optimization mechanism. Specifically, after the central control node acquires the full network topology, it predicts links based on the link expiration time (LET), plans paths using a QoS-weighted objective function, and dynamically assigns TDMA time slots to the nodes along each path, ensuring low-latency, highly reliable transmission of critical services in unmanned swarm networks. Comparative experiments against the typical routing protocols OLSR and DSDV on the NS3 simulation platform show that LPCO reduces the average end-to-end delay of time-sensitive services by 43.6% and 40.7%, respectively, and improves their packet delivery ratio by 69.3% and 73.5%, respectively; instantaneous delay jitter is kept within 3 ms, with the jitter of some packets approaching 0, demonstrating deterministic service assurance and robustness in unmanned swarm networks.
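The LET-based link prediction can be illustrated with the classic closed-form link expiration time for two nodes with known 2-D positions, velocities, and a shared radio range r. The abstract does not state which LET variant LPCO uses, so this formula is an assumed illustration:

```python
# Classic mobile ad hoc LET: time until the distance between two nodes
# moving at constant velocity first exceeds the radio range r.
import math

def link_expiration_time(p1, v1, p2, v2, r):
    """LET for constant-velocity nodes; inf if there is no relative motion."""
    a = v1[0] - v2[0]          # relative velocity, x
    b = p1[0] - p2[0]          # relative position, x
    c = v1[1] - v2[1]          # relative velocity, y
    d = p1[1] - p2[1]          # relative position, y
    if a == 0.0 and c == 0.0:  # no relative motion: link never expires
        return math.inf
    disc = (a * a + c * c) * r * r - (a * d - b * c) ** 2
    if disc < 0.0:             # line of motion never enters the range circle
        return 0.0
    return (-(a * b + c * d) + math.sqrt(disc)) / (a * a + c * c)

# Node 2 starts 100 m away and recedes along +x at 5 m/s; range 300 m,
# so the link should expire after (300 - 100) / 5 = 40 s.
print(link_expiration_time((0.0, 0.0), (0.0, 0.0), (100.0, 0.0), (5.0, 0.0), 300.0))
```

A path planner can then prefer routes whose minimum per-hop LET exceeds the scheduling horizon, which is the intuition behind LPCO's link prediction step.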
Abstract: Deterministic optimization methods are combined with the Pareto front concept to solve multi-criterion design problems, and the algorithm and its numerical implementation are applied to aerodynamic design. Evolutionary algorithms (EAs) combined with the Pareto front concept are used to solve practical design problems in industry because of their robustness in capturing convex, concave, discrete, or discontinuous Pareto fronts of multi-objective optimization problems. However, the process is time-consuming. Therefore, deterministic optimization methods are introduced to capture the Pareto front, and the types of Pareto front they can capture are explained. Numerical experiments show that, owing to their efficiency, deterministic optimization methods are a good alternative to EAs for capturing any convex and some concave Pareto fronts in multi-criterion aerodynamic optimization problems.
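One deterministic way to trace a convex Pareto front, consistent with the abstract's claim, is weighted-sum scalarization: sweep the weight of a scalarized objective and solve each single-objective problem deterministically. The two objectives below are a standard convex toy pair, not the paper's aerodynamic objectives, and the inner solver is a plain grid search for simplicity:

```python
# Weighted-sum scalarization sketch: each weight w yields one Pareto point
# of the convex bi-objective problem min (f1(x), f2(x)).
def f1(x):
    return x * x

def f2(x):
    return (x - 2.0) ** 2

def pareto_front_by_weighted_sum(n_weights=11, n_grid=2001):
    xs = [4.0 * i / (n_grid - 1) - 1.0 for i in range(n_grid)]  # x in [-1, 3]
    front = []
    for k in range(n_weights):
        w = k / (n_weights - 1)
        x_best = min(xs, key=lambda x: w * f1(x) + (1.0 - w) * f2(x))
        front.append((f1(x_best), f2(x_best)))
    return front

front = pareto_front_by_weighted_sum()
print(front[0], front[-1])
```

This sweep captures every point of a convex front but misses points in non-convex regions, which is exactly why the paper has to discuss which front types a deterministic method can and cannot capture.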
Abstract: To address the poor robustness, dependence on known circuit parameters, and reliance on engineering experience in the design of control parameters for voltage source converter based high voltage direct current transmission (VSC-HVDC) systems, this paper proposes a robust, circuit-parameter-independent, and visualizable optimization method for VSC-HVDC control parameters that combines the Markov transition field (MTF) with the deep deterministic policy gradient (DDPG) algorithm. First, the Markov transition field converts one-dimensional time-series waveforms, such as circuit power and voltage, into two-dimensional MTF images, and a Markov transition field loss (MTFL) function is used to assess the data fluctuation of the two-dimensional transition-field images. Second, the MTFL function is combined with the DDPG algorithm, exploiting both the stronger ability of the MTFL function to evaluate the dynamic characteristics of the system's output time-series data and the excellent generalization of DDPG, to optimize the control parameters of the VSC-HVDC system. Finally, MATLAB simulations and experimental results verify the effectiveness of the method.
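The MTF construction itself is standard and can be sketched directly: quantile-bin the 1-D series, estimate the bin-to-bin Markov transition matrix W from consecutive samples, then set M[i, j] to the transition probability between the bins of samples i and j. The MTFL loss the paper builds on top of this image is not reproduced here, and the test signal below is arbitrary:

```python
# Standard Markov transition field: series -> quantile bins -> transition
# matrix W -> N x N image M with M[i, j] = W[bin(x_i), bin(x_j)].
import numpy as np

def markov_transition_field(x, n_bins=4):
    x = np.asarray(x, dtype=float)
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    bins = np.digitize(x, edges)            # quantile-bin index per sample
    w = np.zeros((n_bins, n_bins))
    for a, b in zip(bins[:-1], bins[1:]):   # count consecutive transitions
        w[a, b] += 1.0
    w /= np.maximum(w.sum(axis=1, keepdims=True), 1.0)  # row-normalize
    return w[np.ix_(bins, bins)]            # expand W into the MTF image

x = np.sin(np.linspace(0.0, 6.0 * np.pi, 200))
m = markov_transition_field(x)
print(m.shape)
```

Each entry of the image is a transition probability, so the whole image lies in [0, 1] and exposes the waveform's dynamics in a form a 2-D loss or network can consume.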
Abstract: Hepatitis B virus (HBV) infection and heavy alcohol consumption are the two primary pathogenic causes of liver cirrhosis. In this paper, we propose a deterministic mathematical model and a logistic equation to investigate the dynamics of liver cirrhosis progression and to explain the implications of variations in alcohol consumption for chronic hepatitis B patients, respectively. The model captures the intricate interactions between liver cirrhosis, recovery, and treatment dynamics. This study aims to show that alcohol consumption by hepatitis B-infected individuals accelerates the progression of liver cirrhosis, while treatment of acutely infected individuals reduces it. We prove that a unique solution of the proposed model exists and that it is positive and bounded. Using the next-generation matrix approach, two basic reproduction numbers, R_(A_(0)) and R_(A_(max)), are calculated to identify future recurrence. The equilibrium points are calculated, and the two equilibria are proved to be locally and globally asymptotically stable when R_(0) is below and above one, respectively. It is shown that a bifurcation exists at R_(0)=1, and a detailed proof of forward bifurcation is given. Furthermore, we perform a sensitivity analysis of the model parameters on R_(0). To confirm the analytical work, we perform numerical simulations, and the results indicate that treatment and the inhibitory effects reduce the risk of developing liver cirrhosis, while heavy alcohol consumption markedly accelerates the progression of liver cirrhosis in patients with chronic hepatitis B.
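The logistic-equation side of the study can be sketched numerically: integrate x' = r x (1 - x/K) with forward Euler and compare trajectories where the progression rate r is scaled up by an assumed alcohol-consumption factor. The rates and the scaling factor below are illustrative only; the compartmental HBV model and its R_(0) analysis are not reproduced:

```python
# Forward-Euler integration of the logistic equation x' = r*x*(1 - x/k),
# used here only to contrast a baseline progression rate with an assumed
# alcohol-scaled rate.
def logistic_progression(r, k=1.0, x0=0.01, dt=0.01, t_end=50.0):
    """Endpoint of the Euler trajectory of x' = r*x*(1 - x/k)."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * r * x * (1.0 - x / k)
    return x

baseline = logistic_progression(r=0.2)     # baseline progression rate
heavy = logistic_progression(r=0.2 * 1.8)  # assumed alcohol-scaled rate
print(baseline, heavy)
```

The larger rate drives the trajectory toward the carrying capacity sooner, mirroring the paper's conclusion that heavy alcohol consumption accelerates progression.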