This paper investigates the distributed continuous-time aggregative optimization problem for second-order multi-agent systems, where the local cost function depends not only on an agent's own decision variable but also on the aggregate of the decision variables of all agents. By combining the gradient descent method, the distributed average tracking (DAT) technique, and the time-base generator (TBG) technique, a distributed continuous-time aggregative optimization algorithm is proposed. Subsequently, the optimality of the system's equilibrium point is analyzed, and the convergence of the closed-loop system is proved using Lyapunov stability theory. Finally, the effectiveness of the proposed algorithm is validated through case studies on multi-robot systems and power generation systems.
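The distributed average tracking (DAT) component above rests on consensus averaging over the communication graph. The sketch below is not the paper's algorithm (which is continuous-time, second-order, and TBG-driven); it only illustrates the underlying averaging step, as a discretized first-order consensus iteration on a hypothetical 4-agent path graph.

```python
import numpy as np

def consensus_average(r, L, step=0.1, iters=2000):
    """Discrete-time consensus: z(k+1) = z(k) - step * L @ z(k).

    Initialized at the local reference values r, every tracker state
    converges to the network-wide average of r when the graph is
    connected and the step size is small enough (step < 2 / lambda_max(L)).
    """
    z = np.array(r, dtype=float)
    for _ in range(iters):
        z = z - step * L @ z
    return z

# Path graph on 4 agents: Laplacian L = D - A (hypothetical topology).
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
r = [1.0, 3.0, 5.0, 7.0]
z = consensus_average(r, L)  # every entry approaches mean(r) = 4.0
```

Because the Laplacian is symmetric, the iteration preserves the sum of the states, which is why each agent ends at the exact average of the initial references.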
Objectives: This study aimed to explore the lagged and cumulative effects of risk factors on disability in older adults using distributed lag non-linear models (DLNMs). Methods: We utilized data from the China Health and Retirement Longitudinal Study (CHARLS). After feature selection via elastic net regularization, we applied DLNMs to evaluate the lagged effects of risk factors. Disability was defined as the presence of any difficulty in basic activities of daily living (BADL). The cumulative relative risk (CRR) was calculated by summing the lag-specific risk estimates, representing the cumulative disability risk over the specified lag period. Effect modification and sensitivity analyses were also performed. Results: This study included a total of 2,318 participants. Early-phase lag factors, such as difficulty in stooping (CRR = 3.58; 95% CI: 2.31-5.55; P < 0.001) and walking (CRR = 2.77; 95% CI: 1.39-5.55; P < 0.001), exerted their strongest effects immediately upon occurrence. Mid-phase lag factors, such as arthritis (CRR = 1.51; 95% CI: 1.10-2.06; P = 0.001), showed a resurgence in disability risk within 2-3 years. Late-phase lag factors, including depressive symptoms (CRR = 2.38; 95% CI: 1.30-4.35; P < 0.001) and elevated systolic blood pressure (CRR = 1.64; 95% CI: 1.06-2.79; P = 0.02), exhibited significant long-term cumulative risks. Conversely, grip strength (CRR = 0.80; 95% CI: 0.54-0.95; P = 0.02) and social participation (CRR = 0.89; 95% CI: 0.73-0.99; P = 0.04) were significant protective factors. Conclusions: The findings underscore the importance of tailored interventions that account for the lag characteristics of different factors to effectively mitigate disability risk. Future studies should explore the underlying biological and sociological mechanisms of these lagged effects, identify intervention strategies that target risk factors with different lag patterns, and evaluate their effectiveness.
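As the Methods state, the cumulative relative risk sums lag-specific estimates; on the log scale this amounts to exponentiating the summed lag-specific log relative risks. A minimal sketch with hypothetical lag coefficients (the CHARLS estimates themselves are not reproduced here):

```python
import math

def cumulative_rr(log_rrs):
    """Cumulative relative risk over a lag window.

    Summing lag-specific estimates on the log scale gives
    CRR = exp(sum of lag-specific log-RRs).
    """
    return math.exp(sum(log_rrs))

# Hypothetical lag-specific log-RRs for one risk factor at lags 0..3 years.
log_rrs = [0.40, 0.25, 0.15, 0.10]
crr = cumulative_rr(log_rrs)  # exp(0.90), roughly 2.46
```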
With the growing complexity and decentralization of network systems, the attack surface has expanded, which has led to greater concerns over network threats. In this context, artificial intelligence (AI)-based network intrusion detection systems (NIDS) have been extensively studied, and recent efforts have shifted toward integrating distributed learning to enable intelligent and scalable detection mechanisms. However, most existing works focus on individual distributed learning frameworks, and there is a lack of systematic evaluations that compare different algorithms under consistent conditions. In this paper, we present a comprehensive evaluation of representative distributed learning frameworks, namely Federated Learning (FL), Split Learning (SL), hybrid collaborative learning (SFL), and fully distributed learning, in the context of AI-driven NIDS. Using recent benchmark intrusion detection datasets, a unified model backbone, and controlled distributed scenarios, we assess these frameworks across multiple criteria, including detection performance, communication cost, computational efficiency, and convergence behavior. Our findings highlight distinct trade-offs among the distributed learning frameworks, demonstrating that the optimal choice depends strongly on system constraints such as bandwidth availability, node resources, and data distribution. This work provides the first holistic analysis of distributed learning approaches for AI-driven NIDS and offers practical guidelines for designing secure and efficient intrusion detection systems in decentralized environments.
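Of the frameworks compared, federated learning aggregates locally trained models on a server. A minimal sketch of the standard FedAvg weighting rule (sample-size-weighted mean of client parameters; the client values below are hypothetical):

```python
def fed_avg(client_weights, client_sizes):
    """Sample-size-weighted federated averaging of model parameters.

    client_weights: per-client parameter vectors (lists of floats)
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for j in range(dim):
            global_w[j] += (n / total) * w[j]
    return global_w

# Two clients: the client holding 3x the data pulls the average toward it.
w = fed_avg([[0.0, 0.0], [4.0, 8.0]], [1, 3])  # -> [3.0, 6.0]
```

In a non-IID setting this weighting is exactly where data-distribution skew bites, which is why the trade-off studies above control the data partitioning across nodes.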
A multi-stage stress relaxation test was performed on a granodiorite sample to understand the deformation process prior to the macroscopic failure of brittle rocks, as well as the transient response during stress relaxation. Distributed optical fiber sensing was used to measure strains across the sample surface by helically wrapping a single-mode fiber around the cylindrical sample. Close agreement was observed between the circumferential strains obtained from the optical fibers and the extensometer. The reconstructed full-field strain contours show strain heterogeneity from the crack closure phase onward, and the strains in the later deformation phases are dominantly localized within the former high-strain zone. The Gini coefficient was used to quantify the degree of strain localization; it shows an initial increase during the crack closure phase, a decrease during the linear elastic phase, and a subsequent increase during the post-yielding phase. This behavior corresponds to a process of initial localization from an imperfect boundary condition, homogenization, and eventual relocalization prior to the macroscopic failure of the sample. The transient strain rate decay during the stress relaxation phase was quantified using the p-value in the "Omori-like" power law function. A higher initial stress at the onset of relaxation results in a lower p-value, indicating a slower strain rate decay. As the sample approaches macroscopic failure, the lowest p-value shifts from the most damaged zone to adjacent areas, suggesting stress redistribution or crack propagation in deformed crystalline rocks under stress relaxation conditions.
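The Gini coefficient used above to quantify strain localization can be computed from pairwise absolute differences of the strain values. A small sketch on hypothetical strain fields (not the measured data):

```python
def gini(values):
    """Gini coefficient via the mean absolute difference:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean).

    G = 0 for a perfectly uniform strain field and approaches 1 as
    strain concentrates in a few locations (localization).
    """
    n = len(values)
    mean = sum(values) / n
    mad = sum(abs(a - b) for a in values for b in values)
    return mad / (2 * n * n * mean)

g_uniform = gini([1.0, 1.0, 1.0, 1.0])    # homogeneous field -> 0.0
g_local = gini([0.0, 0.0, 0.0, 10.0])     # strongly localized -> 0.75
```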
Nonlinear static procedures are widely adopted in structural engineering practice for seismic performance assessment due to their simplicity and computational efficiency. However, their reliability depends heavily on how the nonlinear behaviour of structural components is represented. The recent earthquakes in Albania (2019) and Türkiye (2023) have underscored the need for accurate assessment techniques, particularly for older reinforced concrete buildings with poor detailing. This study quantifies the discrepancies between default and user-defined component modelling in pushover analysis of pre-modern reinforced concrete structures, analysing two representative low- and mid-rise reinforced concrete frame buildings. The lumped plasticity approach incorporates moment-rotation relationships derived from actual member properties and reinforcement configurations, while the distributed plasticity approach uses software-generated default properties based on modern codes. Results show that the distributed plasticity models systematically overestimate both the strength and the deformation capacity by up to 35% compared to lumped plasticity models, especially in buildings with poor detailing and low concrete strength. These findings demonstrate that default software procedures, widely used in practice but not validated for pre-modern structures, produce dangerously unconservative seismic performance estimates. The study provides quantitative evidence of the critical need for tailored modelling strategies that reflect the actual conditions of the existing building stock.
The present study investigates the quest for a fully distributed Nash equilibrium (NE) in networked non-cooperative games, with particular emphasis on actuator limitations. Existing distributed NE seeking approaches often overlook practical input constraints or rely on centralized information. To address these issues, a novel edge-based double-layer adaptive control framework is proposed. Specifically, adaptive scaling parameters are embedded into the edge weights of the communication graph, enabling a fully distributed scheme that avoids dependence on centralized or global knowledge. Every participant modifies its strategy exclusively by utilizing local information and communicating with its neighbors to iteratively approach the NE. By incorporating damping terms into the design of the adaptive parameters, the proposed approach effectively suppresses unbounded parameter growth and consequently guarantees the boundedness of the adaptive gains. In addition, to account for actuator saturation, the proposed distributed NE seeking approach incorporates a saturation function, which ensures that control inputs do not exceed allowable ranges. A rigorous Lyapunov-based analysis guarantees the convergence and boundedness of all system variables. Finally, simulation results are presented to validate the efficacy and theoretical soundness of the proposed approach.
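The combination of gradient-based NE seeking with a saturation function can be illustrated on a toy two-player quadratic game. Everything below (the game, gains, and saturation level) is hypothetical, and the paper's edge-based adaptive layer is omitted; this only shows saturated gradient play converging to the NE.

```python
def sat(u, u_max):
    """Actuator saturation: clip the input to [-u_max, u_max]."""
    return max(-u_max, min(u_max, u))

def ne_seek(steps=5000, dt=0.01, u_max=1.0):
    """Saturated gradient play for a two-player quadratic game:
    J1 = (x1 - 1)^2 + 0.5*x1*x2,  J2 = (x2 - 2)^2 + 0.5*x1*x2.
    Each player descends its own cost gradient through a saturated
    actuator; the NE solves 2(x1-1)+0.5*x2 = 0, 2(x2-2)+0.5*x1 = 0.
    """
    x1 = x2 = 0.0
    for _ in range(steps):
        u1 = sat(-(2 * (x1 - 1) + 0.5 * x2), u_max)
        u2 = sat(-(2 * (x2 - 2) + 0.5 * x1), u_max)
        x1 += dt * u1
        x2 += dt * u2
    return x1, x2

x1, x2 = ne_seek()  # approaches the NE (8/15, 28/15)
```

The inputs saturate at 1.0 during the initial transient, then the unconstrained gradient dynamics take over and converge because the game's pseudo-gradient map is strongly monotone.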
This study examined non-uniform loading in goaf cantilever rock masses via testing, modeling, and mechanical analysis to address instantaneous fracture and section buckling caused by mining abutment pressure. The study investigates the effect of the non-uniform load gradient on fracture characteristics, including load characteristics, fracture location, fracture distribution, and section roughness. A digital model for fracture interface buckling analysis was developed, elucidating the influence of non-uniform load gradients on Fracture Interface Curvature (FIC), Buckling Rate of Change (BRC), and Buckling Domain Field (BDF). The findings reveal that nonlinear tensile stress concentration and abrupt tensile-compressive-shear strain mutations under non-uniform loading are the fundamental mechanisms driving fracture path buckling in cantilever rock mass structures. The buckling process of rock mass under non-uniform load can be divided into two stages: low load gradient and high load gradient. In the low-gradient stage, the buckling behavior is mainly reflected in compression-shear fracture of the edge. In the high-gradient stage, a buckling band along the loading direction gradually forms in the rock mass. These buckling principles establish a theoretical basis for accurately characterizing bearing fractures, fracture interface instability, and vibration sources within overlying cantilever rock masses in goaf.
To investigate the damage evolution caused by stress-driven, sub-critical crack propagation within Beishan granite under multi-stage creep triaxial compression conditions, distributed optical fiber sensing and X-ray computed tomography were combined to obtain the strain distribution over the sample surface and the internal fractures of the samples. The Gini and skewness (G-S) coefficients were used to quantify strain localization during the tests, where the Gini coefficient reflects the degree of clustering of elements with high strain values, i.e., strain localization/delocalization. The strain localization-induced asymmetry of the data distribution is quantified by the skewness coefficient. A precursor to granite failure is defined by the rapid and simultaneous increase of the G-S coefficients calculated from strain increments, giving an earlier warning of failure, by about 8% of peak stress, than coefficients computed from absolute strain values. Moreover, the process of damage accumulation due to stress-driven crack propagation in Beishan granite differs at various confining pressures once the stress exceeds the crack initiation stress. Concretely, strain localization is continuous until brittle failure at higher confining pressure, while both strain localization and delocalization occur at lower confining pressure. Despite the different stress conditions, a similar statistical characteristic of strain localization during the creep stage is observed: the Gini coefficient increases and the skewness coefficient decreases slightly while the creep stress is below 95% of peak stress. When accelerated strain localization begins, the Gini and skewness coefficients increase rapidly and simultaneously.
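The skewness half of the G-S pair measures the asymmetry that localization induces in the strain distribution: a few high-strain locations produce a long right tail. A moment-based sketch on hypothetical data (the moment form g1 = m3 / m2^1.5 is one standard definition; the paper's exact estimator is not specified here):

```python
def skewness(values):
    """Sample skewness g1 = m3 / m2**1.5 (central-moment form).
    Positive skew indicates a long right tail, i.e. a few locations
    carrying much higher strain than the bulk of the field.
    """
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m3 = sum((v - mean) ** 3 for v in values) / n
    return m3 / m2 ** 1.5

g_sym = skewness([1.0, 2.0, 3.0])        # symmetric data -> 0.0
g_tail = skewness([0.0, 0.0, 0.0, 10.0])  # localized strain -> positive
```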
In non-independent and identically distributed (non-IID) data environments, model performance often degrades significantly. To address this issue, two improvement methods are proposed: FedReg and FedReg*. FedReg is a method based on hybrid regularization aimed at enhancing federated learning in non-IID scenarios. It introduces hybrid regularization to replace traditional L2 regularization, combining the advantages of L1 and L2 regularization to enable feature selection while preventing overfitting. This method better adapts to the diverse data distributions of different clients, improving overall model performance. FedReg* combines hybrid regularization with weighted model aggregation. In addition to the benefits of hybrid regularization, FedReg* applies a weighted averaging method in the model aggregation process, calculating weights based on the cosine similarity between each client gradient and the global gradient to distribute client contributions more reasonably. By considering variations in data quality and quantity among clients, FedReg* highlights the importance of key clients and enhances the model's generalization performance. These improvements enhance both model accuracy and communication efficiency.
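FedReg*'s aggregation weights come from the cosine similarity between each client gradient and the global gradient. The sketch below shows one plausible weighting rule; the clip-to-non-negative-and-normalize step is an assumption for illustration, not taken from the paper.

```python
import math

def cosine(a, b):
    """Cosine similarity between two gradient vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def similarity_weights(client_grads, global_grad):
    """Aggregation weights from cosine similarity with the global
    gradient: similarities are clipped to be non-negative and
    normalized to sum to 1, so clients whose updates align with the
    global direction contribute more (assumed rule)."""
    sims = [max(0.0, cosine(g, global_grad)) for g in client_grads]
    total = sum(sims)
    return [s / total for s in sims]

# A client pushing opposite to the global direction gets zero weight.
w = similarity_weights([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])  # -> [1.0, 0.0]
```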
With the increasing popularity of blockchain applications, the security of data sources on the blockchain is gradually receiving attention. Providing reliable data for the blockchain safely and efficiently has become a research hotspot, and the security of the oracle responsible for providing reliable data has attracted much attention. The most widely used centralized oracles in blockchain, such as Provable and Town Crier, all rely on a single oracle to obtain data, which suffers from a single point of failure and limits the large-scale development of blockchain. To this end, distributed oracle schemes have been put forward, but the existing distributed oracle schemes such as Chainlink and Augur generally have low execution efficiency and high communication overhead, which leads to poor applicability. To solve the above problems, this paper proposes a trusted distributed oracle scheme based on a share recovery threshold signature. First, a data verification method for distributed oracles is designed based on threshold signatures. By aggregating the signatures of oracles, data from different data sources can be mutually verified, leading to a more efficient data verification and aggregation process. Then, a credibility-based cluster head election algorithm is designed, which reduces the communication overhead by clarifying the function distribution and building a hierarchical structure. Considering the good performance of the BLS threshold signature in large-scale applications, this paper combines it with distributed oracle technology and proposes a BLS threshold signature algorithm that supports share recovery in distributed oracles. The share recovery mechanism enables the proposed scheme to solve the key loss issue, and the setting of the threshold value enables the proposed scheme to complete signature aggregation with only a threshold number of oracles, making the scheme more robust. Finally, experimental results indicate that, by using the threshold signature technology and the cluster head election algorithm, our scheme effectively improves the execution efficiency of oracles and solves the problem of a single point of failure, leading to higher scalability and robustness.
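A production version of such a scheme needs BLS signatures over pairing-friendly curves, which is beyond a short sketch. The share-recovery ingredient, however, is the threshold structure of Shamir secret sharing with Lagrange interpolation, shown below over a prime field (field size and parameters illustrative):

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is mod P

def make_shares(secret, t, n):
    """Split `secret` into n shares, any t of which suffice to recover
    it (Shamir): shares are points on a random degree-(t-1) polynomial
    whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 from any t distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

A lost share can be rebuilt from any threshold-sized subset of the others, which is the robustness property the abstract attributes to the share recovery mechanism.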
With the advent of in-wheel motors and corner modules, the structure of vehicle chassis subsystems has shifted from traditionally centralized to distributed. This review focuses on the distributed chassis system (DCS) equipped with corner modules. It first provides a comprehensive summary and description of the evolution of the structure and control methods of vehicle chassis systems (including driving, braking, suspension, and steering systems). Given that a DCS integrates various chassis subsystems, this review moves beyond individual subsystem analysis and delves into the coordination of these subsystems at the vehicle level. It provides a detailed summary of the methods and architectures used for integrated coordination and control, ensuring that multiple subsystems can function seamlessly as an integrated whole. Finally, this review summarizes the latest distributed control architectures for DCS. It also examines current control theories in the fields of control and information technology for distributed systems, such as multi-agent systems and cyber-physical systems. Based on these two control approaches, a multi-domain cooperative control framework for DCS is proposed.
This paper designs distributed Nash equilibrium seeking strategies for heterogeneous dynamic cyber-physical systems. In particular, we are concerned with parametric uncertainties in the control channel of the players. Moreover, the weights on communication links can be compromised by time-varying uncertainties, which can result from possibly malicious attacks, faults, and disturbances. To deal with the unavailability of measurements of optimization errors, an output observer is constructed, based on which adaptive laws are designed to compensate for physical uncertainties. With these adaptive laws, a new distributed Nash equilibrium seeking strategy is designed by further integrating consensus protocols and gradient search algorithms. Moreover, to further accommodate compromised communication weights resulting from cyber-uncertainties, the coupling strengths of the consensus module are designed to be adaptive. As a byproduct, the coupling strengths are independent of any global information. Through theoretical investigation, it is proven that the proposed strategies are resilient to these uncertainties and that the players' actions converge to the Nash equilibrium. Simulation examples are given to numerically validate the effectiveness of the proposed strategies.
To address the stochasticity and uncertainty in the power output of distributed photovoltaic (PV) generation, this paper presents a distributed photovoltaic ultra-short-term power forecasting method based on Variational Mode Decomposition (VMD) and a channel attention mechanism. First, Pearson's correlation coefficient is utilized to select the meteorological factors that have a high impact on historical power. Second, the distributed PV power data are decomposed by VMD into relatively smooth power series with different fluctuation patterns. Finally, the reconstructed distributed PV power, together with the other features, is input into the combined CNN-SENet-BiLSTM model. In this model, the convolutional neural network (CNN) and channel attention mechanism dynamically adjust the weights while capturing the spatial features of the input data to improve the discriminative ability of key features. The extracted data are then fed into a bidirectional long short-term memory network (BiLSTM) to capture the time-series features, and the final output is the prediction result. Verification is conducted using a dataset from a distributed photovoltaic power station in the Northwest region of China. The results show that, compared with other prediction methods, the proposed method achieves higher prediction accuracy, which helps to increase the proportion of distributed PV access to the grid and can guarantee the safe and stable operation of the power grid.
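The first step, Pearson-based screening of meteorological factors against historical power, can be sketched as follows (the series, feature names, and the 0.5 threshold are hypothetical, chosen only to illustrate the selection rule):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_features(features, power, threshold=0.5):
    """Keep meteorological series whose |r| with historical power
    exceeds the threshold (threshold value is illustrative)."""
    return [name for name, series in features.items()
            if abs(pearson(series, power)) >= threshold]

power = [1.0, 2.0, 3.0, 4.0]
feats = {"irradiance": [2.0, 4.0, 6.0, 8.0],       # perfectly correlated
         "humidity_noise": [1.0, -1.0, 1.0, -1.0]}  # weakly correlated
selected = select_features(feats, power)  # -> ["irradiance"]
```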
After a century of relative stability in the electricity sector, the widespread adoption of distributed energy resources, along with recent advancements in computing and communication technologies, has fundamentally altered how energy is consumed, traded, and utilized. This change signifies a crucial shift as the power system evolves from its traditional hierarchical organization to a more decentralized approach. At the heart of this transformation are innovative energy distribution models, like peer-to-peer (P2P) sharing, which enable communities to collaboratively manage their energy resources. Effective P2P sharing not only improves the economic prospects for prosumers, who both generate and consume energy, but also enhances energy resilience and sustainability. This allows communities to better leverage local resources while fostering a sense of collective responsibility and collaboration in energy management. However, such sharing models have not yet seen extensive implementation in today's electricity markets. Research on distributed energy P2P trading is still in the exploratory stage, and it is particularly important to comprehensively understand and analyze the existing distributed energy P2P trading market. This paper contributes an overview of P2P markets that starts with the network framework, market structure, technical approaches to trading mechanisms, and blockchain technology, and then moves to the outlook for this field.
The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for addressing such a problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is the lack of utilization of historical information, which could lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. As a consequence, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP targeting the minimal makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search by extending, reconstructing, and reinforcing the information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the number of prior parameters of MLIGA, a probability curve-based acceptance criterion is proposed by combining a cube root function with custom rules. Finally, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments verify the effectiveness of the memory mechanism, and the results show that this mechanism is capable of improving the performance of IGA to a large extent. Furthermore, through comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks, we find that MLIGA demonstrates significant potential for solving large-scale DPFSPs. This indicates that MLIGA is well suited for real-world distributed flow shop scheduling.
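MLIGA builds on the basic iterated greedy loop: destroy part of the permutation, rebuild it by best insertion, and accept by some criterion. The sketch below is only the plain IG skeleton with greedy acceptance for a single-factory permutation flow shop; the paper's memory mechanism, learning layers, and cube-root acceptance rule are omitted, and the instance data are hypothetical.

```python
import random

def makespan(perm, p):
    """Permutation flow shop makespan; p[job][machine] = processing time.
    Rolling recurrence: C[j][k] = max(C[j-1][k], C[j][k-1]) + p[j][k]."""
    m = len(p[0])
    c = [0.0] * m
    for j in perm:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def best_insertion(perm, job, p):
    """NEH-style reinsertion: place `job` at its best position."""
    best = None
    for pos in range(len(perm) + 1):
        cand = perm[:pos] + [job] + perm[pos:]
        if best is None or makespan(cand, p) < makespan(best, p):
            best = cand
    return best

def iterated_greedy(p, d=2, iters=100, seed=0):
    """Basic destruction-reconstruction IG with greedy acceptance."""
    rng = random.Random(seed)
    cur = list(range(len(p)))
    best = cur[:]
    for _ in range(iters):
        removed = rng.sample(cur, min(d, len(cur)))   # destruction
        partial = [j for j in cur if j not in removed]
        for j in removed:                             # reconstruction
            partial = best_insertion(partial, j, p)
        if makespan(partial, p) <= makespan(cur, p):  # acceptance
            cur = partial
        if makespan(cur, p) < makespan(best, p):
            best = cur[:]
    return best, makespan(best, p)
```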
In this paper, we investigate the periodic traveling wave solution problem for a single population model with advection and distributed delay. By the bifurcation analysis method, we obtain periodic traveling wave solutions for this model under the influence of the advection term and the distributed delay. The obtained results indicate that both the weak kernel and the strong kernel can yield the existence of periodic traveling wave solutions. Finally, we apply the main results to the logistic model and Nicholson's blowflies model.
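For reference, the "weak" and "strong" kernels in the distributed-delay literature are usually the gamma kernels below, and the model form shown is a representative advection logistic equation with distributed delay, written here as a generic illustration rather than the paper's exact system (the constants $\alpha, d, \beta, r, K > 0$ are unspecified):

```latex
G_{\mathrm{weak}}(s) = \alpha e^{-\alpha s}, \qquad
G_{\mathrm{strong}}(s) = \alpha^{2} s\, e^{-\alpha s}, \qquad
\int_{0}^{\infty} G(s)\, ds = 1,
\\[1ex]
\frac{\partial u}{\partial t} + \beta \frac{\partial u}{\partial x}
  = d \frac{\partial^{2} u}{\partial x^{2}}
  + r u \left( 1 - \frac{1}{K} \int_{0}^{\infty} G(s)\, u(x, t - s)\, ds \right).
```

The weak kernel weights the recent past most heavily, while the strong kernel peaks at delay $s = 1/\alpha$, which is what produces the different bifurcation behavior of the two cases.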
The word "spatial" fundamentally relates to human existence, evolution, and activity in terrestrial and even celestial spaces. After reviewing the spatial features of many areas, the paper describes the basics of a high-level model and technology called Spatial Grasp for dealing with large distributed systems, which can provide spatial vision, awareness, management, control, and even consciousness. The technology description includes its key Spatial Grasp Language (SGL), the self-evolution of recursive SGL scenarios, and the implementation of an SGL interpreter converting distributed networked systems into powerful spatial engines. Examples of typical spatial scenarios in SGL include finding the shortest path tree and the shortest path between network nodes, collecting relevant information throughout the whole world, the elimination of multiple targets by intelligent teams of chasers, and withstanding cyber attacks in distributed networked systems. The paper also compares the Spatial Grasp model with traditional algorithms, arguing that the former is universal for any spatial systems, while the latter are just tools for concrete applications.
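One of the listed SGL scenarios, finding a shortest path tree, corresponds to the classical Dijkstra computation. The sketch below is a conventional centralized version for comparison, not SGL's distributed, self-navigating implementation (graph data hypothetical):

```python
import heapq

def shortest_path_tree(graph, src):
    """Dijkstra's algorithm: returns (dist, parent). The parent
    pointers form the shortest path tree rooted at src; following
    them back from any node yields its shortest path."""
    dist = {src: 0.0}
    parent = {src: None}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return dist, parent

graph = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("c", 2.0), ("d", 6.0)],
    "c": [("d", 1.0)],
}
dist, parent = shortest_path_tree(graph, "a")
```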
Fractional repetition (FR) codes are integral to distributed storage systems (DSS) with exact repair-by-transfer, while pliable fractional repetition codes are vital for DSSs in which both the per-node storage and the repetition degree can easily be adjusted simultaneously. This paper introduces a new type of pliable FR codes, called absolute balanced pliable FR (ABPFR) codes, in which access balancing in the DSS is considered. Additionally, the equivalence between pliable FR codes and resolvable transversal packings in combinatorial design theory is presented. Constructions of pliable FR codes and ABPFR codes based on resolvable transversal packings are then given.
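An FR code stores θ symbols, each replicated ρ times, across n nodes, and access balancing asks the per-node loads to be (near-)equal. The toy round-robin layout below (valid for ρ ≤ n) illustrates only these defining checks; it is not the paper's transversal-packing construction, and all parameters are hypothetical.

```python
def fr_code(n, theta, rho):
    """Round-robin placement of theta symbols, each repeated rho times,
    across n storage nodes. With rho <= n, the rho copies of a symbol
    land on rho distinct nodes. Returns the per-node symbol lists."""
    nodes = [[] for _ in range(n)]
    slot = 0
    for sym in range(theta):
        for _ in range(rho):
            nodes[slot % n].append(sym)
            slot += 1
    return nodes

def check_fr(nodes, theta, rho):
    """Verify repetition degree rho for every symbol and near-equal
    per-node loads (the access-balancing flavor of the abstract)."""
    counts = [0] * theta
    for node in nodes:
        for sym in node:
            counts[sym] += 1
    loads = [len(node) for node in nodes]
    balanced = max(loads) - min(loads) <= 1
    return all(c == rho for c in counts) and balanced
```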
Funding: supported by the National Key Research and Development Program of China (2025YFE0213100), the National Natural Science Foundation of China (62422315, 62573348), the Natural Science Basic Research Program of Shaanxi (2025JC-YBMS-667), and the "Shuang Yi Liu" Construction Foundation (25GH02010366).
Funding: supported by the Scientific Research Fund of the National Health Commission of the People's Republic of China: Major Science and Technology Program for Medicine and Health in Zhejiang Province (WKJ-ZJ-2406).
Funding: Supported by the Research Year Project of Kongju National University in 2025 and the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2024-00444170, Research and International Collaboration on Trust Model-Based Intelligent Incident Response Technologies in 6G Open Network Environment).
Abstract: With the growing complexity and decentralization of network systems, the attack surface has expanded, which has led to greater concerns over network threats. In this context, artificial intelligence (AI)-based network intrusion detection systems (NIDS) have been extensively studied, and recent efforts have shifted toward integrating distributed learning to enable intelligent and scalable detection mechanisms. However, most existing works focus on individual distributed learning frameworks, and there is a lack of systematic evaluations that compare different algorithms under consistent conditions. In this paper, we present a comprehensive evaluation of representative distributed learning frameworks, namely Federated Learning (FL), Split Learning (SL), hybrid collaborative learning (SFL), and fully distributed learning, in the context of AI-driven NIDS. Using recent benchmark intrusion detection datasets, a unified model backbone, and controlled distributed scenarios, we assess these frameworks across multiple criteria, including detection performance, communication cost, computational efficiency, and convergence behavior. Our findings highlight distinct trade-offs among the distributed learning frameworks, demonstrating that the optimal choice depends strongly on system constraints such as bandwidth availability, node resources, and data distribution. This work provides the first holistic analysis of distributed learning approaches for AI-driven NIDS and offers practical guidelines for designing secure and efficient intrusion detection systems in decentralized environments.
Funding: Support of her postdoctoral research at the GFZ Helmholtz Centre for Geosciences. P. Pan acknowledges the financial support of the National Natural Science Foundation of China (Grant No. 52339001). H. Hofmann and Y. Ji acknowledge the financial support of the Helmholtz Association's Initiative and Networking Fund for the Helmholtz Young Investigator Group ARES (contract number VH-NG-1516).
Abstract: A multi-stage stress relaxation test was performed on a granodiorite sample to understand the deformation process prior to the macroscopic failure of brittle rocks, as well as the transient response during stress relaxation. Distributed optical fiber sensing was used to measure strains across the sample surface by helically wrapping a single-mode fiber around the cylindrical sample. Close agreement was observed between the circumferential strains obtained from the optical fibers and those from the extensometer. The reconstructed full-field strain contours show strain heterogeneity from the crack closure phase onward, and strains in the later deformation phases are dominantly localized within the former high-strain zone. The Gini coefficient was used to quantify the degree of strain localization; it shows an initial increase during the crack closure phase, a decrease during the linear elastic phase, and a subsequent increase during the post-yielding phase. This behavior corresponds to a process of initial localization from an imperfect boundary condition, homogenization, and eventual relocalization prior to the macroscopic failure of the sample. The transient strain rate decay during the stress relaxation phase was quantified using the p-value in the "Omori-like" power law function. A higher initial stress at the onset of relaxation results in a lower p-value, indicating a slower strain rate decay. As the sample approaches macroscopic failure, the lowest p-value shifts from the most damaged zone to adjacent areas, suggesting stress redistribution or crack propagation in deformed crystalline rocks under stress relaxation conditions.
Abstract: Nonlinear static procedures are widely adopted in structural engineering practice for seismic performance assessment due to their simplicity and computational efficiency. However, their reliability depends heavily on how the nonlinear behaviour of structural components is represented. The recent earthquakes in Albania (2019) and Türkiye (2023) have underscored the need for accurate assessment techniques, particularly for older reinforced concrete buildings with poor detailing. This study quantifies the discrepancies between default and user-defined component modelling in pushover analysis of pre-modern reinforced concrete structures, analysing two representative low- and mid-rise reinforced concrete frame buildings. The lumped plasticity approach incorporates moment-rotation relationships derived from actual member properties and reinforcement configurations, while the distributed plasticity approach uses software-generated default properties based on modern codes. Results show that the distributed plasticity models systematically overestimate both the strength and the deformation capacity by up to 35% compared to lumped plasticity models, especially in buildings with poor detailing and low concrete strength. These findings demonstrate that default software procedures, widely used in practice but not validated for pre-modern structures, produce dangerously unconservative seismic performance estimates. The study provides quantitative evidence of the critical need for tailored modelling strategies that reflect the actual conditions of the existing building stock.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62173009) and the National Key Research and Development Program of China (Grant No. 2021ZD0112302).
Abstract: This study investigates fully distributed Nash equilibrium (NE) seeking in networked non-cooperative games, with particular emphasis on actuator limitations. Existing distributed NE seeking approaches often overlook practical input constraints or rely on centralized information. To address these issues, a novel edge-based double-layer adaptive control framework is proposed. Specifically, adaptive scaling parameters are embedded into the edge weights of the communication graph, enabling a fully distributed scheme that avoids dependence on centralized or global knowledge. Every participant modifies its strategy by exclusively utilizing local information and communicating with its neighbors to iteratively approach the NE. By incorporating damping terms into the design of the adaptive parameters, the proposed approach effectively suppresses unbounded parameter growth and consequently guarantees the boundedness of the adaptive gains. In addition, to account for actuator saturation, the proposed distributed NE seeking approach incorporates a saturation function, which ensures that control inputs do not exceed allowable ranges. A rigorous Lyapunov-based analysis guarantees the convergence and boundedness of all system variables. Finally, simulation results validate the efficacy and theoretical soundness of the proposed approach.
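The role of the saturation function can be illustrated with a discrete-time, full-information gradient-play sketch on a two-player quadratic game. This is only a toy counterpart of the paper's scheme (which is distributed, continuous-time, and adaptive); the game, step size, and input bound are illustrative assumptions:

```python
import numpy as np

def saturated_gradient_play(grads, x0, u_max=1.0, step=0.05, iters=3000):
    """Discrete-time gradient play with actuator saturation: each player
    descends its own partial gradient, and the input is clipped to
    [-u_max, u_max] before being applied (the saturation function)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        u = np.clip(np.array([g(x) for g in grads]), -u_max, u_max)
        x = x - step * u
    return x

# Two-player quadratic game: J1 = (x1 - 1)^2 + 0.5*x1*x2 and
# J2 = (x2 + 1)^2 + 0.5*x1*x2; setting both partial gradients to zero
# gives the unique NE (4/3, -4/3).
g1 = lambda x: 2 * (x[0] - 1) + 0.5 * x[1]
g2 = lambda x: 2 * (x[1] + 1) + 0.5 * x[0]
x_star = saturated_gradient_play([g1, g2], [0.0, 0.0])
```

Because the pseudo-gradient of this game is strongly monotone, the clipped iteration still reaches the NE; saturation only slows the transient while keeping inputs within bounds.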
Funding: Support provided by the National Natural Science Foundation of China (No. 52274077), the Natural Science Foundation of Henan (No. 242300421072), the Youth Elite Teachers Cultivation Program for Higher Education Institutions in Henan Province (No. 2024GGJS036), the Funds for Distinguished Young Scholars of Henan Polytechnic University (No. J2023-3), and the Young Core Teacher Funding Scheme of Henan Polytechnic University (No. 2023XQG-09).
Abstract: This study examined non-uniform loading in goaf cantilever rock masses via testing, modeling, and mechanical analysis to address the instantaneous fracture and section buckling caused by mining abutment pressure. It investigates the effect of the non-uniform load gradient on fracture characteristics, including load characteristics, fracture location, fracture distribution, and section roughness. A digital model for fracture interface buckling analysis was developed, elucidating the influence of non-uniform load gradients on Fracture Interface Curvature (FIC), Buckling Rate of Change (BRC), and Buckling Domain Field (BDF). The findings reveal that nonlinear tensile stress concentration and abrupt tensile-compressive-shear strain mutations under non-uniform loading are the fundamental mechanisms driving fracture path buckling in cantilever rock mass structures. The buckling process of rock mass under non-uniform load can be divided into two stages: low load gradient and high load gradient. In the low-gradient stage, buckling behavior is mainly reflected in compression-shear fracture at the edge. In the high-gradient stage, a buckling band along the loading direction gradually forms in the rock mass. These buckling principles establish a theoretical basis for accurately characterizing bearing fractures, fracture interface instability, and vibration sources within overlying cantilever rock masses in goaf.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52339001).
Abstract: To investigate the damage evolution caused by stress-driven, sub-critical crack propagation within Beishan granite under multi-stage creep conditions in triaxial compression, distributed optical fiber sensing and X-ray computed tomography were combined to obtain the strain distribution over the sample surface and the internal fractures of the samples. The Gini and skewness (G-S) coefficients were used to quantify strain localization during the tests, where the Gini coefficient reflects the degree of clustering of elements with high strain values, i.e., strain localization/delocalization. The strain-localization-induced asymmetry of the data distribution is quantified by the skewness coefficient. A precursor to granite failure is defined by the rapid and simultaneous increase of the G-S coefficients calculated from strain increments, giving a warning of failure about 8% of peak stress earlier than coefficients calculated from absolute strain values. Moreover, the process of damage accumulation due to stress-driven crack propagation in Beishan granite differs at various confining pressures once the stress exceeds the crack initiation stress. Specifically, strain localization is continuous until brittle failure at higher confining pressure, while both strain localization and delocalization occur at lower confining pressure. Despite the different stress conditions, a similar statistical characteristic of strain localization during the creep stage is observed: the Gini coefficient increases and the skewness coefficient decreases slightly while the creep stress is below 95% of peak stress. When accelerated strain localization begins, the Gini and skewness coefficients increase rapidly and simultaneously.
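The Gini and skewness coefficients used in this and the earlier fiber-sensing abstract are standard statistics. A minimal sketch of how they behave on a strain field; the arrays are synthetic illustrations, not test data:

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative field: 0 for a uniform field,
    approaching 1 as the values concentrate in a few elements."""
    v = np.sort(np.asarray(values, dtype=float))
    n, s = v.size, v.sum()
    if s == 0:
        return 0.0
    i = np.arange(1, n + 1)
    return float(2 * np.sum(i * v) / (n * s) - (n + 1) / n)

def skewness(values):
    """Sample (Fisher-Pearson) skewness; positive when a few high-strain
    elements pull the distribution's tail to the right. Undefined for a
    constant field (zero standard deviation)."""
    v = np.asarray(values, dtype=float)
    z = (v - v.mean()) / v.std()
    return float(np.mean(z ** 3))

# A uniform strain field has Gini 0; one strongly localized element
# raises both coefficients.
uniform = np.ones(100)
localized = np.ones(100)
localized[0] = 50.0
```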
Abstract: In non-independent and identically distributed (non-IID) data environments, model performance often degrades significantly. To address this issue, two improvement methods are proposed: FedReg and FedReg*. FedReg is a method based on hybrid regularization aimed at enhancing federated learning in non-IID scenarios. It replaces traditional L2 regularization with a hybrid regularizer that combines the advantages of L1 and L2 regularization, enabling feature selection while preventing overfitting. This method better adapts to the diverse data distributions of different clients, improving overall model performance. FedReg* combines hybrid regularization with weighted model aggregation. In addition to the benefits of hybrid regularization, FedReg* applies a weighted averaging method in the model aggregation process, calculating weights based on the cosine similarity between each client's gradient and the global gradient to distribute client contributions more reasonably. By considering variations in data quality and quantity among clients, FedReg* highlights the importance of key clients and enhances the model's generalization performance. These improvements enhance both model accuracy and communication efficiency.
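The two ingredients described for FedReg and FedReg* can be sketched as follows. The clipping of negative similarities, the penalty coefficients, and the function names are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def hybrid_penalty(theta, l1=0.01, l2=0.01):
    """Hybrid (L1 + L2) regularization term added to a client's local
    loss; the coefficients here are illustrative."""
    theta = np.asarray(theta, dtype=float)
    return float(l1 * np.abs(theta).sum() + l2 * np.sum(theta ** 2))

def cosine_weighted_average(client_updates, global_update, eps=1e-12):
    """Aggregate client updates with weights proportional to the cosine
    similarity between each client update and the global one; negative
    similarities are clipped to zero (an illustrative choice)."""
    g = np.asarray(global_update, dtype=float)
    sims = np.array([max(0.0, float(np.dot(np.asarray(c, dtype=float), g))
                     / (np.linalg.norm(c) * np.linalg.norm(g) + eps))
                     for c in client_updates])
    w = sims / (sims.sum() + eps)
    agg = sum(wi * np.asarray(c, dtype=float)
              for wi, c in zip(w, client_updates))
    return agg, w

# A client aligned with the global direction dominates the average;
# an orthogonal client is weighted out.
agg, w = cosine_weighted_average([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
```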
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62102449) and the Central Plains Talent Program under Grant No. 224200510003.
Abstract: With the increasing popularity of blockchain applications, the security of data sources on the blockchain is gradually receiving attention. Providing reliable data to the blockchain safely and efficiently has become a research hotspot, and the security of the oracle responsible for providing reliable data has attracted much attention. The most widely used centralized oracles in blockchain, such as Provable and Town Crier, rely on a single oracle to obtain data, which suffers from a single point of failure and limits the large-scale development of blockchain. To this end, distributed oracle schemes have been put forward, but existing schemes such as Chainlink and Augur generally have low execution efficiency and high communication overhead, which limits their applicability. To solve the above problems, this paper proposes a trusted distributed oracle scheme based on a share-recovery threshold signature. First, a data verification method for distributed oracles is designed based on threshold signatures. By aggregating the signatures of oracles, data from different data sources can be mutually verified, leading to a more efficient data verification and aggregation process. Then, a credibility-based cluster head election algorithm is designed, which reduces communication overhead by clarifying the distribution of functions and building a hierarchical structure. Considering the good performance of the BLS threshold signature in large-scale applications, this paper combines it with distributed oracle technology and proposes a BLS threshold signature algorithm that supports share recovery in distributed oracles. The share recovery mechanism enables the proposed scheme to address the key loss issue, and the threshold setting enables the scheme to complete signature aggregation with only a threshold number of oracles, making the scheme more robust.
Finally, experimental results indicate that, by using the threshold signature technology and the cluster head election algorithm, our scheme effectively improves the execution efficiency of oracles and solves the problem of a single point of failure, leading to higher scalability and robustness.
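The share-recovery idea rests on Lagrange interpolation over a finite field: any t shares of a degree-(t-1) polynomial reconstruct both the secret and any lost share. A minimal sketch over a prime field; real BLS threshold signing operates in a pairing-friendly group, and the field modulus here is an illustrative choice:

```python
import random

P = 2 ** 127 - 1  # a Mersenne prime, used here as the field modulus

def make_shares(secret, t, n):
    """Split `secret` into n shares such that any t of them suffice."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):  # Horner evaluation of the degree-(t-1) polynomial
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def recover(shares, x=0):
    """Lagrange interpolation at `x`: x = 0 yields the secret, while
    x = k rebuilds node k's lost share from any t surviving shares."""
    val = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = (num * (x - xj)) % P
                den = (den * (xi - xj)) % P
        val = (val + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return val
```

In a threshold signature, the same interpolation-in-the-exponent combines t partial signatures into one aggregate, which is what lets the scheme tolerate unavailable or key-lost oracles.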
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52072072, 52025121, and 52394263).
Abstract: With the advent of in-wheel motors and corner modules, the structure of vehicle chassis subsystems has shifted from traditionally centralized to distributed. This review focuses on the distributed chassis system (DCS) equipped with corner modules. It first provides a comprehensive summary and description of the evolution of the structure and control methods of vehicle chassis systems (including the driving, braking, suspension, and steering systems). Given that a DCS integrates various chassis subsystems, this review moves beyond individual subsystem analysis and delves into the coordination of these subsystems at the vehicle level. It provides a detailed summary of the methods and architectures used for integrated coordination and control, ensuring that multiple subsystems can function seamlessly as an integrated whole. Finally, this review summarizes the latest distributed control architectures for DCS and examines current control theories in the fields of control and information technology for distributed systems, such as multi-agent systems and cyber-physical systems. Based on these two control approaches, a multi-domain cooperative control framework for DCS is proposed.
Funding: Supported by the National Key R&D Program of China (2022ZD0119604), the National Natural Science Foundation of China (NSFC) (62173181, 62222308, 62221004), and the Natural Science Foundation of Jiangsu Province (BK20220139).
Abstract: This paper designs distributed Nash equilibrium seeking strategies for heterogeneous dynamic cyber-physical systems. In particular, we are concerned with parametric uncertainties in the control channel of the players. Moreover, the weights on communication links can be compromised by time-varying uncertainties, which can result from possibly malicious attacks, faults, and disturbances. To deal with the unavailability of measurements of the optimization errors, an output observer is constructed, based on which adaptive laws are designed to compensate for physical uncertainties. With these adaptive laws, a new distributed Nash equilibrium seeking strategy is designed by further integrating consensus protocols and gradient search algorithms. Moreover, to further accommodate compromised communication weights resulting from cyber-uncertainties, the coupling strengths of the consensus module are designed to be adaptive. As a byproduct, the coupling strengths are independent of any global information. Theoretical investigations prove that the proposed strategies are resilient to these uncertainties and that players' actions converge to the Nash equilibrium. Simulation examples are given to numerically validate the effectiveness of the proposed strategies.
Funding: Supported by the Inner Mongolia Power Company 2024 Staff Innovation Studio Innovation Project "Research on Cluster Output Prediction and Group Control Technology for County-Wide Distributed Photovoltaic Construction".
Abstract: To address the stochasticity and uncertainty in the power output of distributed photovoltaic generation, this paper presents a distributed photovoltaic ultra-short-term power forecasting method based on Variational Mode Decomposition (VMD) and a channel attention mechanism. First, Pearson's correlation coefficient is used to select the meteorological factors with a high impact on historical power. Second, the distributed PV power data are decomposed by VMD into relatively smooth power series with different fluctuation patterns. Finally, the reconstructed distributed PV power and the other features are input into the combined CNN-SENet-BiLSTM model. In this model, a convolutional neural network (CNN) with a channel attention mechanism captures the spatial features of the input data while dynamically adjusting channel weights to improve the discriminability of key features. The extracted features are then fed into a bidirectional long short-term memory network (BiLSTM) to capture temporal features, and the final output is the prediction result. The method is verified on a dataset from a distributed photovoltaic power station in Northwest China. The results show that, compared with other prediction methods, the proposed method achieves higher prediction accuracy, which helps increase the share of distributed PV connected to the grid while guaranteeing the safe and stable operation of the power grid.
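The first step of the pipeline, Pearson-based screening of meteorological factors, can be sketched as follows; the threshold value and the synthetic series are illustrative assumptions, not values from the paper:

```python
import numpy as np

def select_by_pearson(features, power, threshold=0.3):
    """Keep the meteorological series whose absolute Pearson correlation
    with historical power meets the threshold; returns {name: r}."""
    y = np.asarray(power, dtype=float)
    kept = {}
    for name, x in features.items():
        r = float(np.corrcoef(np.asarray(x, dtype=float), y)[0, 1])
        if abs(r) >= threshold:
            kept[name] = r
    return kept

# Synthetic illustration: irradiance tracks power, pure noise does not.
rng = np.random.default_rng(0)
power = np.linspace(0.0, 1.0, 200) + 0.05 * rng.standard_normal(200)
feats = {"irradiance": 0.9 * power + 0.02 * rng.standard_normal(200),
         "noise": rng.standard_normal(200)}
kept = select_by_pearson(feats, power)
```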
Funding: Funded by the National Natural Science Foundation of China (52167013), the Key Program of the Natural Science Foundation of Gansu Province (24JRRA225), and the Natural Science Foundation of Gansu Province (23JRRA891).
Abstract: After a century of relative stability in the electricity sector, the widespread adoption of distributed energy resources, along with recent advancements in computing and communication technologies, has fundamentally altered how energy is consumed, traded, and utilized. This change signifies a crucial shift as the power system evolves from its traditional hierarchical organization to a more decentralized approach. At the heart of this transformation are innovative energy distribution models, like peer-to-peer (P2P) sharing, which enable communities to collaboratively manage their energy resources. Effective P2P sharing not only improves the economic prospects for prosumers, who both generate and consume energy, but also enhances energy resilience and sustainability. This allows communities to better leverage local resources while fostering a sense of collective responsibility and collaboration in energy management. However, such sharing models have not yet been extensively implemented in today's electricity markets. Research on distributed energy P2P trading is still in the exploratory stage, and it is particularly important to comprehensively understand and analyze the existing distributed energy P2P trading market. This paper contributes an overview of P2P markets that starts with the network framework, market structure, technical approaches to trading mechanisms, and blockchain technology, and closes with an outlook on this field.
Funding: Supported in part by the National Key Research and Development Program of China under Grant No. 2021YFF0901300, and in part by the National Natural Science Foundation of China under Grant Nos. 62173076 and 72271048.
Abstract: The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for addressing such a problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is the lack of utilization of historical information, which could lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. As a consequence, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP with the objective of minimizing makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search by extending, reconstructing, and reinforcing the information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the prior parameters of MLIGA, a probability-curve-based acceptance criterion is proposed by combining a cube root function with custom rules. Finally, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments verify the effectiveness of the memory mechanism, and the results show that this mechanism is capable of improving the performance of IGA to a large extent. Furthermore, through comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks, we find that MLIGA demonstrates significant potential for solving large-scale DPFSPs. This indicates that MLIGA is well-suited for real-world distributed flow shop scheduling.
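The baseline IGA that MLIGA builds on follows a destroy-and-reconstruct loop: remove d jobs, greedily reinsert each at its best position, and keep candidates that do not worsen the makespan. A minimal single-factory sketch; the paper's distributed, memory- and learning-augmented version is far richer, and d, the iteration budget, and the acceptance rule here are illustrative:

```python
import random

def makespan(seq, p):
    """Makespan of a permutation flow shop: p[j][m] is the processing
    time of job j on machine m; jobs visit machines in order."""
    c = [0] * len(p[0])
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, len(c)):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def iterated_greedy(p, d=2, iters=200, seed=1):
    """Basic IGA loop: destroy d random jobs, greedily reinsert each at
    its best position, accept candidates that do not worsen the
    makespan. MLIGA layers memory and learning on this skeleton."""
    rng = random.Random(seed)
    seq = list(range(len(p)))
    best = makespan(seq, p)
    for _ in range(iters):
        cand = seq[:]
        removed = [cand.pop(rng.randrange(len(cand))) for _ in range(d)]
        val = best
        for job in removed:
            # try every insertion slot and keep the cheapest one
            pos, val = min(((i, makespan(cand[:i] + [job] + cand[i:], p))
                            for i in range(len(cand) + 1)),
                           key=lambda t: t[1])
            cand.insert(pos, job)
        if val <= best:
            seq, best = cand, val
    return seq, best
```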
Funding: Supported by the National Natural Science Foundation of China (12261050), the Science and Technology Project of the Department of Education of Jiangxi Province (GJJ2201612 and GJJ211027), and the Natural Science Foundation of Jiangxi Province of China (20212BAB202021).
Abstract: In this paper, we investigate periodic traveling wave solutions for a single population model with advection and distributed delay. By the bifurcation analysis method, we obtain periodic traveling wave solutions for this model under the influence of the advection term and the distributed delay. The obtained results indicate that both the weak kernel and the strong kernel can yield the existence of periodic traveling wave solutions. Finally, we apply the main results to the Logistic model and Nicholson's blowflies model.
Abstract: The word "spatial" fundamentally relates to human existence, evolution, and activity in terrestrial and even celestial spaces. After reviewing the spatial features of many areas, the paper describes the basics of a high-level model and technology called Spatial Grasp for dealing with large distributed systems, which can provide spatial vision, awareness, management, control, and even consciousness. The technology description covers its key Spatial Grasp Language (SGL), the self-evolution of recursive SGL scenarios, and the implementation of the SGL interpreter, which converts distributed networked systems into powerful spatial engines. Examples of typical spatial scenarios in SGL include finding the shortest path tree and the shortest path between network nodes, collecting relevant information throughout the whole world, the elimination of multiple targets by intelligent teams of chasers, and withstanding cyber attacks in distributed networked systems. The paper also compares the Spatial Grasp model with traditional algorithms, confirming the universality of the former for any spatial systems, while the latter are merely tools for concrete applications.
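For reference, the first example scenario, finding a shortest path tree, corresponds in conventional terms to Dijkstra's algorithm, one of the "traditional algorithms" the paper compares against. A minimal sketch in Python (the paper's own formulation would be in SGL, which is not reproduced here):

```python
import heapq

def shortest_path_tree(graph, source):
    """Dijkstra's algorithm: graph[u] = {v: weight}. Returns (dist,
    parent), where `parent` encodes the shortest path tree at `source`."""
    dist, parent = {source: 0}, {source: None}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, parent

# Toy network: the tree routes a->b->c->d rather than the direct a->c edge.
g = {"a": {"b": 1, "c": 4}, "b": {"c": 2, "d": 6}, "c": {"d": 3}, "d": {}}
dist, parent = shortest_path_tree(g, "a")
```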
Funding: Supported in part by the National Key R&D Program of China (No. 2020YFA0712300) and NSFC (No. 61872353).
Abstract: Fractional repetition (FR) codes are integral to distributed storage systems (DSSs) with exact repair-by-transfer, while pliable fractional repetition codes are vital for DSSs in which both the per-node storage and the repetition degree can be easily adjusted simultaneously. This paper introduces a new type of pliable FR codes, called absolute balanced pliable FR (ABPFR) codes, which take access balancing in DSSs into account. Additionally, the equivalence between pliable FR codes and resolvable transversal packings in combinatorial design theory is presented. Constructions of pliable FR codes and ABPFR codes based on resolvable transversal packings are then given.