The present study investigates fully distributed Nash equilibrium (NE) seeking in networked non-cooperative games, with particular emphasis on actuator limitations. Existing distributed NE seeking approaches often overlook practical input constraints or rely on centralized information. To address these issues, a novel edge-based double-layer adaptive control framework is proposed. Specifically, adaptive scaling parameters are embedded into the edge weights of the communication graph, enabling a fully distributed scheme that avoids dependence on centralized or global knowledge. Each participant updates its strategy using only local information and communication with its neighbors to iteratively approach the NE. By incorporating damping terms into the design of the adaptive parameters, the proposed approach suppresses unbounded parameter growth and thereby guarantees the boundedness of the adaptive gains. In addition, to account for actuator saturation, the proposed distributed NE seeking approach incorporates a saturation function, which ensures that control inputs do not exceed allowable ranges. A rigorous Lyapunov-based analysis guarantees the convergence and boundedness of all system variables. Finally, simulation results validate the efficacy and theoretical soundness of the proposed approach.
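The paper's edge-based adaptive scheme is not reproduced in the abstract; as a generic, hedged illustration of the core idea, the sketch below runs saturated pseudo-gradient play on an assumed two-player quadratic game whose NE is (x1, x2) = (0, 1). All cost functions, step sizes, and the saturation level are my own assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's algorithm): each player descends its own
# cost gradient, with the control input clamped by an actuator limit.

def sat(u, u_max):
    """Clamp the control input to the allowable range [-u_max, u_max]."""
    return max(-u_max, min(u_max, u))

# Assumed costs: J1 = x1^2 + x1*x2 - x1  ->  dJ1/dx1 = 2*x1 + x2 - 1
#                J2 = x2^2 + x1*x2 - 2*x2 -> dJ2/dx2 = 2*x2 + x1 - 2
x1, x2 = 5.0, -5.0          # arbitrary initial strategies
eta, u_max = 0.1, 0.05      # step size and actuator limit (assumed)

for _ in range(5000):
    g1 = 2 * x1 + x2 - 1
    g2 = 2 * x2 + x1 - 2
    # Each player moves against its own gradient, through the saturation.
    x1 -= sat(eta * g1, u_max)
    x2 -= sat(eta * g2, u_max)
```

Because the game's pseudo-gradient is strongly monotone, the iterates first creep at the saturation-limited rate and then contract linearly to the NE once the gradients shrink below the limit.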
Driven by practical applications, distributed observers for nonlinear systems have emerged as a crucial research direction in recent years. However, existing theoretical advances face certain limitations: they either fail to address more complex nonlinear phenomena, rely on hard-to-verify assumptions, or encounter difficulties in solving for system parameters. This paper addresses these challenges by investigating distributed observers for nonlinear systems through the full-measured canonical form (FMCF), which is inspired by full-measured system (FMS) theory. To begin with, this study addresses the fact that the FMCF can only be obtained through the observable canonical form (OCF) in existing FMS theories. The paper demonstrates that a class of nonlinear systems can obtain the FMCF directly from the state-space equations, independent of the OCF, and provides a general method for solving the FMCF for such systems. Furthermore, based on the FMCF, a distributed observer is developed for nonlinear systems under two scenarios: Lipschitz conditions and open-loop bounded conditions. The paper establishes their asymptotic omniscience and demonstrates that the designed distributed observer has fewer design parameters and is more convenient to construct than existing approaches. Finally, the effectiveness of the proposed methods is validated through simulation results on Van der Pol oscillators and microgrid systems.
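The FMCF construction itself is the paper's contribution and is not reproduced here; as background, the sketch below shows the classical single-agent building block such designs extend: a Luenberger observer x̂' = A x̂ + L(y − C x̂) for a linear system, integrated with forward Euler. The system matrices and gain are assumed values chosen so that A − LC is Hurwitz.

```python
# Minimal single-observer sketch (not the paper's distributed FMCF design).
A = [[0.0, 1.0], [-2.0, -3.0]]
C = [1.0, 0.0]
L = [3.0, 4.0]   # assumed gain; places eig(A - L*C) at -3 +/- j*sqrt(6)

def euler(v, dv, dt):
    """One forward-Euler step."""
    return [v[0] + dt * dv[0], v[1] + dt * dv[1]]

x, xh = [1.0, 0.0], [0.0, 1.0]   # true state and observer estimate
dt = 0.01
for _ in range(2000):
    y = C[0] * x[0] + C[1] * x[1]          # measured output of the plant
    yh = C[0] * xh[0] + C[1] * xh[1]       # output predicted by the observer
    dx = [A[0][0] * x[0] + A[0][1] * x[1],
          A[1][0] * x[0] + A[1][1] * x[1]]
    # Observer copies the plant and adds output-injection L*(y - yh).
    dxh = [A[0][0] * xh[0] + A[0][1] * xh[1] + L[0] * (y - yh),
           A[1][0] * xh[0] + A[1][1] * xh[1] + L[1] * (y - yh)]
    x, xh = euler(x, dx, dt), euler(xh, dxh, dt)

err = abs(x[0] - xh[0]) + abs(x[1] - xh[1])   # estimation error after 20 s
```

Since the error obeys e' = (A − LC)e with stable eigenvalues, the estimate converges to the true state regardless of the plant trajectory; the distributed setting additionally fuses neighbors' estimates over the communication graph.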
This paper investigates the distributed continuous-time aggregative optimization problem for second-order multiagent systems, where the local cost function is related not only to an agent's own decision variables but also to the aggregation of the decision variables of all the agents. By using the gradient descent method, the distributed average tracking (DAT) technique and the time-base generator (TBG) technique, a distributed continuous-time aggregative optimization algorithm is proposed. Subsequently, the optimality of the system's equilibrium point is analyzed, and the convergence of the closed-loop system is proved using Lyapunov stability theory. Finally, the effectiveness of the proposed algorithm is validated through case studies on multirobot systems and power generation systems.
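To make the aggregative structure concrete, the sketch below minimizes an assumed cost F(x) = Σᵢ (xᵢ − dᵢ)² + (σ(x) − r)² with aggregate σ(x) = mean(x). It is an illustration only: the exact average stands in for the paper's DAT estimator, and plain first-order gradient descent replaces the TBG/second-order dynamics; d, r, and the step size are my assumptions.

```python
# Gradient of the total cost with respect to x_i:
#   dF/dx_i = 2*(x_i - d_i) + (2/N)*(sigma - r)
N = 3
d = [1.0, 2.0, 3.0]   # assumed local targets
r = 1.0               # assumed desired aggregate
x = [0.0] * N
step = 0.1

for _ in range(2000):
    sigma = sum(x) / N   # exact aggregate (DAT would estimate this locally)
    grads = [2 * (xi - di) + (2.0 / N) * (sigma - r)
             for xi, di in zip(x, d)]
    x = [xi - step * g for xi, g in zip(x, grads)]

# Stationarity check: the gradient should vanish at the optimum.
sigma = sum(x) / N
max_grad = max(abs(2 * (xi - di) + (2.0 / N) * (sigma - r))
               for xi, di in zip(x, d))
```

The cost is strongly convex, so the discretized gradient flow converges to the unique optimizer, which is the sanity check the assertion below performs.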
Objectives: This study aimed to explore the lagged and cumulative effects of risk factors on disability in older adults using distributed lag non-linear models (DLNMs). Methods: We utilized data from the China Health and Retirement Longitudinal Study (CHARLS). After feature selection via elastic net regularization, we applied DLNMs to evaluate the lagged effects of risk factors. Disability was defined as the presence of any difficulty in basic activities of daily living (BADL). The cumulative relative risk (CRR) was calculated by summing the lag-specific risk estimates, representing the cumulative disability risk over the specified lag period. Effect modifications and sensitivity analyses were also performed. Results: This study included a total of 2,318 participants. Early-phase lag factors, such as difficulty in stooping (CRR=3.58; 95% CI: 2.31-5.55; P<0.001) and walking (CRR=2.77; 95% CI: 1.39-5.55; P<0.001), exerted the strongest effects immediately upon occurrence. Mid-phase lag factors, such as arthritis (CRR=1.51; 95% CI: 1.10-2.06; P=0.001), showed a resurgence in disability risk within 2-3 years. Late-phase lag factors, including depressive symptoms (CRR=2.38; 95% CI: 1.30-4.35; P<0.001) and elevated systolic blood pressure (CRR=1.64; 95% CI: 1.06-2.79; P=0.02), exhibited significant long-term cumulative risks. Conversely, grip strength (CRR=0.80; 95% CI: 0.54-0.95; P=0.02) and social participation (CRR=0.89; 95% CI: 0.73-0.99; P=0.04) were significant protective factors. Conclusions: The findings underscore the importance of tailored interventions that account for the varying lag characteristics of different factors to effectively mitigate disability risk. Future studies should explore the underlying biological and sociological mechanisms of these lagged effects, identify intervention strategies that target risk factors with different lagged patterns, and evaluate their effectiveness.
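The CRR definition used above (summing lag-specific risk estimates) has a simple arithmetic reading: lag-specific effects are combined on the log scale, so the cumulative relative risk is the exponential of the summed log-RRs. The numbers below are hypothetical and chosen for illustration, not taken from the study.

```python
import math

# Hypothetical log relative risks of one factor at each lag wave.
log_rr_by_lag = [0.45, 0.30, 0.15, 0.05]

# Cumulative relative risk over the lag window:
# summing on the log scale == multiplying the lag-specific RRs.
crr = math.exp(sum(log_rr_by_lag))
```

For these assumed values the CRR is exp(0.95) ≈ 2.59, i.e., a risk attenuating over the lag window still accumulates into a substantially elevated cumulative risk.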
With the growing complexity and decentralization of network systems, the attack surface has expanded, which has led to greater concerns over network threats. In this context, artificial intelligence (AI)-based network intrusion detection systems (NIDS) have been extensively studied, and recent efforts have shifted toward integrating distributed learning to enable intelligent and scalable detection mechanisms. However, most existing works focus on individual distributed learning frameworks, and there is a lack of systematic evaluations that compare different algorithms under consistent conditions. In this paper, we present a comprehensive evaluation of representative distributed learning frameworks, namely Federated Learning (FL), Split Learning (SL), hybrid collaborative learning (SFL), and fully distributed learning, in the context of AI-driven NIDS. Using recent benchmark intrusion detection datasets, a unified model backbone, and controlled distributed scenarios, we assess these frameworks across multiple criteria, including detection performance, communication cost, computational efficiency, and convergence behavior. Our findings highlight distinct trade-offs among the distributed learning frameworks, demonstrating that the optimal choice depends strongly on system constraints such as bandwidth availability, node resources, and data distribution. This work provides the first holistic analysis of distributed learning approaches for AI-driven NIDS and offers practical guidelines for designing secure and efficient intrusion detection systems in decentralized environments.
A multi-stage stress relaxation test was performed on a granodiorite sample to understand the deformation process prior to the macroscopic failure of brittle rocks, as well as the transient response during stress relaxation. Distributed optical fiber sensing was used to measure strains across the sample surface by helically wrapping a single-mode fiber around the cylindrical sample. Close agreement was observed between the circumferential strains obtained from the optical fibers and those from the extensometer. The reconstructed full-field strain contours show strain heterogeneity from the crack closure phase onward, and the strains in the later deformation phases are dominantly localized within the former high-strain zones. The Gini coefficient was used to quantify the degree of strain localization; it shows an initial increase during the crack closure phase, a decrease during the linear elastic phase, and a subsequent increase during the post-yielding phase. This behavior corresponds to a process of initial localization from an imperfect boundary condition, homogenization, and eventual relocalization prior to the macroscopic failure of the sample. The transient strain rate decay during the stress relaxation phase was quantified using the p-value in the “Omori-like” power law function. A higher initial stress at the onset of relaxation results in a lower p-value, indicating slower strain rate decay. As the sample approaches macroscopic failure, the lowest p-value shifts from the most damaged zone to adjacent areas, suggesting stress redistribution or crack propagation in deformed crystalline rocks under stress relaxation conditions.
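The abstract does not give the exact functional form used; assuming the common Omori-style decay rate(t) = K·t^(−p), the p-value can be recovered by ordinary least squares on log-log axes, since log rate = log K − p·log t is then a straight line of slope −p. The sketch below checks this on noiseless synthetic data with assumed K and p.

```python
import math

# Synthetic strain-rate decay following an assumed power law.
K_true, p_true = 2.0, 1.3
ts = [1.0 + i for i in range(100)]            # time since onset of relaxation
rates = [K_true * t ** (-p_true) for t in ts]  # transient strain rate

# Ordinary least squares on (log t, log rate); the slope estimates -p.
lx = [math.log(t) for t in ts]
ly = [math.log(r) for r in rates]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
p_est = -slope
```

With real fiber data the rates are noisy, so the fit would be over binned strain-rate estimates rather than exact samples, but the slope interpretation of p (lower p means slower decay) is the same.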
Nonlinear static procedures are widely adopted in structural engineering practice for seismic performance assessment because of their simplicity and computational efficiency. However, their reliability depends heavily on how the nonlinear behaviour of structural components is represented. The recent earthquakes in Albania (2019) and Türkiye (2023) have underscored the need for accurate assessment techniques, particularly for older reinforced concrete buildings with poor detailing. This study quantifies the discrepancies between default and user-defined component modelling in pushover analysis of pre-modern reinforced concrete structures, analysing two representative low- and mid-rise reinforced concrete frame buildings. The lumped plasticity approach incorporates moment-rotation relationships derived from actual member properties and reinforcement configurations, while the distributed plasticity approach uses software-generated default properties based on modern codes. Results show that the distributed plasticity models systematically overestimate both the strength and the deformation capacity, by up to 35% compared with the lumped plasticity models, especially in buildings with poor detailing and low concrete strength. These findings demonstrate that default software procedures, widely used in practice but not validated for pre-modern structures, produce dangerously unconservative seismic performance estimates. The study provides quantitative evidence of the critical need for tailored modelling strategies that reflect the actual conditions of the existing building stock.
This study examined non-uniform loading in goaf cantilever rock masses via testing, modeling, and mechanical analysis to address instantaneous fracture and section buckling induced by mining abutment pressure. It investigates the effect of the non-uniform load gradient on fracture characteristics, including load characteristics, fracture location, fracture distribution, and section roughness. A digital model for fracture interface buckling analysis was developed, elucidating the influence of non-uniform load gradients on the Fracture Interface Curvature (FIC), Buckling Rate of Change (BRC), and Buckling Domain Field (BDF). The findings reveal that nonlinear tensile stress concentration and abrupt tensile-compressive-shear strain mutations under non-uniform loading are the fundamental mechanisms driving fracture path buckling in cantilever rock mass structures. The buckling process of rock masses under non-uniform load can be divided into two stages: a low-load-gradient stage and a high-load-gradient stage. In the low-gradient stage, buckling behavior is mainly reflected in compression-shear fracture at the edge. In the high-gradient stage, a buckling band along the loading direction gradually forms in the rock mass. These buckling principles establish a theoretical basis for accurately characterizing bearing fractures, fracture interface instability, and vibration sources within overlying cantilever rock masses in goaf.
To investigate the damage evolution caused by stress-driven and sub-critical crack propagation within Beishan granite under multi-creep triaxial compressive conditions, distributed optical fiber sensing and X-ray computed tomography were combined to obtain the strain distribution over the sample surface and the internal fractures of the samples. The Gini and skewness (G-S) coefficients were used to quantify strain localization during the tests: the Gini coefficient reflects the degree of clustering of elements with high strain values, i.e., strain localization/delocalization, while the skewness coefficient quantifies the strain-localization-induced asymmetry of the data distribution. A precursor to granite failure is defined by the rapid and simultaneous increase of the G-S coefficients calculated from strain increments, giving an earlier warning of failure, by about 8% of peak stress, than coefficients computed from absolute strain values. Moreover, the process of damage accumulation due to stress-driven crack propagation in Beishan granite differs across confining pressures once the stress exceeds the crack initiation stress. Concretely, strain localization is continuous until brittle failure at higher confining pressure, while both strain localization and delocalization occur at lower confining pressure. Despite the different stress conditions, a similar statistical characteristic of strain localization during the creep stage is observed: the Gini coefficient increases, and the skewness coefficient decreases slightly, as long as the creep stress remains below 95% of peak stress. When accelerated strain localization begins, the Gini and skewness coefficients increase rapidly and simultaneously.
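The papers' exact estimators may differ in detail, but the standard definitions match the descriptions above: the Gini coefficient is the mean absolute pairwise difference normalized by twice the mean, and the skewness is the third standardized moment. A minimal sketch of both, applied to a vector of strain values:

```python
def gini(values):
    """Gini coefficient: mean absolute difference over 2 * mean.
    0 for a uniform field; approaches 1 as strain concentrates."""
    n = len(values)
    mu = sum(values) / n
    mad = sum(abs(a - b) for a in values for b in values) / (n * n)
    return mad / (2 * mu)

def skewness(values):
    """Moment-based sample skewness: m3 / m2^(3/2)."""
    n = len(values)
    mu = sum(values) / n
    m2 = sum((v - mu) ** 2 for v in values) / n
    m3 = sum((v - mu) ** 3 for v in values) / n
    return m3 / m2 ** 1.5
```

For a perfectly homogeneous strain field the Gini coefficient is 0 and the skewness of any symmetric field is 0; a field with a few high-strain elements pushes both upward, which is the joint increase used above as the failure precursor.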
In non-independent and identically distributed (non-IID) data environments, model performance often degrades significantly. To address this issue, two improvement methods are proposed: FedReg and FedReg*. FedReg is a method based on hybrid regularization aimed at enhancing federated learning in non-IID scenarios. It introduces hybrid regularization to replace traditional L2 regularization, combining the advantages of L1 and L2 regularization to enable feature selection while preventing overfitting. This method better adapts to the diverse data distributions of different clients, improving overall model performance. FedReg* combines hybrid regularization with weighted model aggregation. In addition to the benefits of hybrid regularization, FedReg* applies a weighted averaging method in the model aggregation process, calculating weights based on the cosine similarity between each client gradient and the global gradient to distribute client contributions more reasonably. By considering variations in data quality and quantity among clients, FedReg* highlights the importance of key clients and enhances the model's generalization performance. These improvements enhance model accuracy and communication efficiency.
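The two ingredients described above can be sketched directly; the function names, the elastic-net form of the hybrid penalty, and the clip-then-normalize weighting are my assumptions about one reasonable reading of the abstract, not the authors' exact formulation.

```python
import math

def hybrid_penalty(w, lam=0.01, alpha=0.5):
    """Hybrid L1/L2 (elastic-net style) regularizer:
    lam * (alpha * ||w||_1 + (1 - alpha) * ||w||_2^2)."""
    l1 = sum(abs(v) for v in w)
    l2 = sum(v * v for v in w)
    return lam * (alpha * l1 + (1 - alpha) * l2)

def cosine(u, v):
    """Cosine similarity between two gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def aggregation_weights(client_grads, global_grad):
    """Weight each client by its (non-negative) cosine similarity to the
    global gradient, normalized so the weights sum to 1."""
    sims = [max(0.0, cosine(g, global_grad)) for g in client_grads]
    total = sum(sims)
    return [s / total for s in sims]
```

Under this weighting, a client whose gradient opposes the global direction contributes nothing, while aligned clients share the aggregation in proportion to their alignment.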
The advent of quantum computing poses a significant challenge to traditional cryptographic protocols, particularly those used in Secure Multiparty Computation (MPC), a fundamental cryptographic primitive for privacy-preserving computation. Classical MPC relies on cryptographic techniques such as homomorphic encryption, secret sharing, and oblivious transfer, which may become vulnerable in the post-quantum era due to the computational power of quantum adversaries. This study presents a review of 140 peer-reviewed articles published between 2000 and 2025, drawn from databases including MDPI, IEEE Xplore, Springer, and Elsevier, examining the applications, types, and security issues of quantum computing, together with proposed solutions, across different fields. The review explores the impact of quantum computing on MPC security, assesses emerging quantum-resistant MPC protocols, and examines hybrid classical-quantum approaches aimed at mitigating quantum threats. We analyze the role of Quantum Key Distribution (QKD), post-quantum cryptography (PQC), and quantum homomorphic encryption in securing multiparty computations. Additionally, we discuss the challenges of scalability, computational efficiency, and practical deployment of quantum-secure MPC frameworks in real-world applications such as privacy-preserving AI, secure blockchain transactions, and confidential data analysis. This review provides insights into future research directions and open challenges in ensuring secure, scalable, and quantum-resistant multiparty computation.
Bipolar DC distribution systems with distributed generation (DG) are an important form of future distribution networks. However, because the DG integration mode, number, capacity, and location, as well as the load imbalance between the system's positive and negative poles, affect static and transient voltage stability differently, existing studies lack an analysis of this problem. This paper first models DG as a controlled current source and derives the influence of the DG integration mode, capacity, and load imbalance on the system's steady-state voltage unbalance. Second, based on the transient discharge behavior of photovoltaic DG and the AC grid under a single-pole fault, the relationship between the DG integration mode, location, and capacity and the system's transient voltage stability is derived. Furthermore, a DG integration planning method targeting both static/transient voltage stability and DG integration cost is proposed based on a multi-objective dung beetle optimization algorithm, and the entropy-weighted technique for order preference by similarity to ideal solution (TOPSIS) is used to select the best compromise DG integration scheme. Finally, improved IEEE 14-bus and IEEE 33-bus bipolar DC distribution systems are built on the Matlab/Simulink platform to verify the generality and effectiveness of the proposed optimization method.
Funding: supported by the National Natural Science Foundation of China (Grant No. 62173009) and the National Key Research and Development Program of China (Grant No. 2021ZD0112302).
Funding: supported by the National Natural Science Foundation of China (62133008, 62303273, 62188101, 62373226, 62473173), the Young Taishan Scholars Program of Shandong Province of China (tsqn202408206), a Project of Shandong Province Higher Educational Youth and Innovation Talent Introduction and Education Program, the Natural Science Foundation of Shandong Province, China (ZR2023QF072), and the China Postdoctoral Science Foundation (2022M721932).
Funding: supported by the National Key Research and Development Program of China (2025YFE0213100), the National Natural Science Foundation of China (62422315, 62573348), the Natural Science Basic Research Program of Shaanxi (2025JC-YBMS-667), and the “Shuang Yi Liu” Construction Foundation (25GH02010366).
Funding: supported by the Scientific Research Fund of the National Health Commission of the People's Republic of China and the Major Science and Technology Program for Medicine and Health in Zhejiang Province (WKJ-ZJ-2406).
Funding: supported by the Research Year Project of Kongju National University in 2025 and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2024-00444170, Research and International Collaboration on Trust Model-Based Intelligent Incident Response Technologies in 6G Open Network Environment).
Funding: support of her postdoctoral research at the GFZ Helmholtz Centre for Geosciences. P. Pan acknowledges the financial support of the National Natural Science Foundation of China (Grant No. 52339001). H. Hofmann and Y. Ji acknowledge the financial support of the Helmholtz Association's Initiative and Networking Fund for the Helmholtz Young Investigator Group ARES (contract number VH-NG-1516).
Funding: support provided by the National Natural Science Foundation of China (No. 52274077), the Natural Science Foundation of Henan (No. 242300421072), the Youth Elite Teachers Cultivation Program for Higher Education Institutions in Henan Province (No. 2024GGJS036), the Funds for Distinguished Young Scholars of Henan Polytechnic University (No. J2023-3), and the Young Core Teacher Funding Scheme of Henan Polytechnic University (No. 2023XQG-09).
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52339001).
Abstract: To investigate the damage evolution caused by stress-driven and sub-critical crack propagation within Beishan granite under multi-stage triaxial creep compression, distributed optical fiber sensing and X-ray computed tomography were combined to obtain the strain distribution over the sample surface and the internal fractures of the samples. The Gini and skewness (G-S) coefficients were used to quantify strain localization during the tests: the Gini coefficient reflects the degree of clustering of elements with high strain values, i.e., strain localization/delocalization, while the skewness coefficient quantifies the asymmetry of the data distribution induced by strain localization. A precursor to granite failure is defined by the rapid and simultaneous increase of the G-S coefficients calculated from strain increments, giving an earlier warning of failure, by about 8% of peak stress, than those calculated from absolute strain values. Moreover, the process of damage accumulation due to stress-driven crack propagation in Beishan granite differs at various confining pressures once the stress exceeds the crack initiation stress. Concretely, strain localization is continuous until brittle failure at higher confining pressure, while both strain localization and delocalization occur at lower confining pressure. Despite the different stress conditions, a similar statistical characteristic of strain localization during the creep stage is observed: the Gini coefficient increases and the skewness coefficient decreases slightly while the creep stress remains below 95% of peak stress, and once accelerated strain localization begins, the Gini and skewness coefficients increase rapidly and simultaneously.
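The G-S statistics described in the abstract are standard measures and can be sketched as follows; this is a minimal illustration, not the paper's code, and the synthetic "uniform" and "localized" strain fields are hypothetical data chosen only to show how clustering of high strain values drives the Gini coefficient up:

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative array (e.g. strain increments):
    0 for a perfectly uniform field, approaching 1 as high values cluster."""
    x = np.sort(np.abs(np.asarray(x, dtype=float)))
    n = x.size
    # Standard closed form: G = 2*sum(i*x_i) / (n*sum(x)) - (n+1)/n
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1.0) / n

def skewness(x):
    """Sample skewness: asymmetry of the strain distribution."""
    x = np.asarray(x, dtype=float)
    return np.mean(((x - x.mean()) / x.std()) ** 3)

# Hypothetical strain fields: a uniform one and one with a localized band
uniform = np.full(1000, 1.0)
localized = np.concatenate([np.full(990, 0.1), np.full(10, 50.0)])

print(gini(uniform))      # ~0: no localization
print(gini(localized))    # large: strain concentrated in few elements
print(skewness(localized))  # positive: heavy right tail
```

A rapid, simultaneous rise of both statistics computed on strain increments is what the study uses as the failure precursor.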
Abstract: In non-independent and identically distributed (non-IID) data environments, model performance often degrades significantly. To address this issue, two improvement methods are proposed: FedReg and FedReg*. FedReg is a method based on hybrid regularization aimed at enhancing federated learning in non-IID scenarios. It replaces traditional L2 regularization with a hybrid regularizer that combines the advantages of L1 and L2 regularization, enabling feature selection while preventing overfitting. This method better adapts to the diverse data distributions of different clients, improving overall model performance. FedReg* combines hybrid regularization with weighted model aggregation: in addition to the benefits of hybrid regularization, it applies a weighted averaging method during model aggregation, computing weights from the cosine similarity between each client's gradient and the global gradient to distribute client contributions more reasonably. By accounting for variations in data quality and quantity among clients, FedReg* highlights the importance of key clients and enhances the model's generalization performance. These improvements enhance both model accuracy and communication efficiency.
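The two ingredients of FedReg* described above can be sketched as follows. This is a hedged illustration under stated assumptions, not the paper's implementation: the `alpha`/`lam` hyperparameters, the clipping of negative cosine similarities to zero, and the toy gradients are all assumptions made here for the example:

```python
import numpy as np

def hybrid_penalty(w, lam=0.01, alpha=0.5):
    """FedReg-style hybrid (L1 + L2) regularization term added to a client's
    local loss; alpha trades off sparsity (L1) against smoothness (L2)."""
    return lam * (alpha * np.sum(np.abs(w)) + (1 - alpha) * 0.5 * np.sum(w ** 2))

def cosine_weighted_aggregate(client_grads, global_grad):
    """FedReg*-style aggregation: weight each client's update by the cosine
    similarity between its gradient and the global gradient, so clients
    pulling in the consensus direction contribute more."""
    sims = np.array([
        max(np.dot(g, global_grad) /
            (np.linalg.norm(g) * np.linalg.norm(global_grad)), 0.0)
        for g in client_grads  # negative similarities clipped (assumption)
    ])
    weights = sims / sims.sum()
    agg = sum(w * g for w, g in zip(weights, client_grads))
    return agg, weights

# Toy round: two roughly aligned clients and one divergent client
g_global = np.array([1.0, 0.0])
grads = [np.array([1.0, 0.1]), np.array([0.9, -0.1]), np.array([-1.0, 0.0])]
agg, w = cosine_weighted_aggregate(grads, g_global)
print(w)  # the divergent third client receives weight 0
```

In a full federated round, each client would minimize its local loss plus `hybrid_penalty`, and the server would replace plain FedAvg with `cosine_weighted_aggregate`.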
Abstract: The advent of quantum computing poses a significant challenge to traditional cryptographic protocols, particularly those used in Secure Multiparty Computation (MPC), a fundamental cryptographic primitive for privacy-preserving computation. Classical MPC relies on cryptographic techniques such as homomorphic encryption, secret sharing, and oblivious transfer, which may become vulnerable in the post-quantum era due to the computational power of quantum adversaries. This study presents a review of 140 peer-reviewed articles published between 2000 and 2025, drawn from databases including MDPI, IEEE Xplore, Springer, and Elsevier, examining the applications, types, and security issues of quantum computing, together with proposed solutions, across different fields. The review explores the impact of quantum computing on MPC security, assesses emerging quantum-resistant MPC protocols, and examines hybrid classical-quantum approaches aimed at mitigating quantum threats. We analyze the role of Quantum Key Distribution (QKD), post-quantum cryptography (PQC), and quantum homomorphic encryption in securing multiparty computations. Additionally, we discuss the challenges of scalability, computational efficiency, and practical deployment of quantum-secure MPC frameworks in real-world applications such as privacy-preserving AI, secure blockchain transactions, and confidential data analysis. This review provides insights into future research directions and open challenges in ensuring secure, scalable, and quantum-resistant multiparty computation.
Abstract: Bipolar DC distribution systems with distributed generation (DG) are an important form of future distribution networks. However, because the DG integration mode, number, capacity, and location, as well as the load imbalance between the positive and negative poles, affect the static and transient voltage stability of the system differently, existing research still lacks an analysis of this problem. This paper first models DG as an equivalent controlled current source and derives the influence of the DG integration mode, capacity, and load imbalance degree on the steady-state voltage imbalance of the system. Second, based on the transient discharge behaviour of photovoltaic DG and the AC grid under a single-pole fault, the relationship between the DG integration mode, location, and capacity and the transient voltage stability of the system is derived. Furthermore, a DG integration planning method is proposed based on a multi-objective dung beetle optimization algorithm, taking the static/transient voltage stability and the DG integration cost as objectives, and the entropy-weighted technique for order preference by similarity to ideal solution (TOPSIS) is applied to select the best compromise DG integration scheme. Finally, modified IEEE14 and IEEE33 bipolar DC distribution systems are built on the Matlab/Simulink platform to verify the generality and effectiveness of the proposed optimization method.
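The entropy-weighted TOPSIS step used above to pick the best compromise scheme is a standard procedure that can be sketched as follows; this is an illustrative outline only, and the 3x2 decision matrix (three candidate DG plans scored on a voltage-stability index and an access cost) is hypothetical example data, not from the paper:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: criteria (columns) whose values vary more
    across alternatives carry more information and get larger weights."""
    P = X / X.sum(axis=0)
    E = -np.sum(P * np.log(P + 1e-12), axis=0) / np.log(X.shape[0])
    d = 1.0 - E  # divergence degree of each criterion
    return d / d.sum()

def topsis(X, benefit):
    """Entropy-weighted TOPSIS: rank alternatives (rows) by relative
    closeness to the ideal solution. benefit[j] is True when larger
    values of criterion j are better."""
    R = X / np.linalg.norm(X, axis=0)        # vector normalization
    V = R * entropy_weights(X)               # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)           # closeness coefficients in [0, 1]

# Hypothetical candidates: (stability index: benefit, access cost: cost)
X = np.array([[0.9, 120.0],
              [0.7, 80.0],
              [0.5, 60.0]])
scores = topsis(X, benefit=np.array([True, False]))
print(scores.argmax())  # index of the best compromise DG plan
```

In the planning method of the paper, the alternatives would be the Pareto-front DG integration schemes from the multi-objective optimizer and the criteria the static/transient stability and cost objectives.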