Federated learning often experiences slow and unstable convergence due to edge-side data heterogeneity. This problem becomes more severe when the edge participation rate is low, as the information collected from different edge devices varies significantly. As a result, communication overhead increases, which further slows convergence. To address this challenge, we propose a simple yet effective federated learning framework that improves consistency among edge devices. The core idea is to cluster the lookahead gradients collected from edge devices on the cloud server to obtain personalized momentum for steering local updates. In parallel, a global momentum is applied during model aggregation, enabling faster convergence while preserving personalization. This strategy enables efficient propagation of the estimated global update direction to all participating edge devices and maintains alignment in local training, without introducing extra memory or communication overhead. We conduct extensive experiments on benchmark datasets such as CIFAR-100 and Tiny-ImageNet. The results confirm the effectiveness of our framework. On CIFAR-100, our method reaches 55% accuracy with 37 fewer rounds and achieves a competitive final accuracy of 65.46%. Even under extreme non-IID scenarios, it delivers significant improvements in both accuracy and communication efficiency. The implementation is publicly available at https://github.com/sjmp525/CollaborativeComputing/tree/FedCCM (accessed on 20 October 2025).
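The server-side clustering of lookahead gradients into personalized momenta can be sketched as follows. This is a minimal illustration, not the paper's FedCCM implementation: the tiny k-means routine, its deterministic initialization on the first k gradients, and the momentum coefficient `beta` are all assumptions.

```python
import numpy as np

def cluster_momentum(gradients, k, beta=0.9, prev_momenta=None, iters=20):
    """Server-side sketch: group client lookahead gradients with a tiny
    k-means (deterministic init on the first k gradients) and maintain one
    momentum vector per cluster to steer the local updates of its members."""
    g = np.asarray(gradients, dtype=float)            # (n_clients, dim)
    centers = g[:k].copy()
    for _ in range(iters):
        # assign each client gradient to its nearest center
        labels = np.argmin(((g[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = g[labels == c].mean(0)
    if prev_momenta is None:
        prev_momenta = np.zeros_like(centers)
    # per-cluster momentum, carried across rounds on the server
    momenta = beta * prev_momenta + (1 - beta) * centers
    return labels, momenta
```

Each client would then apply the momentum of its assigned cluster during local training, while a separate global momentum acts at aggregation time.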
The Haoping 40 m radio telescope at the National Time Service Center, Chinese Academy of Sciences was built in 2014 and is primarily used to observe navigation satellites and pulsars. Since the first successful very long baseline interferometry (VLBI) observation of L-band radio source fringes in 2022, ten observations have been made so far. The stations involved in the observations include the Haoping 40 m radio telescope (Haoping), the Tianma 65 m radio telescope (Tianma), the Nanshan 26 m radio telescope (Urumqi), the Guizhou 500 m radio telescope (FAST), the Jilin 13 m radio telescope (Jilin), the Effelsberg 100 m radio telescope (Effelsberg), the Onsala 25 m radio telescope (Onsala), and the Chiang Mai 40 m radio telescope (Chiang Mai). This paper presents details on the specifications of the Haoping 40 m radio telescope, as well as the design of the VLBI experiment, the observation process, and the data processing. We also discuss the analysis of the fringe results involving the Haoping 40 m radio telescope, using the Distributed FX Correlator to obtain excellent results. We confirm that the telescope is capable of participating in VLBI observations and performing specific data processing tasks. It can therefore play a greater role in future VLBI observations.
Objectives This study aimed to explore the lagged and cumulative effects of risk factors on disability in older adults using distributed lag non-linear models (DLNMs). Methods We utilized data from the China Health and Retirement Longitudinal Study (CHARLS). After feature selection via Elastic Net regularization, we applied DLNMs to evaluate the lagged effects of risk factors. Disability was defined as the presence of any difficulties in basic activities of daily living (BADL). The cumulative relative risk (CRR) was calculated by summing the lag-specific risk estimates, representing the cumulative disability risk over the specified lag period. Effect modifications and sensitivity analyses were also performed. Results This study included a total of 2,318 participants. Early-phase lag factors, such as difficulty in stooping (CRR=3.58; 95% CI: 2.31-5.55; P<0.001) and walking (CRR=2.77; 95% CI: 1.39-5.55; P<0.001), exerted the strongest effects immediately upon occurrence. Mid-phase lag factors, such as arthritis (CRR=1.51; 95% CI: 1.10-2.06; P=0.001), showed a resurgence in disability risk within 2-3 years. Late-phase lag factors, including depressive symptoms (CRR=2.38; 95% CI: 1.30-4.35; P<0.001) and elevated systolic blood pressure (CRR=1.64; 95% CI: 1.06-2.79; P=0.02), exhibited significant long-term cumulative risks. Conversely, grip strength (CRR=0.80; 95% CI: 0.54-0.95; P=0.02) and social participation (CRR=0.89; 95% CI: 0.73-0.99; P=0.04) were significant protective factors. Conclusions The findings underscore the importance of tailored interventions that account for the various lag characteristics of different factors to effectively mitigate disability risk. Future studies should explore the underlying biological and sociological mechanisms of these lagged effects, identify intervention strategies that target risk factors with different lagged patterns, and evaluate their effectiveness.
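The cumulative relative risk described in the Methods (summing lag-specific risk estimates) is conventionally computed on the log-risk scale, so the CRR equals the product of the lag-specific relative risks. A toy illustration with made-up lag coefficients (not values from CHARLS):

```python
import math

# hypothetical lag-specific log relative risks for one exposure
# at lags 0..4 years (illustrative numbers only)
log_rr_by_lag = [0.40, 0.25, 0.15, 0.10, 0.05]

# cumulative relative risk over the lag window: exp of the summed log-RRs,
# equivalently the product of the lag-specific RRs
crr = math.exp(sum(log_rr_by_lag))

rr_product = 1.0
for b in log_rr_by_lag:
    rr_product *= math.exp(b)

print(round(crr, 3))  # 2.586
```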
This paper investigates the distributed continuous-time aggregative optimization problem for second-order multiagent systems, where the local cost function is related not only to an agent's own decision variables but also to the aggregation of the decision variables of all the agents. By using the gradient descent method, the distributed average tracking (DAT) technique, and the time-base generator (TBG) technique, a distributed continuous-time aggregative optimization algorithm is proposed. Subsequently, the optimality of the system's equilibrium point is analyzed, and the convergence of the closed-loop system is proved using Lyapunov stability theory. Finally, the effectiveness of the proposed algorithm is validated through case studies on multirobot systems and power generation systems.
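A heavily simplified, discretized sketch of the aggregative-optimization idea: each agent's cost depends on its own decision and on the aggregate (here, the average of all decisions). The exact average stands in for the DAT estimator, and the TBG term and second-order dynamics are omitted; the cost form and coupling constant are assumptions.

```python
import numpy as np

def aggregative_descent(a, coupling=0.5, step=0.05, iters=4000):
    """Each agent i minimizes f_i(x_i, s) = (x_i - a_i)^2 + coupling * x_i * s,
    where s is the average of all decisions (the aggregate). Euler-discretized
    gradient flow; the true average replaces the DAT estimator for brevity."""
    x = np.zeros_like(a, dtype=float)
    n = len(a)
    for _ in range(iters):
        s = x.mean()
        # d/dx_i [(x_i - a_i)^2 + c * x_i * s] = 2(x_i - a_i) + c*s + c*x_i/n
        grad = 2 * (x - a) + coupling * s + coupling * x / n
        x = x - step * grad
    return x
```

At the returned point every agent's gradient (including the aggregate coupling term) vanishes, which is the equilibrium condition the paper analyzes for optimality.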
Nonlinear static procedures are widely adopted in structural engineering practice for seismic performance assessment due to their simplicity and computational efficiency. However, their reliability depends heavily on how the nonlinear behaviour of structural components is represented. The recent earthquakes in Albania (2019) and Türkiye (2023) have underscored the need for accurate assessment techniques, particularly for older reinforced concrete buildings with poor detailing. This study quantifies the discrepancies between default and user-defined component modelling in pushover analysis of pre-modern reinforced concrete structures, analysing two representative low- and mid-rise reinforced concrete frame buildings. The lumped plasticity approach incorporates moment-rotation relationships derived from actual member properties and reinforcement configurations, while the distributed plasticity approach uses software-generated default properties based on modern codes. Results show that the distributed plasticity models systematically overestimate both the strength and the deformation capacity by up to 35% compared to lumped plasticity models, especially in buildings with poor detailing and low concrete strength. These findings demonstrate that default software procedures, widely used in practice but not validated for pre-modern structures, produce dangerously unconservative seismic performance estimates. The study provides quantitative evidence of the critical need for tailored modelling strategies that reflect the actual conditions of the existing building stock.
With the start of the new year, Wen Congxiang, managing director of Ningbo Nuoding, a company specialising in the recycling of end-of-life vehicles, has been constantly on the move. Much of his time is spent coordinating with vehicle collection firms, electric bicycle manufacturers, and recycled materials distributors, as he works to build partnerships focused on the targeted collection and distribution of recycled products.
This study examined non-uniform loading in goaf cantilever rock masses via testing, modeling, and mechanical analysis to address instantaneous fracture and section buckling caused by mining abutment pressure. The study investigates the effect of the non-uniform load gradient on fracture characteristics, including load characteristics, fracture location, fracture distribution, and section roughness. A digital model for fracture interface buckling analysis was developed, elucidating the influence of non-uniform load gradients on Fracture Interface Curvature (FIC), Buckling Rate of Change (BRC), and Buckling Domain Field (BDF). The findings reveal that nonlinear tensile stress concentration and abrupt tensile-compressive-shear strain mutations under non-uniform loading are the fundamental mechanisms driving fracture path buckling in cantilever rock mass structures. The buckling process of rock mass under non-uniform load can be divided into two stages: a low-load-gradient stage and a high-load-gradient stage. In the low-gradient stage, buckling behavior is mainly reflected in compression-shear fracture at the edge. In the high-gradient stage, a buckling band along the loading direction gradually forms in the rock mass. These buckling principles establish a theoretical basis for accurately characterizing bearing fractures, fracture interface instability, and vibration sources within overlying cantilever rock masses in goaf.
A multi-stage stress relaxation test was performed on a granodiorite sample to understand the deformation process prior to the macroscopic failure of brittle rocks, as well as the transient response during stress relaxation. Distributed optical fiber sensing was used to measure strains across the sample surface by helically wrapping a single-mode fiber around the cylindrical sample. Close agreement was observed between the circumferential strains obtained from the optical fibers and the extensometer. The reconstructed full-field strain contours show strain heterogeneity from the crack closure phase, and the strains in the later deformation phase are dominantly localized within the former high-strain zone. The Gini coefficient was used to quantify the degree of strain localization and shows an initial increase during the crack closure phase, a decrease during the linear elastic phase, and a subsequent increase during the post-yielding phase. This behavior corresponds to a process of initial localization from an imperfect boundary condition, homogenization, and eventual relocalization prior to the macroscopic failure of the sample. The transient strain rate decay during the stress relaxation phase was quantified using the p-value in the "Omori-like" power law function. A higher initial stress at the onset of relaxation results in a lower p-value, indicating a slower strain rate decay. As the sample approaches macroscopic failure, the lowest p-value shifts from the most damaged zone to adjacent areas, suggesting stress redistribution or crack propagation in deformed crystalline rocks under stress relaxation conditions.
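The "Omori-like" decay can be characterized by fitting rate(t) ∝ t^(-p) in log-log space, where a smaller p means slower strain rate decay. A minimal sketch (the paper's exact fitting procedure is not specified, so least squares on the log-log data is an assumption):

```python
import numpy as np

def fit_p_value(t, rate):
    """Fit rate(t) = c * t**(-p) by least squares in log-log space and
    return p, the magnitude of the Omori-like decay exponent."""
    slope, intercept = np.polyfit(np.log(t), np.log(rate), 1)
    return -slope
```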
Hypersonic morphing vehicles (HMVs) can reconfigure their aerodynamic geometries in real time, adapting to diverse needs such as multi-mission profiles and wide-speed-range flight; spanwise morphing and sweep angle variation are representative large-scale wing reconfiguration modes. To meet the HMV's need for increased lift and lift-to-drag ratio during hypersonic maneuvering and cruise or reentry equilibrium glide, this paper proposes an innovative single-DOF coupled morphing-wing system. We then systematically analyze its open-loop kinematics and closed-loop connectivity constraints. The proposed system integrates three functional modules: the preset locking/release mechanism, the coupled morphing-wing mechanism, and the integrated wing locking with active stiffness control mechanism. Experimental validation confirms stable, continuous morphing under simulated aerodynamic loads. The experimental results indicate: (i) SMA actuators exhibit response times ranging from 18 s to 160 s, providing sufficient force output for wing unlocking; (ii) the integrated wing locking with active stiffness control mechanism effectively secures wing positions while eliminating airframe clearance via SMA actuation, improving the first-order natural frequency by more than 17%; (iii) the distributed aerodynamic loading system enables precise multi-stage follow-up loading during morphing, with the coupled morphing wing maintaining stable, continuous operation under 0-3500 N normal loads and 110-140 N axial force. The proposed single-DOF coupled morphing mechanism not only simplifies the structure and improves structural efficiency but also demonstrates superior performance in locking control, stiffness enhancement, and aerodynamic responsiveness. This establishes a foundational framework for the design of future intelligent morphing configurations and the implementation of flight control systems.
With the growing complexity and decentralization of network systems, the attack surface has expanded, leading to greater concerns over network threats. In this context, artificial intelligence (AI)-based network intrusion detection systems (NIDS) have been extensively studied, and recent efforts have shifted toward integrating distributed learning to enable intelligent and scalable detection mechanisms. However, most existing works focus on individual distributed learning frameworks, and there is a lack of systematic evaluations that compare different algorithms under consistent conditions. In this paper, we present a comprehensive evaluation of representative distributed learning frameworks, namely Federated Learning (FL), Split Learning (SL), hybrid collaborative learning (SFL), and fully distributed learning, in the context of AI-driven NIDS. Using recent benchmark intrusion detection datasets, a unified model backbone, and controlled distributed scenarios, we assess these frameworks across multiple criteria, including detection performance, communication cost, computational efficiency, and convergence behavior. Our findings highlight distinct trade-offs among the distributed learning frameworks, demonstrating that the optimal choice depends strongly on system constraints such as bandwidth availability, node resources, and data distribution. This work provides the first holistic analysis of distributed learning approaches for AI-driven NIDS and offers practical guidelines for designing secure and efficient intrusion detection systems in decentralized environments.
Federated Learning (FL) protects data privacy through a distributed training mechanism, yet its decentralized nature also introduces new security vulnerabilities. Backdoor attacks inject malicious triggers into the global model through compromised updates, posing significant threats to model integrity and becoming a key focus in FL security. Existing backdoor attack methods typically embed triggers directly into original images and consider only data heterogeneity, resulting in limited stealth and adaptability. To address the heterogeneity of malicious client devices, this paper proposes a novel backdoor attack method named Capability-Adaptive Shadow Backdoor Attack (CASBA). By incorporating measurements of clients' computational and communication capabilities, CASBA employs a dynamic hierarchical attack strategy that adaptively aligns attack intensity with available resources. Furthermore, an improved deep convolutional generative adversarial network (DCGAN) is integrated into the attack pipeline to embed triggers without modifying original data, significantly enhancing stealthiness. Comparative experiments with the Shadow Backdoor Attack (SBA) across multiple scenarios demonstrate that CASBA dynamically adjusts resource consumption based on device capabilities, reducing average memory usage per iteration by 5.8%. CASBA improves resource efficiency while keeping the drop in attack success rate within 3%. Additionally, the effectiveness of CASBA against three robust FL algorithms is also validated.
The present study investigates the quest for a fully distributed Nash equilibrium (NE) in networked non-cooperative games, with particular emphasis on actuator limitations. Existing distributed NE seeking approaches often overlook practical input constraints or rely on centralized information. To address these issues, a novel edge-based double-layer adaptive control framework is proposed. Specifically, adaptive scaling parameters are embedded into the edge weights of the communication graph, enabling a fully distributed scheme that avoids dependence on centralized or global knowledge. Every participant modifies its strategy by exclusively utilizing local information and communicating with its neighbors to iteratively approach the NE. By incorporating damping terms into the design of the adaptive parameters, the proposed approach effectively suppresses unbounded parameter growth and consequently guarantees the boundedness of the adaptive gains. In addition, to account for actuator saturation, the proposed distributed NE seeking approach incorporates a saturation function, which ensures that control inputs do not exceed allowable ranges. A rigorous Lyapunov-based analysis guarantees the convergence and boundedness of all system variables. Finally, simulation results validate the efficacy and theoretical soundness of the proposed approach.
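The saturation idea can be illustrated on a toy two-player quadratic game: each player descends its own cost gradient with the input clipped to an allowable range. This is only a sketch of the saturation mechanism under an assumed cost structure, not the paper's edge-based double-layer adaptive framework.

```python
import numpy as np

def seek_ne(steps=3000, eta=0.1, u_max=1.0):
    """Two players with costs J_i(x) = (x_i - 1 + 0.3 * x_j)^2.
    Each player follows its own gradient, with the control input clipped
    to [-u_max, u_max] to mimic actuator saturation."""
    x = np.zeros(2)
    for _ in range(steps):
        grad = np.array([2 * (x[0] - 1 + 0.3 * x[1]),
                         2 * (x[1] - 1 + 0.3 * x[0])])
        x = x - eta * np.clip(grad, -u_max, u_max)
    return x

# the NE solves x_i = 1 - 0.3 * x_j, i.e. x_1 = x_2 = 10/13
```

Despite the clipped inputs, the iterates still settle at the unique NE of this game; saturation only slows the transient.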
To investigate the damage evolution caused by stress-driven and sub-critical crack propagation within Beishan granite under multi-stage creep triaxial compressive conditions, distributed optical fiber sensing and X-ray computed tomography were combined to obtain the strain distribution over the sample surface and the internal fractures of the samples. The Gini and skewness (G-S) coefficients were used to quantify strain localization during the tests, where the Gini coefficient reflects the degree of clustering of elements with high strain values, i.e., strain localization/delocalization. The strain localization-induced asymmetry of the data distribution is quantified by the skewness coefficient. A precursor to granite failure is defined by the rapid and simultaneous increase of the G-S coefficients calculated from strain increments, giving an earlier warning of failure, by about 8% of peak stress, than those calculated from absolute strain values. Moreover, the process of damage accumulation due to stress-driven crack propagation in Beishan granite differs at various confining pressures once the stress exceeds the crack initiation stress. Concretely, strain localization is continuous until brittle failure at higher confining pressure, while both strain localization and delocalization occur at lower confining pressure. Despite the different stress conditions, a similar statistical characteristic of strain localization during the creep stage is observed. The Gini coefficient increases, and the skewness coefficient decreases slightly, while the creep stress is below 95% of peak stress. When accelerated strain localization begins, the Gini and skewness coefficients increase rapidly and simultaneously.
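Plausible implementations of the G-S statistics: a standard pairwise-difference Gini coefficient and a moment-based population skewness. The paper's exact estimators are not given, so both formulas are assumptions.

```python
import numpy as np

def gini(x):
    """Gini coefficient of non-negative strain values: 0 for a uniform
    field, approaching 1 as strain concentrates in a few elements."""
    x = np.asarray(x, dtype=float)
    diffs = np.abs(x[:, None] - x[None, :]).sum()
    return diffs / (2 * len(x) ** 2 * x.mean())

def skewness(x):
    """Population skewness: positive when a small high-strain tail
    pulls the distribution to the right."""
    x = np.asarray(x, dtype=float)
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3
```

A rapid, simultaneous rise in both statistics computed on strain increments would then flag the precursor described above.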
With more and more IoT terminals being deployed in various power grid business scenarios, terminal reliability has become a practical challenge that threatens the current security protection architecture. Most IoT terminals have security risks and vulnerabilities, and their limited resources make it impossible to deploy costly security protection methods on the terminal. To cope with these problems, this paper proposes a lightweight trust evaluation model, TCL, which combines three network models (TCN, CNN, and LSTM) for stronger feature extraction capability. TCL scores the reliability of a device by periodically analyzing the traffic behavior and activity logs generated by the terminal device, and trust evaluation of the terminal's continuous behavior is achieved by combining the scores of different periods. Experiments show that TCL can effectively use the traffic behaviors and activity logs of terminal devices for trust evaluation, achieving F1-scores of 95.763, 94.456, 99.923, and 99.195 on the HDFS, BGL, N-BaIoT, and KDD99 datasets, respectively. The size of TCL is only 91 KB, achieving similar or better performance than CNN-LSTM, RobustLog, and other methods with less computational resources and storage space.
Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server subnetworks in order to mitigate the exposure of sensitive data and reduce the overhead on client devices, thereby making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches. In fact, the intermediate data transmitted to the server sub-model may include patterns or information that could reveal sensitive data. Moreover, achieving a balance between model utility and data privacy has emerged as a challenging problem. In this article, we propose a novel defense approach that combines: (i) adversarial learning, and (ii) network channel pruning. In particular, the proposed adversarial learning approach is specifically designed to reduce the risk of private data exposure while maintaining high performance on the utility task. On the other hand, the suggested channel pruning enables the model to adaptively adjust and reactivate pruned channels while conducting adversarial training. The integration of these two techniques reduces the informativeness of the intermediate data transmitted by the client sub-model, thereby enhancing its robustness against attribute inference attacks without adding significant computational overhead, making it well-suited for IoT devices, mobile platforms, and Internet of Vehicles (IoV) scenarios. The proposed defense approach was evaluated using EfficientNet-B0, a widely adopted compact model, along with three benchmark datasets. The obtained results showcased its superior defense capability against attribute inference attacks compared to existing state-of-the-art methods. The findings demonstrate the effectiveness of the proposed channel pruning-based adversarial training approach in achieving the intended compromise between utility and privacy within SL frameworks. In fact, the classification accuracy attained by the attackers witnessed a drastic decrease of 70%.
With an increase in internet-connected devices and a growing dependency on online services, the threat of Distributed Denial of Service (DDoS) attacks has become a significant concern in cybersecurity. This research introduces a novel decentralized method called Federated Random Forest Bidirectional Long Short-Term Memory (FRF-BiLSTM) for detecting DDoS attacks, utilizing Bidirectional Long Short-Term Memory networks (BiLSTMs) to analyze sequences in both forward and backward directions. The proposed system follows a multi-step process, beginning with the collection of datasets from different edge devices and network nodes. Recursive feature elimination (RFE) with random forest is used to select features from the CICDDoS2019 dataset, on which a BiLSTM model is trained on local nodes. Local models are trained until convergence or stability criteria are met while simultaneously sharing updates globally for collaborative learning. A centralised server evaluates real-time traffic using the global BiLSTM model, which triggers alerts for potential DDoS attacks. Furthermore, blockchain technology is employed to secure model updates and to provide an immutable audit trail, thereby ensuring trust and accountability among network nodes. To verify its effectiveness, experiments were conducted using the CICDoS2017, NSL-KDD, and CICIDS benchmark datasets alongside other existing models. The results show that the proposed model achieves a mean accuracy of 97.1% with an average training delay of 88.7 s and a testing delay of 21.4 s. The model demonstrates scalability and the best detection performance in large-scale attack scenarios.
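The RFE step can be sketched as a loop that repeatedly refits an importance score and drops the weakest feature. Here a simple |correlation-with-label| score stands in for random-forest feature importances, so this illustrates the elimination loop rather than the paper's pipeline.

```python
import numpy as np

def rfe(X, y, n_keep):
    """Recursive feature elimination sketch: recompute a per-feature
    importance score on the surviving features, drop the weakest one,
    and repeat until n_keep features remain. Returns original indices."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        sub = X[:, keep]
        # |Pearson correlation with the label| as a stand-in importance
        scores = [abs(np.corrcoef(sub[:, j], y)[0, 1]) for j in range(sub.shape[1])]
        keep.pop(int(np.argmin(scores)))
    return keep
```

In the described system this would run per node on the traffic features before the local BiLSTM is trained.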
In non-independent and identically distributed (non-IID) data environments, model performance often degrades significantly. To address this issue, two improvement methods are proposed: FedReg and FedReg*. FedReg is a method based on hybrid regularization aimed at enhancing federated learning in non-IID scenarios. It introduces hybrid regularization to replace traditional L2 regularization, combining the advantages of L1 and L2 regularization to enable feature selection while preventing overfitting. This method better adapts to the diverse data distributions of different clients, improving overall model performance. FedReg* combines hybrid regularization with weighted model aggregation. In addition to the benefits of hybrid regularization, FedReg* applies a weighted averaging method in the model aggregation process, calculating weights based on the cosine similarity between each client gradient and the global gradient to more reasonably distribute client contributions. By considering variations in data quality and quantity among clients, FedReg* highlights the importance of key clients and enhances the model's generalization performance. These improvements enhance both model accuracy and communication efficiency.
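A numerical sketch of the two ingredients: a hybrid (elastic-net style) penalty mixing L1 and L2 terms, and cosine-similarity-based aggregation weights. The mixing parameter `alpha` and the clipping of negative similarities at zero are assumptions, not details from the paper.

```python
import numpy as np

def hybrid_penalty(w, lam=0.01, alpha=0.5):
    """Hybrid regularizer: alpha blends the L1 term (sparsity / feature
    selection) with the L2 term (shrinkage against overfitting)."""
    w = np.asarray(w, dtype=float)
    return lam * (alpha * np.abs(w).sum() + (1 - alpha) * (w ** 2).sum())

def weighted_aggregate(client_grads, global_grad):
    """FedReg*-style aggregation sketch: weight each client update by its
    cosine similarity to the global gradient (negatives clipped to 0),
    then normalize the weights to sum to 1."""
    G = np.asarray(client_grads, dtype=float)
    g = np.asarray(global_grad, dtype=float)
    cos = G @ g / (np.linalg.norm(G, axis=1) * np.linalg.norm(g))
    w = np.clip(cos, 0, None)
    w = w / w.sum()
    return w @ G, w
```

Clients whose updates point away from the global direction thus contribute little, which is one way to realize the "more reasonable distribution of client contributions" described above.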
The exponential growth of the Internet of Things (IoT) has introduced significant security challenges, with zero-day attacks emerging as one of the most critical threats. Traditional Machine Learning (ML) and Deep Learning (DL) techniques have demonstrated promising early detection capabilities. However, their effectiveness is limited when handling the vast volumes of IoT-generated data due to scalability constraints, high computational costs, and the time-intensive process of data labeling. To address these challenges, this study proposes a Federated Learning (FL) framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks. By employing Deep Neural Networks (DNNs) and decentralized model training, the approach reduces computational complexity while improving detection accuracy. The proposed model demonstrates robust performance, achieving accuracies of 94.34%, 99.95%, and 87.94% on the publicly available Kitsune, Bot-IoT, and UNSW-NB15 datasets, respectively. Furthermore, its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets, TON-IoT and IoT-23, using a Deep Federated Learning (DFL) framework, underscoring the generalization and effectiveness of the model in heterogeneous and decentralized IoT environments. Experimental results demonstrate superior performance over existing methods, establishing the proposed framework as an efficient and scalable solution for IoT security.
With the increasing popularity of blockchain applications, the security of data sources on the blockchain is gradually receiving attention. Providing reliable data for the blockchain safely and efficiently has become a research hotspot, and the security of the oracle responsible for providing reliable data has attracted much attention. The most widely used centralized oracles in blockchain, such as Provable and Town Crier, all rely on a single oracle to obtain data, which suffers from a single point of failure and limits the large-scale development of blockchain. To this end, distributed oracle schemes have been put forward, but existing schemes such as Chainlink and Augur generally have low execution efficiency and high communication overhead, which limits their applicability. To solve these problems, this paper proposes a trusted distributed oracle scheme based on a share-recovery threshold signature. First, a data verification method for distributed oracles is designed based on threshold signatures. By aggregating the signatures of oracles, data from different data sources can be mutually verified, leading to a more efficient data verification and aggregation process. Then, a credibility-based cluster head election algorithm is designed, which reduces the communication overhead by clarifying the function distribution and building a hierarchical structure. Considering the good performance of the BLS threshold signature in large-scale applications, this paper combines it with distributed oracle technology and proposes a BLS threshold signature algorithm that supports share recovery in distributed oracles. The share recovery mechanism enables the proposed scheme to solve the key loss issue, and the setting of the threshold value enables the proposed scheme to complete signature aggregation with only a threshold number of oracles, making the scheme more robust. Finally, experimental results indicate that, by using the threshold signature technology and the cluster head election algorithm, our scheme effectively improves the execution efficiency of oracles and solves the problem of a single point of failure, leading to higher scalability and robustness.
This paper designs distributed Nash equilibrium seeking strategies for heterogeneous dynamic cyber-physical systems.In particular, we are concerned with parametric uncertainties in the control channel of the players. ...This paper designs distributed Nash equilibrium seeking strategies for heterogeneous dynamic cyber-physical systems.In particular, we are concerned with parametric uncertainties in the control channel of the players. Moreover, the weights on communication links can be compromised by time-varying uncertainties, which can result from possibly malicious attacks,faults and disturbances. To deal with the unavailability of measurement of optimization errors, an output observer is constructed,based on which adaptive laws are designed to compensate for physical uncertainties. With adaptive laws, a new distributed Nash equilibrium seeking strategy is designed by further integrating consensus protocols and gradient search algorithms.Moreover, to further accommodate compromised communication weights resulting from cyber-uncertainties, the coupling strengths of the consensus module are designed to be adaptive. As a byproduct, the coupling strengths are independent of any global information. With theoretical investigations, it is proven that the proposed strategies are resilient to these uncertainties and players' actions are convergent to the Nash equilibrium. Simulation examples are given to numerically validate the effectiveness of the proposed strategies.展开更多
Funding: supported by the National Natural Science Foundation of China (62462040), the Yunnan Fundamental Research Projects (202501AT070345), and the Major Science and Technology Projects in Yunnan Province (202202AD080013).
Abstract: Federated learning often experiences slow and unstable convergence due to edge-side data heterogeneity. This problem becomes more severe when the edge participation rate is low, as the information collected from different edge devices varies significantly. As a result, communication overhead increases, which further slows convergence. To address this challenge, we propose a simple yet effective federated learning framework that improves consistency among edge devices. The core idea is to cluster the lookahead gradients collected from edge devices on the cloud server to obtain personalized momentum for steering local updates. In parallel, a global momentum is applied during model aggregation, enabling faster convergence while preserving personalization. This strategy efficiently propagates the estimated global update direction to all participating edge devices and maintains alignment in local training, without introducing extra memory or communication overhead. We conduct extensive experiments on benchmark datasets such as CIFAR-100 and Tiny-ImageNet, and the results confirm the effectiveness of our framework. On CIFAR-100, our method reaches 55% accuracy with 37 fewer rounds and achieves a competitive final accuracy of 65.46%. Even under extreme non-IID scenarios, it delivers significant improvements in both accuracy and communication efficiency. The implementation is publicly available at https://github.com/sjmp525/CollaborativeComputing/tree/FedCCM (accessed on 20 October 2025).
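The server-side idea in this abstract (average the client deltas, then apply a global momentum during aggregation) can be sketched in a few lines of plain Python. This is a minimal illustration of the momentum-aggregation step only, not the authors' implementation: the plain averaging, the coefficient `beta`, and the additive update are assumptions, and the gradient-clustering stage that produces personalized momentum is omitted.

```python
def server_aggregate(weights, client_deltas, momentum, beta=0.9):
    """One aggregation round: FedAvg-style averaging of client deltas,
    followed by a global momentum update (hypothetical sketch)."""
    n = len(client_deltas)
    avg = [sum(d[i] for d in client_deltas) / n for i in range(len(weights))]
    momentum = [beta * m + a for m, a in zip(momentum, avg)]
    weights = [w + m for w, m in zip(weights, momentum)]
    return weights, momentum

# Two rounds with identical client deltas: momentum accumulates,
# so the second global step is larger than the first.
w, m = [0.0], [0.0]
w, m = server_aggregate(w, [[1.0], [1.0]], m)   # w = [1.0]
w, m = server_aggregate(w, [[1.0], [1.0]], m)   # w = [2.9]
```

The momentum term is what carries the estimated global update direction across rounds even when only a subset of edge devices participates.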
Funding: supported by the National Science and Technology Major Project (E152KJ1201), the Natural Science Basic Research Program of Shaanxi (2024JC-YBQN-0036), the National Natural Science Foundation of China (42030105 and 11973046), and the National SKA Program of China (2020SKA0120200).
Abstract: The Haoping 40 m radio telescope at the National Time Service Center, Chinese Academy of Sciences, was built in 2014 and is primarily used to observe navigation satellites and pulsars. Since the first successful very long baseline interferometry (VLBI) observation of L-band radio source fringes in 2022, ten observations have been made. The stations involved include the Haoping 40 m radio telescope (Haoping), the Tianma 65 m radio telescope (Tianma), the Nanshan 26 m radio telescope (Urumqi), the Guizhou 500 m radio telescope (FAST), the Jilin 13 m radio telescope (Jilin), the Effelsberg 100 m radio telescope (Effelsberg), the Onsala 25 m radio telescope (Onsala), and the Chiang Mai 40 m radio telescope (Chiang Mai). This paper presents the specifications of the Haoping 40 m radio telescope, as well as the design of the VLBI experiment, the observation process, and the data processing. We also analyze the fringe results involving the Haoping 40 m radio telescope, using the Distributed FX Correlator to obtain excellent results. We confirm that the telescope is capable of participating in VLBI observations and performing specific data processing tasks, and it can therefore play a greater role in future VLBI observations.
Funding: supported by the Scientific Research Fund of the National Health Commission of the People's Republic of China - Major Science and Technology Program for Medicine and Health in Zhejiang Province (WKJ-ZJ-2406).
Abstract: Objectives: This study aimed to explore the lagged and cumulative effects of risk factors on disability in older adults using distributed lag non-linear models (DLNMs). Methods: We utilized data from the China Health and Retirement Longitudinal Study (CHARLS). After feature selection via Elastic Net regularization, we applied DLNMs to evaluate the lagged effects of risk factors. Disability was defined as the presence of any difficulty in basic activities of daily living (BADL). The cumulative relative risk (CRR) was calculated by summing the lag-specific risk estimates, representing the cumulative disability risk over the specified lag period. Effect modifications and sensitivity analyses were also performed. Results: This study included a total of 2,318 participants. Early-phase lag factors, such as difficulty in stooping (CRR=3.58; 95% CI: 2.31-5.55; P<0.001) and walking (CRR=2.77; 95% CI: 1.39-5.55; P<0.001), exerted the strongest effects immediately upon occurrence. Mid-phase lag factors, such as arthritis (CRR=1.51; 95% CI: 1.10-2.06; P=0.001), showed a resurgence in disability risk within 2-3 years. Late-phase lag factors, including depressive symptoms (CRR=2.38; 95% CI: 1.30-4.35; P<0.001) and elevated systolic blood pressure (CRR=1.64; 95% CI: 1.06-2.79; P=0.02), exhibited significant long-term cumulative risks. Conversely, grip strength (CRR=0.80; 95% CI: 0.54-0.95; P=0.02) and social participation (CRR=0.89; 95% CI: 0.73-0.99; P=0.04) were significant protective factors. Conclusions: The findings underscore the importance of tailored interventions that account for the lag characteristics of different factors to effectively mitigate disability risk. Future studies should explore the underlying biological and sociological mechanisms of these lagged effects, identify intervention strategies that target risk factors with different lag patterns, and evaluate their effectiveness.
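As the abstract notes, the cumulative relative risk is obtained by summing lag-specific risk estimates. A common DLNM convention (assumed here, since the paper's exact formula is not given) is to sum the lag-specific log relative risks and exponentiate:

```python
import math

def cumulative_rr(lag_rrs):
    """Cumulative relative risk over a lag window: sum the lag-specific
    log-RRs, then exponentiate (standard DLNM convention; assumed here)."""
    return math.exp(sum(math.log(rr) for rr in lag_rrs))

# A factor with RR = 1.2 at each of three lags accumulates
# multiplicatively to 1.2**3 = 1.728 over the window.
crr = cumulative_rr([1.2, 1.2, 1.2])
```

Summing on the log scale is what makes the lag-specific effects combine multiplicatively on the risk scale, which is why a CRR above 1 signals accumulated risk and below 1 a protective effect.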
Funding: supported by the National Key Research and Development Program of China (2025YFE0213100), the National Natural Science Foundation of China (62422315, 62573348), the Natural Science Basic Research Program of Shaanxi (2025JC-YBMS-667), and the "Shuang Yi Liu" Construction Foundation (25GH02010366).
Abstract: This paper investigates the distributed continuous-time aggregative optimization problem for second-order multi-agent systems, where each local cost function depends not only on the agent's own decision variables but also on the aggregate of the decision variables of all agents. By combining the gradient descent method, the distributed average tracking (DAT) technique, and the time-base generator (TBG) technique, a distributed continuous-time aggregative optimization algorithm is proposed. The optimality of the system's equilibrium point is then analyzed, and the convergence of the closed-loop system is proved using Lyapunov stability theory. Finally, the effectiveness of the proposed algorithm is validated through case studies on multi-robot systems and power generation systems.
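A discrete-time toy version conveys the structure of aggregative optimization: each agent's cost couples its own decision to the average of all decisions, and gradient descent drives the network to the aggregative optimum. The quadratic costs f_i(x_i, sigma) = (x_i - d_i)^2 + sigma^2 with sigma the mean decision are hypothetical, and sigma is computed exactly here rather than estimated via distributed average tracking as in the paper.

```python
def aggregative_gd(d, step=0.1, iters=2000):
    """Each agent i descends the gradient of f_i(x_i, sigma) =
    (x_i - d_i)^2 + sigma^2, where sigma = mean(x); by the chain rule
    grad_i = 2*(x_i - d_i) + 2*sigma/N."""
    n = len(d)
    x = [0.0] * n
    for _ in range(iters):
        sigma = sum(x) / n
        x = [xi - step * (2 * (xi - di) + 2 * sigma / n)
             for xi, di in zip(x, d)]
    return x

# For d = [1, 2, 3] the equilibrium condition x_i = d_i - sigma/N with
# sigma = sum(d)/(N+1) = 1.5 gives x = [0.5, 1.5, 2.5].
x = aggregative_gd([1.0, 2.0, 3.0])
```

The aggregate term 2*sigma/N is the coupling: without it each agent would simply converge to its own d_i.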
Abstract: Nonlinear static procedures are widely adopted in structural engineering practice for seismic performance assessment due to their simplicity and computational efficiency. However, their reliability depends heavily on how the nonlinear behaviour of structural components is represented. The recent earthquakes in Albania (2019) and Türkiye (2023) have underscored the need for accurate assessment techniques, particularly for older reinforced concrete buildings with poor detailing. This study quantifies the discrepancies between default and user-defined component modelling in pushover analysis of pre-modern reinforced concrete structures, analysing two representative low- and mid-rise reinforced concrete frame buildings. The lumped plasticity approach incorporates moment-rotation relationships derived from actual member properties and reinforcement configurations, while the distributed plasticity approach uses software-generated default properties based on modern codes. Results show that the distributed plasticity models systematically overestimate both strength and deformation capacity by up to 35% compared with the lumped plasticity models, especially in buildings with poor detailing and low concrete strength. These findings demonstrate that default software procedures, widely used in practice but not validated for pre-modern structures, produce dangerously unconservative seismic performance estimates. The study provides quantitative evidence of the critical need for tailored modelling strategies that reflect the actual conditions of the existing building stock.
Abstract: With the start of the new year, Wen Congxiang, managing director of Ningbo Nuoding, a company specialising in the recycling of end-of-life vehicles, has been constantly on the move. Much of his time is spent coordinating with vehicle collection firms, electric bicycle manufacturers, and recycled materials distributors, as he works to build partnerships focused on the targeted collection and distribution of recycled products.
Funding: support provided by the National Natural Science Foundation of China (No. 52274077), the Natural Science Foundation of Henan (No. 242300421072), the Youth Elite Teachers Cultivation Program for Higher Education Institutions in Henan Province (No. 2024GGJS036), the Funds for Distinguished Young Scholars of Henan Polytechnic University (No. J2023-3), and the Young Core Teacher Funding Scheme of Henan Polytechnic University (No. 2023XQG-09).
Abstract: This study examined non-uniform loading in goaf cantilever rock masses via testing, modeling, and mechanical analysis, addressing the instantaneous fracture and section buckling caused by mining abutment pressure. The study investigates the effect of the non-uniform load gradient on fracture characteristics, including load characteristics, fracture location, fracture distribution, and section roughness. A digital model for fracture interface buckling analysis was developed, elucidating the influence of non-uniform load gradients on Fracture Interface Curvature (FIC), Buckling Rate of Change (BRC), and Buckling Domain Field (BDF). The findings reveal that nonlinear tensile stress concentration and abrupt tensile-compressive-shear strain mutations under non-uniform loading are the fundamental mechanisms driving fracture path buckling in cantilever rock mass structures. The buckling process of rock mass under non-uniform load can be divided into two stages: low load gradient and high load gradient. In the low-gradient stage, buckling behavior is mainly reflected in compression-shear fracture of the edge. In the high-gradient stage, a buckling band along the loading direction gradually forms in the rock mass. These buckling principles establish a theoretical basis for accurately characterizing bearing fractures, fracture interface instability, and vibration sources within overlying cantilever rock masses in goaf.
Funding: support of her postdoctoral research at the GFZ Helmholtz Centre for Geosciences. P. Pan acknowledges the financial support of the National Natural Science Foundation of China (Grant No. 52339001). H. Hofmann and Y. Ji acknowledge the financial support of the Helmholtz Association's Initiative and Networking Fund for the Helmholtz Young Investigator Group ARES (contract number VH-NG-1516).
Abstract: A multi-stage stress relaxation test was performed on a granodiorite sample to understand the deformation process prior to the macroscopic failure of brittle rocks, as well as the transient response during stress relaxation. Distributed optical fiber sensing was used to measure strains across the sample surface by helically wrapping a single-mode fiber around the cylindrical sample. Close agreement was observed between the circumferential strains obtained from the optical fibers and the extensometer. The reconstructed full-field strain contours show strain heterogeneity from the crack closure phase onward, and the strains in the later deformation phases are dominantly localized within the former high-strain zone. The Gini coefficient was used to quantify the degree of strain localization; it shows an initial increase during the crack closure phase, a decrease during the linear elastic phase, and a subsequent increase during the post-yielding phase. This behavior corresponds to a process of initial localization from an imperfect boundary condition, homogenization, and eventual relocalization prior to the macroscopic failure of the sample. The transient strain rate decay during the stress relaxation phase was quantified using the p-value in the "Omori-like" power law function. A higher initial stress at the onset of relaxation results in a lower p-value, indicating slower strain rate decay. As the sample approaches macroscopic failure, the lowest p-value shifts from the most damaged zone to adjacent areas, suggesting stress redistribution or crack propagation in deformed crystalline rocks under stress relaxation conditions.
Funding: supported by the National Natural Science Foundation of China (Grant No. 52405257) and the China Postdoctoral Science Foundation (Grant No. 2024M764201).
Abstract: Hypersonic morphing vehicles (HMVs) can reconfigure their aerodynamic geometry in real time, adapting to diverse needs such as multi-mission profiles and wide-speed-range flight; spanwise morphing and sweep angle variation are representative large-scale wing reconfiguration modes. To meet the HMV's need for increased lift and lift-to-drag ratio during hypersonic maneuvering and cruise or reentry equilibrium glide, this paper proposes an innovative single-DOF coupled morphing-wing system. We systematically analyze its open-loop kinematics and closed-loop connectivity constraints. The proposed system integrates three functional modules: the preset locking/release mechanism, the coupled morphing-wing mechanism, and the integrated wing locking with active stiffness control mechanism. Experimental validation confirms stable, continuous morphing under simulated aerodynamic loads. The experimental results indicate that: (i) the SMA actuators exhibit response times ranging from 18 s to 160 s, providing sufficient force output for wing unlocking; (ii) the integrated wing locking with active stiffness control mechanism effectively secures wing positions while eliminating airframe clearance via SMA actuation, improving the first-order natural frequency by more than 17%; (iii) the distributed aerodynamic loading system enables precise multi-stage follow-up loading during morphing, with the coupled morphing wing maintaining stable, continuous operation under 0-3500 N normal loads and 110-140 N axial force. The proposed single-DOF coupled morphing mechanism not only simplifies the structure and improves structural efficiency but also demonstrates superior performance in locking control, stiffness enhancement, and aerodynamic responsiveness. This establishes a foundational framework for the design of future intelligent morphing configurations and the implementation of flight control systems.
Funding: supported by the Research Year Project of Kongju National University in 2025 and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2024-00444170, Research and International Collaboration on Trust Model-Based Intelligent Incident Response Technologies in 6G Open Network Environment).
Abstract: With the growing complexity and decentralization of network systems, the attack surface has expanded, leading to greater concerns over network threats. In this context, artificial intelligence (AI)-based network intrusion detection systems (NIDS) have been extensively studied, and recent efforts have shifted toward integrating distributed learning to enable intelligent and scalable detection mechanisms. However, most existing works focus on individual distributed learning frameworks, and there is a lack of systematic evaluations comparing different algorithms under consistent conditions. In this paper, we present a comprehensive evaluation of representative distributed learning frameworks, namely Federated Learning (FL), Split Learning (SL), hybrid collaborative learning (SFL), and fully distributed learning, in the context of AI-driven NIDS. Using recent benchmark intrusion detection datasets, a unified model backbone, and controlled distributed scenarios, we assess these frameworks across multiple criteria, including detection performance, communication cost, computational efficiency, and convergence behavior. Our findings highlight distinct trade-offs among the distributed learning frameworks, demonstrating that the optimal choice depends strongly on system constraints such as bandwidth availability, node resources, and data distribution. This work provides the first holistic analysis of distributed learning approaches for AI-driven NIDS and offers practical guidelines for designing secure and efficient intrusion detection systems in decentralized environments.
Funding: supported by the National Natural Science Foundation of China (Grant No. 62172123) and the Key Research and Development Program of Heilongjiang Province, China (Grant No. 2022ZX01A36).
Abstract: Federated Learning (FL) protects data privacy through a distributed training mechanism, yet its decentralized nature also introduces new security vulnerabilities. Backdoor attacks inject malicious triggers into the global model through compromised updates, posing significant threats to model integrity and becoming a key focus in FL security. Existing backdoor attack methods typically embed triggers directly into original images and consider only data heterogeneity, resulting in limited stealth and adaptability. To address the heterogeneity of malicious client devices, this paper proposes a novel backdoor attack method named Capability-Adaptive Shadow Backdoor Attack (CASBA). By incorporating measurements of clients' computational and communication capabilities, CASBA employs a dynamic hierarchical attack strategy that adaptively aligns attack intensity with available resources. Furthermore, an improved deep convolutional generative adversarial network (DCGAN) is integrated into the attack pipeline to embed triggers without modifying the original data, significantly enhancing stealthiness. Comparative experiments with the Shadow Backdoor Attack (SBA) across multiple scenarios demonstrate that CASBA dynamically adjusts resource consumption based on device capabilities, reducing average memory usage per iteration by 5.8%. CASBA improves resource efficiency while keeping the drop in attack success rate within 3%. Additionally, the effectiveness of CASBA against three robust FL algorithms is validated.
Funding: supported by the National Natural Science Foundation of China (Grant No. 62173009) and the National Key Research and Development Program of China (Grant No. 2021ZD0112302).
Abstract: The present study investigates fully distributed Nash equilibrium (NE) seeking in networked non-cooperative games, with particular emphasis on actuator limitations. Existing distributed NE seeking approaches often overlook practical input constraints or rely on centralized information. To address these issues, a novel edge-based double-layer adaptive control framework is proposed. Specifically, adaptive scaling parameters are embedded into the edge weights of the communication graph, enabling a fully distributed scheme that avoids dependence on centralized or global knowledge. Every participant modifies its strategy using only local information and communication with its neighbors to iteratively approach the NE. By incorporating damping terms into the design of the adaptive parameters, the proposed approach effectively suppresses unbounded parameter growth and consequently guarantees the boundedness of the adaptive gains. In addition, to account for actuator saturation, the proposed distributed NE seeking approach incorporates a saturation function, which ensures that control inputs do not exceed allowable ranges. A rigorous Lyapunov-based analysis guarantees the convergence and boundedness of all system variables. Finally, simulation results validate the efficacy and theoretical soundness of the proposed approach.
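The core loop described (each player descends its own cost gradient using only local information, with a saturation function clamping the control input) can be illustrated with a two-player quadratic game. The costs J_i = (x_i - a_i)^2 + b*x_i*x_j and all parameter values below are hypothetical; the paper's adaptive edge weights, damping terms, and observer are omitted.

```python
def sat(u, u_max):
    """Saturation function: clamp the control input to [-u_max, u_max]."""
    return max(-u_max, min(u_max, u))

def gradient_play(a, b=0.5, step=0.1, u_max=0.2, iters=500):
    """Two players with costs J_i = (x_i - a_i)^2 + b*x_i*x_j; each
    applies a saturated gradient step using only its own partial
    gradient and its neighbor's current action."""
    x = [0.0, 0.0]
    for _ in range(iters):
        g = [2 * (x[0] - a[0]) + b * x[1],
             2 * (x[1] - a[1]) + b * x[0]]
        x = [xi - sat(step * gi, u_max) for xi, gi in zip(x, g)]
    return x

# Analytic NE for a = [1, 2], b = 0.5: x* = [8/15, 28/15].
x = gradient_play([1.0, 2.0])
```

Early iterations hit the saturation bound, so progress is capped at u_max per step; once inside the linear region the iteration contracts to the unique NE, mirroring the bounded-input convergence the paper proves in continuous time.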
Funding: supported by the National Natural Science Foundation of China (Grant No. 52339001).
Abstract: To investigate the damage evolution caused by stress-driven, sub-critical crack propagation within Beishan granite under multi-stage creep triaxial compressive conditions, distributed optical fiber sensing and X-ray computed tomography were combined to obtain the strain distribution over the sample surface and the internal fractures of the samples. The Gini and skewness (G-S) coefficients were used to quantify strain localization during the tests: the Gini coefficient reflects the degree of clustering of elements with high strain values, i.e., strain localization/delocalization, while the asymmetry of the data distribution induced by strain localization is quantified by the skewness coefficient. A precursor to granite failure is defined by the rapid and simultaneous increase of the G-S coefficients calculated from strain increments, which gives an earlier warning of failure, by about 8% of peak stress, than the coefficients calculated from absolute strain values. Moreover, the process of damage accumulation due to stress-driven crack propagation in Beishan granite differs across confining pressures once the stress exceeds the crack initiation stress. Concretely, strain localization is continuous until brittle failure at higher confining pressure, while both strain localization and delocalization occur at lower confining pressure. Despite the different stress conditions, a similar statistical characteristic of strain localization during the creep stage is observed: the Gini coefficient increases and the skewness coefficient decreases slightly while the creep stress remains below 95% of peak stress. When accelerated strain localization begins, the Gini and skewness coefficients increase rapidly and simultaneously.
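Both statistics are standard and easy to compute from a flattened strain field. A minimal pure-Python sketch follows; the exact estimators used by the authors are not specified in the abstract, so the common sample formulas are assumed here.

```python
def gini(values):
    """Gini coefficient of non-negative strain values: 0 for a uniform
    field, approaching 1 as strain clusters in a few elements."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, 1)) / (n * total)

def skewness(values):
    """Sample skewness m3 / m2^1.5 (biased moment estimator):
    positive when a long right tail of high-strain elements appears."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((x - mean) ** 2 for x in values) / n
    m3 = sum((x - mean) ** 3 for x in values) / n
    return m3 / m2 ** 1.5

uniform = [1.0, 1.0, 1.0, 1.0]     # no localization: gini -> 0.0
localized = [0.0, 0.0, 0.0, 4.0]   # all strain in one element: gini -> 0.75
```

Applied to strain increments rather than absolute strains, a joint jump in these two numbers is exactly the failure precursor the abstract describes.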
Funding: supported by the National Key R&D Program of China (No. 2022YFB3105101).
Abstract: With more and more IoT terminals being deployed in various power grid business scenarios, terminal reliability has become a practical challenge that threatens the current security protection architecture. Most IoT terminals have security risks and vulnerabilities, and their limited resources make it impossible to deploy costly security protection methods on the terminal. To cope with these problems, this paper proposes a lightweight trust evaluation model, TCL, which combines three network models (TCN, CNN, and LSTM) for stronger feature extraction capability. TCL scores the reliability of a device by periodically analyzing the traffic behavior and activity logs generated by the terminal, and trust evaluation of the terminal's continuous behavior is achieved by combining the scores of different periods. Experiments show that TCL can effectively use the traffic behaviors and activity logs of terminal devices for trust evaluation, achieving F1-scores of 95.763, 94.456, 99.923, and 99.195 on the HDFS, BGL, N-BaIoT, and KDD99 datasets, respectively. Moreover, TCL is only 91 KB in size and achieves similar or better performance than CNN-LSTM, RobustLog, and other methods with less computational resources and storage space.
Funding: supported by a grant (No. CRPG-25-2054) under the Cybersecurity Research and Innovation Pioneers Initiative, provided by the National Cybersecurity Authority (NCA) in the Kingdom of Saudi Arabia.
Abstract: Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server sub-networks in order to mitigate the exposure of sensitive data and reduce the overhead on client devices, making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches. In fact, the intermediate data transmitted to the server sub-model may include patterns or information that could reveal sensitive data. Moreover, achieving a balance between model utility and data privacy has emerged as a challenging problem. In this article, we propose a novel defense approach that combines: (i) adversarial learning, and (ii) network channel pruning. In particular, the proposed adversarial learning approach is specifically designed to reduce the risk of private data exposure while maintaining high performance on the utility task. The suggested channel pruning, in turn, enables the model to adaptively adjust and reactivate pruned channels while conducting adversarial training. The integration of these two techniques reduces the informativeness of the intermediate data transmitted by the client sub-model, thereby enhancing its robustness against attribute inference attacks without adding significant computational overhead, making it well suited for IoT devices, mobile platforms, and Internet of Vehicles (IoV) scenarios. The proposed defense approach was evaluated using EfficientNet-B0, a widely adopted compact model, along with three benchmark datasets. The results showcased its superior defense capability against attribute inference attacks compared to existing state-of-the-art methods. These findings demonstrate the effectiveness of the proposed channel pruning-based adversarial training approach in achieving the intended compromise between utility and privacy within SL frameworks: the classification accuracy attained by the attackers decreased drastically, by 70%.
Funding: supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2025S1A5A2A01005171), and by the BK21 program at Chungbuk National University (2025).
Abstract: With the increase in internet-connected devices and the dependency on online services, the threat of Distributed Denial of Service (DDoS) attacks has become a significant concern in cybersecurity. This research introduces a novel decentralized method, Federated Random Forest Bidirectional Long Short-Term Memory (FRF-BiLSTM), for detecting DDoS attacks, utilizing Bidirectional Long Short-Term Memory networks (BiLSTMs) to analyze sequences in both forward and backward directions. The proposed system follows a multi-step process, beginning with the collection of datasets from different edge devices and network nodes. Recursive feature elimination (RFE) with random forest is used to select features from the CICDDoS2019 dataset, on which a BiLSTM model is trained on local nodes. Local models are trained until convergence or stability criteria are met while simultaneously sharing their updates globally for collaborative learning. A centralised server evaluates real-time traffic using the global BiLSTM model, which triggers alerts for potential DDoS attacks. Furthermore, blockchain technology is employed to secure model updates and provide an immutable audit trail, ensuring trust and accountability among network nodes. To verify its effectiveness, experiments were conducted using the CICDoS2017, NSL-KDD, and CICIDS benchmark datasets alongside existing models. The proposed model achieves a mean accuracy of 97.1% with an average training delay of 88.7 s and a testing delay of 21.4 s, and it demonstrates scalability and the best detection performance in large-scale attack scenarios.
Abstract: In non-independent and identically distributed (non-IID) data environments, model performance often degrades significantly. To address this issue, two improved methods are proposed: FedReg and FedReg^(*). FedReg is a method based on hybrid regularization aimed at enhancing federated learning in non-IID scenarios. It replaces traditional L2 regularization with a hybrid regularizer that combines the advantages of L1 and L2 regularization, enabling feature selection while preventing overfitting. This better adapts the model to the diverse data distributions of different clients, improving overall performance. FedReg^(*) combines hybrid regularization with weighted model aggregation. In addition to the benefits of hybrid regularization, FedReg^(*) applies a weighted averaging method in the model aggregation process, calculating weights from the cosine similarity between each client gradient and the global gradient to distribute client contributions more reasonably. By considering variations in data quality and quantity among clients, FedReg^(*) highlights the importance of key clients and enhances the model's generalization performance. These methods improve both model accuracy and communication efficiency.
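The two ingredients described above (an L1+L2 hybrid penalty added to the local objective, and cosine-similarity weights for aggregation) can be sketched as follows. This is an illustrative reading of the abstract, not the paper's code: the penalty coefficients, the clipping of negative similarities, and the weight normalization are assumptions.

```python
import math

def hybrid_reg_grad(w, l1=0.01, l2=0.01):
    """Gradient of the hybrid (elastic-net-style) penalty
    l1*|w| + (l2/2)*w^2, applied element-wise to the weights."""
    sign = lambda v: (v > 0) - (v < 0)
    return [l1 * sign(wi) + l2 * wi for wi in w]

def cos_sim(a, b):
    """Cosine similarity between two gradient vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def weighted_aggregate(client_grads, global_grad):
    """Weight each client gradient by its (non-negative) cosine
    similarity to the global gradient, then take the weighted average."""
    ws = [max(cos_sim(g, global_grad), 0.0) for g in client_grads]
    total = sum(ws) or 1.0
    dim = len(global_grad)
    return [sum(w * g[i] for w, g in zip(ws, client_grads)) / total
            for i in range(dim)]

# A client gradient aligned with the global direction dominates ones
# that are orthogonal or opposed to it.
agg = weighted_aggregate([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]], [1.0, 0.0])
```

Clipping negative similarities to zero (an assumption here) has the side effect of discarding clients whose updates oppose the global direction, which is one plausible way to down-weight low-quality contributions.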
Funding: supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2025R97), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The exponential growth of the Internet of Things (IoT) has introduced significant security challenges, with zero-day attacks emerging as one of the most critical threats. Traditional Machine Learning (ML) and Deep Learning (DL) techniques have demonstrated promising early detection capabilities. However, their effectiveness is limited when handling the vast volumes of IoT-generated data, due to scalability constraints, high computational costs, and the costly, time-intensive process of data labeling. To address these challenges, this study proposes a Federated Learning (FL) framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks. By employing Deep Neural Networks (DNNs) and decentralized model training, the approach reduces computational complexity while improving detection accuracy. The proposed model demonstrates robust performance, achieving accuracies of 94.34%, 99.95%, and 87.94% on the publicly available Kitsune, Bot-IoT, and UNSW-NB15 datasets, respectively. Furthermore, its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets, TON-IoT and IoT-23, using a Deep Federated Learning (DFL) framework, underscoring the generalization and effectiveness of the model in heterogeneous and decentralized IoT environments. Experimental results demonstrate superior performance over existing methods, establishing the proposed framework as an efficient and scalable solution for IoT security.
Funding: supported by the National Natural Science Foundation of China (Grant No. 62102449) and the Central Plains Talent Program (Grant No. 224200510003).
Abstract: With the increasing popularity of blockchain applications, the security of data sources on the blockchain is gradually receiving attention. Providing reliable data for the blockchain safely and efficiently has become a research hotspot, and the security of the oracle responsible for providing reliable data has attracted much attention. The most widely used centralized oracles in blockchain, such as Provable and Town Crier, rely on a single oracle to obtain data, which suffers from a single point of failure and limits the large-scale development of blockchain. To this end, distributed oracle schemes have been put forward, but existing schemes such as Chainlink and Augur generally have low execution efficiency and high communication overhead, which limits their applicability. To solve these problems, this paper proposes a trusted distributed oracle scheme based on a share-recovery threshold signature. First, a data verification method for distributed oracles is designed based on threshold signatures. By aggregating the signatures of oracles, data from different data sources can be mutually verified, leading to a more efficient data verification and aggregation process. Then, a credibility-based cluster head election algorithm is designed, which reduces communication overhead by clarifying the distribution of functions and building a hierarchical structure. Considering the good performance of the BLS threshold signature in large-scale applications, this paper combines it with distributed oracle technology and proposes a BLS threshold signature algorithm that supports share recovery in distributed oracles. The share recovery mechanism enables the proposed scheme to solve the key loss issue, and the threshold setting enables the scheme to complete signature aggregation with only a threshold number of oracles, making it more robust. Finally, experimental results indicate that, by using threshold signature technology and the cluster head election algorithm, our scheme effectively improves the execution efficiency of oracles and solves the single point of failure problem, leading to higher scalability and robustness.
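Share recovery in a (t, n)-threshold scheme rests on the same machinery as the signing itself: any t shares of a degree-(t-1) polynomial determine it, so a lost share, like the secret at x = 0, can be rebuilt by Lagrange interpolation. A minimal sketch over a small prime field follows; the BLS pairing layer is omitted, and the prime, the deterministic stand-in for random coefficients, and all parameters are illustrative, not the paper's construction.

```python
P = 2089  # small illustrative prime; real schemes use pairing-friendly groups

def eval_poly(coeffs, x):
    """Evaluate a polynomial (lowest coefficient first) at x, mod P."""
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def make_shares(secret, t, n):
    """Split `secret` into n shares with threshold t: the secret is the
    constant term of a degree-(t-1) polynomial (deterministic coefficients
    here for reproducibility; a real scheme draws them at random)."""
    coeffs = [secret] + [(secret * (i + 2) + 7) % P for i in range(t - 1)]
    return [(x, eval_poly(coeffs, x)) for x in range(1, n + 1)]

def recover_at(shares, x0):
    """Lagrange-interpolate the polynomial at x0 from t shares, mod P.
    x0 = 0 recovers the secret; any other x0 rebuilds a lost share."""
    acc = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        acc = (acc + yi * num * pow(den, -1, P)) % P
    return acc

shares = make_shares(1234, t=3, n=5)
# Any 3 of the 5 shares recover the secret; evaluating at x0 = 5
# instead of 0 rebuilds share 5 if that oracle loses its key.
```

This is why the threshold setting makes the scheme robust: aggregation succeeds with any t of the n oracles, and an oracle that loses its key share can have it reconstructed by t honest peers (`pow(den, -1, P)` needs Python 3.8+).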
Funding: supported by the National Key R&D Program of China (2022ZD0119604), the National Natural Science Foundation of China (NSFC) (62173181, 62222308, 62221004), and the Natural Science Foundation of Jiangsu Province (BK20220139).
Abstract: This paper designs distributed Nash equilibrium seeking strategies for heterogeneous dynamic cyber-physical systems. In particular, we are concerned with parametric uncertainties in the control channels of the players. Moreover, the weights on communication links can be compromised by time-varying uncertainties, which may result from malicious attacks, faults, and disturbances. To deal with the unavailability of measurements of the optimization errors, an output observer is constructed, based on which adaptive laws are designed to compensate for the physical uncertainties. With these adaptive laws, a new distributed Nash equilibrium seeking strategy is designed by further integrating consensus protocols and gradient search algorithms. Moreover, to accommodate compromised communication weights resulting from cyber-uncertainties, the coupling strengths of the consensus module are designed to be adaptive; as a byproduct, the coupling strengths are independent of any global information. It is proven that the proposed strategies are resilient to these uncertainties and that the players' actions converge to the Nash equilibrium. Simulation examples numerically validate the effectiveness of the proposed strategies.