With an increase in internet-connected devices and a dependency on online services, the threat of Distributed Denial of Service (DDoS) attacks has become a significant concern in cybersecurity. This research introduces a novel decentralized method called Federated Random Forest Bidirectional Long Short-Term Memory (FRF-BiLSTM) for detecting DDoS attacks, utilizing Bidirectional Long Short-Term Memory networks (BiLSTMs) to analyze sequences in both forward and backward directions. The proposed system follows a multi-step process, beginning with the collection of datasets from different edge devices and network nodes. Recursive feature elimination (RFE) with random forest is used to select features from the CICDDoS2019 dataset, on which a BiLSTM model is trained at local nodes. Local models are trained until convergence or stability criteria are met while simultaneously sharing their updates globally for collaborative learning. A centralised server evaluates real-time traffic using the global BiLSTM model and triggers alerts for potential DDoS attacks. Furthermore, blockchain technology is employed to secure model updates and to provide an immutable audit trail, thereby ensuring trust and accountability among network nodes. To verify its effectiveness, experiments were conducted using the CICDoS2017, NSL-KDD, and CICIDS benchmark datasets alongside other existing models. The results show that the proposed model achieves a mean accuracy of 97.1% with an average training delay of 88.7 s and a testing delay of 21.4 s, and demonstrates scalability and the best detection performance in large-scale attack scenarios.
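The RFE step can be sketched in a few lines: features are scored, the weakest is dropped, and the scoring is repeated until the desired subset remains. In this toy sketch a simple |Pearson correlation| with the label stands in for the random-forest importance used in the paper's pipeline, and the data are invented:

```python
# Toy recursive feature elimination (RFE): repeatedly drop the least
# important feature until n_keep remain. |Pearson correlation| is a
# stand-in for random-forest feature importance.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

def rfe(X, y, n_keep):
    """X: rows of feature values; y: labels. Returns surviving feature indices."""
    remaining = list(range(len(X[0])))
    while len(remaining) > n_keep:
        scores = {j: abs(pearson([row[j] for row in X], y)) for j in remaining}
        remaining.remove(min(scores, key=scores.get))   # drop the weakest
    return sorted(remaining)

# Features 0 and 2 track the label; feature 1 is mostly noise.
X = [[1.0, 5.0, -1.0], [2.0, 3.0, -2.0], [3.0, 6.0, -3.0], [4.0, 2.0, -4.0]]
y = [1.0, 2.0, 3.0, 4.0]
print(rfe(X, y, 2))   # [0, 2]
```

The real pipeline would refit the random forest after each elimination round; the loop structure is the same.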
Distributed learning is a well-established method for estimation tasks over extensively distributed datasets. However, non-randomly stored data can introduce bias into local parameter estimates, leading to significant performance degradation in classical distributed algorithms. In this paper, the authors propose a novel Distributed Quasi-Newton Pilot (DQNP) method for distributed learning with non-randomly distributed data. The proposed approach accommodates both randomly and non-randomly distributed data settings and imposes no constraints on the uniformity of local sample sizes. Additionally, it avoids the need to transfer the Hessian matrix or compute its inverse, thereby greatly reducing computational and communication complexity. The authors theoretically demonstrate that the resulting estimator achieves statistical efficiency under mild conditions. Extensive numerical experiments on synthetic and real-world data validate the theoretical findings and illustrate the effectiveness of the proposed method.
The convergence of Software Defined Networking (SDN) and the Internet of Vehicles (IoV) enables a flexible, programmable, and globally visible network control architecture across Road Side Units (RSUs), cloud servers, and automobiles. While this integration enhances scalability and safety, it also exposes the network to sophisticated cyberthreats, particularly Distributed Denial of Service (DDoS) attacks. Traditional rule-based anomaly detection methods often struggle to detect modern low-and-slow DDoS patterns, leading to higher false-positive rates. To this end, this study proposes an explainable hybrid framework to detect DDoS attacks in SDN-enabled IoV (SDN-IoV). The hybrid framework utilizes a Residual Network (ResNet) to capture spatial correlations and a Bidirectional Long Short-Term Memory (BiLSTM) network to capture both forward and backward temporal dependencies in high-dimensional input patterns. To ensure transparency and trustworthiness, the model integrates an Explainable AI (XAI) technique, SHapley Additive exPlanations (SHAP). SHAP highlights the contribution of each feature during the decision-making process, helping security analysts understand the rationale behind each attack classification decision. The SDN-IoV environment is created in Mininet-WiFi and SUMO, and the hybrid model is trained on the CICDDoS2019 security dataset. The simulation results reveal the efficacy of the proposed model in terms of standard performance metrics compared to similar baseline methods.
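The SHAP attribution idea can be illustrated exactly on a tiny model: Shapley values average each feature's marginal contribution over all orderings in which features are revealed. The three-feature linear "detector" below is purely hypothetical; for a linear model the enumeration reproduces w_i * (x_i - baseline_i):

```python
# Exact Shapley values by enumerating all feature orderings. Tractable
# only for a handful of features; SHAP approximates this at scale.
from itertools import permutations
from math import factorial

def shapley(predict, x, baseline):
    n = len(x)
    phi = [0.0] * n
    for order in permutations(range(n)):
        z = list(baseline)              # start from the baseline input
        prev = predict(z)
        for i in order:                 # reveal features one at a time
            z[i] = x[i]
            cur = predict(z)
            phi[i] += cur - prev        # marginal contribution of feature i
            prev = cur
    return [p / factorial(n) for p in phi]   # average over n! orderings

w = [2.0, -1.0, 0.5]                    # hypothetical linear "detector"
predict = lambda z: sum(wi * zi for wi, zi in zip(w, z))
x, baseline = [1.0, 3.0, 4.0], [0.0, 0.0, 0.0]
print(shapley(predict, x, baseline))    # [2.0, -3.0, 2.0] = w_i * x_i here
```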
This paper investigates the distributed continuous-time aggregative optimization problem for second-order multiagent systems, where each local cost function depends not only on the agent's own decision variables but also on the aggregation of the decision variables of all the agents. By using the gradient descent method, the distributed average tracking (DAT) technique, and the time-base generator (TBG) technique, a distributed continuous-time aggregative optimization algorithm is proposed. Subsequently, the optimality of the system's equilibrium point is analyzed, and the convergence of the closed-loop system is proved using Lyapunov stability theory. Finally, the effectiveness of the proposed algorithm is validated through case studies on multirobot systems and power generation systems.
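A minimal sketch of the aggregative setting, with an invented quadratic cost f_i(x_i, s) = (x_i - a_i)^2 + x_i * s, where s is the mean of all decisions: the global gradient with respect to x_i works out to 2(x_i - a_i) + 2s. The aggregate is computed exactly here for brevity, whereas the paper estimates it online via DAT:

```python
# Toy distributed aggregative optimization: each agent descends the
# gradient of the global cost sum_i [(x_i - a_i)^2 + x_i * s], where
# s = mean(x). That gradient for agent i is 2*(x_i - a_i) + 2*s.
a = [4.0, 2.0, 0.0]                     # hypothetical local targets
x = [0.0, 0.0, 0.0]                     # initial decisions
step = 0.1
for _ in range(2000):
    s = sum(x) / len(x)                 # aggregate (mean of decisions)
    x = [xi - step * (2 * (xi - ai) + 2 * s) for xi, ai in zip(x, a)]
# Optimum is x_i = a_i - mean(a)/2 = [3.0, 1.0, -1.0] for this cost.
print([round(xi, 3) for xi in x])
```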
Distributed Denial-of-Service (DDoS) attacks pose severe threats to Industrial Control Networks (ICNs), where service disruption can cause significant economic losses and operational risks. Existing signature-based methods are ineffective against novel attacks, and traditional machine learning models struggle to capture the complex temporal dependencies and dynamic traffic patterns inherent in ICN environments. To address these challenges, this study proposes a deep feature-driven hybrid framework that integrates a Transformer, a BiLSTM, and KNN to achieve accurate and robust DDoS detection. The Transformer component extracts global temporal dependencies from network traffic flows, while the BiLSTM captures fine-grained sequential dynamics. The learned embeddings are then classified using an instance-based KNN layer, enhancing decision-boundary precision. This cascaded architecture balances feature abstraction and locality preservation, improving both generalization and robustness. The proposed approach was evaluated on a newly collected real-time ICN traffic dataset and further validated on the public CIC-IDS2017 and Edge-IIoT datasets to demonstrate generalization. Comprehensive metrics were employed, including accuracy, precision, recall, F1-score, ROC-AUC, PR-AUC, false positive rate (FPR), and detection latency. Results show that the hybrid framework achieves 98.42% accuracy with an ROC-AUC of 0.992 and an FPR below 1%, outperforming baseline machine learning and deep learning models. Robustness experiments under Gaussian noise perturbations confirmed stable performance, with less than 2% accuracy degradation. Moreover, detection latency remained below 2.1 ms per sample, indicating suitability for real-time ICS deployment. In summary, the proposed hybrid temporal learning and instance-based classification model offers a scalable and effective solution for DDoS detection in industrial control environments. By combining global contextual modeling, sequential learning, and instance-based refinement, the framework demonstrates strong adaptability across datasets and resilience against noise, providing practical utility for safeguarding critical infrastructure.
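The final instance-based stage can be sketched as a plain k-nearest-neighbour vote over learned embeddings; the two-dimensional embeddings and labels below are invented:

```python
# Minimal instance-based KNN classifier: a query embedding is labelled
# by majority vote among its k closest training embeddings (Euclidean
# distance). In the cascade, the embeddings come from Transformer+BiLSTM.
from collections import Counter
from math import dist  # Python 3.8+: Euclidean distance between points

def knn_predict(train, query, k=3):
    """train: list of (embedding, label) pairs; query: embedding."""
    nearest = sorted(train, key=lambda p: dist(p[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0.1, 0.2), "benign"), ((0.2, 0.1), "benign"),
         ((0.9, 0.8), "ddos"),   ((0.8, 0.9), "ddos"),
         ((0.85, 0.85), "ddos")]
print(knn_predict(train, (0.9, 0.9)))   # nearest neighbours are attacks
```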
Dear Editor, This letter addresses the challenge of achieving robust global coordination in multi-agent systems (MASs) subject to heterogeneous actuator saturation and additive input disturbances. We develop a novel distributed control framework that strategically integrates a redesigned saturation function to handle the nonlinear actuator constraint and a high-gain feedback mechanism for effective disturbance rejection.
Distributed Denial of Service (DDoS) attacks are one of the most severe threats to network infrastructure, sometimes bypassing traditional detection algorithms because of their evolving complexity. Present machine learning (ML) techniques for DDoS attack detection normally rely on statistical features of network traffic, such as packet sizes and inter-arrival times. However, such techniques sometimes fail to capture the complicated relations among various traffic flows. In this paper, we present a new multi-scale ensemble strategy based on Graph Neural Networks (GNNs) for improving DDoS detection. Our technique divides traffic into macro- and micro-level elements, letting different GNN models capture both coarse-scale anomalies and subtle, stealthy attack patterns. By modeling network traffic as graph-structured data, GNNs efficiently learn intricate relations among network entities. The proposed ensemble learning algorithm combines the results of several GNNs to improve generalization, robustness, and scalability. Extensive experiments on three benchmark datasets—UNSW-NB15, CICIDS2017, and CICDDoS2019—show that our approach outperforms traditional machine learning and deep learning models in detecting both high-rate and low-rate (stealthy) DDoS attacks, with significant improvements in accuracy and recall. These findings demonstrate the suggested method's applicability and robustness for real-world deployment in contexts where several DDoS patterns coexist.
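The combination step can be sketched as a per-flow majority vote over the outputs of the individual detectors; the model predictions below are invented, and the paper's actual combiner may be weighted rather than a plain vote:

```python
# Sketch of the ensemble step: per-flow predictions from several
# detectors (stand-ins for the macro- and micro-level GNNs) are
# combined by majority vote for robustness.
from collections import Counter

def ensemble_vote(predictions_per_model):
    """predictions_per_model: list of per-model label lists, one label
    per traffic flow. Returns the majority label for each flow."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions_per_model)]

macro_gnn = ["ddos", "benign", "ddos"]
micro_gnn = ["ddos", "benign", "benign"]
third_gnn = ["benign", "benign", "ddos"]
print(ensemble_vote([macro_gnn, micro_gnn, third_gnn]))
# flow-wise majority: ['ddos', 'benign', 'ddos']
```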
Objectives: This study aimed to explore the lagged and cumulative effects of risk factors on disability in older adults using distributed lag non-linear models (DLNMs). Methods: We utilized data from the China Health and Retirement Longitudinal Study (CHARLS). After feature selection via Elastic Net regularization, we applied DLNMs to evaluate the lagged effects of risk factors. Disability was defined as the presence of any difficulty in basic activities of daily living (BADL). The cumulative relative risk (CRR) was calculated by summing the lag-specific risk estimates, representing the cumulative disability risk over the specified lag period. Effect modification and sensitivity analyses were also performed. Results: This study included a total of 2,318 participants. Early-phase lag factors, such as difficulty in stooping (CRR=3.58; 95% CI: 2.31-5.55; P<0.001) and walking (CRR=2.77; 95% CI: 1.39-5.55; P<0.001), exerted the strongest effects immediately upon occurrence. Mid-phase lag factors, such as arthritis (CRR=1.51; 95% CI: 1.10-2.06; P=0.001), showed a resurgence in disability risk within 2-3 years. Late-phase lag factors, including depressive symptoms (CRR=2.38; 95% CI: 1.30-4.35; P<0.001) and elevated systolic blood pressure (CRR=1.64; 95% CI: 1.06-2.79; P=0.02), exhibited significant long-term cumulative risks. Conversely, grip strength (CRR=0.80; 95% CI: 0.54-0.95; P=0.02) and social participation (CRR=0.89; 95% CI: 0.73-0.99; P=0.04) were significant protective factors. Conclusions: The findings underscore the importance of tailored interventions that account for the varying lag characteristics of different factors to effectively mitigate disability risk. Future studies should explore the underlying biological and sociological mechanisms of these lagged effects, identify intervention strategies that target risk factors with different lag patterns, and evaluate their effectiveness.
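As a rough illustration of cumulating lag-specific effects: in common DLNM practice the per-lag effects are summed on the log-relative-risk scale and then exponentiated. The per-lag values below are invented, and this is a generic sketch rather than the study's exact estimator:

```python
# Cumulative relative risk over a lag window: lag-specific effects are
# summed on the log-RR scale and exponentiated (the usual DLNM
# convention), equivalent to multiplying the per-lag relative risks.
from math import exp, log

lag_log_rr = [log(1.4), log(1.2), log(1.1)]   # effects at lags 0, 1, 2
crr = exp(sum(lag_log_rr))                     # = 1.4 * 1.2 * 1.1
print(round(crr, 3))                           # 1.848
```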
With the growing complexity and decentralization of network systems, the attack surface has expanded, leading to greater concerns over network threats. In this context, artificial intelligence (AI)-based network intrusion detection systems (NIDS) have been extensively studied, and recent efforts have shifted toward integrating distributed learning to enable intelligent and scalable detection mechanisms. However, most existing works focus on individual distributed learning frameworks, and there is a lack of systematic evaluations that compare different algorithms under consistent conditions. In this paper, we present a comprehensive evaluation of representative distributed learning frameworks—Federated Learning (FL), Split Learning (SL), hybrid collaborative learning (SFL), and fully distributed learning—in the context of AI-driven NIDS. Using recent benchmark intrusion detection datasets, a unified model backbone, and controlled distributed scenarios, we assess these frameworks across multiple criteria, including detection performance, communication cost, computational efficiency, and convergence behavior. Our findings highlight distinct trade-offs among the distributed learning frameworks, demonstrating that the optimal choice depends strongly on system constraints such as bandwidth availability, node resources, and data distribution. This work provides the first holistic analysis of distributed learning approaches for AI-driven NIDS and offers practical guidelines for designing secure and efficient intrusion detection systems in decentralized environments.
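For the FL framework in particular, the server-side step is typically FedAvg-style aggregation: client model parameters are averaged, weighted by local sample counts. A minimal sketch with made-up parameters:

```python
# FedAvg-style aggregation: the server averages client parameters
# weighted by each client's local sample count. Parameters are plain
# lists here for illustration.
def fed_avg(client_params, client_sizes):
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[j] * n for p, n in zip(client_params, client_sizes)) / total
            for j in range(dim)]

clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # per-client weights
sizes = [10, 10, 20]                              # local sample counts
print(fed_avg(clients, sizes))                    # [3.5, 4.5]
```

The larger third client pulls the average toward its parameters, which is exactly the behaviour that makes FL sensitive to skewed data distributions across nodes.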
In the era of massive data, the study of distributed data is a significant topic. Model averaging can be effectively applied to distributed data by combining information from all machines. For linear models, the model averaging approach has been developed in the context of distributed data; however, further investigation is needed for more complex models. In this paper, the authors propose a distributed optimal model averaging approach based on multivariate additive models, which approximates unknown functions using B-splines, allowing each machine to have a different smoothing degree. To utilize the information from the covariance matrix of dependent errors in multivariate multiple regressions, the authors use the Mahalanobis distance to construct a Mallows-type weight choice criterion. The criterion can be computed by transmitting information between the local machines and the central machine in two steps. The authors demonstrate the asymptotic optimality of the proposed model averaging estimator when the covariates are subject to uncertainty, and obtain the convergence rate of the weight vector to the theoretically optimal weights. The results remain novel even for additive models with a single response variable. Numerical examples show that the proposed method yields good performance.
With the rapid development of generative artificial intelligence (GenAI), the task of story visualization, which transforms natural language narratives into coherent and consistent image sequences, has attracted growing research attention. However, existing methods still face limitations in balancing multi-frame character consistency and generation efficiency, which restricts their feasibility for large-scale practical applications. To address this issue, this study proposes a modular cloud-based distributed system built on Stable Diffusion. By separating the character generation and story generation processes, and by integrating multi-feature control techniques, a caching mechanism, and an asynchronous task queue architecture, the system enhances generation efficiency and scalability. The experimental design includes automated and human evaluations of character consistency, performance testing, and multi-node simulation. The results show that the proposed system outperforms the baseline model StoryGen in both CLIP-I and human evaluation metrics. In terms of performance, under the experimental environment of this study, dual-node deployment reduces average waiting time by approximately 19%, while the four-node simulation reduces it further by up to 65%. Overall, this study demonstrates the advantages of cloud-distributed GenAI in maintaining character consistency and reducing generation latency, highlighting its potential value in multi-user collaborative story visualization applications.
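The caching idea can be sketched with a memoized generation call: identical character prompts are served from cache instead of re-running the diffusion model. The function below only simulates the expensive call, and its name and return value are illustrative:

```python
# Sketch of the caching mechanism: repeated character-generation
# requests hit a cache rather than triggering a new diffusion run.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=128)
def generate_character(prompt: str) -> str:
    global calls
    calls += 1                       # stands in for a Stable Diffusion run
    return f"image<{prompt}>"

generate_character("knight with red cloak")
generate_character("knight with red cloak")   # served from the cache
generate_character("old wizard")
print(calls)                         # the "model" only ran twice
```

A production system would key the cache on the full conditioning (prompt, seed, control features) and pair it with the task queue so cache misses are processed asynchronously.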
The focus of this paper is on distributed average tracking (DAT) in the presence of external disturbances, utilizing an event-triggered control mechanism. First, an event-triggered anti-disturbance DAT (ETAD-DAT) algorithm is proposed to reduce the communication load in networked control systems by redesigning existing anti-disturbance DAT algorithms and disturbance observers. Furthermore, a fully distributed event-triggering condition is employed to schedule event times for each agent. Simulation results demonstrate that the proposed ETAD-DAT algorithm achieves accurate average tracking of multiple time-varying reference signals despite external disturbances, while markedly improving communication efficiency.
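A toy version of an event-triggered transmission rule: an agent broadcasts its state only when it deviates from the last broadcast value by more than a threshold, trading a small tracking error for far fewer transmissions. The sampled signal and threshold are invented; the paper's triggering condition is more elaborate:

```python
# Toy event-triggered broadcasting: transmit only when the state drifts
# more than `threshold` from the last transmitted value.
import math

threshold, last_sent, events = 0.25, None, 0
for k in range(100):
    x = math.sin(0.1 * k)                  # sampled local signal
    if last_sent is None or abs(x - last_sent) > threshold:
        last_sent = x                      # event: broadcast the new state
        events += 1
print(events, "broadcasts instead of 100")
```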
The hybrid series-parallel microgrid has attracted increasing attention because it combines the advantages of series-stacked voltage and parallel-expanded capacity. Low-voltage distributed generations (DGs) are connected in series to form a string, and multiple strings are then interconnected in parallel. In existing control strategies, both intra-string and inter-string coordination depend on centralized or distributed control with heavy reliance on communication, which limits scalability and redundancy under abnormal conditions. Alternatively, this study proposes an intra-string distributed and inter-string decentralized control framework. Within each string, a few DGs close to the AC bus act as leaders that obtain the string power information, while the remaining DGs act as followers that acquire synchronization information through droop-based distributed consensus. Specifically, the output of the entire string exhibits an active power-angular frequency (ω-P) droop characteristic, so decentralized control among strings is autonomously guaranteed. Moreover, a secondary control is designed to realize multi-mode objectives, including on/off-grid mode switching, grid-connected interactive power management, and off-grid voltage quality regulation. As a result, the proposed method offers plug-and-play capability, single-point-failure redundancy, and seamless mode switching. Experimental results verify the effectiveness of the proposed practical solution.
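The ω-P droop characteristic mentioned above is a linear frequency-power relation, ω = ω* - m(P - P*), which is what lets parallel strings share load without communication. A sketch with illustrative (made-up) ratings for a 50 Hz system:

```python
# String-level omega-P droop: angular frequency falls linearly as active
# power output rises above the rated operating point.
import math

def droop_frequency(p_out, omega_star=2 * math.pi * 50, p_star=10e3, m=1e-4):
    """ω-P droop: angular frequency (rad/s) at active power output p_out (W)."""
    return omega_star - m * (p_out - p_star)

print(round(droop_frequency(10e3), 3))   # rated power: ω* ≈ 314.159 rad/s
print(round(droop_frequency(15e3), 3))   # +5 kW pulls ω down by 0.5 rad/s
```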
Quantile regression (QR) has become an important tool for measuring the dependence of a response variable's quantiles on a number of predictors for heterogeneous data, especially heavy-tailed data and data with outliers. However, it is quite challenging to make statistical inference on distributed high-dimensional QR with missing data, owing to the distributed nature, sparsity, and missingness of the data and the non-differentiable quantile loss function. To overcome these challenges, this paper develops a communication-efficient method that selects variables and estimates parameters by utilizing a smooth function to approximate the non-differentiable quantile loss function and by incorporating inverse probability weighting and a penalty function. The proposed approach has three merits. First, it is both computationally and communicationally efficient, because only the first- and second-order information of the approximate objective function is communicated at each iteration. Second, the proposed estimators possess the oracle property after a limited number of iterations, without constraints on the number of machines. Third, the proposed method simultaneously selects variables and estimates parameters within a distributed framework, ensuring robustness to the specified response probability or propensity score function of the missing-data mechanism. Simulation studies and a real example illustrate the effectiveness of the proposed methodologies.
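One common way to smooth the quantile loss (not necessarily the paper's exact choice): write the pinball loss as ρ_τ(u) = τu + max(0, -u) and replace max(0, -u) with the softplus h·log(1 + e^(-u/h)), which converges to the exact loss as the bandwidth h → 0:

```python
# Smooth surrogate for the non-differentiable pinball (quantile) loss.
import math

def pinball(u, tau):
    """Exact check loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def smooth_pinball(u, tau, h):
    # Stable softplus: h*log(1+exp(-u/h)) = max(0,-u) + h*log1p(exp(-|u|/h)).
    return tau * u + max(0.0, -u) + h * math.log1p(math.exp(-abs(u) / h))

for u in (-2.0, 0.5, 3.0):
    print(round(pinball(u, 0.9), 4), round(smooth_pinball(u, 0.9, 1e-3), 4))
```

The surrogate is differentiable everywhere, so the first- and second-order information mentioned above is well defined and can be communicated between machines.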
A multi-stage stress relaxation test was performed on a granodiorite sample to understand the deformation process prior to the macroscopic failure of brittle rocks, as well as the transient response during stress relaxation. Distributed optical fiber sensing was used to measure strains across the sample surface by helically wrapping a single-mode fiber around the cylindrical sample. Close agreement was observed between the circumferential strains obtained from the optical fibers and those from the extensometer. The reconstructed full-field strain contours show strain heterogeneity from the crack closure phase onward, and the strains in the later deformation phases are dominantly localized within the former high-strain zone. The Gini coefficient was used to quantify the degree of strain localization; it shows an initial increase during the crack closure phase, a decrease during the linear elastic phase, and a subsequent increase during the post-yielding phase. This behavior corresponds to a process of initial localization from an imperfect boundary condition, homogenization, and eventual relocalization prior to the macroscopic failure of the sample. The transient strain-rate decay during the stress relaxation phase was quantified using the p-value in the “Omori-like” power law function. A higher initial stress at the onset of relaxation results in a lower p-value, indicating slower strain-rate decay. As the sample approaches macroscopic failure, the lowest p-value shifts from the most damaged zone to adjacent areas, suggesting stress redistribution or crack propagation in deformed crystalline rocks under stress relaxation conditions.
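The Gini coefficient of a strain field can be computed directly from pairwise differences, G = Σᵢⱼ|xᵢ - xⱼ| / (2 n² x̄): it is 0 for a perfectly uniform field and approaches 1 as deformation concentrates in a few zones. The strain values below are invented:

```python
# Gini coefficient as a measure of strain localization.
def gini(values):
    n, mean = len(values), sum(values) / len(values)
    diff_sum = sum(abs(a - b) for a in values for b in values)
    return diff_sum / (2 * n * n * mean)

uniform = [1.0, 1.0, 1.0, 1.0]       # homogeneous deformation
localized = [0.1, 0.1, 0.1, 5.0]     # strain concentrated in one zone
print(round(gini(uniform), 3), round(gini(localized), 3))
```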
Nonlinear static procedures are widely adopted in structural engineering practice for seismic performance assessment because of their simplicity and computational efficiency. However, their reliability depends heavily on how the nonlinear behaviour of structural components is represented. The recent earthquakes in Albania (2019) and Türkiye (2023) have underscored the need for accurate assessment techniques, particularly for older reinforced concrete buildings with poor detailing. This study quantifies the discrepancies between default and user-defined component modelling in pushover analysis of pre-modern reinforced concrete structures by analysing two representative low- and mid-rise reinforced concrete frame buildings. The lumped plasticity approach incorporates moment-rotation relationships derived from actual member properties and reinforcement configurations, while the distributed plasticity approach uses software-generated default properties based on modern codes. Results show that the distributed plasticity models systematically overestimate both the strength and the deformation capacity by up to 35% compared with lumped plasticity models, especially in buildings with poor detailing and low concrete strength. These findings demonstrate that default software procedures, widely used in practice but not validated for pre-modern structures, produce dangerously unconservative seismic performance estimates. The study provides quantitative evidence of the critical need for tailored modelling strategies that reflect the actual conditions of the existing building stock.
Funding: Supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2025S1A5A2A01005171), and by the BK21 program at Chungbuk National University (2025).
Funding: Supported by the National Natural Science Foundation of China under Grant No. 12271034 and the Open Fund Project of the Key Laboratory of Market Regulation under Grant No. 2023SYSKF02003.
Funding: The authors extend their appreciation to the Princess Nourah bint Abdulrahman University Researchers Supporting Project (number PNURSP2026R760), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors also extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through small group research under grant number RGP2/714/46.
Funding: Supported by the National Key Research and Development Program of China (2025YFE0213100), the National Natural Science Foundation of China (62422315, 62573348), the Natural Science Basic Research Program of Shaanxi (2025JC-YBMS-667), and the “Shuang Yi Liu” Construction Foundation (25GH02010366).
Funding: Supported by the Extra High Voltage Power Transmission Company, China Southern Power Grid Co., Ltd.
Abstract: Distributed Denial-of-Service (DDoS) attacks pose severe threats to Industrial Control Networks (ICNs), where service disruption can cause significant economic losses and operational risks. Existing signature-based methods are ineffective against novel attacks, and traditional machine learning models struggle to capture the complex temporal dependencies and dynamic traffic patterns inherent in ICN environments. To address these challenges, this study proposes a deep feature-driven hybrid framework that integrates Transformer, BiLSTM, and KNN to achieve accurate and robust DDoS detection. The Transformer component extracts global temporal dependencies from network traffic flows, while BiLSTM captures fine-grained sequential dynamics. The learned embeddings are then classified using an instance-based KNN layer, enhancing decision-boundary precision. This cascaded architecture balances feature abstraction and locality preservation, improving both generalization and robustness. The proposed approach was evaluated on a newly collected real-time ICN traffic dataset and further validated on the public CIC-IDS2017 and Edge-IIoT datasets to demonstrate generalization. Comprehensive metrics, including accuracy, precision, recall, F1-score, ROC-AUC, PR-AUC, false positive rate (FPR), and detection latency, were employed. Results show that the hybrid framework achieves 98.42% accuracy with an ROC-AUC of 0.992 and an FPR below 1%, outperforming baseline machine learning and deep learning models. Robustness experiments under Gaussian noise perturbations confirmed stable performance with less than 2% accuracy degradation. Moreover, detection latency remained below 2.1 ms per sample, indicating suitability for real-time ICS deployment. In summary, the proposed hybrid temporal learning and instance-based classification model offers a scalable and effective solution for DDoS detection in industrial control environments. By combining global contextual modeling, sequential learning, and instance-based refinement, the framework demonstrates strong adaptability across datasets and resilience against noise, providing practical utility for safeguarding critical infrastructure.
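As a rough illustration of the instance-based classification stage described above, a minimal KNN majority vote over learned embeddings is sketched below. The Transformer-BiLSTM encoder is abstracted away, and the 2-D embeddings, labels, and k value are invented for illustration.

```python
import math
from collections import Counter

def knn_classify(embeddings, labels, query, k=3):
    """Classify a query embedding by majority vote among its k nearest
    training embeddings (Euclidean distance)."""
    dists = sorted(
        (math.dist(e, query), y) for e, y in zip(embeddings, labels)
    )
    top_k = [y for _, y in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

# Hypothetical 2-D embeddings standing in for encoder outputs:
train_emb = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (1.0, 0.9)]
train_lab = ["benign", "benign", "ddos", "ddos"]
print(knn_classify(train_emb, train_lab, (0.95, 0.85)))  # → ddos
```

In the actual framework, the vote would run over high-dimensional embeddings produced by the trained encoder rather than hand-picked points.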
Fund: Supported in part by the National Natural Science Foundation of China (62522313, 62473207, U25A20301) and the Fundamental Research Funds for the Central Universities (2024SMECP03).
Abstract: Dear Editor, this letter addresses the challenge of achieving robust global coordination in multi-agent systems (MASs) subject to heterogeneous actuator saturation and additive input disturbances. We develop a novel distributed control framework that strategically integrates a redesigned saturation function to handle the nonlinear actuator constraint and a high-gain feedback mechanism for effective disturbance rejection.
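The letter's redesigned saturation function is not detailed in this abstract; for orientation, the standard symmetric saturation it builds on, combined with a high-gain feedback term, can be sketched as follows (the gain and limit values are purely illustrative):

```python
def saturate(u, u_max):
    """Symmetric actuator saturation: clamp the commanded control input
    to the actuator's magnitude limit u_max."""
    return max(-u_max, min(u, u_max))

def high_gain_control(x, k=50.0, u_max=2.0):
    """High-gain state feedback passed through the saturation element;
    a large k suppresses bounded disturbances while the clamp keeps the
    command within actuator limits."""
    return saturate(-k * x, u_max)

print(saturate(3.0, 2.0))       # → 2.0
print(high_gain_control(0.1))   # -50*0.1 = -5.0, clipped → -2.0
```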
Abstract: Distributed Denial of Service (DDoS) attacks are among the most severe threats to network infrastructure, sometimes bypassing traditional diagnosis algorithms because of their evolving complexity. Current Machine Learning (ML) techniques for DDoS attack diagnosis normally rely on statistical features of network traffic, such as packet sizes and inter-arrival times. However, such techniques sometimes fail to capture complicated relations among various traffic flows. In this paper, we present a new multi-scale ensemble strategy based on Graph Neural Networks (GNNs) for improving DDoS detection. Our technique divides traffic into macro- and micro-level elements, allowing different GNN models to capture both coarse-scale anomalies and subtle, stealthy attack patterns. By modeling network traffic as graph-structured data, GNNs efficiently learn intricate relations among network entities. The proposed ensemble learning algorithm combines the results of several GNNs to improve generalization, robustness, and scalability. Extensive experiments on three benchmark datasets (UNSW-NB15, CICIDS2017, and CICDDoS2019) show that our approach outperforms traditional machine learning and deep learning models in detecting both high-rate and low-rate (stealthy) DDoS attacks, with significant improvements in accuracy and recall. These findings demonstrate the suggested method's applicability and robustness for real-world deployment in contexts where several DDoS patterns coexist.
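The abstract does not specify how the GNN outputs are combined; one common choice, soft voting over per-model class probabilities, is sketched below with invented probability vectors for a macro-level and a micro-level detector.

```python
def soft_vote(prob_lists):
    """Average class-probability vectors from several detectors and
    return the index of the class with the highest mean probability."""
    n = len(prob_lists)
    n_classes = len(prob_lists[0])
    mean = [sum(p[i] for p in prob_lists) / n for i in range(n_classes)]
    return max(range(n_classes), key=mean.__getitem__)

# Hypothetical outputs of a macro-level and a micro-level GNN for one
# flow, over the classes [benign, high-rate DDoS, low-rate DDoS]:
macro = [0.20, 0.70, 0.10]
micro = [0.15, 0.30, 0.55]
print(soft_vote([macro, micro]))  # means [0.175, 0.50, 0.325] → 1
```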
Fund: Supported by the Scientific Research Fund of the National Health Commission of the People's Republic of China, Major Science and Technology Program for Medicine and Health in Zhejiang Province (WKJ-ZJ-2406).
Abstract: Objectives: This study aimed to explore the lagged and cumulative effects of risk factors on disability in older adults using distributed lag non-linear models (DLNMs). Methods: We utilized data from the China Health and Retirement Longitudinal Study (CHARLS). After feature selection via Elastic Net regularization, we applied DLNMs to evaluate the lagged effects of risk factors. Disability was defined as the presence of any difficulty in basic activities of daily living (BADL). The cumulative relative risk (CRR) was calculated by summing the lag-specific risk estimates, representing the cumulative disability risk over the specified lag period. Effect modifications and sensitivity analyses were also performed. Results: This study included a total of 2,318 participants. Early-phase lag factors, such as difficulty in stooping (CRR = 3.58; 95% CI: 2.31-5.55; P < 0.001) and walking (CRR = 2.77; 95% CI: 1.39-5.55; P < 0.001), exerted the strongest effects immediately upon occurrence. Mid-phase lag factors, such as arthritis (CRR = 1.51; 95% CI: 1.10-2.06; P = 0.001), showed a resurgence in disability risk within 2-3 years. Late-phase lag factors, including depressive symptoms (CRR = 2.38; 95% CI: 1.30-4.35; P < 0.001) and elevated systolic blood pressure (CRR = 1.64; 95% CI: 1.06-2.79; P = 0.02), exhibited significant long-term cumulative risks. Conversely, grip strength (CRR = 0.80; 95% CI: 0.54-0.95; P = 0.02) and social participation (CRR = 0.89; 95% CI: 0.73-0.99; P = 0.04) were significant protective factors. Conclusions: The findings underscore the importance of tailored interventions that account for the varied lag characteristics of different factors to effectively mitigate disability risk. Future studies should explore the underlying biological and sociological mechanisms of these lagged effects, identify intervention strategies that target risk factors with different lagged patterns, and evaluate their effectiveness.
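In DLNM practice, the cumulative relative risk over a lag window is typically obtained by summing the lag-specific contributions on the log scale and exponentiating; a minimal sketch of that computation follows. The lag coefficients below are invented, not taken from the study.

```python
import math

def cumulative_rr(lag_log_rrs):
    """Cumulative relative risk over a lag window: sum the lag-specific
    log relative risks, then exponentiate."""
    return math.exp(sum(lag_log_rrs))

# Hypothetical lag-specific log-RRs for one risk factor over 4 lags:
lags = [0.40, 0.25, 0.10, 0.05]
crr = cumulative_rr(lags)
print(round(crr, 2))  # exp(0.80) ≈ 2.23
```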
Fund: Supported by the Research Year Project of Kongju National University in 2025 and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2024-00444170, Research and International Collaboration on Trust Model-Based Intelligent Incident Response Technologies in 6G Open Network Environment).
Abstract: With the growing complexity and decentralization of network systems, the attack surface has expanded, leading to greater concern over network threats. In this context, artificial intelligence (AI)-based network intrusion detection systems (NIDS) have been extensively studied, and recent efforts have shifted toward integrating distributed learning to enable intelligent and scalable detection mechanisms. However, most existing works focus on individual distributed learning frameworks, and there is a lack of systematic evaluations that compare different algorithms under consistent conditions. In this paper, we present a comprehensive evaluation of representative distributed learning frameworks, namely Federated Learning (FL), Split Learning (SL), hybrid collaborative learning (SFL), and fully distributed learning, in the context of AI-driven NIDS. Using recent benchmark intrusion detection datasets, a unified model backbone, and controlled distributed scenarios, we assess these frameworks across multiple criteria, including detection performance, communication cost, computational efficiency, and convergence behavior. Our findings highlight distinct trade-offs among the distributed learning frameworks, demonstrating that the optimal choice depends strongly on system constraints such as bandwidth availability, node resources, and data distribution. This work provides the first holistic analysis of distributed learning approaches for AI-driven NIDS and offers practical guidelines for designing secure and efficient intrusion detection systems in decentralized environments.
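As a toy illustration of the federated learning (FL) framework compared above, the standard FedAvg server-side aggregation step can be sketched as follows; the weight vectors and client sample counts are invented.

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client model weights, weighting
    each client's contribution by its local sample count."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hypothetical clients with 100 and 300 local samples:
w_global = fed_avg([[1.0, 2.0], [3.0, 4.0]], [100, 300])
print(w_global)  # → [2.5, 3.5]
```

SL and SFL differ in that activations (rather than full weight vectors) cross the network, which is one source of the communication-cost trade-offs the paper measures.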
Fund: Supported by the Youth Academic Innovation Team Construction Project of Capital University of Economics and Business under Grant No. QNTD202303, the Beijing Outstanding Young Scientist Program under Grant No. JWZQ20240101027, and the National Natural Science Foundation of China under Grant Nos. 12031016, 12531012, and 12426308.
Abstract: In the era of massive data, the study of distributed data is a significant topic. Model averaging can be effectively applied to distributed data by combining information from all machines. For linear models, the model averaging approach has been developed in the context of distributed data; however, further investigation is needed for more complex models. In this paper, the authors propose a distributed optimal model averaging approach based on multivariate additive models, which approximates unknown functions using B-splines and allows each machine to have a different smoothing degree. To utilize the information in the covariance matrix of dependent errors in multivariate multiple regression, the authors use the Mahalanobis distance to construct a Mallows-type weight choice criterion. The criterion can be computed by transmitting information between the local machines and the central machine in two steps. The authors demonstrate the asymptotic optimality of the proposed model averaging estimator when the covariates are subject to uncertainty and obtain the convergence rate of the weight vector to the theoretically optimal weights. The results remain novel even for additive models with a single response variable. Numerical examples show that the proposed method yields good performance.
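The Mahalanobis distance underlying the Mallows-type criterion, d(x, μ; Σ) = sqrt((x − μ)ᵀ Σ⁻¹ (x − μ)), can be computed in closed form for the 2-D case; the vectors and covariance matrix below are illustrative, not taken from the paper.

```python
import math

def mahalanobis_2d(x, mu, sigma):
    """Mahalanobis distance for 2-D vectors, inverting the 2x2
    covariance matrix in closed form."""
    (a, b), (c, d) = sigma
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    r = [x[0] - mu[0], x[1] - mu[1]]
    q = sum(r[i] * inv[i][j] * r[j] for i in range(2) for j in range(2))
    return math.sqrt(q)

# With an identity covariance it reduces to the Euclidean distance:
print(mahalanobis_2d((3.0, 4.0), (0.0, 0.0), [[1.0, 0.0], [0.0, 1.0]]))  # → 5.0
```

Using Σ⁻¹ rather than the identity is what lets the criterion account for correlation among the dependent errors of the multiple responses.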
Abstract: With the rapid development of generative artificial intelligence (GenAI), the task of story visualization, which transforms natural language narratives into coherent and consistent image sequences, has attracted growing research attention. However, existing methods still face limitations in balancing multi-frame character consistency and generation efficiency, which restricts their feasibility for large-scale practical applications. To address this issue, this study proposes a modular cloud-based distributed system built on Stable Diffusion. By separating the character generation and story generation processes, and integrating multi-feature control techniques, a caching mechanism, and an asynchronous task queue architecture, the system enhances generation efficiency and scalability. The experimental design includes both automated and human evaluations of character consistency, performance testing, and multi-node simulation. The results show that the proposed system outperforms the baseline model StoryGen in both CLIP-I and human evaluation metrics. In terms of performance, under the experimental environment of this study, dual-node deployment reduces average waiting time by approximately 19%, while the four-node simulation reduces it further, by up to 65%. Overall, this study demonstrates the advantages of cloud-distributed GenAI in maintaining character consistency and reducing generation latency, highlighting its potential value in multi-user collaborative story visualization applications.
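The asynchronous task-queue idea can be sketched with `asyncio`: generation jobs land in a shared queue and worker nodes drain it concurrently. The worker names and job labels are invented; in the real system each job would dispatch a Stable Diffusion call rather than the `asyncio.sleep(0)` placeholder.

```python
import asyncio

async def worker(name, queue, results):
    """Pull generation jobs from the shared queue until it is drained."""
    while True:
        try:
            job = queue.get_nowait()
        except asyncio.QueueEmpty:
            return
        await asyncio.sleep(0)  # stand-in for an image-generation call
        results.append((name, job))
        queue.task_done()

async def main(jobs, n_workers=2):
    queue = asyncio.Queue()
    for j in jobs:
        queue.put_nowait(j)
    results = []
    await asyncio.gather(*(worker(f"node-{i}", queue, results)
                           for i in range(n_workers)))
    return results

done = asyncio.run(main(["frame-1", "frame-2", "frame-3", "frame-4"]))
print(len(done))  # → 4
```

Adding workers (nodes) shortens queue waiting time for concurrent users, which is the effect the dual- and four-node experiments quantify.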
Fund: Supported in part by the National Natural Science Foundation (62203034, 62273126, 62203035) and the Ling-Yan Research and Development Project of Zhejiang Province of China (2023C03185).
Abstract: The focus of this paper is on distributed average tracking (DAT) in the presence of external disturbances, utilizing an event-triggered control mechanism. First, an event-triggered anti-disturbance DAT (ETAD-DAT) algorithm is proposed to reduce the communication load in networked control systems by redesigning existing anti-disturbance DAT algorithms and disturbance observers. Furthermore, a fully distributed event-triggering condition is employed to schedule the event times of each agent. Simulation results demonstrate that the proposed ETAD-DAT algorithm achieves accurate average tracking of multiple time-varying reference signals despite the presence of external disturbances, while noticeably improving communication efficiency.
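The paper's specific triggering condition is not given in the abstract; the generic idea, each agent transmitting only when its state has drifted past a threshold from the last broadcast value, can be sketched as follows (the signal and threshold are invented).

```python
def run_event_triggered(samples, threshold):
    """Transmit a sample only when it deviates from the last transmitted
    value by more than the trigger threshold; return transmitted values."""
    sent = [samples[0]]  # initial broadcast
    for x in samples[1:]:
        if abs(x - sent[-1]) > threshold:
            sent.append(x)
    return sent

signal = [0.0, 0.05, 0.4, 0.45, 1.0, 1.02]
print(run_event_triggered(signal, 0.3))  # → [0.0, 0.4, 1.0]
```

Here 6 samples trigger only 3 transmissions, illustrating how event-triggering trades a bounded tracking error for communication savings.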
Fund: Supported by the Smart Grid National Science and Technology Major Project (2025ZD0804500), the National Natural Science Foundation of China under Grant 52307232, and the Hunan Provincial Natural Science Foundation of China under Grant 2024JJ4055.
Abstract: The hybrid series-parallel microgrid has attracted increasing attention because it combines the advantages of series-stacked voltage and parallel-expanded capacity. Low-voltage distributed generations (DGs) are connected in series to form an intra-string, and multiple strings are then interconnected in parallel. In existing control strategies, both intra-string and inter-string coordination depend on centralized or distributed control with heavy reliance on communication, which limits scalability and redundancy under abnormal conditions. Alternatively, in this study, an intra-string distributed and inter-string decentralized control framework is proposed. Within a string, a few DGs close to the AC bus act as leaders that obtain the string power information, and the remaining DGs are followers that acquire synchronization information through droop-based distributed consistency. Specifically, the output of the entire string exhibits the active power-angular frequency (ω-P) droop characteristic, so decentralized control among strings is guaranteed autonomously. Moreover, a secondary control is designed to realize multi-mode objectives, including on/off-grid mode switching, grid-connected interactive power management, and off-grid voltage quality regulation. As a result, the proposed method offers plug-and-play capability, single-point-failure redundancy, and seamless mode switching. Experimental results are provided to verify the effectiveness of the proposed practical solution.
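The ω-P droop characteristic mentioned above is, in its textbook form, a linear frequency sag with active power output; a minimal sketch follows, with an illustrative nominal angular frequency (~50 Hz) and droop gain that are not taken from the paper.

```python
def droop_frequency(p_out, omega_nominal=314.16, droop_gain=0.01):
    """Active power-angular frequency (omega-P) droop: the string's
    output angular frequency (rad/s) falls linearly as its active
    power output (arbitrary units here) rises."""
    return omega_nominal - droop_gain * p_out

# Strings with identical droop characteristics settle at a common
# frequency only when their active power outputs are equal, which is
# how load sharing emerges without inter-string communication:
print(round(droop_frequency(100.0), 2))  # → 313.16
```

This frequency-based self-coordination is what allows the inter-string layer to remain fully decentralized.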
Fund: Supported by the National Key R&D Program of China under Grant No. 2022YFA1003701 and the Open Research Fund of the Yunnan Key Laboratory of Statistical Modeling and Data Analysis, Yunnan University, under Grant No. SMDAYB2023004.
Abstract: Quantile regression (QR) has become an important tool for measuring the dependence of a response variable's quantiles on a number of predictors for heterogeneous data, especially heavy-tailed data and outliers. However, it is quite challenging to make statistical inference on distributed high-dimensional QR with missing data, owing to the distributed nature, sparsity, and missingness of the data and the non-differentiable quantile loss function. To overcome these challenges, this paper develops a communication-efficient method to select variables and estimate parameters by utilizing a smooth function to approximate the non-differentiable quantile loss function and incorporating the ideas of inverse probability weighting and the penalty function. The proposed approach has three merits. First, it is both computationally and communicationally efficient because only the first- and second-order information of the approximate objective function is communicated at each iteration. Second, the proposed estimators possess the oracle property after a limited number of iterations, without constraints on the number of machines. Third, the proposed method simultaneously selects variables and estimates parameters within a distributed framework, ensuring robustness to the specified response probability or propensity score function of the missing-data mechanism. Simulation studies and a real example illustrate the effectiveness of the proposed methodologies.
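The paper's particular smooth approximation is not specified in the abstract; one generic option, a Huber-type smoothing that replaces the kink of the quantile check loss with a quadratic inside a band of half-width h, is sketched below for illustration.

```python
def check_loss(u, tau):
    """Standard (non-differentiable) quantile check loss
    rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def smoothed_check_loss(u, tau, h=0.5):
    """Huber-type smoothing: quadratic inside the band |u| <= h (chosen
    so value and slope match the check loss at the band edges), check
    loss outside the band. Differentiable everywhere, including u = 0."""
    if abs(u) <= h:
        return u * u / (4 * h) + (tau - 0.5) * u + h / 4
    return check_loss(u, tau)

# The two losses agree at the band edge and beyond:
print(check_loss(1.0, 0.5), smoothed_check_loss(1.0, 0.5))  # → 0.5 0.5
```

Because the smoothed loss has well-defined first- and second-order information, only gradients and Hessian summaries need to cross the network, which is the source of the communication savings claimed above.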
Fund: Support of her postdoctoral research at the GFZ Helmholtz Centre for Geosciences. P. Pan acknowledges the financial support of the National Natural Science Foundation of China (Grant No. 52339001). H. Hofmann and Y. Ji acknowledge the financial support of the Helmholtz Association's Initiative and Networking Fund for the Helmholtz Young Investigator Group ARES (contract number VH-NG-1516).
Abstract: A multi-stage stress relaxation test was performed on a granodiorite sample to understand the deformation process prior to the macroscopic failure of brittle rocks, as well as the transient response during stress relaxation. Distributed optical fiber sensing was used to measure strains across the sample surface by helically wrapping a single-mode fiber around the cylindrical sample. Close agreement was observed between the circumferential strains obtained from the optical fibers and those from the extensometer. The reconstructed full-field strain contours show strain heterogeneity from the crack closure phase onward, and the strains in the later deformation phases are dominantly localized within the former high-strain zone. The Gini coefficient was used to quantify the degree of strain localization and shows an initial increase during the crack closure phase, a decrease during the linear elastic phase, and a subsequent increase during the post-yielding phase. This behavior corresponds to a process of initial localization from an imperfect boundary condition, homogenization, and eventual relocalization prior to the macroscopic failure of the sample. The transient strain rate decay during the stress relaxation phase was quantified using the p-value in the "Omori-like" power-law function. A higher initial stress at the onset of relaxation results in a lower p-value, indicating slower strain rate decay. As the sample approaches macroscopic failure, the lowest p-value shifts from the most damaged zone to adjacent areas, suggesting stress redistribution or crack propagation in deformed crystalline rocks under stress relaxation conditions.
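The Gini coefficient used above to quantify strain localization can be computed from a set of strain magnitudes with the standard sorted-rank formula; the sample values below are illustrative.

```python
def gini(values):
    """Gini coefficient of a list of non-negative values: 0 for a
    perfectly uniform field, approaching 1 as the total concentrates
    in ever fewer entries (i.e., strong localization)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([1.0, 1.0, 1.0, 1.0]))  # uniform strain field → 0.0
print(gini([0.0, 0.0, 0.0, 4.0]))  # fully localized → 0.75
```

Tracking this scalar over the loading phases is what reveals the localization-homogenization-relocalization sequence described above.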
Abstract: Nonlinear static procedures are widely adopted in structural engineering practice for seismic performance assessment due to their simplicity and computational efficiency. However, their reliability depends heavily on how the nonlinear behaviour of structural components is represented. The recent earthquakes in Albania (2019) and Türkiye (2023) have underscored the need for accurate assessment techniques, particularly for older reinforced concrete buildings with poor detailing. This study quantifies the discrepancies between default and user-defined component modelling in pushover analysis of pre-modern reinforced concrete structures, analysing two representative low- and mid-rise reinforced concrete frame buildings. The lumped plasticity approach incorporates moment-rotation relationships derived from actual member properties and reinforcement configurations, while the distributed plasticity approach uses software-generated default properties based on modern codes. Results show that the distributed plasticity models systematically overestimate both strength and deformation capacity by up to 35% compared to lumped plasticity models, especially in buildings with poor detailing and low concrete strength. These findings demonstrate that default software procedures, widely used in practice but not validated for pre-modern structures, produce dangerously unconservative seismic performance estimates. The study provides quantitative evidence of the critical need for tailored modelling strategies that reflect the actual conditions of the existing building stock.