Journal Articles
55 articles found
1. A Comprehensive Evaluation of Distributed Learning Frameworks in AI-Driven Network Intrusion Detection
Authors: Sooyong Jeong, Cheolhee Park, Dowon Hong, Changho Seo. Computers, Materials & Continua, 2026, No. 4, pp. 310-332.
With the growing complexity and decentralization of network systems, the attack surface has expanded, which has led to greater concerns over network threats. In this context, artificial intelligence (AI)-based network intrusion detection systems (NIDS) have been extensively studied, and recent efforts have shifted toward integrating distributed learning to enable intelligent and scalable detection mechanisms. However, most existing works focus on individual distributed learning frameworks, and there is a lack of systematic evaluations that compare different algorithms under consistent conditions. In this paper, we present a comprehensive evaluation of representative distributed learning frameworks, namely Federated Learning (FL), Split Learning (SL), hybrid collaborative learning (SFL), and fully distributed learning, in the context of AI-driven NIDS. Using recent benchmark intrusion detection datasets, a unified model backbone, and controlled distributed scenarios, we assess these frameworks across multiple criteria, including detection performance, communication cost, computational efficiency, and convergence behavior. Our findings highlight distinct trade-offs among the distributed learning frameworks, demonstrating that the optimal choice depends strongly on system constraints such as bandwidth availability, node resources, and data distribution. This work provides the first holistic analysis of distributed learning approaches for AI-driven NIDS and offers practical guidelines for designing secure and efficient intrusion detection systems in decentralized environments.
Keywords: network intrusion detection, network security, distributed learning
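As a toy illustration of the Federated Learning framework evaluated in this entry, the sketch below shows one round of Federated Averaging, the canonical FL aggregation rule. The weighting by local dataset size and the toy values are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of one Federated Averaging (FedAvg) round. Model weights are
# plain lists of floats; the sizes and values are illustrative.

def fedavg(client_weights, client_sizes):
    """Aggregate client models, weighting each by its local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w

# Two clients with unequal data: the larger client dominates the average.
w_global = fedavg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
print(w_global)  # [2.5, 3.5]
```

In a full FL round, each client would first run local training on its own intrusion-detection data before sending its weights to this aggregation step.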
2. A structured distributed learning framework for irregular cellular spatial-temporal traffic prediction
Authors: Xiangyu Chen, Kaisa Zhang, Gang Chuai, Weidong Gao, Xuewen Liu, Yibo Zhang, Yijian Hou. Digital Communications and Networks, 2025, No. 5, pp. 1457-1468.
Spatial-temporal traffic prediction technology is crucial for network planning, resource allocation optimization, and user experience improvement. With the development of virtual network operators, multi-operator collaborations, and edge computing, spatial-temporal traffic data has taken on a distributed nature. Consequently, non-centralized spatial-temporal traffic prediction solutions have emerged as a recent research focus. Currently, the majority of research adopts federated learning methods to train traffic prediction models distributed on each base station. This method reduces the additional burden on communication systems, but it has a drawback: it cannot handle irregular traffic data. Due to unstable wireless network environments, device failures, insufficient storage resources, etc., data missing inevitably occurs during the process of collecting traffic data, which results in the irregular nature of distributed traffic data. Yet commonly used traffic prediction models such as Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) typically assume that the data is complete and regular. To address the challenge of handling irregular traffic data, this paper transforms irregular traffic prediction into the problems of estimating latent variables and generating future traffic. To solve these problems, this paper introduces split learning to design a structured distributed learning framework. The framework comprises a Global-level Spatial structure mining Model (GSM) and several Node-level Generative Models (NGMs). The NGMs are Seq2Seq models deployed on the base stations, and the GSM is a graph neural network model deployed on the cloud or a central controller. First, the time embedding layer in the NGM establishes the mapping relationship between irregular traffic data and regular latent temporal feature variables. Second, the GSM collects statistical feature parameters of the latent temporal feature variables from the various nodes and executes graph embedding for spatial-temporal traffic data. Finally, the NGM generates future traffic based on the latent temporal and spatial feature variables. The introduction of the time attention mechanism enhances the framework's capability to handle irregular traffic data. The graph attention network introduces spatially correlated base station traffic feature information into local traffic prediction, which compensates for missing information in local irregular traffic data. The proposed framework effectively addresses the distributed prediction of irregular traffic data. In tests on real-world datasets, the proposed framework improves traffic prediction accuracy by 35% compared to other commonly used distributed traffic prediction methods.
Keywords: network measurement and analysis, distributed learning, irregular time series, cellular spatial-temporal traffic, traffic prediction
3. Serverless distributed learning for smart grid analytics
Authors: Gang Huang, Chao Wu, Yifan Hu, Chuangxin Guo. Chinese Physics B, 2021, No. 8, pp. 558-565.
The digitization, informatization, and intelligentization of physical systems require strong support from big data analysis. However, due to restrictions on data security and privacy and concerns about the cost of big data collection, transmission, and storage, it is difficult to do data aggregation in real-world power systems, which directly retards the effective implementation of smart grid analytics. Federated learning, an advanced distributed learning method proposed by Google, seems a promising solution to the above issues. Nevertheless, it relies on a server node to complete model aggregation, and the framework is limited to scenarios where data are independent and identically distributed. Thus, we here propose a serverless distributed learning platform based on blockchain to solve these two issues. In the proposed platform, the task of machine learning is performed according to smart contracts, and encrypted models are aggregated via a mechanism of knowledge distillation. Through this proposed method, a server node is no longer required, and the learning ability is no longer limited to independent and identically distributed scenarios. Experiments on a public electrical grid dataset verify the effectiveness of the proposed approach.
Keywords: smart grid, physical system, distributed learning, artificial intelligence
4. ADC-DL: Communication-Efficient Distributed Learning with Hierarchical Clustering and Adaptive Dataset Condensation
Authors: Zhipeng Gao, Yan Yang, Chen Zhao, Zijia Mo. China Communications, 2022, No. 12, pp. 73-85.
The rapid growth of modern mobile devices leads to a large amount of distributed data, which is extremely valuable for learning models. Unfortunately, model training by collecting all these original data on a centralized cloud server is not applicable due to data privacy and communication cost concerns, hindering artificial intelligence from empowering mobile devices. Moreover, these data are not independently and identically distributed (non-IID) because of their different contexts, which will deteriorate the performance of the model. To address these issues, we propose a novel distributed learning algorithm based on hierarchical clustering and adaptive dataset condensation, named ADC-DL, which learns a shared model by collecting the synthetic samples generated on each device. To tackle the heterogeneity of data distribution, we propose an entropy-TOPSIS comprehensive tiering model for hierarchical clustering, which distinguishes clients in terms of their data characteristics. Subsequently, synthetic dummy samples are generated based on the hierarchical structure utilizing adaptive dataset condensation. The procedure of dataset condensation can be adjusted adaptively according to the tier of the client. Extensive experiments demonstrate that ADC-DL outperforms existing algorithms in both prediction accuracy and communication cost.
Keywords: distributed learning, non-IID data partition, hierarchical clustering, adaptive dataset condensation
5. Communication-Censored Distributed Learning for Stochastic Configuration Networks
Authors: Yujun Zhou, Xiaowen Ge, Wu Ai. International Journal of Intelligence Science, 2022, No. 2, pp. 21-37.
This paper aims to reduce the communication cost of the distributed learning algorithm for stochastic configuration networks (SCNs), in which information exchange between the learning agents is conducted only at trigger times. For this purpose, we propose a communication-censored distributed learning algorithm for SCNs, namely ADMM-SCN-ET, by introducing an event-triggered communication mechanism into the alternating direction method of multipliers (ADMM). To avoid unnecessary information transmissions, each learning agent is equipped with a trigger function. Only when the event-trigger error exceeds a specified threshold, and thus meets the trigger condition, does the agent transmit its variable information to its neighbors and update its state. The simulation results show that the proposed algorithm can effectively reduce the communication cost of training decentralized SCNs and save communication resources.
Keywords: event-triggered communication, distributed learning, stochastic configuration networks (SCN), alternating direction method of multipliers (ADMM)
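The censoring idea described in this abstract can be sketched generically: an agent transmits its state only when the deviation from the last transmitted state exceeds a threshold. The threshold value, the max-norm error measure, and the state representation below are assumptions for illustration, not the paper's exact trigger function.

```python
# Hedged sketch of an event-triggered communication rule: transmit only when
# the deviation from the last transmitted state crosses a threshold.

class TriggeredAgent:
    def __init__(self, state, threshold):
        self.state = list(state)
        self.last_sent = list(state)
        self.threshold = threshold

    def update(self, new_state):
        """Return the new state if the trigger fires, else None (no transmission)."""
        self.state = list(new_state)
        err = max(abs(a - b) for a, b in zip(self.state, self.last_sent))
        if err > self.threshold:
            self.last_sent = list(self.state)
            return self.state
        return None

agent = TriggeredAgent([0.0, 0.0], threshold=0.5)
print(agent.update([0.1, 0.2]))  # None: error below threshold, nothing sent
print(agent.update([0.9, 0.2]))  # [0.9, 0.2]: trigger fires, state transmitted
```

Small updates are censored, so neighbors only hear from an agent when its state has changed enough to matter, which is exactly how communication rounds are saved.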
6. The adaptive distributed learning based on homomorphic encryption and blockchain (cited: 1)
Authors: YANG Ruizhe, ZHAO Xuehui, ZHANG Yanhua, SI Pengbo, TENG Yinglei. High Technology Letters, 2022, No. 4, pp. 337-344.
The privacy and security of data are current research hotspots and challenges. For this issue, an adaptive scheme of distributed learning based on homomorphic encryption and blockchain is proposed. Specifically, using homomorphic encryption, the computing party iteratively aggregates the learning models from distributed participants, so that the privacy of both the data and the model is ensured. Moreover, the aggregations are recorded and verified by blockchain, which prevents attacks from malicious nodes and guarantees the reliability of learning. For these sophisticated privacy and security technologies, the computation cost and energy consumption of both the encrypted learning and the consensus reaching are analyzed, based on which a joint optimization of computation resource allocation and adaptive aggregation to minimize the loss function is established, followed by a realistic solution. Finally, simulations and analysis evaluate the performance of the proposed scheme.
Keywords: blockchain, distributed machine learning (DML), privacy, security
7. Data complexity-based batch sanitization method against poison in distributed learning
Authors: Silv Wang, Kai Fan, Kuan Zhang, Hui Li, Yintang Yang. Digital Communications and Networks, 2024, No. 2, pp. 416-428.
The security of Federated Learning (FL)/Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating training samples; such attacks are therefore called causative availability indiscriminate attacks. Facing the problem that existing data sanitization methods are hard to apply to real-time applications due to their tedious process and heavy computations, we propose a new supervised batch detection method for poison, which can quickly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model stockpiles knowledge about poison, which can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML and other online or offline scenarios.
Keywords: distributed machine learning security, federated learning, data poisoning attacks, data sanitization, batch detection, data complexity
8. Distributed Byzantine-Resilient Learning of Multi-UAV Systems via Filter-Based Centerpoint Aggregation Rules
Authors: Yukang Cui, Linzhen Cheng, Michael Basin, Zongze Wu. IEEE/CAA Journal of Automatica Sinica, 2025, No. 5, pp. 1056-1058.
Dear Editor, Through distributed machine learning, multi-UAV systems can achieve global optimization goals, such as optimal target tracking, without a centralized server by leveraging local calculation and communication with neighbors. In this work, we implement the stochastic gradient descent (SGD) algorithm in a distributed manner to optimize tracking errors based on the local state and the aggregation of the neighbors' estimates. However, Byzantine agents can mislead neighbors, causing deviations from optimal tracking. We prove that the swarm achieves resilient convergence if the aggregated results lie within the normal neighbors' convex hull, which can be guaranteed by the introduced centerpoint-based aggregation rule. In the given simulated scenarios, distributed learning using average, geometric median (GM), and coordinate-wise median (CM) based aggregation rules fails to track the target. Compared to solely using the centerpoint aggregation method, our approach, which combines a pre-filter with the centerpoint aggregation rule, significantly enhances resilience against Byzantine attacks, achieving faster convergence and smaller tracking errors.
Keywords: multi-UAV systems, distributed learning, filter-based centerpoint aggregation, Byzantine resilience, optimal target tracking, stochastic gradient descent (SGD)
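Of the baseline aggregation rules this letter compares against, the coordinate-wise median (CM) is easy to sketch: each coordinate of the aggregate is the median of that coordinate across neighbor estimates, which bounds the influence of any single outlier. The values below are illustrative, not from the letter's simulations.

```python
# Sketch of coordinate-wise median (CM) aggregation: a single Byzantine
# outlier cannot drag the aggregate away from the honest estimates.

from statistics import median

def coordinate_wise_median(estimates):
    """Aggregate neighbor estimates coordinate by coordinate."""
    return [median(coord) for coord in zip(*estimates)]

honest = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2], [1.0, 2.0]]
byzantine = [[100.0, -100.0]]           # one attacker sends an extreme outlier
agg = coordinate_wise_median(honest + byzantine)
print(agg)  # [1.0, 2.0]: the outlier is ignored by the median
```

The letter's point is that CM (like the plain average and GM) can still fail in their tracking scenarios, which motivates the stronger centerpoint-based rule with a pre-filter.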
9. A Novel Clustered Distributed Federated Learning Architecture for Tactile Internet of Things Applications in 6G Environment
Authors: Omar Alnajar, Ahmed Barnawi. Computer Modeling in Engineering & Sciences, 2025, No. 6, pp. 3861-3897.
The Tactile Internet of Things (TIoT) promises transformative applications, ranging from remote surgery to industrial robotics, by incorporating haptic feedback into traditional IoT systems. Yet TIoT's stringent requirements for ultra-low latency, high reliability, and robust privacy present significant challenges. Conventional centralized Federated Learning (FL) architectures struggle with latency and privacy constraints, while fully distributed FL (DFL) faces scalability and non-IID data issues as client populations expand and datasets become increasingly heterogeneous. To address these limitations, we propose a Clustered Distributed Federated Learning (CDFL) architecture tailored for a 6G-enabled TIoT environment. Clients are grouped into clusters based on data similarity and/or geographical proximity, enabling local intra-cluster aggregation before inter-cluster model sharing. This hierarchical, peer-to-peer approach reduces communication overhead, mitigates non-IID effects, and eliminates single points of failure. By offloading aggregation to the network edge and leveraging dynamic clustering, CDFL enhances both computational and communication efficiency. Extensive analysis and simulation demonstrate that CDFL outperforms both centralized FL and DFL as the number of clients grows. Specifically, CDFL demonstrates up to a 30% reduction in training time under highly heterogeneous data distributions, indicating faster convergence, and it reduces communication overhead by approximately 40% compared to DFL. These improvements and the enhanced network performance metrics validate CDFL as a scalable, privacy-preserving solution for practical, next-generation TIoT deployments.
Keywords: distributed federated learning, Tactile Internet of Things, clustering, peer-to-peer
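The two-level aggregation at the heart of CDFL can be sketched in a few lines: average models inside each cluster first, then average the cluster models. The uniform averaging, the cluster assignments, and the toy values are assumptions for illustration; the paper's similarity- and proximity-based clustering is not reproduced here.

```python
# Rough sketch of hierarchical (cluster-then-global) aggregation: an
# intra-cluster averaging step followed by an inter-cluster averaging step.

def average(models):
    """Element-wise mean of a list of model vectors."""
    return [sum(col) / len(col) for col in zip(*models)]

def clustered_round(clusters):
    """clusters: list of clusters, each a list of client model vectors."""
    cluster_models = [average(c) for c in clusters]   # intra-cluster step
    return average(cluster_models)                    # inter-cluster step

clusters = [
    [[1.0], [3.0]],         # cluster A -> [2.0]
    [[5.0], [7.0], [9.0]],  # cluster B -> [7.0]
]
print(clustered_round(clusters))  # [4.5]
```

Because only one model per cluster crosses cluster boundaries, the inter-cluster traffic grows with the number of clusters rather than the number of clients, which is the source of the communication savings the abstract reports.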
10. CASBA: Capability-Adaptive Shadow Backdoor Attack against Federated Learning
Authors: Hongwei Wu, Guojian Li, Hanyun Zhang, Zi Ye, Chao Ma. Computers, Materials & Continua, 2026, No. 3, pp. 1139-1163.
Federated Learning (FL) protects data privacy through a distributed training mechanism, yet its decentralized nature also introduces new security vulnerabilities. Backdoor attacks inject malicious triggers into the global model through compromised updates, posing significant threats to model integrity and becoming a key focus in FL security. Existing backdoor attack methods typically embed triggers directly into original images and consider only data heterogeneity, resulting in limited stealth and adaptability. To address the heterogeneity of malicious client devices, this paper proposes a novel backdoor attack method named Capability-Adaptive Shadow Backdoor Attack (CASBA). By incorporating measurements of clients' computational and communication capabilities, CASBA employs a dynamic hierarchical attack strategy that adaptively aligns attack intensity with available resources. Furthermore, an improved deep convolutional generative adversarial network (DCGAN) is integrated into the attack pipeline to embed triggers without modifying the original data, significantly enhancing stealthiness. Comparative experiments with the Shadow Backdoor Attack (SBA) across multiple scenarios demonstrate that CASBA dynamically adjusts resource consumption based on device capabilities, reducing average memory usage per iteration by 5.8%. CASBA improves resource efficiency while keeping the drop in attack success rate within 3%. Additionally, the effectiveness of CASBA against three robust FL algorithms is also validated.
Keywords: federated learning, backdoor attack, generative adversarial network, adaptive attack strategy, distributed machine learning
11. Mitigating Attribute Inference in Split Learning via Channel Pruning and Adversarial Training
Authors: Afnan Alhindi, Saad Al-Ahmadi, Mohamed Maher Ben Ismail. Computers, Materials & Continua, 2026, No. 3, pp. 1465-1489.
Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server subnetworks in order to mitigate the exposure of sensitive data and reduce the overhead on client devices, thereby making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches. In fact, the intermediate data transmitted to the server sub-model may include patterns or information that could reveal sensitive data. Moreover, achieving a balance between model utility and data privacy has emerged as a challenging problem. In this article, we propose a novel defense approach that combines: (i) adversarial learning, and (ii) network channel pruning. In particular, the proposed adversarial learning approach is specifically designed to reduce the risk of private data exposure while maintaining high performance on the utility task. On the other hand, the suggested channel pruning enables the model to adaptively adjust and reactivate pruned channels while conducting adversarial training. The integration of these two techniques reduces the informativeness of the intermediate data transmitted by the client sub-model, thereby enhancing its robustness against attribute inference attacks without adding significant computational overhead, making it well-suited for IoT devices, mobile platforms, and Internet of Vehicles (IoV) scenarios. The proposed defense approach was evaluated using EfficientNet-B0, a widely adopted compact model, along with three benchmark datasets. The obtained results showcase its superior defense capability against attribute inference attacks compared to existing state-of-the-art methods. These findings demonstrate the effectiveness of the proposed channel pruning-based adversarial training approach in achieving the intended compromise between utility and privacy within SL frameworks; in fact, the classification accuracy attained by the attackers dropped drastically, by 70%.
Keywords: split learning, privacy-preserving split learning, distributed collaborative machine learning, channel pruning, adversarial learning, resource-constrained devices
12. SIGNGD with Error Feedback Meets Lazily Aggregated Technique: Communication-Efficient Algorithms for Distributed Learning (cited: 1)
Authors: Xiaoge Deng, Tao Sun, Feng Liu, Dongsheng Li. Tsinghua Science and Technology, 2022, No. 1, pp. 174-185.
The proliferation of massive datasets has led to significant interest in distributed algorithms for solving large-scale machine learning problems. However, communication overhead is a major bottleneck that hampers the scalability of distributed machine learning systems. In this paper, we design two communication-efficient algorithms for distributed learning tasks. The first is named EF-SIGNGD, in which we use the 1-bit (sign-based) gradient quantization method to save communication bits. Moreover, the error feedback technique, i.e., incorporating the error made by the compression operator into the next step, is employed for the convergence guarantee. The second algorithm is called LE-SIGNGD, in which we introduce a well-designed lazy gradient aggregation rule to EF-SIGNGD that can detect gradients with small changes and reuse the outdated information. LE-SIGNGD saves communication costs both in transmitted bits and in communication rounds. Furthermore, we show that LE-SIGNGD is convergent under some mild assumptions. The effectiveness of the two proposed algorithms is demonstrated through experiments on both real and synthetic data.
Keywords: distributed learning, communication-efficient algorithm, convergence analysis
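The two ingredients of EF-SIGNGD named in the abstract, sign-based 1-bit quantization and error feedback, can be sketched together: each worker transmits only the signs of its (error-corrected) gradient plus one scale, and keeps the compression residual to add back at the next step. The mean-absolute-value scaling is a common choice but an assumption here, not necessarily the paper's.

```python
# Minimal sketch of 1-bit sign compression with error feedback: transmit
# sign(g + residual) times a scalar, carry the compression error forward.

def compress_with_feedback(grad, residual):
    corrected = [g + r for g, r in zip(grad, residual)]
    scale = sum(abs(c) for c in corrected) / len(corrected)
    compressed = [scale * (1.0 if c >= 0 else -1.0) for c in corrected]
    new_residual = [c - q for c, q in zip(corrected, compressed)]
    return compressed, new_residual

grad = [0.9, -0.1]
compressed, residual = compress_with_feedback(grad, [0.0, 0.0])
print(compressed)  # [0.5, -0.5]: only signs plus one scale are transmitted
print(residual)    # roughly [0.4, 0.4]: the error is fed back next step
```

Because the residual is re-injected before the next compression, information lost to quantization is not discarded but merely delayed, which is what makes the convergence guarantee possible.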
13. On the development of cat swarm metaheuristic using distributed learning strategies and the applications
Authors: Usha Manasi Mohapatra, Babita Majhi, Alok Kumar Jagadev. International Journal of Intelligent Computing and Cybernetics, 2019, No. 2, pp. 224-244.
Purpose: The purpose of this paper is to propose three distributed learning-based metaheuristic algorithms for the identification of nonlinear systems. The proposed algorithms are experimented with in this study to address problems for which input data are available at different geographic locations. In addition, the models are tested on nonlinear systems with different noise conditions. In a nutshell, the suggested models aim to handle voluminous data with low communication overhead compared to traditional centralized processing methodologies.
Design/methodology/approach: Population-based evolutionary algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO), and cat swarm optimization (CSO) are implemented in a distributed form to address the system identification problem with distributed input data. Of the different distributed approaches mentioned in the literature, the study considers incremental and diffusion strategies.
Findings: Performances of the proposed distributed learning-based algorithms are compared under different noise conditions. The experimental results indicate that CSO performs better than GA and PSO at all noise strengths with respect to accuracy and error convergence rate, while incremental CSO is slightly superior to diffusion CSO.
Originality/value: This paper employs evolutionary algorithms using distributed learning strategies and applies them to the identification of unknown systems. Very few existing studies have experimented with these distributed learning strategies for the parameter estimation task.
Keywords: system identification, wireless sensor network, diffusion learning strategy, distributed learning-based cat swarm optimization, incremental learning strategy
14. Distributed Asynchronous Learning for Multipath Data Transmission Based on P-DDQN (cited: 1)
Authors: Kang Liu, Wei Quan, Deyun Gao, Chengxiao Yu, Mingyuan Liu, Yuming Zhang. China Communications, 2021, No. 8, pp. 62-74.
Adaptive packet scheduling can efficiently enhance the performance of multipath data transmission. However, realizing precise packet scheduling is challenging due to the highly dynamic and unpredictable nature of network link states. To this end, this paper proposes a distributed asynchronous deep reinforcement learning framework to intensify the dynamics and prediction of adaptive packet scheduling. Our framework contains two parts: local asynchronous packet scheduling and a distributed cooperative control center. For local asynchronous packet scheduling, an asynchronous prioritized replay double deep Q-learning packet scheduling algorithm is proposed for dynamic adaptive packet scheduling learning, which employs a prioritized replay double deep Q-learning network (P-DDQN) for the fitting analysis. For the distributed cooperative control center, a distributed scheduling learning and neural fitting acceleration algorithm is proposed to adaptively update the neural network parameters of P-DDQN for more precise packet scheduling. Experimental results show that our solution outperforms the Random Weight and Round-Robin algorithms in throughput and loss ratio. Further, our solution is 1.32 times and 1.54 times better than the Random Weight and Round-Robin algorithms, respectively, in the stability of multipath data transmission.
Keywords: distributed asynchronous learning, multipath data transmission, deep reinforcement learning
15. Adaptive Load Balancing for Parameter Servers in Distributed Machine Learning over Heterogeneous Networks (cited: 1)
Authors: CAI Weibo, YANG Shulin, SUN Gang, ZHANG Qiming, YU Hongfang. ZTE Communications, 2023, No. 1, pp. 72-80.
In distributed machine learning (DML) based on the parameter server (PS) architecture, an unbalanced communication load distribution across PSs will lead to a significant slowdown of model synchronization in heterogeneous networks due to low utilization of bandwidth. To address this problem, a network-aware adaptive PS load distribution scheme is proposed, which accelerates model synchronization by proactively adjusting the communication load on PSs according to network states. We evaluate the proposed scheme on MXNet, a real-world distributed training platform, and the results show that our scheme achieves up to a 2.68-times speed-up of model training in dynamic and heterogeneous network environments.
Keywords: distributed machine learning, network awareness, parameter server, load distribution, heterogeneous network
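One simple way to realize the network-aware idea in this abstract is to give each parameter server a slice of the model proportional to its measured bandwidth, so that all transfers finish at roughly the same time. This proportional scheme, the bandwidth figures, and the rounding rule below are illustrative assumptions, not the paper's actual algorithm.

```python
# Hedged sketch of bandwidth-proportional parameter-server load distribution.

def proportional_shares(total_params, bandwidths):
    """Split total_params across servers in proportion to bandwidth."""
    total_bw = sum(bandwidths)
    shares = [int(total_params * bw / total_bw) for bw in bandwidths]
    shares[-1] += total_params - sum(shares)  # hand the rounding remainder to one server
    return shares

# The faster link is assigned a proportionally larger slice of the model.
print(proportional_shares(1000, [100, 300]))  # [250, 750]
```

An adaptive scheme would re-measure bandwidths between synchronization rounds and recompute the shares, which is the "proactive adjustment according to network states" the abstract describes.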
16. Autonomous Vehicle Platoons in Urban Road Networks: A Joint Distributed Reinforcement Learning and Model Predictive Control Approach
Authors: Luigi D'Alfonso, Francesco Giannini, Giuseppe Franzè, Giuseppe Fedele, Francesco Pupo, Giancarlo Fortino. IEEE/CAA Journal of Automatica Sinica, 2024, No. 1, pp. 141-156.
In this paper, platoons of autonomous vehicles operating in urban road networks are considered. From a methodological point of view, the problem of interest consists of formally characterizing vehicle state trajectory tubes by means of routing decisions complying with traffic congestion criteria. To this end, a novel distributed control architecture is conceived by taking advantage of two methodologies: deep reinforcement learning and model predictive control. On one hand, the routing decisions are obtained by using a distributed reinforcement learning algorithm that exploits available traffic data at each road junction. On the other hand, a bank of model predictive controllers is in charge of computing the most adequate control action for each involved vehicle. Such tasks are here combined into a single framework: the deep reinforcement learning output (action) is translated into a set-point to be tracked by the model predictive controller; conversely, the current vehicle position, resulting from the application of the control move, is exploited by the deep reinforcement learning unit for improving its reliability. The main novelty of the proposed solution lies in its hybrid nature: on one hand it fully exploits deep reinforcement learning capabilities for decision-making purposes; on the other hand, time-varying hard constraints are always satisfied during the dynamical platoon evolution imposed by the computed routing decisions. To efficiently evaluate the performance of the proposed control architecture, a co-design procedure, involving the SUMO and MATLAB platforms, is implemented so that complex operating environments can be used, and the information coming from road maps (links, junctions, obstacles, semaphores, etc.) and vehicle state trajectories can be shared and exchanged. Finally, by considering as an operating scenario a real entire city block and a platoon of eleven vehicles described by double-integrator models, several simulations have been performed with the aim of highlighting the main features of the proposed approach. Moreover, it is important to underline that in different operating scenarios the proposed reinforcement learning scheme is capable of significantly reducing traffic congestion phenomena when compared with well-reputed competitors.
Keywords: distributed model predictive control; distributed reinforcement learning; routing decisions; urban road networks
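The closed loop described in the abstract above, where a learned routing decision becomes a set-point tracked by a predictive controller over double-integrator vehicle dynamics, can be sketched as follows. This is an illustrative simplification, not the paper's formulation: the routing unit is replaced by a fixed target, and the MPC bank by a grid search over candidate accelerations with a terminal tracking cost.

```python
# Toy sketch: receding-horizon tracking of a set-point for a double-integrator
# vehicle. All names and the grid-search "MPC" are illustrative assumptions.

DT = 0.1                                  # discretization step [s]
HORIZON = 10                              # prediction horizon (steps)
ACTIONS = [-2.0, -1.0, 0.0, 1.0, 2.0]     # candidate accelerations [m/s^2]

def terminal_cost(pos, vel, accel, target):
    """Predicted terminal miss when holding `accel` over the horizon."""
    for _ in range(HORIZON):
        vel += accel * DT                 # double-integrator rollout
        pos += vel * DT
    return (pos - target) ** 2 + vel ** 2

def mpc_step(pos, vel, target):
    """Pick the candidate acceleration whose rollout best reaches the set-point."""
    return min(ACTIONS, key=lambda a: terminal_cost(pos, vel, a, target))

def simulate(target, steps=200):
    """Apply the first control move each step (receding horizon)."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        u = mpc_step(pos, vel, target)
        vel += u * DT
        pos += vel * DT
    return pos

final = simulate(target=10.0)             # set-point from a (stubbed) routing unit
print(round(final, 1))
```

In the paper's architecture the `target` would come from the distributed reinforcement learning unit at each junction, and the resulting vehicle position would be fed back to it; here the feedback path is omitted for brevity.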
Pseudo-label based semi-supervised learning in the distributed machine learning framework
17
Authors: WANG Xiaoxi, WU Wenjun, YANG Feng, SI Pengbo, ZHANG Xuanyi, ZHANG Yanhua 《High Technology Letters》 EI CAS 2022, No. 2, pp. 172-180 (9 pages)
With the emergence of various intelligent applications, machine learning technologies face many challenges in practice, including large-scale models, application-oriented real-time datasets, and the limited capabilities of nodes. Therefore, distributed machine learning (DML) and semi-supervised learning methods, which help solve these problems, have received attention in both academia and industry. In this paper, the semi-supervised learning method and the data-parallelism DML framework are combined. The pseudo-label based local loss function for each distributed node is studied, and the stochastic gradient descent (SGD) based distributed parameter update principle is derived. A demo that implements pseudo-label based semi-supervised learning in the DML framework is conducted, and the CIFAR-10 dataset for target classification is used to evaluate the performance. Experimental results confirm the convergence and the accuracy of the model using pseudo-label based semi-supervised learning in the DML framework. When the proportion of the pseudo-label dataset is 20%, the accuracy of the model is over 90% as long as the number of local parameter update steps between two global aggregations is less than 5. Besides, fixing the global aggregation interval to 3, the model converges with acceptable performance degradation when the proportion of the pseudo-label dataset varies from 20% to 80%.
Keywords: distributed machine learning (DML); semi-supervised learning; deep neural network (DNN)
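The scheme described above combines two ingredients: confident model predictions on unlabeled data become pseudo-labels in each node's local loss, and nodes periodically average their parameters (a global aggregation every `tau` local steps). A toy sketch with 1-D logistic regression, all setup details hypothetical and far simpler than the paper's CIFAR-10 experiment:

```python
# Toy sketch: pseudo-label SSL inside a data-parallel DML loop.
# Two nodes train a shared 1-D logistic model (w, b); every `tau` local
# SGD steps, parameters are averaged (global aggregation).
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(w, b, x, y, lr=0.1):
    """One SGD step on the logistic loss for a single (x, y) sample."""
    p = sigmoid(w * x + b)
    return w - lr * (p - y) * x, b - lr * (p - y)

def local_update(w, b, labeled, unlabeled, tau, threshold=0.8):
    """Run `tau` local steps; confident unlabeled points get pseudo-labels."""
    for _ in range(tau):
        for x, y in labeled:
            w, b = sgd_step(w, b, x, y)
        for x in unlabeled:
            p = sigmoid(w * x + b)
            if p > threshold or p < 1 - threshold:   # confident prediction
                w, b = sgd_step(w, b, x, round(p))   # use it as a pseudo-label
    return w, b

random.seed(0)
# two nodes, each with few labels: class 0 near x=-2, class 1 near x=+2
nodes = []
for _ in range(2):
    labeled = [(random.gauss(-2, 0.5), 0) for _ in range(5)] + \
              [(random.gauss(2, 0.5), 1) for _ in range(5)]
    unlabeled = [random.gauss(-2, 0.5) for _ in range(20)] + \
                [random.gauss(2, 0.5) for _ in range(20)]
    nodes.append((labeled, unlabeled))

w, b = 0.0, 0.0
for _ in range(10):                                  # global aggregation rounds
    results = [local_update(w, b, lab, unl, tau=3) for lab, unl in nodes]
    w = sum(lw for lw, _ in results) / len(results)  # average parameters
    b = sum(lb for _, lb in results) / len(results)

print(sigmoid(w * 2 + b), sigmoid(w * -2 + b))       # class-1 / class-0 scores
```

Note that at initialization the model is unconfident everywhere, so only labeled data is used at first; pseudo-labels kick in once the model sharpens, which is the intended behavior of the confidence threshold.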
The Technological Progress,Applications,and Challenges of Federated Learning
18
Authors: Yanling Liu, Yun Li 《Proceedings of Business and Economic Studies》 2025, No. 2, pp. 247-252 (6 pages)
With the advent of the era of big data, the exponential growth of data generation has provided unprecedented opportunities for innovation and insight in various fields. However, increasing privacy and security concerns and the phenomenon of “data silos” limit the collaborative utilization of data. This paper systematically discusses the technological progress of federated learning, including its basic framework, model optimization, communication-efficiency improvements, privacy-protection mechanisms, and integration with other technologies. It then analyzes the broad applications of federated learning in healthcare, the Internet of Things, the Internet of Vehicles, smart cities, and financial services, and summarizes its challenges in data heterogeneity, communication overhead, privacy protection, scalability, and security. Finally, this paper looks forward to the future development of federated learning and proposes potential research paths in efficient algorithm design, privacy-protection mechanism optimization, heterogeneous data processing, and cross-industry collaboration.
Keywords: federated learning; data privacy; distributed machine learning; heterogeneous data
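The "basic framework" this survey refers to is commonly a FedAvg-style scheme: clients train locally and a server aggregates their models by a data-size-weighted average. A minimal sketch, with models reduced to plain weight lists purely for illustration:

```python
# Toy sketch of FedAvg-style aggregation (models are flat weight lists here;
# in practice they would be full network parameter tensors).

def fedavg(client_weights, client_sizes):
    """Aggregate client models by a data-size-weighted average of weights."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(dim)
    ]

clients = [[1.0, 2.0], [3.0, 4.0]]   # two clients' local model weights
sizes = [1, 3]                       # their local dataset sizes
print(fedavg(clients, sizes))        # → [2.5, 3.5]
```

The weighting by dataset size is what ties the aggregate to the global empirical loss; the survey's data-heterogeneity challenge arises precisely because clients' local distributions differ, so this average can drift from any single client's optimum.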
SatFed:A Resource-Efficient LEO-Satellite-Assisted Heterogeneous Federated Learning Framework
19
Authors: Yuxin Zhang, Zheng Lin, Zhe Chen, Zihan Fang, Xianhao Chen, Wenjun Zhu, Jin Zhao, Yue Gao 《Engineering》 2025, No. 11, pp. 115-126 (12 pages)
Traditional federated learning (FL) frameworks rely heavily on terrestrial networks, whose coverage limitations and increasing bandwidth congestion significantly hinder model convergence. Fortunately, the advancement of low-Earth-orbit (LEO) satellite networks offers promising new communication avenues to augment traditional terrestrial FL. Despite this potential, the limited satellite-ground communication bandwidth and the heterogeneous operating environments of ground devices, including variations in data, bandwidth, and computing power, pose substantial challenges for effective and robust satellite-assisted FL. To address these challenges, we propose SatFed, a resource-efficient satellite-assisted heterogeneous FL framework. SatFed implements freshness-based model-prioritization queues to optimize the use of highly constrained satellite-ground bandwidth, ensuring the transmission of the most critical models. Additionally, a multigraph is constructed to capture the real-time heterogeneous relationships between devices, including data distribution, terrestrial bandwidth, and computing capability. This multigraph enables SatFed to aggregate satellite-transmitted models into peer guidance, improving local training in heterogeneous environments. Extensive experiments with real-world LEO satellite networks demonstrate that SatFed achieves superior performance and robustness compared with state-of-the-art benchmarks.
Keywords: low-Earth-orbit satellite networks; distributed machine learning; federated learning; system heterogeneity
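A freshness-based prioritization queue of the kind the abstract describes can be approximated with a heap: when a satellite pass can carry only `budget` models, the most stale updates go first. The selection policy and names below are assumptions for illustration, not SatFed's exact design:

```python
# Rough sketch: under a per-pass uplink budget, transmit the device models
# whose last successful transmission is the oldest (highest staleness).
import heapq

def select_for_uplink(models, budget):
    """models: list of (device_id, staleness), where staleness counts rounds
    since that device's model last reached the satellite.
    Returns the device ids to transmit, most stale first."""
    heap = [(-staleness, dev) for dev, staleness in models]  # max-heap via negation
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(min(budget, len(heap)))]

queue = [("dev-a", 1), ("dev-b", 5), ("dev-c", 3), ("dev-d", 0)]
print(select_for_uplink(queue, budget=2))   # → ['dev-b', 'dev-c']
```

A real scheduler would combine staleness with model utility and link quality; this sketch only shows the queue mechanics under a hard bandwidth budget.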
Exploring crash induction strategies in within-visual-range air combat based on distributional reinforcement learning
20
Authors: Zetian HU, Xuefeng LIANG, Jun ZHANG, Xiaochuan YOU, Chengcheng MA 《Chinese Journal of Aeronautics》 2025, No. 9, pp. 350-364 (15 pages)
Within-visual-range (WVR) air combat is a highly dynamic and uncertain domain where effective strategies require intelligent and adaptive decision-making. Traditional approaches, including rule-based methods and conventional reinforcement learning (RL) algorithms, often focus on maximizing engagement outcomes through direct combat superiority. However, these methods overlook alternative tactics, such as inducing adversaries to crash, which can achieve decisive victories at lower risk and cost. This study proposes Alpha Crash, a novel distributional-reinforcement-learning-based agent specifically designed to defeat opponents by leveraging crash induction strategies. The approach integrates an improved QR-DQN framework to address uncertainties and adversarial tactics, incorporating advanced pilot experience into its reward functions. Extensive simulations reveal Alpha Crash's robust performance, achieving a 91.2% win rate across diverse scenarios by effectively guiding opponents into critical errors. Visualization and altitude analyses illustrate the agent's three-stage crash induction strategy, which exploits adversaries' vulnerabilities. These findings underscore Alpha Crash's potential to enhance autonomous decision-making and strategic innovation in real-world air combat applications.
Keywords: unmanned combat aerial vehicle; decision-making; distributional reinforcement learning; within-visual-range air combat; crash induction strategy
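The QR-DQN framework the abstract builds on represents the return distribution by N quantile estimates at midpoint levels tau_i = (2i+1)/2N, trained with a quantile Huber loss. A generic sketch of that idea (not the paper's Alpha Crash agent), fitting quantiles to a fixed sample set with the plain quantile-regression subgradient, i.e. the kappa→0 limit of the quantile Huber loss:

```python
# Generic illustration of quantile regression as used in QR-DQN: each
# estimate q_i converges to the tau_i-quantile of the target samples.

def fit_quantiles(samples, n=4, lr=0.01, epochs=1000):
    """Fit n quantile estimates at midpoint levels (2i+1)/2n to `samples`
    by stochastic subgradient steps on the quantile-regression loss."""
    taus = [(2 * i + 1) / (2 * n) for i in range(n)]
    qs = [0.0] * n
    for _ in range(epochs):
        for i, tau in enumerate(taus):
            for t in samples:
                # subgradient: push q up with weight tau when t >= q,
                # down with weight (1 - tau) when t < q
                qs[i] += lr * (tau - (1.0 if t < qs[i] else 0.0))
    return qs

samples = [1.0, 2.0, 3.0, 4.0]   # toy "return" samples for one state-action
qs = fit_quantiles(samples)
print([round(q, 1) for q in qs])
```

In the full algorithm the targets come from a Bellman backup of the next state's quantiles rather than a fixed sample set, and the Huber smoothing (kappa > 0) is kept for stable gradients; only the quantile-level weighting is shown here.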