Journal Articles
45 articles found
1. Data complexity-based batch sanitization method against poison in distributed learning
Authors: Silv Wang, Kai Fan, Kuan Zhang, Hui Li, Yintang Yang. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 2, pp. 416-428.
The security of Federated Learning (FL)/Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating training samples; such attacks are therefore called causative availability indiscriminate attacks. Because existing data sanitization methods are hard to apply to real-time applications due to their tedious processes and heavy computations, we propose a new supervised batch detection method for poison, which can quickly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model stockpiles knowledge about poison, which can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML and other online or offline scenarios.
Keywords: Distributed machine learning security; Federated learning; Data poisoning attacks; Data sanitization; Batch detection; Data complexity
Read online | Download PDF
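The batch hierarchical detection process described above, scoring whole batches first and drilling into only the flagged ones, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `detector` stands in for their trained data-complexity model, and the binary-split rule is an assumption.

```python
def batch_sanitize(batches, detector, hierarchy_split=2):
    """Hierarchical batch sanitization sketch: a flagged batch is split and its
    parts re-scored, so clean data is cleared in large chunks while suspected
    poison is localized. A flagged single sample is discarded."""
    clean = []
    queue = list(batches)
    while queue:
        batch = queue.pop()
        if detector(batch):                     # batch flagged as possibly poisoned
            if len(batch) > 1:
                mid = len(batch) // hierarchy_split
                queue += [batch[:mid], batch[mid:]]  # drill down into the halves
            # else: flagged singleton is dropped
        else:
            clean.append(batch)                 # whole batch cleared at once
    return clean

# toy detector: flag any batch containing an outlier value
clean = batch_sanitize([[1, 2], [3, 100]], lambda b: any(x > 10 for x in b))
```

In this toy run, the poisoned value 100 is isolated and discarded while the samples 1, 2, and 3 survive.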
2. Serverless distributed learning for smart grid analytics
Authors: Gang Huang, Chao Wu, Yifan Hu, Chuangxin Guo. Chinese Physics B (SCIE, EI, CAS, CSCD), 2021, Issue 8, pp. 558-565.
The digitization, informatization, and intelligentization of physical systems require strong support from big data analysis. However, due to restrictions on data security and privacy and concerns about the cost of big data collection, transmission, and storage, it is difficult to aggregate data in real-world power systems, which directly retards the effective implementation of smart grid analytics. Federated learning, an advanced distributed learning method proposed by Google, seems a promising solution to the above issues. Nevertheless, it relies on a server node to complete model aggregation, and the framework is limited to scenarios where data are independent and identically distributed. We therefore propose a serverless distributed learning platform based on blockchain to solve these two issues. In the proposed platform, the machine learning task is performed according to smart contracts, and encrypted models are aggregated via a knowledge distillation mechanism. Through this method, a server node is no longer required, and the learning ability is no longer limited to independent and identically distributed scenarios. Experiments on a public electrical grid dataset verify the effectiveness of the proposed approach.
Keywords: Smart grid; Physical system; Distributed learning; Artificial intelligence
Full-text delivery
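The distillation-style aggregation idea above, combining participants' knowledge without a parameter server, can be sketched in a few lines. Plain logit averaging on a shared public batch is an illustrative assumption; the platform described in the abstract aggregates encrypted models via smart contracts, which this sketch does not attempt.

```python
import numpy as np

def distill_aggregate(member_logits):
    """Knowledge-distillation aggregation sketch: each participant publishes its
    model's logits on a shared public batch, and the consensus 'teacher' signal
    is their element-wise average, which each participant then distills from
    locally. No raw parameters or raw data are exchanged."""
    return np.mean(np.stack(member_logits, axis=0), axis=0)

# two participants' logits on a 1-sample, 2-class public batch
teacher = distill_aggregate([np.array([[2.0, 0.0]]), np.array([[0.0, 2.0]])])
```

Because only predictions on public data are shared, participants may even use different model architectures, which a weight-averaging server cannot accommodate.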
3. ADC-DL: Communication-Efficient Distributed Learning with Hierarchical Clustering and Adaptive Dataset Condensation
Authors: Zhipeng Gao, Yan Yang, Chen Zhao, Zijia Mo. China Communications (SCIE, CSCD), 2022, Issue 12, pp. 73-85.
The rapid growth of modern mobile devices leads to a large amount of distributed data, which is extremely valuable for learning models. Unfortunately, model training by collecting all these original data on a centralized cloud server is not applicable due to data privacy and communication cost concerns, hindering artificial intelligence from empowering mobile devices. Moreover, these data are not independently and identically distributed (non-IID) because of their different contexts, which deteriorates model performance. To address these issues, we propose a novel distributed learning algorithm based on hierarchical clustering and adaptive dataset condensation, named ADC-DL, which learns a shared model by collecting the synthetic samples generated on each device. To tackle the heterogeneity of data distributions, we propose an entropy-TOPSIS comprehensive tiering model for hierarchical clustering, which distinguishes clients in terms of their data characteristics. Subsequently, synthetic dummy samples are generated based on the hierarchical structure using adaptive dataset condensation; the condensation procedure can be adjusted adaptively according to the tier of the client. Extensive experiments demonstrate that ADC-DL outperforms existing algorithms in prediction accuracy and communication cost.
Keywords: Distributed learning; Non-IID data partition; Hierarchical clustering; Adaptive dataset condensation
Read online | Download PDF
4. Communication-Censored Distributed Learning for Stochastic Configuration Networks
Authors: Yujun Zhou, Xiaowen Ge, Wu Ai. International Journal of Intelligence Science, 2022, Issue 2, pp. 21-37.
This paper aims to reduce the communication cost of the distributed learning algorithm for stochastic configuration networks (SCNs), in which information exchange between the learning agents is conducted only at trigger times. For this purpose, we propose a communication-censored distributed learning algorithm for SCNs, namely ADMM-SCN-ET, by introducing an event-triggered communication mechanism into the alternating direction method of multipliers (ADMM). To avoid unnecessary transmissions, each learning agent is equipped with a trigger function: only when the event-trigger error exceeds a specified threshold and the trigger condition is met does the agent transmit its variable information to its neighbors and update its state. Simulation results show that the proposed algorithm effectively reduces the communication cost of training decentralized SCNs and saves communication resources.
Keywords: Event-triggered communication; Distributed learning; Stochastic configuration networks (SCN); Alternating direction method of multipliers (ADMM)
Read online | Download PDF
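The event-triggered send rule at the heart of such schemes is simple to sketch: an agent broadcasts its variable only when it has drifted far enough from the last value its neighbors saw. This is a generic illustration of the mechanism; the exact trigger function used in ADMM-based SCN training may differ, and the Euclidean norm and threshold here are assumptions.

```python
import numpy as np

def maybe_transmit(state, last_sent, threshold=0.1):
    """Event-triggered communication sketch: transmit only when the trigger
    error (distance from the last transmitted value) exceeds the threshold.
    Returns the value neighbors now hold and whether a transmission occurred."""
    trigger_error = np.linalg.norm(state - last_sent)
    if trigger_error > threshold:
        return state, True        # transmit and refresh the reference copy
    return last_sent, False       # stay silent; neighbors keep the stale copy

# small drift: no transmission; large drift: transmission fires
sent, fired = maybe_transmit(np.array([1.0, 1.0]), np.array([1.05, 1.0]))
```

Each suppressed transmission saves one round of neighbor communication, which is where the communication-cost reduction reported in the abstract comes from.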
5. Adaptive distributed learning based on homomorphic encryption and blockchain (Cited by 1)
Authors: YANG Ruizhe, ZHAO Xuehui, ZHANG Yanhua, SI Pengbo, TENG Yinglei. High Technology Letters (EI, CAS), 2022, Issue 4, pp. 337-344.
The privacy and security of data are current research hotspots and challenges. For this issue, an adaptive scheme of distributed learning based on homomorphic encryption and blockchain is proposed. Specifically, under homomorphic encryption, the computing party iteratively aggregates the learning models from distributed participants, so that the privacy of both the data and the model is ensured. Moreover, the aggregations are recorded and verified by blockchain, which prevents attacks from malicious nodes and guarantees the reliability of learning. For these sophisticated privacy and security technologies, the computation cost and energy consumption of both encrypted learning and consensus reaching are analyzed, based on which a joint optimization of computation resource allocation and adaptive aggregation to minimize the loss function is established, followed by a practical solution. Finally, simulations and analysis evaluate the performance of the proposed scheme.
Keywords: Blockchain; Distributed machine learning (DML); Privacy; Security
Read online | Download PDF
6. A Novel Clustered Distributed Federated Learning Architecture for Tactile Internet of Things Applications in 6G Environment
Authors: Omar Alnajar, Ahmed Barnawi. Computer Modeling in Engineering & Sciences, 2025, Issue 6, pp. 3861-3897.
The Tactile Internet of Things (TIoT) promises transformative applications, ranging from remote surgery to industrial robotics, by incorporating haptic feedback into traditional IoT systems. Yet TIoT's stringent requirements for ultra-low latency, high reliability, and robust privacy present significant challenges. Conventional centralized Federated Learning (FL) architectures struggle with latency and privacy constraints, while fully distributed FL (DFL) faces scalability and non-IID data issues as client populations expand and datasets become increasingly heterogeneous. To address these limitations, we propose a Clustered Distributed Federated Learning (CDFL) architecture tailored for a 6G-enabled TIoT environment. Clients are grouped into clusters based on data similarity and/or geographical proximity, enabling local intra-cluster aggregation before inter-cluster model sharing. This hierarchical, peer-to-peer approach reduces communication overhead, mitigates non-IID effects, and eliminates single points of failure. By offloading aggregation to the network edge and leveraging dynamic clustering, CDFL enhances both computational and communication efficiency. Extensive analysis and simulation demonstrate that CDFL outperforms both centralized FL and DFL as the number of clients grows. Specifically, CDFL achieves up to a 30% reduction in training time under highly heterogeneous data distributions, indicating faster convergence, and reduces communication overhead by approximately 40% compared to DFL. These improvements and enhanced network performance metrics highlight CDFL's effectiveness for practical TIoT deployments and validate it as a scalable, privacy-preserving solution for next-generation TIoT applications.
Keywords: Distributed federated learning; Tactile Internet of Things; Clustering; Peer-to-peer
Read online | Download PDF
7. A structured distributed learning framework for irregular cellular spatial-temporal traffic prediction
Authors: Xiangyu Chen, Kaisa Zhang, Gang Chuai, Weidong Gao, Xuewen Liu, Yibo Zhang, Yijian Hou. Digital Communications and Networks, 2025, Issue 5, pp. 1457-1468.
Spatial-temporal traffic prediction technology is crucial for network planning, resource allocation optimization, and user experience improvement. With the development of virtual network operators, multi-operator collaboration, and edge computing, spatial-temporal traffic data has taken on a distributed nature. Consequently, non-centralized spatial-temporal traffic prediction solutions have become a recent research focus. Most current research adopts federated learning to train traffic prediction models distributed across base stations, which reduces the additional burden on communication systems but cannot handle irregular traffic data: due to unstable wireless network environments, device failures, insufficient storage resources, and other factors, data loss inevitably occurs during traffic data collection, making distributed traffic data irregular. Yet commonly used traffic prediction models such as Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) typically assume that the data is complete and regular. To handle irregular traffic data, this paper transforms irregular traffic prediction into the problems of estimating latent variables and generating future traffic, and introduces split learning to design a structured distributed learning framework. The framework comprises a Global-level Spatial structure mining Model (GSM) and several Node-level Generative Models (NGMs); the NGMs are Seq2Seq models deployed on base stations, and the GSM is a graph neural network model deployed on the cloud or a central controller. First, the time embedding layer in each NGM establishes the mapping between irregular traffic data and regular latent temporal feature variables. Second, the GSM collects statistical feature parameters of the latent temporal feature variables from the nodes and performs graph embedding for the spatial-temporal traffic data. Finally, each NGM generates future traffic based on the latent temporal and spatial feature variables. A time attention mechanism enhances the framework's capability to handle irregular traffic data, and a graph attention network introduces spatially correlated base station traffic features into local traffic prediction, compensating for missing information in local irregular traffic data. The proposed framework effectively addresses the distributed prediction of irregular traffic data: in tests on real-world datasets, it improves traffic prediction accuracy by 35% compared with other commonly used distributed traffic prediction methods.
Keywords: Network measurement and analysis; Distributed learning; Irregular time series; Cellular spatial-temporal traffic; Traffic prediction
Read online | Download PDF
8. Autonomous Vehicle Platoons in Urban Road Networks: A Joint Distributed Reinforcement Learning and Model Predictive Control Approach
Authors: Luigi D'Alfonso, Francesco Giannini, Giuseppe Franzè, Giuseppe Fedele, Francesco Pupo, Giancarlo Fortino. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 1, pp. 141-156.
In this paper, platoons of autonomous vehicles operating in urban road networks are considered. From a methodological point of view, the problem of interest consists of formally characterizing vehicle state trajectory tubes by means of routing decisions complying with traffic congestion criteria. To this end, a novel distributed control architecture is conceived by taking advantage of two methodologies: deep reinforcement learning and model predictive control. On one hand, routing decisions are obtained using a distributed reinforcement learning algorithm that exploits available traffic data at each road junction. On the other hand, a bank of model predictive controllers is in charge of computing the most adequate control action for each involved vehicle. These tasks are combined into a single framework: the deep reinforcement learning output (action) is translated into a set-point to be tracked by the model predictive controller; conversely, the current vehicle position, resulting from the application of the control move, is exploited by the deep reinforcement learning unit to improve its reliability. The main novelty of the proposed solution lies in its hybrid nature: on one hand it fully exploits deep reinforcement learning capabilities for decision-making purposes; on the other hand, time-varying hard constraints are always satisfied during the dynamical platoon evolution imposed by the computed routing decisions. To efficiently evaluate the performance of the proposed control architecture, a co-design procedure involving the SUMO and MATLAB platforms is implemented so that complex operating environments can be used, and the information coming from road maps (links, junctions, obstacles, semaphores, etc.) and vehicle state trajectories can be shared and exchanged. Finally, considering as operating scenario a real entire city block and a platoon of eleven vehicles described by double-integrator models, several simulations have been performed to highlight the main features of the proposed approach. Moreover, in different operating scenarios the proposed reinforcement learning scheme significantly reduces traffic congestion phenomena when compared with well-reputed competitors.
Keywords: Distributed model predictive control; Distributed reinforcement learning; Routing decisions; Urban road networks
Read online | Download PDF
9. The Technological Progress, Applications, and Challenges of Federated Learning
Authors: Yanling Liu, Yun Li. Proceedings of Business and Economic Studies, 2025, Issue 2, pp. 247-252.
With the advent of the era of big data, the exponential growth of data generation has provided unprecedented opportunities for innovation and insight in various fields. However, increasing privacy and security concerns and the phenomenon of "data silos" limit the collaborative utilization of data. This paper systematically discusses the technological progress of federated learning, including its basic framework, model optimization, communication efficiency improvement, privacy protection mechanisms, and integration with other technologies. It then analyzes the broad applications of federated learning in healthcare, the Internet of Things, the Internet of Vehicles, smart cities, and financial services, and summarizes its challenges in data heterogeneity, communication overhead, privacy protection, scalability, and security. Finally, this paper looks forward to the future development of federated learning and proposes potential research paths in efficient algorithm design, privacy protection mechanism optimization, heterogeneous data processing, and cross-industry collaboration.
Keywords: Federated learning; Data privacy; Distributed machine learning; Heterogeneous data
Read online | Download PDF
10. Exploring crash induction strategies in within-visual-range air combat based on distributional reinforcement learning
Authors: Zetian HU, Xuefeng LIANG, Jun ZHANG, Xiaochuan YOU, Chengcheng MA. Chinese Journal of Aeronautics, 2025, Issue 9, pp. 350-364.
Within-Visual-Range (WVR) air combat is a highly dynamic and uncertain domain where effective strategies require intelligent and adaptive decision-making. Traditional approaches, including rule-based methods and conventional Reinforcement Learning (RL) algorithms, often focus on maximizing engagement outcomes through direct combat superiority. However, these methods overlook alternative tactics, such as inducing adversaries to crash, which can achieve decisive victories at lower risk and cost. This study proposes Alpha Crash, a novel distributional-reinforcement-learning-based agent specifically designed to defeat opponents by leveraging crash induction strategies. The approach integrates an improved QR-DQN framework to address uncertainties and adversarial tactics, incorporating advanced pilot experience into its reward functions. Extensive simulations reveal Alpha Crash's robust performance, achieving a 91.2% win rate across diverse scenarios by effectively guiding opponents into critical errors. Visualization and altitude analyses illustrate the agent's three-stage crash induction strategies that exploit adversaries' vulnerabilities. These findings underscore Alpha Crash's potential to enhance autonomous decision-making and strategic innovation in real-world air combat applications.
Keywords: Unmanned combat aerial vehicle; Decision-making; Distributional reinforcement learning; Within-visual-range air combat; Crash induction strategy
Full-text delivery
11. SIGNGD with Error Feedback Meets Lazily Aggregated Technique: Communication-Efficient Algorithms for Distributed Learning (Cited by 1)
Authors: Xiaoge Deng, Tao Sun, Feng Liu, Dongsheng Li. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2022, Issue 1, pp. 174-185.
The proliferation of massive datasets has led to significant interest in distributed algorithms for solving large-scale machine learning problems. However, communication overhead is a major bottleneck that hampers the scalability of distributed machine learning systems. In this paper, we design two communication-efficient algorithms for distributed learning tasks. The first, EF-SIGNGD, uses the 1-bit (sign-based) gradient quantization method to save communication bits, and employs the error feedback technique, i.e., incorporating the error made by the compression operator into the next step, for the convergence guarantee. The second, LE-SIGNGD, adds a well-designed lazy gradient aggregation rule to EF-SIGNGD that detects gradients with small changes and reuses outdated information; it saves communication costs in both transmitted bits and communication rounds. Furthermore, we show that LE-SIGNGD is convergent under mild assumptions. The effectiveness of the two proposed algorithms is demonstrated through experiments on both real and synthetic data.
Keywords: Distributed learning; Communication-efficient algorithm; Convergence analysis
Full-text delivery
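The sign compression with error feedback described above can be sketched in a few lines of worker-side logic. This is a minimal illustration rather than the paper's exact formulation: the function name and the choice of scaling the signs by the mean absolute value are assumptions.

```python
import numpy as np

def ef_sign_step(grad, error):
    """One worker step of sign-based compression with error feedback: compress
    (gradient + carried residual) to its sign, scaled by the mean magnitude,
    and keep the new compression residual locally for the next round."""
    corrected = grad + error                 # fold in the residual from last round
    scale = np.mean(np.abs(corrected))       # one scalar + sign bits get transmitted
    compressed = scale * np.sign(corrected)
    new_error = corrected - compressed       # residual stays on the worker
    return compressed, new_error

g = np.array([0.5, -0.2, 0.1])
c1, e1 = ef_sign_step(g, np.zeros(3))        # first round: no carried error
c2, e2 = ef_sign_step(g, e1)                 # residual re-injected next round
```

The key invariant is that `compressed + new_error` exactly equals the corrected gradient, so no information is permanently lost; it is merely delayed, which is what underpins the convergence guarantee.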
12. On the development of cat swarm metaheuristic using distributed learning strategies and the applications
Authors: Usha Manasi Mohapatra, Babita Majhi, Alok Kumar Jagadev. International Journal of Intelligent Computing and Cybernetics (EI), 2019, Issue 2, pp. 224-244.
Purpose – This paper proposes three distributed learning-based metaheuristic algorithms for the identification of nonlinear systems. The proposed algorithms address problems in which input data are available at different geographic locations, and the models are tested for nonlinear systems under different noise conditions. In a nutshell, the suggested models aim to handle voluminous data with low communication overhead compared to traditional centralized processing methodologies. Design/methodology/approach – Population-based evolutionary algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO), and cat swarm optimization (CSO) are implemented in distributed form to address the system identification problem with distributed input data. Of the different distributed approaches in the literature, the study considers incremental and diffusion strategies. Findings – The performances of the proposed distributed learning-based algorithms are compared under different noise conditions. The experimental results indicate that CSO performs better than GA and PSO at all noise strengths with respect to accuracy and error convergence rate, and incremental CSO is slightly superior to diffusion CSO. Originality/value – This paper employs evolutionary algorithms with distributed learning strategies and applies them to the identification of unknown systems. Very few existing studies have experimented with these distributed learning strategies for the parameter estimation task.
Keywords: System identification; Wireless sensor network; Diffusion learning strategy; Distributed learning-based cat swarm optimization; Incremental learning strategy
Read online | Download PDF
13. Distributed Asynchronous Learning for Multipath Data Transmission Based on P-DDQN (Cited by 1)
Authors: Kang Liu, Wei Quan, Deyun Gao, Chengxiao Yu, Mingyuan Liu, Yuming Zhang. China Communications (SCIE, CSCD), 2021, Issue 8, pp. 62-74.
Adaptive packet scheduling can efficiently enhance the performance of multipath data transmission. However, realizing precise packet scheduling is challenging due to the highly dynamic and unpredictable nature of network link states. To this end, this paper proposes a distributed asynchronous deep reinforcement learning framework to strengthen the dynamics and prediction of adaptive packet scheduling. Our framework contains two parts: local asynchronous packet scheduling and a distributed cooperative control center. For local asynchronous packet scheduling, an asynchronous prioritized-replay double deep Q-learning packet scheduling algorithm is proposed for dynamic adaptive packet scheduling learning, which uses a prioritized-replay double deep Q-learning network (P-DDQN) for fitting analysis. In the distributed cooperative control center, a distributed scheduling learning and neural fitting acceleration algorithm adaptively updates the neural network parameters of P-DDQN for more precise packet scheduling. Experimental results show that our solution outperforms the Random-weight and Round-Robin algorithms in throughput and loss ratio, and achieves 1.32 times and 1.54 times better stability of multipath data transmission than the Random-weight and Round-Robin algorithms, respectively.
Keywords: Distributed asynchronous learning; Multipath data transmission; Deep reinforcement learning
Read online | Download PDF
14. Adaptive Load Balancing for Parameter Servers in Distributed Machine Learning over Heterogeneous Networks (Cited by 1)
Authors: CAI Weibo, YANG Shulin, SUN Gang, ZHANG Qiming, YU Hongfang. ZTE Communications, 2023, Issue 1, pp. 72-80.
In distributed machine learning (DML) based on the parameter server (PS) architecture, an unbalanced communication load distribution across PSs leads to a significant slowdown of model synchronization in heterogeneous networks due to low bandwidth utilization. To address this problem, a network-aware adaptive PS load distribution scheme is proposed, which accelerates model synchronization by proactively adjusting the communication load on PSs according to network states. We evaluate the proposed scheme on MXNet, a real-world distributed training platform, and results show that our scheme achieves up to 2.68 times speed-up of model training in dynamic and heterogeneous network environments.
Keywords: Distributed machine learning; Network awareness; Parameter server; Load distribution; Heterogeneous network
Read online | Download PDF
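The network-aware load adjustment idea above can be sketched as a proportional split: give each parameter server a share of the model proportional to its measured bandwidth, so the slowest link stops dictating synchronization time. This is a simplified illustration under assumed names; it is not MXNet's actual kvstore partitioner, and the paper's adaptive scheme reacts to changing network states rather than a one-shot split.

```python
import numpy as np

def assign_parameter_load(total_params, bandwidths):
    """Bandwidth-proportional PS load split sketch: server i receives roughly
    total_params * bw_i / sum(bw) parameters; any rounding remainder is handed
    to the last server so every parameter is assigned exactly once."""
    bw = np.asarray(bandwidths, dtype=float)
    shares = np.floor(total_params * bw / bw.sum()).astype(int)
    shares[-1] += total_params - shares.sum()   # absorb the rounding remainder
    return shares

# a PS with double the bandwidth gets double the parameters
shares = assign_parameter_load(100, [1, 1, 2])
```

With equal per-byte transfer time on each link, this split roughly equalizes the push/pull completion times across servers, which is the condition for maximal bandwidth utilization.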
15. Pseudo-label based semi-supervised learning in the distributed machine learning framework
Authors: WANG Xiaoxi, WU Wenjun, YANG Feng, SI Pengbo, ZHANG Xuanyi, ZHANG Yanhua. High Technology Letters (EI, CAS), 2022, Issue 2, pp. 172-180.
With the emergence of various intelligent applications, machine learning technologies face many challenges, including large-scale models, application-oriented real-time datasets, and the limited capabilities of nodes in practice. Therefore, distributed machine learning (DML) and semi-supervised learning methods, which help solve these problems, have received attention in both academia and industry. In this paper, the semi-supervised learning method and the data-parallel DML framework are combined. The pseudo-label based local loss function for each distributed node is studied, and the stochastic gradient descent (SGD) based distributed parameter update principle is derived. A demo that implements pseudo-label based semi-supervised learning in the DML framework is conducted, and the CIFAR-10 dataset for target classification is used to evaluate the performance. Experimental results confirm the convergence and accuracy of the model. Given that the proportion of the pseudo-label dataset is 20%, the accuracy of the model is over 90% when the number of local parameter update steps between two global aggregations is less than 5. Moreover, fixing the global aggregation interval to 3, the model converges with acceptable performance degradation when the proportion of the pseudo-label dataset varies from 20% to 80%.
Keywords: Distributed machine learning (DML); Semi-supervised learning; Deep neural network (DNN)
Read online | Download PDF
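A pseudo-label based local loss of the kind studied above typically combines supervised cross-entropy on labeled data with cross-entropy against the model's own confident predictions on unlabeled data. The sketch below is a generic illustration, not the paper's exact loss; the confidence threshold, the weighting factor, and hard argmax pseudo-labels are all assumptions.

```python
import numpy as np

def pseudo_label_loss(logits_unlabeled, logits_labeled, labels, conf=0.9, weight=0.5):
    """Local semi-supervised loss sketch: supervised cross-entropy plus a
    weighted pseudo-label term on unlabeled samples whose predicted class
    probability exceeds the confidence threshold."""
    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def xent(probs, y):
        return -np.mean(np.log(probs[np.arange(len(y)), y] + 1e-12))

    sup = xent(softmax(logits_labeled), labels)
    probs_u = softmax(logits_unlabeled)
    pseudo = probs_u.argmax(axis=1)           # hard pseudo-labels
    mask = probs_u.max(axis=1) >= conf        # keep only confident predictions
    unsup = xent(probs_u[mask], pseudo[mask]) if mask.any() else 0.0
    return sup + weight * unsup
```

Each node would minimize this local loss with SGD between global aggregations; low-confidence unlabeled samples contribute nothing, which limits the damage from wrong pseudo-labels early in training.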
16. A Tutorial on Federated Learning from Theory to Practice: Foundations, Software Frameworks, Exemplary Use Cases, and Selected Trends
Authors: M. Victoria Luzón, Nuria Rodríguez-Barroso, Alberto Argente-Garrido, Daniel Jiménez-López, Jose M. Moyano, Javier Del Ser, Weiping Ding, Francisco Herrera. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 4, pp. 824-850.
When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. Furthermore, the tutorial provides exemplary case studies from three complementary perspectives: i) foundations of FL, describing its main components, from key elements to FL categories; ii) implementation guidelines and exemplary case studies, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary case studies with source code for different ML approaches; and iii) trends, briefly reviewing a non-exhaustive list of research directions under active investigation in the current FL landscape. The ultimate purpose of this work is to establish itself as a reference work for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
Keywords: Data privacy; Distributed machine learning; Federated learning; Software frameworks
Read online | Download PDF
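The aggregation step at the core of the FL paradigm this tutorial covers is Federated Averaging: the server combines client models weighted by local dataset size. A minimal numerical sketch (weights shown as flat arrays for simplicity):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated Averaging sketch: the global model is the average of client
    models, with each client weighted by the size of its local dataset, so
    clients holding more data pull the global model harder."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()              # n_k / n for each client k
    return sum(c * w for c, w in zip(coeffs, client_weights))

# client B holds 3x the data of client A, so coefficients are 0.25 and 0.75
global_w = fedavg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [10, 30])
```

A full round then broadcasts `global_w` back to the clients, each of which runs a few local SGD epochs before the next aggregation; only model parameters, never raw data, cross the network.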
Adaptive Kernel Firefly Algorithm Based Feature Selection and Q-Learner Machine Learning Models in Cloud
17
作者 I.Mettildha Mary K.Karuppasamy 《Computer Systems Science & Engineering》 SCIE EI 2023年第9期2667-2685,共19页
Cloud computing (CC) networks are distributed and dynamic, as signals appear, disappear, or lose significance. Machine learning techniques (MLTs) are trained on datasets that are sometimes inadequate, in terms of samples, for inferring information. DevMLOps (Development Machine Learning Operations), a dynamic strategy used for the automatic selection and tuning of MLTs, results in significant performance differences. However, the scheme has many disadvantages, including the need for continual training, more samples and longer training times during feature selection, and increased classification execution times. Recursive feature elimination (RFE) is computationally expensive, as it traverses each feature without considering the correlations between features. This problem can be overcome by using wrappers, which select better features by accounting for both test and training datasets. The aim of this paper is to use DevQLMLOps for automated tuning and selection based on orchestration and messaging between containers. The proposed Adaptive Kernel Firefly Algorithm (AKFA) selects features for cloud network monitoring (CNM) operations. The AKFA methodology is demonstrated on a cloud network security dataset (CNSD), with satisfactory results in the performance metrics used: precision, recall, F-measure, and accuracy.
Keywords: cloud analytics; machine learning; ensemble learning; distributed learning; clustering; classification; auto selection; auto tuning; decision feedback; cloud DevOps; feature selection; wrapper feature selection; Adaptive Kernel Firefly Algorithm (AKFA); Q-learning
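The wrapper approach the abstract contrasts with RFE can be illustrated with the simplest wrapper there is: greedy forward selection scored on a held-out validation split. This is a generic sketch, not the paper's AKFA; the synthetic data, the least-squares scoring model, and the stopping rule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: only features 0 and 3 carry signal; the rest are noise.
n, d = 400, 6
X = rng.normal(size=(n, d))
y = (X[:, 0] + 2 * X[:, 3] > 0).astype(float)
X_tr, y_tr, X_va, y_va = X[:300], y[:300], X[300:], y[300:]

def score(feats):
    """Wrapper criterion: validation accuracy of a least-squares classifier
    trained only on the candidate feature subset."""
    A = np.c_[X_tr[:, feats], np.ones(len(X_tr))]
    w, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    pred = (np.c_[X_va[:, feats], np.ones(len(X_va))] @ w) > 0.5
    return (pred == y_va.astype(bool)).mean()

# Greedy forward selection: repeatedly add the feature that most improves
# the wrapper score; stop when no candidate improves it.
selected, best = [], 0.0
while True:
    gains = {f: score(selected + [f]) for f in range(d) if f not in selected}
    f, s = max(gains.items(), key=lambda kv: kv[1])
    if s <= best:
        break
    selected.append(f)
    best = s

print(sorted(selected))  # expected to include the informative features 0 and 3
```

Unlike filter methods that rank features independently, the wrapper re-trains and re-scores the model for every candidate subset, which is what lets it account for feature correlations, at a higher computational cost. AKFA replaces the greedy loop here with a firefly-based search.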
RLFreeze: Accelerating Distributed Freezing Training with Reinforcement Learning
18
Authors: Zaigang Gong, Siyu Chen, Qiangsheng Dai, Ying Feng, Geng Niu, Jinghui Zhang. Data Intelligence, 2025, No. 2, pp. 416-439 (24 pages)
To achieve better performance, researchers have recently focused on building larger deep learning models, substantially increasing training costs and prompting the development of distributed training within GPU clusters. However, conventional distributed training approaches suffer from limitations: data parallelism is hindered by excessive memory demands and communication overhead during gradient synchronization, while model parallelism fails to achieve optimal device utilization due to strict computational dependencies. To overcome these challenges, researchers have proposed hybrid parallelism. By segmenting the model into multiple stages, each of which may internally use data parallelism, and sequentially processing split training data in a pipeline-like manner across stages, hybrid parallelism increases training speed. However, the freezing mechanisms widely used in model fine-tuning, which cancel gradient computation and weight updates for converged parameters to reduce computational overhead, have yet to be efficiently integrated into hybrid parallel training; existing approaches fail to balance faster training against guaranteed accuracy and a shorter time to convergence. In this paper, we propose Reinforcement Learning Freeze (RLFreeze), a freezing strategy for distributed DNN training in heterogeneous GPU clusters, especially under hybrid parallelism. We first introduce a mixed freezing criterion based on gradients and gradient variation, to accurately freeze converged parameters while minimizing the freezing of unconverged ones. RLFreeze then selects the parameters to freeze according to this criterion and dynamically adjusts the thresholds for freezing decisions during training using reinforcement learning, achieving a balance between accuracy and accelerated training. Experimental results demonstrate that RLFreeze improves training efficiency in both data parallelism and hybrid parallelism while maintaining model accuracy.
Keywords: distributed deep learning; hybrid parallelism; freezing training; reinforcement learning; deep neural network
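The core idea of freezing training, stopping updates for parameter blocks once they appear converged, can be sketched with the simplest criterion: freeze a block when its gradient norm drops below a fixed threshold. RLFreeze's actual criterion also uses gradient variation and tunes the threshold with reinforcement learning; this toy version, with an invented linear model split into two "layers", only illustrates the mechanism.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression split into two parameter blocks standing in for layers.
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

params = {"layer1": np.zeros(2), "layer2": np.zeros(1)}
frozen = set()
THRESH = 1e-3  # gradient-norm threshold for declaring a block converged

for step in range(500):
    w = np.concatenate([params["layer1"], params["layer2"]])
    grad_full = 2 * X.T @ (X @ w - y) / len(y)
    grads = {"layer1": grad_full[:2], "layer2": grad_full[2:]}
    for name, g in grads.items():
        if name in frozen:
            continue  # frozen blocks skip weight updates entirely
        if np.linalg.norm(g) < THRESH:
            frozen.add(name)  # small gradient -> treat block as converged
            continue
        params[name] -= 0.1 * g

print(sorted(frozen))  # both blocks end up frozen once they converge
```

In a real pipeline, freezing a stage's parameters also removes its backward pass and its share of gradient synchronization, which is where the distributed speedup comes from; the hard part, which RLFreeze addresses, is choosing thresholds that do not freeze parameters prematurely.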
Finite-Time Distributed Identification for Nonlinear Interconnected Systems (Cited by 1)
19
Authors: Farzaneh Tatari, Hamidreza Modares, Christos Panayiotou, Marios Polycarpou. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 7, pp. 1188-1199 (12 pages)
In this paper, a novel finite-time distributed identification method is introduced for nonlinear interconnected systems. A distributed concurrent-learning-based discontinuous gradient descent update law is presented to learn the dynamics of uncertain interconnected subsystems. The concurrent learning approach continually minimizes the identification error over a batch of previously recorded data collected from each subsystem and its neighboring subsystems. The state information of neighboring interconnected subsystems is acquired through direct communication. The overall update laws for all subsystems form coupled continuous-time gradient flow dynamics, for which a finite-time Lyapunov stability analysis is performed. As a byproduct of this analysis, easy-to-check rank conditions on the data stored in the distributed memories of the subsystems are obtained, under which finite-time stability of the distributed identifier is guaranteed. These rank conditions replace the restrictive persistence of excitation (PE) conditions, which are hard, and sometimes impossible, to achieve and verify for interconnected subsystems. Finally, simulation results verify the effectiveness of the presented distributed method in comparison with other methods.
Keywords: distributed concurrent learning; finite-time identification; nonlinear interconnected systems; unknown dynamics
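The two ingredients the abstract names, gradient descent over a memory of recorded data and a rank condition on that memory replacing persistence of excitation, can be sketched for a single subsystem with linear-in-parameter dynamics. The system, regressor, and learning rate below are invented for illustration, and the discontinuous finite-time update law of the paper is replaced by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unknown dynamics xdot = W_true @ phi(x); here phi(x) = x for simplicity.
W_true = np.array([[0.5, -1.0], [1.2, 0.3]])

# "Memory": a stored batch of recorded regressor / state-derivative pairs.
Phi = rng.normal(size=(2, 10))   # stored regressors phi(x_k)
Xdot = W_true @ Phi              # stored derivatives (noise-free here)

# Rank condition on the stored data: the memory must span the regressor
# space. This replaces the persistence-of-excitation requirement.
assert np.linalg.matrix_rank(Phi) == Phi.shape[0]

# Concurrent-learning update: gradient descent on the summed squared
# identification error over the whole memory, not just the latest sample.
W = np.zeros_like(W_true)
for _ in range(5000):
    E = W @ Phi - Xdot           # identification errors on all stored samples
    W -= 0.005 * E @ Phi.T       # gradient of (1/2) * ||E||_F^2

print(np.round(W, 2))  # converges toward W_true
```

Because the rank condition is checked once on a finite stored batch, it is verifiable offline, whereas PE is a condition on the entire future trajectory; this is exactly the practical advantage the paper claims for interconnected subsystems.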
A new accelerating algorithm for multi-agent reinforcement learning (Cited by 1)
20
Authors: 张汝波, 仲宇, 顾国昌. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2005, No. 1, pp. 48-51 (4 pages)
In multi-agent systems, joint actions must be employed to achieve cooperation, because the evaluation of an agent's behavior often depends on the other agents' behaviors. However, joint-action reinforcement learning algorithms suffer from slow convergence because of the enormous learning space produced by joint actions. In this article, a prediction-based reinforcement learning algorithm is presented for multi-agent cooperation tasks, which requires every agent to learn to predict the probabilities of the actions that other agents may execute. A multi-robot cooperation experiment was run to test the efficacy of the new algorithm, and the results show that it reaches the cooperation policy much faster than the primitive reinforcement learning algorithm.
Keywords: distributed reinforcement learning; accelerating algorithm; machine learning; multi-agent system
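The idea of agents predicting each other's action probabilities can be sketched with a joint-action learner in a repeated two-player coordination game: each agent keeps Q-values over joint actions plus empirical counts of the other agent's choices, and acts greedily on the expected Q under that prediction. The game, payoffs, and learning schedule are invented for this example; the paper's multi-robot task is far richer.

```python
import numpy as np

rng = np.random.default_rng(4)

# Repeated 2x2 coordination game: reward 2 if both pick action 1,
# reward 1 if both pick action 0, reward 0 on any mismatch.
n_actions = 2
Q = [np.zeros((n_actions, n_actions)) for _ in range(2)]  # Q[i][own, other]
counts = [np.ones(n_actions) for _ in range(2)]           # opponent action counts

def act(i, eps):
    probs = counts[i] / counts[i].sum()  # predicted opponent policy
    expected = Q[i] @ probs              # expected value of each own action
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(expected))

for t in range(3000):
    eps = max(0.05, 1.0 - t / 1000)      # decaying exploration
    a = [act(0, eps), act(1, eps)]
    if a[0] == a[1]:
        r = 2.0 if a[0] == 1 else 1.0
    else:
        r = 0.0
    for i in range(2):
        other = a[1 - i]
        counts[i][other] += 1            # update the opponent-action model
        Q[i][a[i], other] += 0.1 * (r - Q[i][a[i], other])  # stateless Q update

print(act(0, 0.0), act(1, 0.0))  # greedy play after learning is coordinated
```

Averaging Q-values over the predicted opponent policy is what shrinks the effective search: each agent only optimizes over its own actions instead of the full joint-action space, which is the source of the speedup the abstract reports.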