Funding: Supported by the National Natural Science Foundation of China (62371082), the Guangxi Science and Technology Project (AB24010317), the Science and Technology Project of Chongqing Education Commission (KJZD-K202400606), and the Natural Science Foundation of Chongqing (CSTB2023NSCQ-MSX0726, CSTB2023NSCQ-LZX0014).
Abstract: Federated learning combined with edge computing has greatly facilitated real-time transportation applications such as intelligent traffic systems. However, synchronous federated learning is inefficient in terms of time and convergence speed, making it unsuitable for scenarios with strict real-time requirements. To address these issues, this paper proposes Adaptive Waiting time Asynchronous Federated Learning (AWTAFL) based on the Dueling Double Deep Q-Network (D3QN). The server dynamically adjusts the waiting time using the D3QN algorithm according to the current task progress and energy consumption, aiming to accelerate convergence and save energy. Additionally, this paper presents a new federated learning global aggregation scheme in which the central server performs weighted aggregation based on the freshness and contribution of client parameters. Experimental simulations demonstrate that the proposed algorithm significantly reduces convergence time while ensuring model quality and effectively reducing energy consumption in asynchronous federated learning. Furthermore, the improved global aggregation update method enhances training stability and reduces oscillations in the global model's convergence.
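Freshness-and-contribution weighted aggregation of this kind can be sketched as follows. This is a minimal illustration, not the paper's exact formulas: the exponential staleness decay, the mixing coefficient, and all names (`decay`, `base_lr`, `contribution`) are assumptions for exposition.

```python
import math

def aggregate(global_model, client_update, staleness, contribution,
              decay=0.5, base_lr=0.6):
    """Blend one asynchronous client update into the global model.

    The weight shrinks exponentially with staleness (rounds elapsed since
    the client pulled the model) and grows with the client's measured
    contribution (e.g. normalized loss reduction) -- both illustrative
    choices standing in for the paper's freshness/contribution weighting.
    """
    freshness = math.exp(-decay * staleness)    # stale updates count less
    alpha = min(base_lr * freshness * contribution, 1.0)
    return [(1 - alpha) * g + alpha * c
            for g, c in zip(global_model, client_update)]

# An update that is 2 rounds stale is mixed in with a reduced weight.
g = aggregate([0.0, 0.0], [1.0, 1.0], staleness=2, contribution=1.0)
```

The key property is that an update arriving many rounds late nudges the global model only slightly, which is what dampens oscillation in asynchronous aggregation.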
Abstract: The advancement of the Internet of Things (IoT) brings new opportunities for collecting real-time data and deploying machine learning models. Nonetheless, an individual IoT device may not have adequate computing resources to train and deploy an entire learning model. At the same time, transmitting continuous real-time data to a central server with high computing resources incurs enormous communication costs and raises data security and privacy issues. Federated learning, a distributed machine learning framework, is a promising solution for training machine learning models with resource-limited devices and edge servers. Yet the majority of existing works assume an impractical synchronous parameter-update manner with homogeneous IoT nodes under stable communication connections. In this paper, we develop an asynchronous federated learning scheme to improve training efficiency for heterogeneous IoT devices under unstable communication networks. In particular, we formulate an asynchronous federated learning model and develop a lightweight node selection algorithm to carry out learning tasks effectively. The proposed algorithm iteratively selects heterogeneous IoT nodes to participate in the global learning aggregation while considering their local computing resources and communication conditions. Extensive experimental results demonstrate that our proposed asynchronous federated learning scheme outperforms state-of-the-art schemes in various settings on independent and identically distributed (i.i.d.) and non-i.i.d. data distributions.
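A "lightweight" selection rule of the kind described above can be sketched as a simple utility ranking. The equal 0.5/0.5 weighting of compute and link quality, the greedy top-k rule, and the field names are illustrative assumptions, not the paper's algorithm.

```python
def select_nodes(nodes, k):
    """Rank IoT nodes by a utility mixing normalized compute capacity
    ('cpu', in [0,1]) and channel quality ('link', in [0,1]), then pick
    the top k for this aggregation round."""
    scored = sorted(nodes,
                    key=lambda n: 0.5 * n['cpu'] + 0.5 * n['link'],
                    reverse=True)
    return scored[:k]

nodes = [{'id': 0, 'cpu': 0.9, 'link': 0.2},
         {'id': 1, 'cpu': 0.5, 'link': 0.8},
         {'id': 2, 'cpu': 0.3, 'link': 0.3}]
chosen = select_nodes(nodes, k=2)  # node 1 (0.65) and node 0 (0.55)
```

Because the rule is a single sort over per-node scalars, it stays cheap enough to rerun at every aggregation round as conditions change, which is the point of a lightweight selector.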
Funding: Supported in part by the National Natural Science Foundation of China (No. 61701197), in part by the National Key Research and Development Program of China (No. 2021YFA1000500(4)), and in part by the 111 Project (No. B23008).
Abstract: In vehicle edge computing (VEC), asynchronous federated learning (AFL) is used: the edge receives a local model and updates the global model, effectively reducing the global aggregation latency. Because vehicles differ in the amount of local data, computing capability, and location, renewing the global model with the same weight for every vehicle is inappropriate. These factors affect the local computation time and the upload time of the local model, and a vehicle may also suffer Byzantine attacks that corrupt its data. With deep reinforcement learning (DRL), however, we can consider these factors comprehensively, eliminating poorly performing vehicles as far as possible and excluding vehicles that have suffered Byzantine attacks before AFL. At the same time, when aggregating in AFL, we can focus on the vehicles with better performance to improve the accuracy and safety of the system. In this paper, we propose a DRL-based vehicle selection scheme for VEC. The scheme takes into account vehicle mobility, time-varying channel conditions, time-varying computational resources, differing data amounts, the transmission channel status of vehicles, and Byzantine attacks. Simulation results show that the proposed scheme effectively improves the safety and accuracy of the global model.
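The inputs such a DRL selector consumes, and a crude form of the Byzantine screening, can be sketched as below. All field names, the feature choice, and the norm-threshold heuristic are hypothetical stand-ins; the paper's agent would learn its selection policy rather than apply a fixed threshold.

```python
def vehicle_state(v):
    """Assemble a per-vehicle feature vector of the factors listed above:
    mobility, channel condition, compute, and data amount (names are
    illustrative, not the paper's notation)."""
    return [v['speed'], v['channel_gain'], v['cpu_freq'], v['data_size']]

def filter_suspect(vehicles, max_update_norm=10.0):
    """Drop vehicles whose last update norm is anomalously large -- a
    crude stand-in for excluding Byzantine-attacked vehicles before AFL."""
    return [v for v in vehicles if v['update_norm'] <= max_update_norm]

fleet = [{'speed': 20.0, 'channel_gain': 0.7, 'cpu_freq': 2.0,
          'data_size': 500, 'update_norm': 3.2},
         {'speed': 15.0, 'channel_gain': 0.4, 'cpu_freq': 1.5,
          'data_size': 300, 'update_norm': 42.0}]  # outlier: likely attacked
clean = filter_suspect(fleet)
```

In the DRL formulation, vectors like `vehicle_state(v)` would form the agent's observation, and the reward would trade off global-model accuracy against latency and attack exposure.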
Funding: This work was funded by the National Key R&D Program of China (Grant No. 2020YFB0906003).
Abstract: Asynchronous federated learning (AsynFL) can effectively mitigate the impact of edge-node heterogeneity on joint training while preserving participant privacy and data security. However, the frequent exchange of massive data can lead to excessive communication overhead between edge and central nodes, regardless of whether the federated learning (FL) algorithm uses synchronous or asynchronous aggregation. There is therefore an urgent need for a method that simultaneously accounts for device heterogeneity and the reduction of edge-node energy consumption. This paper proposes a novel Fixed-point Asynchronous Federated Learning (FixedAsynFL) algorithm, which mitigates the resource consumption caused by frequent data communication while alleviating the effect of device heterogeneity. FixedAsynFL uses fixed-point quantization to compress the local and global models in AsynFL. To balance energy consumption and learning accuracy, this paper proposes a quantization scale selection mechanism. This paper examines the mathematical relationship between the quantization scale and the energy consumption of the computation and communication processes in FixedAsynFL. Considering the upper bound of the quantization noise, this paper optimizes the quantization scale by minimizing communication and computation consumption. This paper performs experiments on the MNIST dataset with several edge nodes of different computing efficiency. The results show that FixedAsynFL with 8-bit quantization can significantly reduce the communication data size by 81.3% and save 74.9% of the computation energy in the training phase without significant loss of accuracy. These experimental results indicate that the proposed FixedAsynFL algorithm can effectively address device heterogeneity and the energy-consumption constraints of edge nodes.
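The core compression step, fixed-point quantization of model parameters, can be sketched as follows. The 8-bit width with 6 fractional bits is an illustrative split; choosing that split is exactly what the paper's quantization scale selection mechanism would do.

```python
def quantize_fixed(weights, bits=8, frac_bits=6):
    """Round each weight to signed fixed-point with `frac_bits`
    fractional bits, clipping to the n-bit signed integer range
    [-2^(n-1), 2^(n-1)-1]; only the integers need be transmitted."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return [max(lo, min(hi, round(w * scale))) for w in weights]

def dequantize_fixed(q, frac_bits=6):
    """Recover approximate real weights from the fixed-point integers."""
    return [v / (1 << frac_bits) for v in q]

q = quantize_fixed([0.5, -0.123, 1.9])   # -> [32, -8, 122]
w = dequantize_fixed(q)                  # -> [0.5, -0.125, 1.90625]
```

Each weight now occupies 8 bits instead of 32, which is the source of the communication savings; the worst-case rounding error of half a least-significant bit (here 2^-7) is the quantization-noise bound the paper's scale optimization works against.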
Funding: Supported in part by the Joint Funds for the National Natural Science Foundation of China under Grant U24B20187, the Natural Science Foundation on Frontier Leading Technology Basic Research Project of Jiangsu under Grant BK20212001, the National Natural Science Foundation of China under Grants 92367302 and 62371250, the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant 24KJA510008, and the Natural Science Foundation of Nanjing University of Posts and Telecommunications under Grant NY224113.
Abstract: The inherent stochasticity of mobile users' (MUs') request moments and data processing durations, coupled with aggregated bandwidth constraints at peak moments, often limits the performance of synchronous federated learning (FL) systems. To overcome these challenges, this paper proposes an incentive mechanism for asynchronous federated learning (AFL) systems within a Stackelberg game framework. In this model, MUs function both as consumers of communication resources and as providers of computational services. Additionally, we derive a closed-form solution for the optimal number of local iterations and employ Newton's method for numerical iteration to optimize the rewards of the cloud network controller (CNC). Numerical results demonstrate the effectiveness of the proposed scheme in enhancing system performance.
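The Newton-iteration step for reward optimization can be sketched generically: to maximize a smooth reward, Newton's method finds a root of its first derivative. The toy reward below is an assumption for illustration, not the paper's CNC utility function.

```python
def newton_max(df, d2f, x0, tol=1e-8, max_iter=50):
    """Locate a stationary point of a smooth objective by Newton's
    root-finding on its first derivative: x <- x - f'(x)/f''(x).
    For a strictly concave objective this is the maximizer."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy concave reward R(x) = 4x - x^2, maximized at x = 2.
x_star = newton_max(lambda x: 4 - 2 * x,   # R'(x)
                    lambda x: -2.0,        # R''(x)
                    x0=0.0)
```

For a quadratic reward like this toy example the iteration converges in one step; for the paper's more general CNC reward, the same update would simply be repeated until `step` falls below the tolerance.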
Funding: Supported by the National Natural Science Foundation of China, No. 61977006.
Abstract: Nowadays, smart wearable devices that record human physiological data in real time are widely used in the Social Internet of Things (IoT). To protect the data privacy of smart devices, researchers are paying increasing attention to federated learning. Although it largely addresses the data-leakage problem, a new challenge has emerged: asynchronous federated learning shortens convergence time but suffers from time delay and data heterogeneity, both of which harm accuracy. To overcome these issues, we propose an asynchronous federated learning scheme based on double compensation. The scheme improves the Delay Compensated Asynchronous Stochastic Gradient Descent (DC-ASGD) algorithm, which uses a second-order Taylor expansion as the delay compensation, and adds the FedProx operator to the objective function as the heterogeneity compensation. Besides, the proposed scheme motivates the federated learning process by adjusting the relative importance of the participants and the central server. We conduct multiple sets of experiments in both conventional and heterogeneous scenarios. The experimental results show that our scheme improves accuracy by about 5% while keeping the complexity constant. Numerical experiments further show that our scheme converges more smoothly during training and adapts better to heterogeneous environments. The proposed double-compensation-based federated learning scheme is highly accurate, flexible in its treatment of participants, and smooths the training process; hence it is well suited to protecting the data privacy of smart wearable devices.
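The two compensation terms can be sketched element-wise. The delay compensation follows the DC-ASGD form (a diagonal curvature correction built from the stale gradient), and the heterogeneity compensation is the gradient of FedProx's proximal term; the coefficient values `lam` and `mu` are illustrative assumptions, not the paper's tuned settings.

```python
def compensated_grad(g, w_now, w_stale, lam=0.04):
    """DC-ASGD-style delay compensation: approximate the fresh gradient
    from a stale one, g + lam * g*g*(w_now - w_stale), where the g*g
    factor is a cheap diagonal surrogate for the Hessian (the
    second-order Taylor correction mentioned above)."""
    return [gi + lam * gi * gi * (wn - ws)
            for gi, wn, ws in zip(g, w_now, w_stale)]

def proximal_grad(g, w_local, w_global, mu=0.01):
    """FedProx heterogeneity compensation: add mu*(w - w_global), the
    gradient of the proximal penalty (mu/2)*||w - w_global||^2 that keeps
    heterogeneous local updates anchored to the global model."""
    return [gi + mu * (wl - wg)
            for gi, wl, wg in zip(g, w_local, w_global)]
```

In a double-compensation scheme of this shape, a stale client gradient would first be delay-corrected with `compensated_grad`, while each client's local objective carries the proximal term so its gradients already include the `proximal_grad` correction.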