Abstract: With the increasing complexity of vehicular networks and the proliferation of connected vehicles, Federated Learning (FL) has emerged as a critical framework for decentralized model training while preserving data privacy. However, efficient client selection and adaptive weight allocation in heterogeneous and non-IID environments remain challenging. To address these issues, we propose Federated Learning with Client Selection and Adaptive Weighting (FedCW), a novel algorithm that leverages adaptive client selection and dynamic weight allocation to optimize model convergence in real-time vehicular networks. FedCW selects clients based on their Euclidean distance from the global model and dynamically adjusts aggregation weights to balance data diversity and model convergence. Experimental results show that FedCW significantly outperforms existing FL algorithms such as FedAvg, FedProx, and SCAFFOLD, particularly in non-IID settings, achieving faster convergence, higher accuracy, and reduced communication overhead. These findings demonstrate that FedCW provides an effective solution for enhancing FL performance in heterogeneous, edge-based computing environments.
Funding: Funded by the Central University of Finance and Economics Greater Bay Area Research Institute Project (No. YJY202303), the National Natural Science Foundation of China (No. 61906220), and the Ministry of Education Humanities and Social Science Project (No. 19YJCZH178).
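The distance-based selection and adaptive weighting that the FedCW abstract describes can be sketched as follows. The abstract does not give the exact formulas, so the top-k selection rule, the inverse-distance weights, and the `fedcw_round` helper are illustrative assumptions, not the paper's method:

```python
import numpy as np

def fedcw_round(global_w, client_ws, k):
    """One aggregation round: select the k client models closest (in
    Euclidean distance) to the global model, then average them with
    weights inversely proportional to that distance (assumed scheme)."""
    dists = np.array([np.linalg.norm(w - global_w) for w in client_ws])
    chosen = np.argsort(dists)[:k]          # k closest clients
    inv = 1.0 / (dists[chosen] + 1e-8)      # closer clients weigh more
    weights = inv / inv.sum()               # normalize to sum to 1
    return np.average([client_ws[i] for i in chosen], axis=0, weights=weights)
```

For example, with a zero global model and clients at distances 1, 2, and 10, choosing `k=2` keeps the first two clients and weights them 2/3 and 1/3.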
Abstract: The growing demand for privacy-preserving machine learning has positioned federated learning as a promising research paradigm, enabling the training of high-performance models across distributed data sources without compromising user privacy. However, despite its advantages, federated learning faces critical challenges arising from the heterogeneity and volatility of participating clients. In real-world scenarios, variations in client participation, data volume, computational capability, and communication reliability create a highly dynamic training environment, which degrades training efficiency and model convergence. To address these challenges, this paper proposes a novel client selection method named CDE3. First, CDE3 employs a multidimensional model to comprehensively evaluate clients' contributions. Second, we enhance the classical Exp3 algorithm by incorporating a discount factor that exponentially decays historical contributions, thereby increasing the influence of recent client behavior in the selection process. Furthermore, we provide a theoretical analysis demonstrating a favorable regret bound for the proposed method. Extensive experiments conducted in volatile FL settings validate the effectiveness of CDE3, showing improved convergence speed and model accuracy compared with baseline algorithms. These results confirm that CDE3 effectively mitigates volatility, enhancing the stability and efficiency of federated learning.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62072411, 62372343, 62402352, 62403500), the Key Research and Development Program of Hubei Province (No. 2023BEB024), and the Open Fund of the Key Laboratory of Social Computing and Cognitive Intelligence (Dalian University of Technology), Ministry of Education (No. SCCI2024TB02).
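The discounted-Exp3 idea in the CDE3 abstract — decaying historical contributions so recent client behavior dominates — can be illustrated with a small bandit sketch. The function names, the `gamma` discount, and the exploration mixing below are assumptions for illustration, not the paper's exact update:

```python
import numpy as np

def discounted_exp3_update(scores, arm, reward, prob, gamma=0.9, eta=0.1):
    """One bandit step: decay all accumulated scores, then apply the
    standard Exp3 importance-weighted update to the arm that was played."""
    scores = gamma * scores             # exponential decay of history
    scores[arm] += eta * reward / prob  # importance-weighted reward
    return scores

def selection_probs(scores, eps=0.1):
    """Softmax over scores, mixed with uniform exploration as in Exp3."""
    w = np.exp(scores - scores.max())   # shift for numerical stability
    p = w / w.sum()
    return (1 - eps) * p + eps / len(scores)
```

Because every score is multiplied by `gamma` each round, a contribution observed t rounds ago only counts with weight `gamma**t`, which is the discounting effect the abstract describes.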
Abstract: The proliferation of deep learning (DL) has amplified the demand for processing large and complex datasets for tasks such as modeling, classification, and identification. However, traditional DL methods compromise client privacy by collecting sensitive data, underscoring the need for privacy-preserving solutions such as Federated Learning (FL). FL addresses escalating privacy concerns by enabling collaborative model training without sharing raw data. Given that FL clients autonomously manage their training data, encouraging client engagement is pivotal for successful model training. To overcome challenges such as unreliable communication and budget constraints, we present ENTIRE, a contract-based dynamic participation incentive mechanism for FL. ENTIRE ensures impartial model training by tailoring participation levels and payments to diverse client preferences. Our approach involves several key steps. First, we examine how random client participation affects FL convergence in non-convex settings, establishing the correlation between client participation levels and model performance. Next, we reframe model performance optimization as an optimal contract design problem to guide the distribution of rewards among clients with varying participation costs. By balancing budget considerations against model effectiveness, we craft optimal contracts for different budgetary constraints, prompting clients to disclose their participation preferences and select suitable contracts for contributing to model training. Finally, we conduct a comprehensive experimental evaluation of ENTIRE on three real datasets. The results demonstrate a significant 12.9% improvement in model performance, validating its adherence to the anticipated economic properties.
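The self-selection idea behind contract design can be shown with a tiny sketch: each client, knowing only its own per-unit participation cost, picks the contract that maximizes its utility, or opts out if every contract yields negative utility. The linear cost model and the `choose_contract` helper are hypothetical simplifications, not ENTIRE's actual contract menu:

```python
def choose_contract(cost, contracts):
    """contracts: list of (participation_level, payment) pairs.
    A client with per-unit cost `cost` picks the contract maximizing
    its utility (payment - cost * level), or returns None if no
    contract is individually rational (non-negative utility)."""
    best = max(contracts, key=lambda c: c[1] - cost * c[0])
    return best if best[1] - cost * best[0] >= 0 else None
```

With the menu `[(1, 5), (3, 9)]`, a low-cost client prefers the high-participation contract, a mid-cost client the low-participation one, and a very high-cost client declines entirely — the kind of truthful sorting by type that an incentive-compatible contract menu is designed to induce.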
Abstract: As an emerging paradigm for addressing data isolation, federated learning can train a global model without requiring clients to upload their raw data, effectively protecting user privacy. Because clients are numerous but communication resources are limited, only a subset of clients can be selected to participate in model aggregation. However, federated learning systems face challenges such as device heterogeneity and data heterogeneity; naive client selection strategies fail to account for the dynamic nature of the environment, slowing model convergence and degrading model performance. Considering the time-varying state of clients, this paper proposes a new client-availability metric, builds a federated learning client selection model under multiple constraints, and formulates it as a loss-minimization problem. The optimization problem is then cast as a Markov decision process, and an Adaptive Selection for Clients in Federated Learning based on Deep Reinforcement Learning (ASC-DRL) algorithm is proposed, which jointly considers communication latency, resource consumption, and client availability, and maximizes the reward function through continuous interaction between the agent and the environment to obtain the optimal client selection scheme. Experimental results show that, compared with traditional federated learning algorithms, ASC-DRL improves model accuracy by up to 89.2% and training loss by up to 99.8%, effectively adapts to dynamic environment changes, and improves the overall performance and stability of federated learning.
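A minimal stand-in for the selection step in the ASC-DRL abstract: mask unavailable clients, then pick the k clients with the highest estimated values, with ε-greedy exploration. The fixed `q_values` scores and the ε-greedy rule are illustrative assumptions — the paper trains a deep RL agent rather than using static scores:

```python
import numpy as np

def select_clients(q_values, available, k, eps=0.1, rng=None):
    """Pick k available clients: explore uniformly at random with
    probability eps, otherwise exploit the highest value estimates."""
    rng = rng or np.random.default_rng(0)
    masked = np.where(available, np.asarray(q_values, dtype=float), -np.inf)
    idx = np.flatnonzero(available)          # indices of available clients
    if rng.random() < eps:
        return rng.choice(idx, size=min(k, idx.size), replace=False)
    return np.argsort(masked)[::-1][:k]      # greedy: top-k by value
```

In a full DRL setting the `q_values` would come from a policy or value network whose reward combines accuracy gain, communication latency, and resource consumption, as the abstract describes.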