Abstract: With the increasing complexity of vehicular networks and the proliferation of connected vehicles, Federated Learning (FL) has emerged as a critical framework for decentralized model training while preserving data privacy. However, efficient client selection and adaptive weight allocation in heterogeneous and non-IID environments remain challenging. To address these issues, we propose Federated Learning with Client Selection and Adaptive Weighting (FedCW), a novel algorithm that leverages adaptive client selection and dynamic weight allocation to optimize model convergence in real-time vehicular networks. FedCW selects clients based on the Euclidean distance between their local models and the global model, and dynamically adjusts aggregation weights to balance data diversity and model convergence. Experimental results show that FedCW significantly outperforms existing FL algorithms such as FedAvg, FedProx, and SCAFFOLD, particularly in non-IID settings, achieving faster convergence, higher accuracy, and reduced communication overhead. These findings demonstrate that FedCW provides an effective solution for enhancing FL performance in heterogeneous, edge-based computing environments.
Funding: This work was funded by the Central University of Finance and Economics, Greater Bay Area Research Institute Project (No. YJY202303), the National Natural Science Foundation of China (No. 61906220), and the Ministry of Education Humanities and Social Science Project (No. 19YJCZH178).
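The FedCW selection and weighting rule described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function name `fedcw_round`, the inverse-distance weighting, and the smoothing constant are assumptions based only on the abstract's description (select clients closest to the global model in Euclidean distance, then weight their updates during aggregation).

```python
import numpy as np

def fedcw_round(global_model, client_models, k):
    """One aggregation round of a FedCW-style scheme (illustrative sketch).

    Selects the k clients whose local models are closest to the global
    model in Euclidean distance, then aggregates them with weights
    inversely proportional to that distance (an assumed weighting rule).
    """
    # Euclidean distance of each client's model from the global model
    dists = np.array([np.linalg.norm(m - global_model) for m in client_models])
    selected = np.argsort(dists)[:k]        # the k closest clients
    inv = 1.0 / (dists[selected] + 1e-8)    # small constant avoids division by zero
    weights = inv / inv.sum()               # normalize weights to sum to 1
    # Weighted aggregation of the selected client models
    return sum(w * client_models[i] for w, i in zip(weights, selected))
```

A closer client thus receives a larger aggregation weight; how FedCW actually trades distance off against data diversity is not specified in the abstract, so this sketch captures only the distance-based part.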
Abstract: The growing demand for privacy-preserving machine learning has positioned federated learning as a promising research paradigm, enabling the training of high-performance models across distributed data sources without compromising user privacy. However, despite its advantages, federated learning faces critical challenges arising from the heterogeneity and volatility of participating clients. In real-world scenarios, variations in client participation, data volume, computational capability, and communication reliability create a highly dynamic training environment, which negatively impacts model efficiency and convergence. To address these challenges, this paper proposes a novel client selection method named CDE3. First, CDE3 employs a multidimensional model to comprehensively evaluate clients' contributions. Second, we enhance the classical Exp3 algorithm by incorporating a discount factor that exponentially decays historical contributions, thereby increasing the influence of recent client behavior in the selection process. Furthermore, we provide a theoretical analysis demonstrating a favorable regret bound for the proposed method. Extensive experiments conducted in volatile FL settings validate the effectiveness of CDE3, showing improved convergence speed and model accuracy compared with baseline algorithms. These results confirm that CDE3 effectively mitigates volatility, enhancing the stability and efficiency of federated learning.
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 62072411, 62372343, 62402352, and 62403500), the Key Research and Development Program of Hubei Province (No. 2023BEB024), and the Open Fund of the Key Laboratory of Social Computing and Cognitive Intelligence (Dalian University of Technology), Ministry of Education (No. SCCI2024TB02).
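The core mechanism above, Exp3 with an exponential discount on past contributions, can be sketched as follows. This is an illustrative sketch of a discounted Exp3 selector, not CDE3 itself: the function name, the parameter values, and the shape of `contribution_fn` are assumptions; CDE3's multidimensional contribution model is abstracted into a single scalar reward.

```python
import numpy as np

rng = np.random.default_rng(0)

def discounted_exp3(contribution_fn, n_clients, rounds, gamma=0.1, discount=0.9):
    """Exp3 client selection with exponentially discounted history (sketch).

    contribution_fn(i) returns the observed contribution of client i in [0, 1].
    The discount factor decays accumulated rewards each round, so recent
    client behavior dominates the selection probabilities.
    """
    s = np.zeros(n_clients)          # discounted cumulative reward estimates
    picks = []
    for _ in range(rounds):
        w = np.exp(gamma * s / n_clients)
        # Mix exploitation with uniform exploration (standard Exp3 form)
        p = (1 - gamma) * w / w.sum() + gamma / n_clients
        i = rng.choice(n_clients, p=p)
        r = contribution_fn(i)
        s *= discount                # exponentially decay historical contributions
        s[i] += r / p[i]             # importance-weighted reward update
        picks.append(i)
    return picks
```

The discount step `s *= discount` is the modification the abstract describes; setting `discount = 1.0` recovers ordinary Exp3.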
Abstract: The proliferation of deep learning (DL) has amplified the demand for processing large and complex datasets for tasks such as modeling, classification, and identification. However, traditional DL methods compromise client privacy by collecting sensitive data, underscoring the necessity for privacy-preserving solutions like Federated Learning (FL). FL effectively addresses escalating privacy concerns by facilitating collaborative model training without requiring the sharing of raw data. Given that FL clients autonomously manage their training data, encouraging client engagement is pivotal for successful model training. To overcome challenges such as unreliable communication and budget constraints, we present ENTIRE, a contract-based dynamic participation incentive mechanism for FL. ENTIRE ensures impartial model training by tailoring participation levels and payments to diverse client preferences. Our approach involves several key steps. First, we examine how random client participation affects FL convergence in non-convex settings, establishing the relationship between client participation levels and model performance. Next, we reframe model performance optimization as an optimal contract design problem to guide the distribution of rewards among clients with varying participation costs. By balancing budget considerations against model effectiveness, we derive optimal contracts for different budgetary constraints, prompting clients to disclose their participation preferences and select suitable contracts for contributing to model training. Finally, we conduct a comprehensive experimental evaluation of ENTIRE on three real datasets. The results demonstrate a significant 12.9% improvement in model performance and validate its adherence to the anticipated economic properties.
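The contract self-selection step described above, where each client picks the menu item that maximizes its own utility, can be sketched as follows. This is a minimal sketch under standard contract-theory assumptions, not the paper's construction: the menu values, the linear cost model, and the function name are all illustrative.

```python
def select_contract(cost_per_round, menu):
    """Contract self-selection in an ENTIRE-style incentive scheme (sketch).

    Each menu item is a (participation_level, payment) pair. A rational
    client picks the item maximizing its utility: payment minus its
    private participation cost. An incentive-compatible menu makes this
    choice truthfully reveal the client's cost type.
    """
    def utility(item):
        level, payment = item
        return payment - cost_per_round * level
    return max(menu, key=utility)
```

For example, with a menu of `[(10, 5.0), (20, 12.0), (40, 20.0)]`, a low-cost client (cost 0.3 per round) selects the high-participation contract, while a high-cost client (cost 0.6) selects a lighter one, so the menu separates client types without the server observing costs directly.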