DAB BWS (Digital Audio Broadcasting Broadcast Web Site) is a data service of digital audio broadcasting. Through a digital broadcast receiving terminal, users can browse web pages and look up up-to-date information such as news and weather forecasts. This paper describes a PC-based software implementation of the DAB push service BWS [2], which to some extent promotes the convergence of the Internet and broadcasting.
Abstract: With the increasing complexity of vehicular networks and the proliferation of connected vehicles, Federated Learning (FL) has emerged as a critical framework for decentralized model training while preserving data privacy. However, efficient client selection and adaptive weight allocation in heterogeneous and non-IID environments remain challenging. To address these issues, we propose Federated Learning with Client Selection and Adaptive Weighting (FedCW), a novel algorithm that leverages adaptive client selection and dynamic weight allocation to optimize model convergence in real-time vehicular networks. FedCW selects clients based on their Euclidean distance from the global model and dynamically adjusts aggregation weights to balance data diversity and model convergence. Experimental results show that FedCW significantly outperforms existing FL algorithms such as FedAvg, FedProx, and SCAFFOLD, particularly in non-IID settings, achieving faster convergence, higher accuracy, and lower communication overhead. These findings demonstrate that FedCW provides an effective solution for enhancing the performance of FL in heterogeneous, edge-based computing environments.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62072411, 62372343, 62402352, 62403500), the Key Research and Development Program of Hubei Province (No. 2023BEB024), and the Open Fund of the Key Laboratory of Social Computing and Cognitive Intelligence (Dalian University of Technology), Ministry of Education (No. SCCI2024TB02).
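The FedCW abstract names its two ingredients, distance-based client selection and adaptive aggregation weights, without giving the formulas. The Python sketch below is therefore only one plausible reading: clients whose local models lie closest (in Euclidean distance) to the global model are selected, and the selected updates are averaged with inverse-distance weights. The "closest clients" criterion, the inverse-distance rule, and all function names are illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np

# Illustrative sketch only: FedCW's exact rules are not given in the abstract.
# The "closest clients" criterion and inverse-distance weights are assumptions.

def flatten(params):
    """Concatenate a model's parameter arrays into a single vector."""
    return np.concatenate([p.ravel() for p in params])

def select_clients(global_params, client_params, k):
    """Select the k clients whose local models are nearest (Euclidean
    distance) to the current global model."""
    g = flatten(global_params)
    dist = {cid: np.linalg.norm(flatten(p) - g) for cid, p in client_params.items()}
    return sorted(dist, key=dist.get)[:k]

def aggregate(global_params, client_params, selected, eps=1e-8):
    """Average the selected clients' parameters, weighting each client
    inversely to its distance from the global model (closer = heavier)."""
    g = flatten(global_params)
    w = {cid: 1.0 / (np.linalg.norm(flatten(client_params[cid]) - g) + eps)
         for cid in selected}
    total = sum(w.values())
    new_params = [np.zeros_like(p) for p in global_params]
    for cid in selected:
        share = w[cid] / total
        for i, p in enumerate(client_params[cid]):
            new_params[i] += share * p
    return new_params
```

In a full training loop this would replace FedAvg's uniform average; the point is only to show where the distance enters client selection and where it enters the aggregation weights.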
Abstract: The proliferation of deep learning (DL) has amplified the demand for processing large and complex datasets for tasks such as modeling, classification, and identification. However, traditional DL methods compromise client privacy by collecting sensitive data, underscoring the necessity for privacy-preserving solutions like Federated Learning (FL). FL effectively addresses escalating privacy concerns by facilitating collaborative model training without requiring the sharing of raw data. Given that FL clients autonomously manage their training data, encouraging client engagement is pivotal for successful model training. To overcome challenges such as unreliable communication and budget constraints, we present ENTIRE, a contract-based dynamic participation incentive mechanism for FL. ENTIRE ensures impartial model training by tailoring participation levels and payments to accommodate diverse client preferences. Our approach involves several key steps. First, we examine how random client participation affects FL convergence in non-convex scenarios, establishing the relationship between client participation levels and model performance. Next, we reframe model performance optimization as an optimal contract design problem to guide the distribution of rewards among clients with varying participation costs. By balancing budget considerations against model effectiveness, we craft optimal contracts for different budgetary constraints, prompting clients to disclose their participation preferences and select suitable contracts for contributing to model training. Finally, we conduct a comprehensive experimental evaluation of ENTIRE on three real datasets. The results demonstrate a significant 12.9% improvement in model performance, validating the mechanism's adherence to the anticipated economic properties.
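ENTIRE's optimal contracts are derived analytically in the paper; as a rough illustration of only the client-side step the abstract describes (clients select suitable contracts according to their private participation costs), the Python sketch below has each client pick, from a hypothetical menu, the contract that maximizes its own utility. The menu values, the linear cost model, and the function names are assumptions made for illustration, not the paper's construction.

```python
from dataclasses import dataclass

# Illustrative sketch only: the menu values, linear cost model, and utility
# form below are assumptions; the paper derives its optimal contracts
# analytically under budget constraints.

@dataclass
class Contract:
    participation_level: float  # e.g., expected fraction of rounds joined
    payment: float              # reward for honoring the contract

def choose_contract(menu, unit_cost):
    """A client with private per-unit participation cost picks the contract
    maximizing utility = payment - unit_cost * participation_level,
    or opts out (returns None) if every option yields negative utility."""
    best, best_utility = None, 0.0
    for c in menu:
        utility = c.payment - unit_cost * c.participation_level
        if utility > best_utility:
            best, best_utility = c, utility
    return best

# A toy menu: higher participation earns a higher payment.
menu = [Contract(0.9, 10.0), Contract(0.6, 7.5), Contract(0.3, 4.2)]
print(choose_contract(menu, unit_cost=5.0))   # low-cost client -> high participation
print(choose_contract(menu, unit_cost=13.0))  # high-cost client -> low participation
print(choose_contract(menu, unit_cost=15.0))  # too costly -> None (opts out)
```

Self-selection of this kind is what lets a well-designed menu elicit clients' private costs: in the toy menu above, clients with different cost levels end up choosing different contracts.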