Journal Articles
6 articles found
1. Two-Stage Client Selection Scheme for Blockchain-Enabled Federated Learning in IoT
Authors: Xiaojun Jin, Chao Ma, Song Luo, Pengyi Zeng, Yifei Wei. Computers, Materials & Continua, SCIE EI, 2024, Issue 11, pp. 2317-2336 (20 pages)
Federated learning enables data owners in the Internet of Things (IoT) to collaborate in training models without sharing private data, creating new business opportunities for building a data market. However, in practical operation, there are still some problems with federated learning applications. Blockchain has the characteristics of decentralization, distribution, and security. Blockchain-enabled federated learning further improves the security and performance of model training, while also expanding the application scope of federated learning. Blockchain also has natural financial attributes that help establish a federated learning data market. However, the data for federated learning tasks may be distributed across a large number of resource-constrained IoT devices, which have different computing, communication, and storage resources, and the data quality of each device may also vary. Therefore, how to effectively select the clients with the data required for a federated learning task is a research hotspot. In this paper, a two-stage client selection scheme for blockchain-enabled federated learning is proposed, which first selects clients that satisfy the federated learning task through attribute-based encryption, protecting the attribute privacy of clients. Blockchain nodes then select some clients for local model aggregation using a proximal policy optimization algorithm. Experiments show that the model performance of our two-stage client selection scheme is higher than that of other client selection algorithms when some clients are offline and the data quality is poor.
Keywords: blockchain; federated learning; attribute-based encryption; client selection; proximal policy optimization
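The two-stage flow described in this abstract (attribute filtering, then policy-driven selection) can be pictured with the minimal sketch below. The attribute-based encryption check and the proximal policy optimization policy are replaced by plain stand-ins (a set comparison and a greedy score ranking), so this illustrates only the control flow, not the paper's implementation; all client fields are hypothetical.

```python
# Minimal sketch of a two-stage client selection flow. Stage 1 stands in for
# the ABE-based attribute check with a plain set comparison; stage 2 stands in
# for the PPO policy with a greedy top-k ranking over hypothetical scores.
def stage1_attribute_filter(clients, required_attrs):
    """Keep only clients whose attribute set covers the task requirements."""
    return [c for c in clients if required_attrs <= c["attrs"]]

def stage2_policy_select(clients, k):
    """Greedy stand-in for the learned policy: take the k best-scored clients."""
    return sorted(clients, key=lambda c: c["score"], reverse=True)[:k]

clients = [
    {"id": 0, "attrs": {"gpu", "image-data"}, "score": 0.8},
    {"id": 1, "attrs": {"image-data"},        "score": 0.3},
    {"id": 2, "attrs": {"gpu", "image-data"}, "score": 0.5},
]
eligible = stage1_attribute_filter(clients, required_attrs={"gpu", "image-data"})
selected = stage2_policy_select(eligible, k=2)
print([c["id"] for c in selected])   # [0, 2]
```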
2. Edge Computing-Based Joint Client Selection and Networking Scheme for Federated Learning in Vehicular IoT (Cited: 6)
Authors: Wugedele Bao, Celimuge Wu, Siri Guleng, Jiefang Zhang, Kok-Lim Alvin Yau, Yusheng Ji. China Communications, SCIE CSCD, 2021, Issue 6, pp. 39-52 (14 pages)
In order to support advanced vehicular Internet-of-Things (IoT) applications, information exchanges among different vehicles are required to find efficient solutions that cater to different application requirements in complex and dynamic vehicular environments. Federated learning (FL), a type of distributed learning technology, has been attracting great interest in recent years as it performs knowledge exchange among different network entities without violating user privacy. However, the client selection and networking scheme for enabling FL in dynamic vehicular environments, which determines the communication delay between FL clients and the central server that aggregates the models received from the clients, is still under-explored. In this paper, we propose an edge computing-based joint client selection and networking scheme for vehicular IoT. The proposed scheme assigns some vehicles as edge vehicles using a distributed approach, and uses the edge vehicles as FL clients to conduct the training of local models, which learn optimal behaviors through interaction with the environment. The clients also work as forwarder nodes in information sharing among network entities. The client selection takes into account the vehicle velocity, vehicle distribution, and the wireless link connectivity between vehicles using a fuzzy logic algorithm, resulting in an efficient learning and networking architecture. We use computer simulations to evaluate the proposed scheme in terms of the communication overhead and the information covered in learning.
Keywords: vehicular IoT; federated learning; client selection; networking scheme
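To make the fuzzy-logic selection concrete, the sketch below scores a candidate edge vehicle from velocity, neighborhood connectivity, and position, the three factors named in the abstract. The membership functions, thresholds, and weights are invented for illustration and are not the rule base used in the paper.

```python
# Illustrative fuzzy-style scoring of an edge-vehicle candidate.
# All membership functions and weights below are assumptions.
def low_speed(v_kmh, v_max=120.0):
    """Membership in 'suitably slow': 1 when stationary, 0 at v_max."""
    return max(0.0, 1.0 - v_kmh / v_max)

def good_connectivity(neighbors, target=10):
    """Membership in 'well connected': saturates at `target` neighbors."""
    return min(1.0, neighbors / target)

def central_position(dist_to_cluster_center_m, radius_m=300.0):
    """Membership in 'centrally located' within the local vehicle cluster."""
    return max(0.0, 1.0 - dist_to_cluster_center_m / radius_m)

def edge_vehicle_score(v_kmh, neighbors, dist_m):
    # Simple weighted aggregation standing in for a full fuzzy rule base.
    return (0.4 * low_speed(v_kmh)
            + 0.4 * good_connectivity(neighbors)
            + 0.2 * central_position(dist_m))

print(edge_vehicle_score(v_kmh=40, neighbors=7, dist_m=120))  # ~0.67
```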
3. FedCW: Client Selection with Adaptive Weight in Heterogeneous Federated Learning
Authors: Haotian Wu, Jiaming Pei, Jinhai Li. Computers, Materials & Continua, 2026, Issue 1, pp. 1551-1570 (20 pages)
With the increasing complexity of vehicular networks and the proliferation of connected vehicles, Federated Learning (FL) has emerged as a critical framework for decentralized model training while preserving data privacy. However, efficient client selection and adaptive weight allocation in heterogeneous and non-IID environments remain challenging. To address these issues, we propose Federated Learning with Client Selection and Adaptive Weighting (FedCW), a novel algorithm that leverages adaptive client selection and dynamic weight allocation to optimize model convergence in real-time vehicular networks. FedCW selects clients based on their Euclidean distance from the global model and dynamically adjusts aggregation weights to optimize both data diversity and model convergence. Experimental results show that FedCW significantly outperforms existing FL algorithms such as FedAvg, FedProx, and SCAFFOLD, particularly in non-IID settings, achieving faster convergence, higher accuracy, and reduced communication overhead. These findings demonstrate that FedCW provides an effective solution for enhancing the performance of FL in heterogeneous, edge-based computing environments.
Keywords: federated learning; non-IID; client selection; weight allocation; vehicular networks
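The FedCW description suggests a selection step driven by each client's Euclidean distance to the global model plus an adaptive aggregation weight. The sketch below is one plausible reading only: it prefers more distant clients (for diversity) and weights them by inverse distance. Both choices are assumptions, since the abstract does not pin them down, and the function name is hypothetical.

```python
# One plausible reading of distance-based selection with adaptive weights.
# "Farthest first" and inverse-distance weights are assumptions, not FedCW.
import numpy as np

def select_and_weight(global_model, client_models, k):
    models = np.stack(client_models)                     # (n_clients, dim)
    dists = np.linalg.norm(models - global_model, axis=1)
    chosen = np.argsort(dists)[-k:]                      # k farthest clients
    inv = 1.0 / (dists[chosen] + 1e-8)
    weights = inv / inv.sum()                            # adaptive aggregation weights
    aggregated = np.average(models[chosen], axis=0, weights=weights)
    return chosen, weights, aggregated

rng = np.random.default_rng(0)
g = rng.normal(size=5)
clients = [g + rng.normal(scale=s, size=5) for s in (0.1, 0.5, 1.0, 2.0)]
idx, w, new_global = select_and_weight(g, clients, k=2)
print(idx, np.round(w, 3))
```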
4. Federated Dynamic Client Selection for Fairness Guarantee in Heterogeneous Edge Computing
Authors: 毛莺池, 沈莉娟, 吴俊, 平萍, 吴杰. Journal of Computer Science & Technology, SCIE EI CSCD, 2024, Issue 1, pp. 139-158 (20 pages)
Federated learning has emerged as a distributed learning paradigm that trains at each client and aggregates at a parameter server. System heterogeneity hinders stragglers from responding to the server in time and incurs huge communication costs. Although client grouping in federated learning can solve the straggler problem, the stochastic selection strategy in client grouping neglects the impact of data distribution within each group. Besides, current client grouping approaches make clients suffer unfair participation, leading to biased performance for different clients. In order to guarantee the fairness of client participation and mitigate biased local performance, we propose a federated dynamic client selection method based on data representativity (FedSDR). FedSDR clusters clients into groups correlated with their own local computational efficiency. To estimate the significance of client datasets, we design a novel data representativity evaluation scheme based on local data distribution. Furthermore, the two most representative clients in each group are selected to optimize the global model. Finally, the DYNAMIC-SELECT algorithm updates local computational efficiency and data representativity states to regroup clients after periodic average aggregation. Evaluations on real datasets show that FedSDR improves client participation by 27.4%, 37.9%, and 23.3% compared with FedAvg, TiFL, and FedSS, respectively, taking fairness into account in federated learning. In addition, FedSDR surpasses FedAvg, FedGS, and FedMS by 21.32%, 20.4%, and 6.90%, respectively, in local test accuracy variance, balancing the performance bias of the global model across clients.
Keywords: federated learning; fairness; computational efficiency; data distribution; client selection; client grouping
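A rough picture of this pipeline: clients are bucketed by computational efficiency, and the two most representative clients of each bucket are chosen. The representativity measure below (negative L1 distance between a client's label distribution and the global one) and the bucketing rule are placeholders, not the paper's formulas.

```python
# Simplified FedSDR-style grouping and per-group selection (placeholder logic).
import numpy as np

def group_by_efficiency(efficiencies, n_groups):
    """Bucket client indices into n_groups by sorted computational efficiency."""
    order = np.argsort(efficiencies)
    return np.array_split(order, n_groups)

def representativity(label_dist, global_dist):
    """Higher when the client's label distribution matches the global one."""
    return -np.abs(label_dist - global_dist).sum()   # negative L1 distance

def select_two_per_group(groups, label_dists, global_dist):
    selected = []
    for g in groups:
        scores = [representativity(label_dists[i], global_dist) for i in g]
        selected.extend(g[np.argsort(scores)[-2:]])   # two most representative
    return [int(i) for i in selected]

rng = np.random.default_rng(1)
eff = rng.uniform(0.1, 1.0, size=8)                   # relative compute speed
dists = rng.dirichlet(np.ones(10), size=8)            # per-client label mix
global_dist = dists.mean(axis=0)
groups = group_by_efficiency(eff, n_groups=2)
print(select_two_per_group(groups, dists, global_dist))
```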
5. C²S: Class-aware client selection for effective aggregation in federated learning (Cited: 1)
Authors: Mei Cao, Yujie Zhang, Zezhong Ma, Mengying Zhao. High-Confidence Computing, 2022, Issue 3, pp. 7-15 (9 pages)
Federated learning is proposed to train on distributed data in a safe manner by avoiding sending data to the server. The server maintains a global model and sends it to clients in each communication round, then aggregates the updated local models to derive a new global model. Traditionally, the clients are randomly selected in each round and aggregation is based on weighted averaging. Research shows that the performance on IID data is satisfactory, while a significant accuracy drop can be observed for non-IID data. In this paper, we explore the reasons and propose a novel aggregation approach for non-IID data in federated learning. Specifically, we propose to group the clients according to the classes of data they hold and select one set in each communication round. Local models from the same set are averaged as usual, and the updated global model is sent to the next group of clients for further training. In this way, the parameters are only averaged over similar clients and passed among different groups. Evaluation shows that the proposed scheme has advantages in terms of model accuracy and convergence speed with highly unbalanced data distributions and complex models.
Keywords: federated learning; non-IID data; aggregation; client selection
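The class-aware idea can be sketched as packing clients into sets that jointly cover all label classes, then training on one set per round and averaging only within it. The greedy packing below is an illustrative stand-in, not the authors' grouping algorithm.

```python
# Illustrative class-aware grouping: each returned set is meant to be used in
# one communication round and averaged internally (greedy stand-in logic).
def build_class_aware_sets(client_classes, all_classes):
    """Greedily pack clients into sets that each try to cover all_classes."""
    remaining = dict(client_classes)           # client_id -> set of held classes
    sets = []
    while remaining:
        covered, members = set(), []
        for cid, classes in sorted(remaining.items()):
            if classes - covered:              # client adds at least one new class
                members.append(cid)
                covered |= classes
            if covered >= all_classes:
                break
        if not members:                        # safety: guarantee progress
            members = list(remaining)
        for cid in members:
            remaining.pop(cid)
        sets.append(members)
    return sets

client_classes = {0: {0, 1}, 1: {2, 3}, 2: {0, 2}, 3: {1, 3}, 4: {0, 1, 2, 3}}
print(build_class_aware_sets(client_classes, all_classes={0, 1, 2, 3}))
# -> [[0, 1], [2, 3], [4]]
```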
6. ASCFL: Accurate and Speedy Semi-Supervised Clustering Federated Learning (Cited: 3)
Authors: Jingyi He, Biyao Gong, Jiadi Yang, Hai Wang, Pengfei Xu, Tianzhang Xing. Tsinghua Science and Technology, SCIE EI CAS CSCD, 2023, Issue 5, pp. 823-837 (15 pages)
The influence of non-Independent and Identically Distributed (non-IID) data on Federated Learning (FL) has been a serious concern. Clustered Federated Learning (CFL) is an emerging approach for reducing the impact of non-IID data; it employs client similarity, calculated by relevant metrics, for clustering. Unfortunately, existing CFL methods only pursue accuracy improvement and ignore the convergence rate. Additionally, the designed client selection strategy affects the clustering results. Finally, traditional semi-supervised learning changes the distribution of data on clients, resulting in higher local costs and undesirable performance. In this paper, we propose a novel CFL method named ASCFL, which selects clients to participate in training and can dynamically adjust the balance between accuracy and convergence speed on datasets consisting of labeled and unlabeled data. To deal with unlabeled data, the label prediction strategy predicts labels using encoders. The client selection strategy improves accuracy and reduces overhead by selecting clients with higher losses to participate in the current round. Moreover, the similarity-based clustering strategy uses a new indicator to measure the similarity between clients. Experimental results show that ASCFL has certain advantages in model accuracy and convergence speed over three state-of-the-art methods on two popular datasets.
Keywords: federated learning; clustered federated learning; non-independent identically distributed (non-IID) data; similarity indicator; client selection; semi-supervised learning
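Two ingredients named in this abstract, loss-based client selection and a similarity indicator for clustering, are sketched below. Cosine similarity between model updates is used here as a stand-in for the paper's own indicator, and all data is synthetic.

```python
# Sketch of loss-based selection plus a pairwise similarity indicator.
# Cosine similarity is an assumed stand-in for ASCFL's own metric.
import numpy as np

def select_high_loss_clients(losses, k):
    """Return the indices of the k clients with the largest local loss."""
    return np.argsort(losses)[-k:]

def similarity_matrix(updates):
    """Pairwise cosine similarity between client model updates."""
    u = np.stack(updates)
    u = u / (np.linalg.norm(u, axis=1, keepdims=True) + 1e-8)
    return u @ u.T

rng = np.random.default_rng(2)
losses = rng.uniform(0.2, 2.0, size=6)            # reported local losses
updates = [rng.normal(size=4) for _ in range(6)]  # toy model updates
chosen = select_high_loss_clients(losses, k=3)
sims = similarity_matrix([updates[i] for i in chosen])
print(chosen, np.round(sims, 2))
```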