Federated learning (FL) is a distributed machine learning approach that trains models on distributed data while preserving data privacy. However, it performs poorly under highly heterogeneous data distributions. Personalized federated learning (PFL) addresses this problem by providing each client with a personalized model. Previous PFL algorithms, however, focus mainly on optimizing clients' local models while neglecting optimization of the server-side global model, leaving the server's computing resources underutilized. To address these limitations, we propose FedPASD, a PFL method based on model pre-allocation (PA) and self-distillation (SD). FedPASD works on both the server and the client side: on the server, client models for the next round are pre-allocated in a targeted manner, which not only improves the models' personalization performance but also makes effective use of the server's computing power; on the client, the model is trained layer by layer and then fine-tuned via self-distillation so that it better fits the characteristics of the local data distribution. Comparative experiments on three datasets, CIFAR-10, Fashion-MNIST, and CIFAR-100, against representative baselines including FedCP (Federated Conditional Policy), FedPAC (Personalization with feature Alignment and classifier Collaboration), and FedALA (Federated learning with Adaptive Local Aggregation) show that FedPASD achieves higher test accuracy than the baselines under different heterogeneity settings. Specifically, on CIFAR-100 with 50 clients and a 50% participation rate, FedPASD improves test accuracy by 29.05 to 29.22 percentage points over traditional FL algorithms and by 1.11 to 20.99 percentage points over PFL algorithms; on CIFAR-10 it reaches up to 88.54% test accuracy.
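The self-distillation fine-tuning step described above can be sketched as a combined loss: hard-label cross-entropy plus a temperature-scaled KL term pulling the model toward its own earlier snapshot's soft predictions. This is a minimal illustration of the general technique, not the FedPASD implementation; the function names, the temperature `T`, and the mixing weight `alpha` are assumptions for illustration.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_distillation_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    """Hard-label cross-entropy plus KL divergence toward the teacher's
    softened predictions; in self-distillation the 'teacher' is a frozen
    snapshot of the same model. (Illustrative sketch, not FedPASD's exact loss.)"""
    p_s = softmax(student_logits)          # student probabilities (T = 1)
    ce = -np.log(p_s[label] + 1e-12)       # cross-entropy on the true label
    q_t = softmax(teacher_logits, T)       # softened teacher targets
    q_s = softmax(student_logits, T)       # softened student predictions
    kl = np.sum(q_t * (np.log(q_t + 1e-12) - np.log(q_s + 1e-12)))
    # T^2 rescaling keeps the soft-target gradient magnitude comparable.
    return (1 - alpha) * ce + alpha * (T ** 2) * kl

# When teacher and student agree, the KL term vanishes and only
# the (alpha-weighted) cross-entropy remains.
logits = np.array([2.0, 0.5, -1.0])
loss_same = self_distillation_loss(logits, logits, label=0)
```

Because KL divergence is non-negative, any disagreement with the teacher strictly increases the loss, which is what nudges the fine-tuned model toward the local distribution the snapshot was trained on.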
In federated learning (FL), the distribution of data across different clients leads to degradation of global model performance during training. Personalized federated learning (pFL) can address this problem through global model personalization. Research over the past few years has either calibrated weight differences across the entire model or optimized only individual layers, without considering that different layers of the neural network have different utilities, resulting in lagging model convergence and inadequate personalization on non-IID data. In this paper, we propose model layered optimization for feature extractor and classifier (pFedEC), a novel pFL training framework that personalizes different layers of the model. Our study divides the model layers into a feature extractor and a classifier. We initialize the models' classifiers during training, while making each local model's feature extractor learn the representation of the global model's feature extractor to correct the client's local training, thereby integrating the utilities of the different layers of the entire model. Our extensive experiments show that pFedEC achieves 92.95% accuracy on CIFAR-10, outperforming existing pFL methods by approximately 1.8%. On CIFAR-100 and Tiny-ImageNet, pFedEC improves accuracy by at least 4.2%, reaching 73.02% and 28.39%, respectively.
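The feature-extractor/classifier split underlying this kind of layer-wise personalization can be sketched with plain parameter dictionaries: the client adopts the shared (global) extractor weights but keeps its own classifier head. The layer names and the `assemble_personalized_model` helper are hypothetical, chosen only to illustrate the split; they are not pFedEC's actual API.

```python
def assemble_personalized_model(global_params, local_params, extractor_keys):
    """Layer-wise personalization sketch: take feature-extractor weights
    from the global model and keep the client's own classifier head.
    (Illustrative only; not the pFedEC implementation.)"""
    model = {}
    for name, value in local_params.items():
        # Extractor layers come from the global model; everything else
        # (the classifier) stays local to this client.
        model[name] = global_params[name] if name in extractor_keys else value
    return model

# Hypothetical layer names for a small network.
global_params = {"conv1": "g_conv1", "conv2": "g_conv2", "fc": "g_fc"}
client_params = {"conv1": "c_conv1", "conv2": "c_conv2", "fc": "c_fc"}
personalized = assemble_personalized_model(global_params, client_params,
                                           extractor_keys={"conv1", "conv2"})
# → {"conv1": "g_conv1", "conv2": "g_conv2", "fc": "c_fc"}
```

The design intuition is that lower layers learn broadly shared representations worth aggregating across clients, while the classifier head captures each client's label distribution and is better kept local.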
Funding: supported in part by the National Natural Science Foundation of China (62472032), the Young Elite Scientists Sponsorship Program by CAST (2023QNRC001), and the Fundamental Research Funds for the Central Universities and Research Innovation Project of China University of Political Science and Law (21ZFY52001).