Funding: supported by the National Key Research and Development Program of China (2024YFB3311802), the National Natural Science Foundation of China (NSFC) (62172377), the Taishan Scholars Program of Shandong Province (tsqn202312102), and the Startup Research Foundation for Distinguished Scholars (202112016).
Abstract: Federated learning has emerged as a popular paradigm for distributed machine learning, enabling participants to collaborate on model training while preserving local data privacy. However, a key challenge in deploying federated learning in real-world applications arises from the substantial heterogeneity in local data distributions across participants. These differences can have negative consequences, such as degraded performance of aggregated models. To address this issue, we propose a novel approach that decomposes the skewed original task into a series of relatively balanced subtasks. Decomposing the task allows us to derive unbiased feature extractors for the subtasks, which are then utilized to solve the original task. Based on this concept, we develop the FedBS algorithm. Through comparative experiments on various datasets, we demonstrate that FedBS outperforms traditional federated learning algorithms such as FedAvg and FedProx in terms of accuracy, convergence speed, and robustness. The main reason behind these improvements is that FedBS addresses the data heterogeneity problem in federated learning by decomposing the original task into smaller, more balanced subtasks, thereby more effectively mitigating imbalances during model training.
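The abstract does not specify how FedBS performs the decomposition; as a loose, hypothetical illustration of the general idea of splitting a skewed classification task into relatively balanced subtasks, one could greedily partition classes by sample count (all function names and the partition heuristic here are assumptions, not the paper's method):

```python
from collections import Counter

def decompose_into_balanced_subtasks(labels, num_subtasks):
    """Greedily partition the classes of a skewed task into subtasks
    with roughly equal total sample counts, so each subtask is less
    imbalanced than the original task."""
    counts = Counter(labels)
    # Process classes from most to least frequent.
    classes = sorted(counts, key=counts.get, reverse=True)
    subtasks = [[] for _ in range(num_subtasks)]
    loads = [0] * num_subtasks
    for c in classes:
        # Assign each class to the currently lightest subtask.
        i = loads.index(min(loads))
        subtasks[i].append(c)
        loads[i] += counts[c]
    return subtasks

# A heavily skewed 5-class label distribution split into 2 subtasks.
labels = [0] * 500 + [1] * 300 + [2] * 100 + [3] * 60 + [4] * 40
print(decompose_into_balanced_subtasks(labels, 2))  # → [[0], [1, 2, 3, 4]]
```

In this sketch each subtask would then train its own (less biased) feature extractor, and the extractors would be combined to solve the original task, mirroring the two-stage structure the abstract describes.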