
Analytical Federated Learning Method Based on Learnable Aggregation Weights
Abstract  Federated learning protects data privacy by exchanging model parameters rather than raw data between clients and a central server. However, as the number of clients and the volume of data grow, it still faces increasing communication overhead and task complexity. Existing methods typically normalize aggregation weights based on each client's local data size to reduce communication cost, but they often overlook data heterogeneity, which can lead to overfitting, slower convergence, and a greater overall communication burden. To address these issues, we propose Learnable Aggregation Weights and Analytic Federated Learning (LAW-AFL). First, LAW-AFL introduces a learnable shrinkage factor and relative weights to refine the aggregation process, and employs a closed-form training paradigm to guide neural network optimization, thereby enhancing model stability and generalization under heterogeneous data. Second, by deriving an absolute aggregation rule, it further improves aggregation efficiency and accuracy, enables single-epoch local training, and simplifies the overall training pipeline through closed-form updates. Extensive experiments on multiple datasets and model architectures show that LAW-AFL significantly improves global model accuracy and generalization. On large-scale, non-independent and identically distributed (Non-IID) data, it achieves a 10% accuracy improvement over existing methods and exceeds 90% accuracy under specific experimental settings, while reducing per-round training time by 69.82 seconds relative to FedAvg. These results demonstrate that LAW-AFL offers clear advantages in accuracy, robustness, and communication efficiency.
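The abstract contrasts the common baseline, where aggregation weights are normalized by each client's local data size, with LAW-AFL's learnable shrinkage factor and relative weights. The paper's actual closed-form rule is not reproduced here, so the sketch below only illustrates the general idea: the softmax over learned per-client scores and the convex blend controlled by `shrinkage` are illustrative assumptions, not LAW-AFL's formula.

```python
import numpy as np

def fedavg_weights(sample_counts):
    """Baseline described in the abstract: aggregation weights
    normalized by each client's local data size (FedAvg-style)."""
    counts = np.asarray(sample_counts, dtype=float)
    return counts / counts.sum()

def learnable_weights(sample_counts, relative_scores, shrinkage):
    """Illustrative only (not the paper's rule): blend data-size
    weights with learned relative weights via a shrinkage factor
    in [0, 1]. 0 recovers the FedAvg baseline; 1 uses only the
    learned weights."""
    base = fedavg_weights(sample_counts)
    rel = np.exp(relative_scores)      # softmax over learned scores
    rel = rel / rel.sum()
    w = (1.0 - shrinkage) * base + shrinkage * rel
    return w / w.sum()

def aggregate(client_params, weights):
    """Weighted average of client parameter vectors -> global model."""
    stacked = np.stack(client_params)  # shape (K, d)
    return np.asarray(weights) @ stacked
```

With `shrinkage = 0` the learnable rule reduces to plain data-size weighting, which makes the baseline a special case and lets the learned component be phased in gradually.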
Authors  JIANG Wei-Jin (蒋伟进); CUI Xin-Yu (崔新雨); LIU Zhi-Hua (刘志华); CHEN Shen-You (陈伸有); HU Jia-Long (胡佳龙) (Department of Computer Science, Hunan University of Technology and Business, Changsha 410205; Xiangjiang Laboratory, Changsha 410205)
Source  Chinese Journal of Computers (《计算机学报》, Peking University Core Journal), 2026, No. 1, pp. 84-108 (25 pages)
Funding  Supported by the National Natural Science Foundation of China (61772196), the Natural Science Foundation of Hunan Province (2020JJ4249), the Key Scientific Research Projects of the Hunan Provincial Department of Education (24A0446, 24A0753), and the Changsha Federation of Social Sciences Philosophy and Social Science Planning Project (2024CSSKKT31).
Keywords  federated learning; learnable aggregation weights; closed-form training paradigm; automated analytical techniques; generalization capability; communication cost