Taking firefighting operations against extreme-high-temperature forest fires in summer and autumn as the research object, and distributed machine learning as the theoretical foundation, this study investigates the modeling and prediction of temperature and mobile forces during such missions. First, a model construction method based on the Federated Averaging algorithm (FedAvg) is proposed to quantitatively predict the maximum temperature and the number of mobile forces in each task direction, from a perspective that is closer to actual missions and more fine-grained. Second, multi-regional temperature and mobile-force data are drawn from government public resource platforms and operational databases, and a global model is trained collaboratively by aggregating parameters from different clients without the clients exchanging raw data, achieving the prediction goal and providing theoretical support for analyzing and using data in environments where data sources cannot be shared.
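The parameter-aggregation step described above is the core of FedAvg: the server averages client model parameters, weighted by each client's local sample count, without ever seeing the clients' raw data. A minimal sketch (the client data and layer shapes are hypothetical, not from the paper):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters into a global model,
    weighting each client by its local sample count (FedAvg)."""
    total = sum(client_sizes)
    # Weighted sum of each parameter tensor across clients.
    return [
        sum(n / total * w[k] for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Two hypothetical clients, each holding a one-layer model.
clients = [[np.array([1.0, 3.0])], [np.array([3.0, 5.0])]]
sizes = [1, 3]  # the second client holds 3x more samples
global_model = fedavg(clients, sizes)
print(global_model[0])  # weighted average: [2.5, 4.5]
```

Only the parameter lists and sample counts cross the network, which is what allows the temperature and mobile-force data sources to stay siloed.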
The increasing data pool in the finance sector forces machine learning (ML) to confront new complications. Banking data has significant financial implications and is confidential. Combining user data from several organizations for various banking services may result in intrusions and privacy leakage. As a result, this study employs federated learning (FL) using the Flower framework to preserve each organization's privacy while collaborating to build a robust shared global model. However, diverse data distributions in the collaborative training process might result in inadequate model learning and a lack of privacy. To address this issue, the present paper proposes the implementation of the Federated Averaging (FedAvg) and Federated Proximal (FedProx) methods in the Flower framework, which take advantage of data locality during training while guaranteeing global convergence, thereby improving the privacy of the local models. This analysis used the credit card and Canadian Institute for Cybersecurity Intrusion Detection Evaluation (CICIDS) datasets. Precision, recall, and accuracy serve as performance indicators to show the efficacy of the proposed strategy using FedAvg and FedProx. The experimental findings suggest that the proposed approach helps to safely use banking data from diverse sources to enhance customer banking services, achieving accuracies of 99.55% and 83.72% for FedAvg and 99.57% and 84.63% for FedProx.
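FedProx differs from FedAvg in the local step: each client minimizes its local loss plus a proximal term (mu/2)·||w − w_global||², which keeps local models close to the global one under the heterogeneous (non-IID) data distributions the abstract mentions. A minimal gradient-descent sketch, with a hypothetical quadratic client loss standing in for a real banking model:

```python
import numpy as np

def fedprox_local_update(w_global, grad_fn, mu=0.1, lr=0.01, steps=100):
    """One client's FedProx local training: minimize the local loss
    plus (mu/2)*||w - w_global||^2, which anchors the local model
    to the global one under non-IID data."""
    w = w_global.copy()
    for _ in range(steps):
        g = grad_fn(w) + mu * (w - w_global)  # local grad + proximal grad
        w -= lr * g
    return w

# Hypothetical client whose purely local optimum would be w = [2, 2].
grad_fn = lambda w: w - np.array([2.0, 2.0])
w_global = np.zeros(2)
w_local = fedprox_local_update(w_global, grad_fn, mu=1.0, lr=0.1, steps=500)
print(w_local)  # pulled from [2, 2] toward the global [0, 0]
```

With mu = 1.0 the client settles at the midpoint [1, 1] rather than drifting to its local optimum; setting mu = 0 recovers plain FedAvg local training.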
Unprotected gradient exchange in federated learning (FL) systems may lead to gradient leakage-related attacks. CKKS is a promising approximate homomorphic encryption scheme for protecting gradients, owing to its unique capability of performing operations directly on ciphertexts. However, configuring CKKS security parameters involves a tradeoff between correctness, efficiency, and security, and an evaluation gap exists regarding how these parameters impact computational performance. Additionally, the maximum vector length that CKKS can encrypt at once, as recommended by the Homomorphic Encryption Standardization, is 16384, which hampers its widespread adoption in FL when encrypting layers with numerous neurons. To protect gradients' privacy in FL systems while maintaining practical performance, we comprehensively analyze the influence of security parameters such as the polynomial modulus degree and coefficient modulus on homomorphic operations. Derived from our evaluation findings, we provide a method for selecting the optimal multiplication depth while meeting operational requirements. Then, we introduce an adaptive segmented encryption method tailored for CKKS, circumventing its encryption length constraint and enhancing its ability to encrypt neural network models. Finally, we present FedSHE, a privacy-preserving and efficient Federated learning scheme with adaptive Segmented CKKS Homomorphic Encryption. FedSHE is implemented on top of the federated averaging (FedAvg) algorithm and is available at https://github.com/yooopan/FedSHE. Our evaluation results affirm the correctness and effectiveness of our proposed method, demonstrating that FedSHE outperforms existing homomorphic-encryption-based federated learning research efforts in terms of model accuracy, computational efficiency, communication cost, and security level.
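The segmentation idea can be sketched without a homomorphic encryption library: flatten the model's parameters and split them into chunks no longer than the CKKS per-ciphertext limit, encrypt each chunk separately, and invert the split after decryption. The function names and layer shapes below are illustrative, not the paper's API:

```python
import numpy as np

MAX_SLOTS = 16384  # recommended CKKS single-ciphertext vector length

def segment(params, max_len=MAX_SLOTS):
    """Split a model's flattened parameter vector into segments no
    longer than the CKKS encryption length limit; each segment would
    then be encrypted as one ciphertext."""
    flat = np.concatenate([p.ravel() for p in params])
    return [flat[i:i + max_len] for i in range(0, len(flat), max_len)]

def reassemble(segments, shapes):
    """Inverse of segment: concatenate the (decrypted) segments and
    restore the original layer shapes."""
    flat = np.concatenate(segments)
    out, i = [], 0
    for s in shapes:
        n = int(np.prod(s))
        out.append(flat[i:i + n].reshape(s))
        i += n
    return out

# A hypothetical 200x200 layer has 40000 weights, so it needs
# ceil(40000 / 16384) = 3 ciphertexts instead of failing outright.
layer = [np.random.rand(200, 200)]
segs = segment(layer)
print(len(segs))  # 3
restored = reassemble(segs, [p.shape for p in layer])
```

This is only the length-constraint workaround; the adaptive parameter selection (polynomial modulus degree, coefficient modulus, multiplication depth) is the paper's own contribution and is not reproduced here.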
Federated learning (FedL) is a machine learning (ML) technique used to train deep neural networks (DeepNNs) in a distributed way without sharing data among the federated training clients. FedL was proposed for edge computing and Internet of Things (IoT) tasks in which a centralized server is responsible for coordinating and governing the training process. To remove the design limitation implied by this centralized entity, this work proposes two different solutions to decentralize existing FedL algorithms, enabling the application of FedL on networks with arbitrary communication topologies and thus extending its domain of application to more complex scenarios and new tasks. Of the two proposed algorithms, one, called FedLCon, is developed based on results from discrete-time weighted average consensus theory and is able to reproduce the performance of the standard centralized FedL solutions, as shown by the reported validation tests.
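The discrete-time weighted average consensus underlying FedLCon replaces the server's average with repeated neighbor averaging: each node updates its value with a weighted combination of its neighbors' values, and with a doubly stochastic weight matrix every node converges to the network-wide average. A minimal sketch on a hypothetical 4-node ring (the topology and Metropolis-style weights are illustrative, not the paper's experiment):

```python
import numpy as np

def consensus_round(x, W):
    """One discrete-time weighted-average consensus step: each node
    replaces its value with a weighted average of its own and its
    neighbors' values (row i of W holds node i's weights)."""
    return W @ x

# 4-node ring, doubly stochastic Metropolis-style weights:
# 1/3 for self and each of the two neighbors, 0 for non-neighbors.
W = np.array([[1/3, 1/3, 0,   1/3],
              [1/3, 1/3, 1/3, 0  ],
              [0,   1/3, 1/3, 1/3],
              [1/3, 0,   1/3, 1/3]])

x = np.array([4.0, 0.0, 2.0, 6.0])  # initial node values, mean = 3
for _ in range(50):
    x = consensus_round(x, W)
print(x)  # every entry converges to the average, ~3.0
```

In FedLCon the scalar values would be model parameter vectors, so the serverless network reproduces what a central FedAvg server would have computed.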
Funding: Supported by the Lazio region, in the scope of the project FedMedAI, Regional Operative Programme (POR) of the European fund for regional development (FESR) Lazio 2014–2020 (Azione 1.2.1) (No. A0375-2020-36491-23/10/2020)