Funding: This work was supported in part by the National Key R&D Program of China (No. 2018YFB2100400); in part by the National Natural Science Foundation of China (Nos. 62002077 and 61872100); in part by the China Postdoctoral Science Foundation (No. 2020M682657); in part by the Guangdong Basic and Applied Basic Research Foundation (No. 2020A1515110385); in part by Zhejiang Lab (No. 2020NF0AB01); and in part by the Guangzhou Science and Technology Plan Project (No. 202102010440).
Abstract: Federated learning is a distributed learning framework that trains global models by exchanging model parameters instead of raw data. However, this parameter-passing training mechanism remains vulnerable to gradient inversion, inference attacks, and similar threats. With its lightweight encryption overhead, functional encryption is a viable secure-aggregation technique in federated learning and is often used in combination with differential privacy. Nevertheless, functional encryption in federated learning still has the following problems: a) traditional functional encryption usually requires a trusted third party (TTP) to distribute the keys, and if the TTP colludes with the server, the secure aggregation mechanism can be compromised; b) when differential privacy is combined with functional encryption, the evaluation metrics used by incentive mechanisms in traditional federated learning become invisible. In this paper, we propose a hybrid privacy-preserving scheme for federated learning, called Fed-DFE. Specifically, we present a decentralized multi-client functional encryption algorithm that replaces the TTP of traditional functional encryption with an interactive key-generation algorithm, thereby avoiding the collusion problem. We then design an embedded incentive mechanism for functional encryption that models the real parameters in federated learning and strikes a balance between privacy preservation and model accuracy. Finally, we implemented a prototype of Fed-DFE and evaluated the performance of the decentralized functional encryption algorithm. The experimental results demonstrate the effectiveness and efficiency of our scheme.
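The core guarantee of secure aggregation described above (the server learns only the sum of client updates, never an individual update) can be illustrated with a toy pairwise-masking sketch. This is not the paper's decentralized multi-client functional encryption scheme; it is a minimal stand-in, assuming honest clients and a fixed shared seed, that shows why masked ciphertexts reveal nothing individually yet sum to the true aggregate.

```python
import random

def pairwise_masks(n_clients, dim, seed=0):
    """Build masks where each pairwise random vector is added by one
    client and subtracted by another, so all masks cancel in the sum."""
    rng = random.Random(seed)
    masks = [[0.0] * dim for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = [rng.uniform(-1, 1) for _ in range(dim)]
            for k in range(dim):
                masks[i][k] += m[k]  # client i adds the shared mask
                masks[j][k] -= m[k]  # client j subtracts it
    return masks

def aggregate(updates):
    """Server-side sum of masked updates; individual updates stay hidden."""
    n, dim = len(updates), len(updates[0])
    masks = pairwise_masks(n, dim)
    ciphertexts = [[u[k] + masks[i][k] for k in range(dim)]
                   for i, u in enumerate(updates)]
    # Masks cancel pairwise, leaving only the aggregate of the raw updates.
    return [sum(c[k] for c in ciphertexts) for k in range(dim)]

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(aggregate(updates))  # ≈ [9.0, 12.0], up to floating-point error
```

In Fed-DFE the analogous property is achieved cryptographically (ciphertexts decrypt only under a functional key for the weighted sum), without the shared randomness assumed here.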
Abstract: This paper addresses the problem of global practical stabilization of discrete-time switched affine systems via state-dependent switching rules. Several attempts have been made to solve this problem using various combinations of a common quadratic Lyapunov function and an ellipsoid. These classical results require either the quadratic Lyapunov function or the employed ellipsoid to be of the centralized type; in some cases, the ellipsoids are defined dependently as the level sets of a decentralized Lyapunov function. In this paper, we extend the existing results through the simultaneous use of a general decentralized Lyapunov function and an independently parameterized decentralized ellipsoid. The proposed conditions are less conservative than those of existing works in terms of the size of the ultimate invariant set of attraction. Two approaches are proposed to extract an ultimate invariant set of attraction of minimum size: a purely numerical method and a numerical-analytical one. In the former, both invariance and attractiveness conditions are imposed to obtain the final set of matrix inequalities. The latter is built on the principle that the attractiveness of a set implies its invariance; thus, the stability conditions are derived from the attractiveness property alone, as a set of matrix inequalities of smaller dimension. Illustrative examples demonstrate the satisfactory performance of the proposed stabilization methods.
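The notion of practical stabilization above can be made concrete with a small simulation. The sketch below assumes two illustrative affine subsystems x_{k+1} = A_i x_k + b_i, neither of which has its equilibrium at the origin, and a simple min-switching rule driven by V(x) = x^T x; this is a hand-picked toy, not the paper's decentralized Lyapunov/ellipsoid construction or its matrix-inequality conditions. The trajectory does not converge to the origin but settles into a small invariant neighborhood of it, which is exactly what "practical" stability means.

```python
def step(x, A, b):
    """One step of the affine subsystem x+ = A x + b (2-D, dense form)."""
    return [A[0][0]*x[0] + A[0][1]*x[1] + b[0],
            A[1][0]*x[0] + A[1][1]*x[1] + b[1]]

def V(x):
    """Illustrative quadratic Lyapunov-like function V(x) = x^T x."""
    return x[0]*x[0] + x[1]*x[1]

# Two assumed subsystems with equilibria at (2, 0) and (-2, 0): no single
# mode stabilizes the origin, so switching is essential.
modes = [
    ([[0.5, 0.0], [0.0, 0.5]], [ 1.0, 0.0]),
    ([[0.5, 0.0], [0.0, 0.5]], [-1.0, 0.0]),
]

def simulate(x0, steps=100):
    x = x0
    for _ in range(steps):
        # State-dependent min-switching: pick the mode whose successor
        # state has the smallest value of V.
        x = min((step(x, A, b) for A, b in modes), key=V)
    return x

x = simulate([10.0, 5.0])
print(x)  # state ends up inside a small invariant set around the origin
```

Here the state eventually chatters between the two modes inside a bounded set (the ultimate invariant set of attraction); the paper's contribution is to characterize and minimize the size of such a set via matrix inequalities rather than by simulation.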