Funding: Supported by the National Natural Science Foundation of China (Nos. 62072411, 62372343, 62402352, and 62403500), the Key Research and Development Program of Hubei Province (No. 2023BEB024), and the Open Fund of the Key Laboratory of Social Computing and Cognitive Intelligence (Dalian University of Technology), Ministry of Education (No. SCCI2024TB02).
Abstract: The proliferation of deep learning (DL) has amplified the demand for processing large and complex datasets for tasks such as modeling, classification, and identification. However, traditional DL methods compromise client privacy by collecting sensitive data, underscoring the need for privacy-preserving solutions such as Federated Learning (FL). FL addresses escalating privacy concerns by enabling collaborative model training without sharing raw data. Because FL clients manage their training data autonomously, encouraging client engagement is pivotal for successful model training. To overcome challenges such as unreliable communication and budget constraints, we present ENTIRE, a contract-based dynamic participation incentive mechanism for FL. ENTIRE ensures impartial model training by tailoring participation levels and payments to diverse client preferences. Our approach involves several key steps. First, we examine how random client participation affects FL convergence in non-convex settings, establishing the relationship between client participation levels and model performance. Next, we reformulate model performance optimization as an optimal contract design problem that guides the distribution of rewards among clients with varying participation costs. By balancing the budget against model effectiveness, we derive optimal contracts under different budget constraints, prompting clients to reveal their participation preferences and select the contracts under which they contribute to model training. Finally, we evaluate ENTIRE comprehensively on three real datasets. The results show a significant 12.9% improvement in model performance and confirm that the mechanism satisfies the anticipated economic properties.
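To make the contract self-selection idea concrete, here is a minimal Python sketch, not the paper's exact ENTIRE formulation: the server publishes a menu of (participation level, payment) contracts, and each client, knowing only its own private participation cost, picks the contract that maximizes its utility or opts out. The `Contract` fields, the linear cost model, and the numeric menu are all illustrative assumptions.

```python
# Illustrative sketch of contract-based self-selection (hypothetical
# formulation): the server posts a menu; each client chooses the
# contract maximizing payment minus participation cost, or opts out.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Contract:
    participation_level: float  # fraction of training rounds committed
    payment: float              # reward for fulfilling the contract

def choose_contract(menu: list[Contract], unit_cost: float) -> Optional[Contract]:
    """Return the utility-maximizing contract for a client whose private
    cost per unit of participation is `unit_cost` (assumed linear cost
    model). Returns None if every contract yields non-positive utility,
    i.e., the client opts out (individual rationality)."""
    best, best_utility = None, 0.0
    for c in menu:
        utility = c.payment - unit_cost * c.participation_level
        if utility > best_utility:
            best, best_utility = c, utility
    return best

# Example: three client types with increasing costs face the same menu.
menu = [Contract(0.2, 1.0), Contract(0.5, 2.2), Contract(0.9, 3.5)]
for cost in (2.0, 4.0, 8.0):
    print(f"unit cost {cost}: picks {choose_contract(menu, cost)}")
```

In this toy menu the low-cost client self-selects into the high-participation contract, the mid-cost client into the low-participation one, and the high-cost client opts out; designing the menu so that this self-selection truthfully reveals client preferences is the essence of the contract design problem the abstract describes.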
Funding: Supported in part by the National Key R&D Program of China under Grant No. 2023YFB2703800; the National Science Foundation of China under Grants U22B2027, 62172297, 62102262, 61902276, and 62272311; the Tianjin Intelligent Manufacturing Special Fund Project under Grant 20211097; the Guangxi Science and Technology Plan Project (Guangxi Science and Technology Base and Talent Special Project) under Grant AD23026096 (Application No. 2022AC20001); the Henan Provincial Natural Science Foundation of China under Grant 622RC616; and the CCF-NSFOCUS Kunpeng Fund Project under Grant CCF-NSFOCUS202207.
Abstract: We study a novel replication mechanism that ensures service continuity against multiple simultaneous server failures. In this mechanism, each item represents a computing task and is replicated onto ξ+1 servers for some integer ξ ≥ 1, with workloads specified by the amount of required resources. If one or more servers fail, the affected workloads can be redirected to other servers that host replicas of the same items, so that service is not interrupted by the failure of up to ξ servers. Any feasible assignment algorithm must therefore reserve capacity in each server to accommodate workload redirected from potentially failed servers without overloading, and determining the optimal way to reserve this capacity becomes a key issue. Unlike existing algorithms, which assume that no two servers share replicas of more than one item, we first formulate capacity reservation for the general, arbitrary scenario. Owing to the combinatorial nature of this problem, finding the optimal solution is difficult. To this end, we propose a Generalized and Simple Calculating Reserved Capacity (GSCRC) algorithm, whose time complexity depends only on the number of items packed in a server. In conjunction with GSCRC, we propose a robust replica packing algorithm with capacity optimization (RobustPack), which minimizes the number of servers hosting replicas while tolerating multiple server failures. Through theoretical analysis and experimental evaluation, we show that RobustPack achieves better performance.
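As a concrete illustration of the capacity-reservation problem (not the authors' GSCRC algorithm), the Python sketch below computes a pessimistic reservation for one server: for each peer, it totals the workload of shared items that would arrive if that peer failed, then reserves the sum of the ξ largest such loads. The data layout (`placement`, `workload`) and the assumption that a failed peer's shared workload redirects entirely to this server are simplifications for illustration.

```python
# Illustrative sketch (not GSCRC): a naive upper bound on the capacity a
# server must reserve to absorb workload redirected from up to xi
# simultaneously failing servers, allowing peers to share multiple items.

from collections import defaultdict

def reserved_capacity(server: str,
                      placement: dict[str, set[str]],
                      workload: dict[str, float],
                      xi: int) -> float:
    """placement maps server -> set of item ids whose replicas it hosts;
    workload maps item id -> resource demand. Pessimistically assumes a
    failed peer redirects each shared item's full workload here, rather
    than splitting it among all surviving replicas."""
    redirected = defaultdict(float)  # peer -> worst-case load sent here
    my_items = placement[server]
    for peer, items in placement.items():
        if peer == server:
            continue
        for item in items & my_items:  # items this server would absorb
            redirected[peer] += workload[item]
    worst = sorted(redirected.values(), reverse=True)
    return sum(worst[:xi])  # worst combination of xi peer failures

placement = {"s1": {"a", "b"}, "s2": {"a", "c"}, "s3": {"b", "c"}}
workload = {"a": 3.0, "b": 2.0, "c": 4.0}
print(reserved_capacity("s1", placement, workload, xi=1))  # 3.0: s2 fails
```

Even this naive bound hints at why the general problem is combinatorial: with splitting among surviving replicas and overlapping failure sets, the worst-case redirected load is no longer a simple top-ξ sum, which is the gap GSCRC is designed to address efficiently.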