Federated learning (FL) emerged as a novel machine learning setting that enables collaboratively training deep models on decentralized clients under privacy constraints. In the vanilla federated averaging algorithm (FedAvg), the global model is generated as a weighted linear combination of the local models, with weights proportional to the local data sizes. This methodology, however, encounters challenges when facing heterogeneous and unknown client data distributions, often leading to discrepancies from the intended global objective. Linear-combination-based aggregation often fails to capture the varied dynamics presented by the diverse scenarios, settings, and data distributions inherent in FL, resulting in hindered convergence and compromised generalization. In this paper, we present a new aggregation method, FedMcon, within a meta-learning framework for FL. We introduce a learnable controller, trained on a small proxy dataset, that serves as an aggregator and learns to adaptively aggregate heterogeneous local models into a better global model toward the desired objective. Experimental results indicate that the proposed method is effective on extremely non-independent and identically distributed (non-IID) data and can simultaneously reach a 19-fold communication speedup in a single FL setting.
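The FedAvg baseline that FedMcon improves upon can be stated concretely. The sketch below is a minimal illustration of the weighted linear combination the abstract describes (weights proportional to local data sizes); models are represented as plain parameter dictionaries, and all names are illustrative rather than taken from the paper's implementation.

```python
def fedavg_aggregate(local_models, data_sizes):
    """Vanilla FedAvg: average each parameter across clients,
    weighting every client by its share of the total data."""
    total = sum(data_sizes)
    weights = [n / total for n in data_sizes]
    global_model = {}
    for name in local_models[0]:
        # Weighted linear combination of the local parameters.
        global_model[name] = sum(
            w * model[name] for w, model in zip(weights, local_models)
        )
    return global_model
```

For example, two clients holding 1 and 3 samples contribute with weights 0.25 and 0.75; it is exactly this fixed, data-size-based weighting that breaks down under heterogeneous client distributions, which motivates replacing it with a learned controller.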
As the scale of federated learning expands, solving its non-IID data problem has become a key challenge. Most existing solutions aim to improve the overall performance across all clients; however, improving overall performance often sacrifices the performance of certain clients, such as those with less data. Ignoring fairness may greatly reduce the willingness of some clients to participate in federated learning. To solve this problem, the authors propose Ada-FFL, an adaptive fairness federated aggregation learning algorithm that dynamically adjusts the fairness coefficient according to the updates of the local models, ensuring both the convergence performance of the global model and fairness among federated learning clients. By integrating coarse-grained and fine-grained equity solutions, the authors evaluate the deviation of each local model, considering both global equity and individual equity; a weight ratio is then dynamically allocated to each client based on the evaluated deviation, which ensures that the update differences of local models are fully considered in each round of training. Finally, by adding a regularisation term that keeps local model updates closer to the global model, the sensitivity of the model to input perturbations is reduced and the generalisation ability of the global model is improved. Through extensive experiments on several federated datasets, the authors show that their method outperforms existing baselines in both convergence and fairness.
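The abstract names two mechanisms: deviation-based weight allocation and a proximal regularisation term. The paper's exact fairness coefficient is not given here, so the following is only a plausible sketch under stated assumptions: deviation is taken as the Euclidean distance between a local model and the global model, weights are allocated proportionally to deviation (so poorly served clients gain influence), and the regulariser is a FedProx-style quadratic penalty. All function and parameter names are hypothetical.

```python
import math

def allocate_fair_weights(local_models, global_model):
    """Assign each client a weight proportional to how far its local
    model deviates from the global model (illustrative fairness rule)."""
    deviations = []
    for model in local_models:
        dev = math.sqrt(sum((model[k] - global_model[k]) ** 2 for k in model))
        deviations.append(dev)
    total = sum(deviations) or 1.0  # avoid division by zero when all agree
    return [d / total for d in deviations]

def proximal_penalty(local_model, global_model, mu=0.01):
    """Quadratic regulariser (as in FedProx) pulling the local update
    back toward the global model, as the abstract's final step suggests."""
    return 0.5 * mu * sum(
        (local_model[k] - global_model[k]) ** 2 for k in local_model
    )
```

Under this sketch, a client whose model drifts three times farther from the global model receives three times the aggregation weight; the penalty term would simply be added to each client's local training loss.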
An adaptive weighted stereo matching algorithm with multilevel and bidirectional dynamic programming based on ground control points (GCPs) is presented. To decrease time complexity without losing matching precision, a multilevel search scheme is used: coarse matching is processed in the typical disparity-space image, while fine matching is processed in a disparity-offset space image. In the upper level, GCPs are obtained by an enhanced volumetric iterative algorithm enforcing the mutual constraint and the threshold constraint. Under the supervision of the highly reliable GCPs, a bidirectional dynamic programming framework is employed to resolve inconsistencies in the optimization path. In the lower level, to reduce running time, a disparity-offset space is proposed to efficiently obtain the dense disparity image. In addition, an adaptive dual support-weight strategy, which considers both photometric and geometric information, is presented to aggregate the matching cost. Further, a post-processing algorithm ameliorates disparity results in areas with depth discontinuities and occlusions using a dual-threshold algorithm, in which missing stereo information is substituted from surrounding regions. To demonstrate the effectiveness of the algorithm, two groups of experimental results are presented for four widely used standard stereo datasets, including a discussion of performance and comparisons with other methods; the results show that the algorithm is not only fast but also significantly improves the efficiency of holistic optimization.
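The adaptive support-weight idea used for cost aggregation can be sketched in a few lines. The paper's dual-weight formulation is not reproduced here; this is a minimal illustration in the style of classical adaptive support weights (Yoon and Kweon), where a pixel's contribution decays with both photometric difference and spatial distance. The constants `gamma_c` and `gamma_p` and all names are assumptions for illustration.

```python
import math

def support_weight(color_diff, spatial_dist, gamma_c=7.0, gamma_p=36.0):
    """Adaptive support weight combining photometric similarity
    (color_diff) and geometric proximity (spatial_dist)."""
    return math.exp(-(color_diff / gamma_c + spatial_dist / gamma_p))

def aggregate_cost(raw_costs, color_diffs, spatial_dists):
    """Weighted average of per-pixel matching costs over a support
    window: similar, nearby pixels dominate the aggregated cost."""
    weights = [
        support_weight(c, d) for c, d in zip(color_diffs, spatial_dists)
    ]
    return sum(w * c for w, c in zip(weights, raw_costs)) / sum(weights)
```

A pixel identical in color and at zero distance gets weight 1.0, while dissimilar or distant pixels contribute less, which is what preserves disparity edges near depth discontinuities.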
Funding: Project supported by the National Key Research and Development Program of China (No. 2021ZD0110505), the Zhejiang Provincial Key Research and Development Project, China (No. 2023C01043), the National Natural Science Foundation of China (No. 62402429), the Key Research and Development Program of Zhejiang Province, China (No. 2024C03270), the ZJU Kunpeng & Ascend Center of Excellence, and the Ningbo Yongjiang Talent Introduction Programme, China (No. 2023A-397-G).
Funding: National Natural Science Foundation of China, Grant/Award Number: 62272114; Joint Research Fund of Guangzhou and University, Grant/Award Number: 202201020380; Guangdong Higher Education Innovation Group, Grant/Award Number: 2020KCXTD007; Pearl River Scholars Funding Program of Guangdong Universities (2019); National Key R&D Program of China, Grant/Award Number: 2022ZD0119602; Major Key Project of PCL, Grant/Award Number: PCL2022A03.
Funding: Supported by the National Natural Science Foundation of China (Nos. 60605023, 60775048) and the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20060141006).