Funding: Supported in part by the National Key R&D Program of China (No. 2018YFB2101100) and the National Natural Science Foundation of China (Nos. 62402519, 61932001, and 61872376).
Abstract: In current federated learning frameworks, a central server randomly selects a small number of clients to train local models at the beginning of each global iteration. Since clients' local data are not independent and identically distributed (non-IID), some local models are inconsistent with the global model. Existing studies employ model cleaning methods to detect such inconsistent local models: they measure the cosine similarity between each local model and the global model, and any inconsistent local model is cleaned out and excluded from aggregation into the next global model. However, model cleaning methods incur drawbacks such as large computation overheads and limited updates. In this paper, we propose a data distribution optimization method, called federated distribution optimization (FedDO), to overcome the shortcomings of model cleaning methods. FedDO calculates the gradient of the Jensen-Shannon divergence to decrease the discrepancy between the selected clients' data distribution and the overall data distribution. We evaluate our method with a multi-class regression model, a multi-layer perceptron, and a convolutional neural network on a handwritten digit image dataset. Compared with model cleaning methods, FedDO improves the training accuracy by 1.8%, 2.6%, and 5.6%, respectively.
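For intuition only, the following is a minimal sketch, not the authors' implementation, of measuring and reducing the discrepancy between the selected clients' label distribution and the overall label distribution via the Jensen-Shannon divergence; the client histograms, mixture weights, and the numerical gradient step are illustrative assumptions.

import numpy as np

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two discrete distributions p and q.
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical label histograms of three selected clients and the overall
# (global) label distribution over 4 classes.
clients = np.array([[0.7, 0.1, 0.1, 0.1],
                    [0.1, 0.6, 0.2, 0.1],
                    [0.2, 0.2, 0.3, 0.3]])
overall = np.array([0.25, 0.25, 0.25, 0.25])

def discrepancy(weights):
    # JS divergence between the weighted mixture of the selected clients'
    # distributions and the overall distribution.
    w = np.maximum(weights, 0.0)
    w = w / w.sum()
    return js_divergence(w @ clients, overall)

# One numerical-gradient step on the mixture weights, projected back onto the
# probability simplex, to illustrate decreasing the discrepancy; FedDO's actual
# selection and update rule is defined in the paper.
weights = np.ones(len(clients)) / len(clients)
grad = np.array([(discrepancy(weights + 1e-5 * np.eye(len(clients))[i])
                  - discrepancy(weights)) / 1e-5 for i in range(len(clients))])
weights = np.maximum(weights - 0.5 * grad, 0.0)
weights /= weights.sum()
print(discrepancy(np.ones(len(clients)) / len(clients)), discrepancy(weights))

Here the per-client weights stand in for client selection: pushing them along the negative gradient of the divergence favors clients whose combined data distribution better matches the overall distribution.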
Funding: Supported in part by the National Natural Science Foundation of China under Grant 61903353, and in part by the SINOPEC Programmes for Science and Technology Development under Grant PE19008-8.
Abstract: Most existing machine learning studies on log interpretation do not consider the data distribution discrepancy issue, so the trained model cannot generalize well to unseen data without calibrating the logs. In this paper, we formulate the geophysical log calibration problem and give its statistical explanation, and then present an interpretable machine learning method, Unilateral Alignment (UA), which aligns the logs from one well to another without losing their physical meanings. UA is an unsupervised feature-level domain adaptation method, so it does not rely on any labels from cores. Experiments on 3 wells and 6 tasks demonstrate its effectiveness and interpretability from multiple perspectives.
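The paper's Unilateral Alignment procedure is not reproduced here; as a simplified, label-free illustration of aligning one well's log curves to a reference well, the sketch below matches per-curve mean and standard deviation. The function name, the synthetic logs, and the choice of statistics are assumptions for illustration.

import numpy as np

def align_logs(target_logs, reference_logs, eps=1e-8):
    # Align each log curve of the target well to the reference well by matching
    # per-curve mean and standard deviation; a simplified, label-free stand-in
    # for the paper's Unilateral Alignment, not its actual algorithm.
    t_mean, t_std = target_logs.mean(axis=0), target_logs.std(axis=0) + eps
    r_mean, r_std = reference_logs.mean(axis=0), reference_logs.std(axis=0)
    # Standardize the target well's curves, then rescale them into the
    # reference well's statistics; no core labels are used at any point.
    return (target_logs - t_mean) / t_std * r_std + r_mean

# Synthetic example: two wells with three log curves each (e.g., GR, RHOB, NPHI).
rng = np.random.default_rng(0)
well_a = rng.normal(loc=[80.0, 2.4, 0.25], scale=[15.0, 0.1, 0.05], size=(500, 3))
well_b = rng.normal(loc=[95.0, 2.6, 0.30], scale=[20.0, 0.2, 0.08], size=(500, 3))
aligned_b = align_logs(well_b, well_a)
print(well_a.mean(axis=0), aligned_b.mean(axis=0))  # means now roughly agree

This simplified form aligns only first- and second-order statistics, whereas the UA method described in the abstract is designed to keep the aligned logs physically meaningful.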