Five-dimensional seismic data encompass seismic reflection wavefield information across three-dimensional space, offset, and observation azimuth. The interpretation of such data offers a novel approach for high-precision characterization of complex oil and gas reservoirs. This paper reviews key scientific issues and foundational research related to five-dimensional seismic data interpretation, with particular emphasis on major advances, both in China and abroad, in rock physics theory, seismic attribute analysis, seismic inversion optimization, fracture prediction, in-situ stress estimation, and fluid identification. It further explores the opportunities, challenges, and future directions in the development of theories and methods for interpreting five-dimensional seismic data. Theoretical research and field applications have shown that constructing a five-dimensional seismic rock physics model (incorporating temperature and pressure conditions, strong heterogeneity and anisotropy, and other microscopic rock physics mechanisms) provides the physical basis for seismically identifying different types of complex reservoirs. Additionally, the development of robust inversion and quantitative interpretation methods tailored to fractured reservoirs can address issues such as computational instability and the low information utilization often associated with massive high-dimensional datasets. Innovations in fracture prediction technology that fuse multi-dimensional attributes (five-dimensional geometric attributes, azimuthal elastic modulus ellipse fitting, Fourier series decomposition, and azimuthal inversion attributes) have proven effective in enhancing fracture prediction accuracy. Moreover, five-dimensional seismic prediction methods for engineering sweet spots (e.g., reservoir brittleness and in-situ stress) based on anisotropy theory enable effective evaluation of the fracturability of subsurface formations. Five-dimensional seismic interpretation theory and technology thus provide a new pathway for predicting complex reservoirs and identifying oil and gas.
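The azimuthal attribute techniques named in the abstract above (elastic modulus ellipse fitting, Fourier series decomposition) can be illustrated with a toy calculation. This is our own minimal sketch, not the paper's method: for an amplitude varying with azimuth as A(φ) ≈ a0 + a2·cos 2(φ − φ0), the second Fourier harmonic over an evenly sampled azimuth fan recovers the anisotropy magnitude and the fast direction. All names and the synthetic gather are illustrative.

```python
import math

def azimuthal_harmonics(azimuths_deg, amplitudes):
    """Fit A(phi) = a0 + b*cos(2*phi) + c*sin(2*phi).

    With evenly spaced azimuths the least-squares normal equations
    reduce to discrete Fourier sums. Returns (a0, second-harmonic
    magnitude, fast direction in degrees, folded to [0, 180)).
    """
    n = len(azimuths_deg)
    phis = [math.radians(a) for a in azimuths_deg]
    a0 = sum(amplitudes) / n
    b = 2.0 / n * sum(A * math.cos(2 * p) for A, p in zip(amplitudes, phis))
    c = 2.0 / n * sum(A * math.sin(2 * p) for A, p in zip(amplitudes, phis))
    mag = math.hypot(b, c)
    phi0 = math.degrees(0.5 * math.atan2(c, b)) % 180.0
    return a0, mag, phi0

# Synthetic azimuthal gather: isotropic background 1.0, anisotropy 0.2,
# fast direction at 30 degrees, 16 evenly spaced azimuths.
az = [i * 22.5 for i in range(16)]
amp = [1.0 + 0.2 * math.cos(2 * math.radians(a - 30.0)) for a in az]
a0, mag, phi0 = azimuthal_harmonics(az, amp)
```

On this noise-free input the fit returns the background (1.0), the anisotropy (0.2), and the 30-degree orientation exactly, which is the idealized version of what azimuthal fracture prediction exploits.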
Seismic data regularization is an important preprocessing step in seismic signal processing. Traditional seismic acquisition methods follow the Shannon–Nyquist sampling theorem, whereas compressive sensing (CS) provides a fundamentally new paradigm for overcoming limitations in data acquisition. Beyond the sparse representation of the seismic signal in some transform domain and the L1-norm reconstruction algorithm, the regularization quality of CS-based techniques depends strongly on the random undersampling scheme. For 2D seismic data, discrete uniform-based methods have been investigated, in which seismic traces are randomly sampled with equal probability. In theory and in practice, however, traces need to be sampled with different probabilities to satisfy the assumptions of CS, so new undersampling schemes are needed. We propose a Bernoulli-based random undersampling scheme and its jittered version to determine which regular traces are sampled, each with its own probability, while both schemes follow a Bernoulli process. We performed experiments using the Fourier and curvelet transforms, the spectral projected gradient reconstruction algorithm for the L1-norm (SPGL1), and ten different random seeds. Measured by the signal-to-noise ratio (SNR) between the original and reconstructed seismic data, detailed results on 2D numerical and physical simulation data show that the proposed schemes perform better overall than the discrete uniform schemes.
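The two schemes described above can be sketched as mask generators over a regular trace grid. This is a hedged illustration: the per-trace Bernoulli trial matches the abstract's description, but the bin-based jitter rule below is our own simple variant for bounding the largest gap, not necessarily the authors' exact construction.

```python
import random

def bernoulli_mask(n_traces, p):
    """Keep each regular trace independently with probability p
    (a Bernoulli-process undersampling scheme); 1 marks a kept trace."""
    return [1 if random.random() < p else 0 for _ in range(n_traces)]

def jittered_bernoulli_mask(n_traces, p, bin_size=4):
    """Jittered variant: force at least one kept trace per bin so the
    largest gap between kept traces stays bounded."""
    mask = bernoulli_mask(n_traces, p)
    for start in range(0, n_traces, bin_size):
        bin_slice = mask[start:start + bin_size]
        if sum(bin_slice) == 0:
            # Empty bin: keep one randomly chosen trace inside it.
            mask[start + random.randrange(len(bin_slice))] = 1
    return mask

random.seed(0)
m = jittered_bernoulli_mask(64, 0.5, bin_size=4)
```

The jittered mask keeps the randomness CS needs while guaranteeing no bin of four consecutive traces is entirely dropped, which is the usual motivation for jittered undersampling.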
Deep matrix factorization (DMF) has been demonstrated to be a powerful tool for capturing the complex hierarchical information of multi-view data. However, existing multi-view DMF methods mainly explore the consistency of multi-view data while neglecting the diversity among different views as well as the high-order relationships of the data, resulting in the loss of valuable complementary information. In this paper, we design a hypergraph-regularized diverse deep matrix factorization (HDDMF) model for multi-view data representation, to jointly exploit multi-view diversity and a high-order manifold in a multi-layer factorization framework. A novel diversity-enhancement term exploits the structural complementarity between different views of the data, and hypergraph regularization preserves the high-order geometric structure of the data in each view. An efficient iterative optimization algorithm is developed to solve the proposed model, with theoretical convergence analysis. Experimental results on five real-world data sets demonstrate that the proposed method significantly outperforms state-of-the-art multi-view learning approaches.
An algorithm named DPP is presented. In it, a new model based on the concept of irregularity degree is introduced to evaluate the regularity of cells. The algorithm derives the structural regularity of cells by exploiting the signal flow of the circuit, and then converts the bit-slice structure into parallel constraints to enable the Q-place algorithm. The design flow and the main algorithms are introduced. Finally, satisfactory experimental results of the tool, compared with the Cadence placement tool SE, are discussed.
Simultaneous-source acquisition has been recognized as an economic and efficient acquisition method, but direct imaging of simultaneous-source data produces migration artifacts because of the interference of adjacent sources. To overcome this problem, we propose a regularized least-squares reverse time migration method (RLSRTM) that uses the singular spectrum analysis technique to impose sparseness constraints on the inverted model. Additionally, the difference spectrum theory of singular values is presented so that RLSRTM can be implemented adaptively to eliminate the migration artifacts. With numerical tests on a flat-layer model and the Marmousi model, we validate the superior imaging quality, efficiency, and convergence of RLSRTM compared with LSRTM when dealing with simultaneous-source data, incomplete data, and noisy data.
This paper proposes a graph-regularized Lp-smooth non-negative matrix factorization (GSNMF) method by incorporating graph regularization and an Lp smoothing constraint, which together consider the intrinsic geometric information of a data set and produce smooth and stable solutions. The main contributions are as follows: first, graph regularization is added to NMF to discover hidden semantics while respecting the intrinsic geometric structure of the data set. Second, the Lp smoothing constraint is incorporated into NMF to combine the merits of isotropic (L2-norm) and anisotropic (L1-norm) diffusion smoothing, producing a smooth and more accurate solution to the optimization problem. Finally, the update rules and a proof of convergence of GSNMF are given. Experiments on several data sets show that the proposed method outperforms related state-of-the-art methods.
Objective: Challenges remain in current practice of colorectal cancer (CRC) screening, such as low compliance, low specificity, and high cost. This study aimed to identify high-risk groups for CRC in the general population using regular health examination data. Methods: The study population consists of more than 7,000 CRC cases and more than 140,000 controls. Using regular health examination data, a model detecting CRC cases was derived with the classification and regression trees (CART) algorithm. The receiver operating characteristic (ROC) curve was applied to evaluate model performance, and the robustness and generalization of the CART model were validated on independent datasets. In addition, the effectiveness of CART-based screening was compared with stool-based screening. Results: After data quality control, 4,647 CRC cases and 133,898 controls free of colorectal neoplasms were used for downstream analysis. The final CART model was built on four biomarkers: age, albumin, hematocrit, and percent lymphocytes. In the test set, the area under the ROC curve (AUC) of the CART model was 0.88 [95% confidence interval (95% CI), 0.87-0.90] for detecting CRC. At the cutoff yielding 99.0% specificity, the model's sensitivity was 62.2% (95% CI, 58.1%-66.2%), achieving a 63-fold enrichment of CRC cases. We validated the robustness of the method across subsets of the test set with diverse CRC incidences, aging rates, gender ratios, distributions of tumor stages and locations, and data sources. Importantly, CART-based screening had a higher positive predictive value (1.6%) than the fecal immunochemical test (0.3%). Conclusions: As an alternative approach for the early detection of CRC, this study provides a low-cost method that uses regular health examination data to identify high-risk individuals for further examination. The approach can promote early detection of CRC, especially in developing countries such as China, where annual health examinations are popular but regular CRC-specific screening is rare.
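The interplay of sensitivity, specificity, and positive predictive value quoted above follows from Bayes' rule; the sketch below shows the relationship generically. The prevalences used are illustrative stand-ins, not the study's screening populations, and no attempt is made to reproduce the paper's exact 1.6% or 63-fold figures.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value P(disease | positive test) by Bayes' rule."""
    tp = sensitivity * prevalence              # true-positive mass
    fp = (1.0 - specificity) * (1.0 - prevalence)  # false-positive mass
    return tp / (tp + fp)

# At the paper's operating point (99% specificity, 62.2% sensitivity),
# PPV still depends strongly on prevalence in the screened population:
low = ppv(0.622, 0.99, 0.0003)   # rare-disease setting (illustrative)
high = ppv(0.622, 0.99, 0.03)    # enriched high-risk group (illustrative)
```

This is why enriching the screened group (the study's goal) raises the yield of follow-up examinations even with the test's sensitivity and specificity held fixed.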
A great challenge faced by wireless sensor networks (WSNs) is reducing the energy consumption of sensor nodes. Fortunately, data gathering via random sensing can save node energy. Nevertheless, its randomness and density usually lead to difficult implementations, high computational complexity, and large storage requirements in practice, so deterministic sparse sensing matrices are desired in some situations. However, it is difficult to guarantee the performance of a deterministic sensing matrix by the acknowledged metrics. In this paper, we construct a class of deterministic sparse sensing matrices satisfying a statistical version of the restricted isometry property (StRIP) via regular low-density parity-check (RLDPC) matrices. The key idea of our construction is to achieve small mutual coherence by confining the column weights of the RLDPC matrices so that StRIP is satisfied. Besides, we prove that the constructed sensing matrices require the same scale of measurement numbers as dense measurements. We also propose a data gathering method based on the RLDPC matrix. Experimental results verify that the constructed sensing matrices achieve better reconstruction performance than Gaussian, Bernoulli, and CSLDPC matrices, and that data gathering via the RLDPC matrix reduces the energy consumption of WSNs.
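The coherence metric central to the construction above can be shown on a toy matrix. Note the hedge: an actual RLDPC matrix is deterministic, whereas the stand-in below is a random binary matrix with a fixed column weight; it only illustrates how mutual coherence is computed for equal-weight sparse columns, where the normalized inner product reduces to the support overlap divided by the column weight.

```python
import itertools
import random

def sparse_binary_columns(m, n, col_weight, rng):
    """n columns of an m-row binary matrix, each with exactly
    `col_weight` ones; a column is stored as its sorted support set."""
    return [sorted(rng.sample(range(m), col_weight)) for _ in range(n)]

def mutual_coherence(cols, col_weight):
    """Largest normalized inner product between distinct columns.
    For equal-weight binary columns this is overlap / col_weight."""
    worst = 0.0
    for a, b in itertools.combinations(cols, 2):
        overlap = len(set(a) & set(b))
        worst = max(worst, overlap / col_weight)
    return worst

rng = random.Random(7)
cols = sparse_binary_columns(m=32, n=12, col_weight=4, rng=rng)
mu = mutual_coherence(cols, 4)
```

Confining the column weight bounds each pairwise overlap, which is exactly the lever the paper uses to keep mutual coherence small enough for StRIP.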
In standard canonical correlation analysis (CCA), data from definite datasets are used to estimate the canonical correlation. In real applications, for example bilingual text retrieval, a great portion of the data may not be assigned to either set; this part is called unlabeled data, while the rest, from definite datasets, is called labeled data. We propose a novel method called regularized canonical correlation analysis (RCCA), which makes use of both labeled and unlabeled samples. Specifically, we learn to approximate the canonical correlation as if all data were labeled, and we then describe a generalization of RCCA to the multi-set situation. Experiments on four real-world datasets (Yeast, Cloud, Iris, and Haberman) demonstrate that, by incorporating the unlabeled data points, the accuracy of the correlation coefficients can be improved by over 30%.
In this paper we discuss the edge-preserving regularization method for reconstructing physical parameters from geophysical data such as seismic and ground-penetrating radar data. In this method a potential function of the model parameters and its corresponding functions are introduced. The method is stable, preserves boundaries, and protects resolution. The effect of regularization depends to a great extent on a suitable choice of the regularization parameters. We investigate the influence of the edge-preserving parameters on the reconstruction results and describe the relationship between the regularization parameters and the data error.
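The role of the potential function can be made concrete with a one-line comparison. This sketch is ours, under an assumption: φ(t) = t²/(1 + t²) is one common bounded edge-preserving potential, not necessarily the one used in the paper. Applied to first differences of a model, a quadratic (Tikhonov) potential punishes a sharp jump far more than a smeared ramp, while the bounded potential does not.

```python
def penalty(model, phi):
    """Sum of a potential function applied to first differences."""
    return sum(phi(b - a) for a, b in zip(model, model[1:]))

quad = lambda t: t * t                    # Tikhonov: grows without bound
edge = lambda t: t * t / (1.0 + t * t)    # bounded: tolerates sharp edges

blocky = [0.0] * 5 + [10.0] * 5                 # one sharp boundary
smooth = [i * 10.0 / 9.0 for i in range(10)]    # same endpoints, ramped
```

Because the quadratic penalty prefers the ramp and the bounded penalty prefers the sharp boundary, minimizing with the bounded potential lets the inversion keep geological edges instead of smearing them.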
A global weak solution to the isentropic Navier-Stokes equation with initial data around a constant state in the L^(1)∩BV class was constructed in [1]. In the current paper, we will continue to study the uniqueness and regularity of the constructed solution. The key ingredients are the Hölder continuity estimates of the heat kernel in both spatial and time variables. With these finer estimates, we obtain higher order regularity of the constructed solution to the Navier-Stokes equation, so that all of the derivatives in the equation of conservative form are in the strong sense. Moreover, this regularity also allows us to identify a function space such that the stability of the solutions can be established there, which eventually implies the uniqueness.
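For orientation, a standard conservative form of the one-dimensional isentropic compressible Navier-Stokes system is shown below; this is the textbook form, and the precise setting and notation of [1] may differ.

```latex
\begin{aligned}
&\rho_t + (\rho u)_x = 0,\\
&(\rho u)_t + \bigl(\rho u^2 + p(\rho)\bigr)_x = \varepsilon\, u_{xx},
\qquad p(\rho) = A\rho^{\gamma},\ \gamma \ge 1,
\end{aligned}
```

where ρ is the density, u the velocity, ε > 0 the viscosity, and p(ρ) the isentropic pressure law. "Derivatives in the strong sense" in the abstract refers to the derivatives appearing on both sides of these equations.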
Regularization inversion uses constraints and a regularization factor to solve ill-posed inversion problems in geophysics, and the choice of the regularization factor and of the initial model is critical. To deal with these problems, we propose a multiobjective particle swarm inversion (MOPSOI) algorithm that simultaneously minimizes the data misfit and the model constraints, obtaining a multiobjective solution set without the gradient of the objective function or a regularization factor. We then choose the optimum solution from the set based on the trade-off between data misfit and constraints, which substitutes for the regularization factor. The inversion of synthetic two-dimensional magnetic data suggests that MOPSOI can obtain as many feasible solutions as possible; deeper insight into the inversion process can thus be gained, and more reasonable solutions can be obtained by balancing the data misfit and the constraints. The proposed MOPSOI algorithm thus avoids the problems of choosing the right regularization factor and the initial model.
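Choosing a solution from a multiobjective set by trading off misfit against constraints amounts to extracting the non-dominated (Pareto) subset. The generic filter below illustrates that selection step only; it is not MOPSOI itself, and the candidate points are invented.

```python
def pareto_front(points):
    """Return the non-dominated points when minimizing both objectives,
    here (data misfit, constraint value); input order is preserved."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (misfit, constraint) pairs from an inversion run:
candidates = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.6, 0.6), (0.8, 0.8)]
front = pareto_front(candidates)
```

The interpreter then picks one point from the front according to how much misfit versus model-constraint violation is acceptable, which plays the role the regularization factor plays in classical inversion.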
The Tikhonov regularization (TR) method has played a very important role in processing gravity and magnetic data. In this paper, the TR method is discussed with respect to the inversion of gravity data, and the extrapolated TR method (EXTR) is introduced to improve the fitting error. Furthermore, the effects of the EXTR parameters on the fitting error, the number of iterations, and the inversion results are discussed in detail. Computations using synthetic models with the same and with different densities indicate that, compared with the TR method, the EXTR method not only achieves the a priori fitting error level set by the interpreter but also increases the fitting precision, although at the cost of more computation time and iterations. The EXTR inversion results are also more compact than the more divergent TR results: the range of the inverted data is closer to the default range of the model parameters, and the model features agree well with the default model density distribution.
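The TR step that EXTR builds on, minimizing ||Gm − d||² + λ||m||², has the closed form m = (GᵀG + λI)⁻¹Gᵀd. The 2-parameter toy below only shows how λ damps the solution; the EXTR extrapolation itself is not reproduced, and the kernel and data are invented.

```python
def tikhonov_2x2(G, d, lam):
    """Solve (G^T G + lam*I) m = G^T d for a 2-column kernel G,
    using Cramer's rule on the 2x2 normal equations."""
    n00 = sum(g[0] * g[0] for g in G) + lam
    n01 = sum(g[0] * g[1] for g in G)
    n11 = sum(g[1] * g[1] for g in G) + lam
    r0 = sum(g[0] * di for g, di in zip(G, d))
    r1 = sum(g[1] * di for g, di in zip(G, d))
    det = n00 * n11 - n01 * n01
    return ((r0 * n11 - r1 * n01) / det, (n00 * r1 - n01 * r0) / det)

G = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # toy kernel
d = (1.0, 2.0, 3.0)                        # consistent data, m_true = (1, 2)
m_small = tikhonov_2x2(G, d, 1e-8)         # ~ plain least-squares solution
m_big = tikhonov_2x2(G, d, 10.0)           # heavily damped toward zero
```

Small λ recovers the least-squares model; large λ shrinks it, which is the over-smoothing that extrapolation methods such as EXTR aim to correct while keeping the stabilizing effect.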
In non-independent and identically distributed (non-IID) data environments, model performance often degrades significantly. To address this issue, two improvement methods are proposed: FedReg and FedReg^(*). FedReg is a hybrid-regularization method aimed at enhancing federated learning in non-IID scenarios. It replaces traditional L2 regularization with a hybrid regularizer that combines the advantages of L1 and L2 regularization, enabling feature selection while preventing overfitting. This better adapts to the diverse data distributions of different clients and improves overall model performance. FedReg^(*) combines hybrid regularization with weighted model aggregation: in addition to the benefits of hybrid regularization, it applies weighted averaging during model aggregation, computing each client's weight from the cosine similarity between the client's gradient and the global gradient so as to distribute client contributions more reasonably. By accounting for variations in data quality and quantity among clients, FedReg^(*) highlights the importance of key clients and enhances the model's generalization. These methods improve model accuracy and communication efficiency.
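The two ingredients described above can be sketched directly: an elastic-net-style hybrid of L1 and L2 penalties, and cosine-similarity aggregation weights. The mixing parameters `lam1`/`lam2` and the choice to clip negative similarities to zero are our assumptions, not details from the abstract.

```python
import math

def hybrid_penalty(weights, lam1, lam2):
    """L1 + L2 hybrid regularization term (elastic-net style)."""
    return (lam1 * sum(abs(w) for w in weights)
            + lam2 * sum(w * w for w in weights))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u))
                  * math.sqrt(sum(b * b for b in v)))

def aggregate(client_models, client_grads, global_grad):
    """Weight each client's model by the (clipped) cosine similarity of
    its gradient to the global gradient, then renormalize."""
    sims = [max(cosine(g, global_grad), 0.0) for g in client_grads]
    total = sum(sims) or 1.0
    dim = len(client_models[0])
    return [sum(s * m[i] for s, m in zip(sims, client_models)) / total
            for i in range(dim)]

models = [[1.0, 1.0], [3.0, 3.0]]
grads = [[1.0, 0.0], [-1.0, 0.0]]   # second client's gradient points away
agg = aggregate(models, grads, [1.0, 0.0])
```

In this toy round the misaligned client receives zero weight, so the aggregate equals the aligned client's model; in practice weights are fractional, softly down-weighting clients whose updates disagree with the global direction.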
Funding: supported by the Key Projects of the National Natural Science Foundation of China (Grant Nos. 42430809, 42030103).
Funding: financially supported by the 2011 Prospective Research Project of SINOPEC (P11096).
Funding: This work was supported by the National Natural Science Foundation of China (62073087, 62071132, 61973090).
Funding: financial support from the National Natural Science Foundation of China (Grant Nos. 41104069, 41274124); the National Key Basic Research Program of China (973 Program) (Grant No. 2014CB239006); the National Science and Technology Major Project (Grant No. 2011ZX05014-001-008); the Open Foundation of the SINOPEC Key Laboratory of Geophysics (Grant No. 33550006-15-FW2099-0033); and the Fundamental Research Funds for the Central Universities (Grant No. 16CX06046A).
Funding: supported by the National Natural Science Foundation of China (61702251, 61363049, 11571011); the State Scholarship Fund of the China Scholarship Council (CSC) (201708360040); the Natural Science Foundation of Jiangxi Province (20161BAB212033); the Natural Science Basic Research Plan in Shaanxi Province of China (2018JM6030); the Doctoral Scientific Research Starting Foundation of Northwest University (338050050); and the Youth Academic Talent Support Program of Northwest University.
Funding: supported by the Beijing Municipal Science & Technology Commission, Clinical Application and Development of Capital Characteristic (No. Z161100000516003), and the National Natural Science Foundation of China (No. 31871266).
Funding: supported by the National Natural Science Foundation of China (61307121); ABRP of Datong (2017127); and the Ph.D. Initiated Research Projects of Datong University (2013-B-17, 2015-B-05).
Funding: Project (No. 5959438) supported by Microsoft (China) Co., Ltd.
Funding: supported in part by the National Natural Science Foundation of China under Grant-in-Aid 40574053, the Program for New Century Excellent Talents in University of China (NCET-06-0602), and the National 973 Key Basic Research Development Program (No. 2007CB209601).
Abstract: In this paper, we discuss the edge-preserving regularization method for reconstructing physical parameters from geophysical data such as seismic and ground-penetrating radar data. In this regularization method, a potential function of the model parameters and its corresponding functions are introduced. The method is stable, preserves boundaries, and protects resolution. The effect of regularization depends to a great extent on a suitable choice of the regularization parameters. We investigate the influence of the edge-preserving parameters on the reconstruction results and describe the relationship between the regularization parameters and the data error.
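A common choice of edge-preserving potential is the Huber function; the specific potential used in the paper may differ, so the comparison below is only illustrative. It shows why such potentials preserve boundaries: a large model-parameter jump is penalized far less than under a quadratic (purely smoothing) potential:

```python
import numpy as np

def quadratic_potential(t):
    """Standard smoothing penalty: heavily punishes large gradients."""
    return t ** 2

def edge_preserving_potential(t, delta=1.0):
    """Huber-type potential: quadratic for small gradients (smoothing noise),
    linear for large ones, so sharp boundaries are penalized less."""
    a = np.abs(t)
    return np.where(a <= delta, t ** 2, 2 * delta * a - delta ** 2)

edge = 10.0  # a large parameter jump across a layer boundary
print(quadratic_potential(edge), edge_preserving_potential(edge))
```

Here `delta` plays the role of an edge-preserving parameter: it sets the gradient magnitude above which a jump is treated as a genuine boundary rather than noise.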
Funding: partially supported by the National Key R&D Program of China (2022YFA1007300), the NSFC (11901386, 12031013), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA25010403), the NSFC (11801194, 11971188), and the Hubei Key Laboratory of Engineering Modeling and Scientific Computing.
Abstract: A global weak solution to the isentropic Navier-Stokes equations with initial data around a constant state in the L^1 ∩ BV class was constructed in [1]. In the current paper, we continue to study the uniqueness and regularity of the constructed solution. The key ingredients are Hölder continuity estimates of the heat kernel in both the spatial and time variables. With these finer estimates, we obtain higher-order regularity of the constructed solution to the Navier-Stokes equations, so that all of the derivatives in the equation in conservative form hold in the strong sense. Moreover, this regularity also allows us to identify a function space in which the stability of the solutions can be established, which eventually implies uniqueness.
Funding: supported by the Natural Science Foundation of China (No. 61273179) and the Science and Technology Research Project of the Department of Education of Hubei Province of China (Nos. D20131206 and 20141304).
Abstract: Regularization inversion uses constraints and a regularization factor to solve ill-posed inversion problems in geophysics. The choice of the regularization factor and of the initial model is critical in regularization inversion. To deal with these problems, we propose a multiobjective particle swarm inversion (MOPSOI) algorithm that simultaneously minimizes the data misfit and the model constraints and obtains a multiobjective inversion solution set without the gradient information of the objective function or a regularization factor. We then choose the optimum solution from the solution set based on the trade-off between data misfit and constraints, which substitutes for the regularization factor. The inversion of synthetic two-dimensional magnetic data suggests that the MOPSOI algorithm can obtain as many feasible solutions as possible; thus, deeper insight into the inversion process can be gained, and more reasonable solutions can be obtained by balancing the data misfit and the constraints. The proposed MOPSOI algorithm thus addresses the problems of choosing the regularization factor and the initial model.
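The core idea above is that, instead of weighting misfit and constraint with a regularization factor, the inversion keeps the whole set of non-dominated (Pareto-optimal) trade-offs. The sketch below illustrates just that Pareto step on a toy linear inverse problem, using random candidate models as a stand-in for the particle swarm (the PSO update dynamics themselves are not reproduced); all names and sizes are illustrative:

```python
import numpy as np

def pareto_front(objs):
    """Indices of non-dominated points when minimizing both objectives."""
    keep = []
    for i, p in enumerate(objs):
        dominated = any(
            (q[0] <= p[0] and q[1] <= p[1]) and (q[0] < p[0] or q[1] < p[1])
            for j, q in enumerate(objs) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Toy inverse problem d = G m with two objectives:
# data misfit ||G m - d||^2 and model constraint ||m||^2.
rng = np.random.default_rng(1)
G = rng.normal(size=(5, 3))
m_true = np.array([1.0, -2.0, 0.5])
d = G @ m_true
candidates = m_true + rng.normal(scale=1.0, size=(200, 3))  # stand-in swarm
objs = [(np.sum((G @ m - d) ** 2), np.sum(m ** 2)) for m in candidates]
front = pareto_front(objs)
print(len(front), "non-dominated solutions out of", len(candidates))
```

The interpreter would then pick one solution from `front` according to the acceptable misfit level, which is exactly the trade-off choice that replaces the regularization factor.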
Funding: supported by the National Scientific and Technological Plan (Nos. 2009BAB43B00 and 2009BAB43B01).
Abstract: The Tikhonov regularization (TR) method has played a very important role in processing gravity and magnetic data. In this paper, the Tikhonov regularization method for the inversion of gravity data is discussed, and the extrapolated TR method (EXTR) is introduced to improve the fitting error. Furthermore, the effects of the parameters in the EXTR method on the fitting error, the number of iterations, and the inversion results are discussed in detail. The computation results using a synthetic model with the same and different densities indicate that, compared with the TR method, the EXTR method not only achieves the a priori fitting error level set by the interpreter but also increases the fitting precision, although it increases the computation time and the number of iterations. Moreover, the EXTR inversion results are more compact than the TR inversion results, which are more divergent; the range of the inverted data is closer to the default range of the model parameters, and the inverted model features agree well with the default model density distribution.
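For context, plain Tikhonov regularization solves an ill-posed linear inverse problem by penalizing the model norm; the extrapolation step of EXTR is not reproduced here, so this is only a sketch of the baseline TR method on a synthetic near-rank-deficient problem, with illustrative sizes and values:

```python
import numpy as np

def tikhonov_solve(G, d, alpha):
    """Minimize ||G m - d||^2 + alpha ||m||^2 via the normal equations."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)

# Ill-posed toy problem: two nearly collinear columns make the
# unregularized least-squares solution blow up under data noise.
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 10))
G[:, -1] = G[:, 0] + 1e-8 * rng.normal(size=20)   # near-collinear columns
m_true = rng.normal(size=10)
d = G @ m_true + 0.01 * rng.normal(size=20)       # noisy data

m_ls = np.linalg.lstsq(G, d, rcond=None)[0]       # unregularized solution
m_tr = tikhonov_solve(G, d, alpha=0.1)            # regularized solution
print(np.linalg.norm(m_tr), np.linalg.norm(m_ls))
```

The regularized model stays bounded while the unregularized one is dominated by amplified noise, which is the instability that both TR and EXTR are designed to control.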
Abstract: In non-independent and identically distributed (non-IID) data environments, model performance often degrades significantly. To address this issue, two improved methods are proposed: FedReg and FedReg^(*). FedReg is a hybrid-regularization-based method aimed at enhancing federated learning in non-IID scenarios. It replaces traditional L2 regularization with a hybrid regularizer that combines the advantages of L1 and L2 regularization, enabling feature selection while preventing overfitting. This method better adapts to the diverse data distributions of different clients and improves overall model performance. FedReg^(*) combines hybrid regularization with weighted model aggregation. In addition to the benefits of hybrid regularization, FedReg^(*) applies weighted averaging in the model aggregation step, computing weights from the cosine similarity between each client's gradient and the global gradient so as to distribute client contributions more reasonably. By accounting for variations in data quality and quantity among clients, FedReg^(*) highlights the importance of key clients and enhances the model's generalization performance. Both methods improve model accuracy and communication efficiency.
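The two ingredients described above can be sketched generically as follows; this is not the authors' FedReg implementation, only an elastic-net-style hybrid penalty and a cosine-similarity weighting scheme with illustrative coefficients (the exact clipping and normalization used in FedReg^(*) are assumptions here):

```python
import numpy as np

def hybrid_penalty(w, lam1, lam2):
    """Hybrid (L1 + L2) regularization term: L1 encourages sparsity /
    feature selection, L2 discourages overfitting."""
    return lam1 * np.sum(np.abs(w)) + lam2 * np.sum(w ** 2)

def cosine_weights(client_grads, global_grad):
    """Aggregation weights from the cosine similarity between each client
    gradient and the global gradient, clipped at zero and normalized."""
    sims = np.array([
        g @ global_grad / (np.linalg.norm(g) * np.linalg.norm(global_grad))
        for g in client_grads
    ])
    sims = np.clip(sims, 0.0, None)   # ignore clients opposing the global step
    return sims / sims.sum()

global_grad = np.array([1.0, 0.0])
clients = [np.array([1.0, 0.1]),    # well aligned with the global gradient
           np.array([0.5, 0.5]),    # partially aligned
           np.array([-1.0, 0.2])]   # opposing
w = cosine_weights(clients, global_grad)
print(w)  # aligned clients receive larger aggregation weights
```

Down-weighting poorly aligned clients is what lets the aggregation reflect differences in client data quality, as the abstract describes.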