Abstract: We discuss estimates for the rate of convergence of the method of successive subspace corrections in terms of condition number estimates for the method of parallel subspace corrections. We provide upper bounds and, in a special case, a lower bound for preconditioners defined via the method of successive subspace corrections.
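A minimal sketch (not taken from the paper) of the two classical iterations the abstract relates: the method of parallel subspace corrections (PSC, additive) applies all local corrections to the same residual, while the method of successive subspace corrections (SSC, multiplicative) refreshes the residual after each local solve. Here the subspaces are simply coordinate blocks of an SPD matrix; the block sizes and damping factor are illustrative choices.

```python
import numpy as np

def subspace_corrections(A, b, blocks, n_iter=50, mode="successive"):
    """One-level subspace correction iteration for A x = b with SPD A.

    blocks: list of index arrays defining the subspaces (coordinate blocks here).
    mode "parallel": all local corrections use the same residual (PSC / additive).
    mode "successive": the residual is refreshed after each correction (SSC).
    """
    x = np.zeros_like(b)
    for _ in range(n_iter):
        if mode == "parallel":
            r = b - A @ x
            dx = np.zeros_like(x)
            for idx in blocks:
                # exact local solve restricted to the subspace
                dx[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
            x += 0.5 * dx  # damping keeps the additive sweep convergent in general
        else:
            for idx in blocks:
                r = b - A @ x  # refreshed residual: a block Gauss-Seidel sweep
                x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return x

# Test problem: 1-D Dirichlet Laplacian
n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
blocks = [np.arange(i, min(i + 4, n)) for i in range(0, n, 4)]
x = subspace_corrections(A, b, blocks, n_iter=500, mode="successive")
```

The SSC sweep is exactly the block Gauss-Seidel method for this choice of subspaces, which is why the paper can relate its convergence rate to the PSC condition number.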
Funding: Supported in part by the NRF grant funded by MSIT (No. 2021R1C1C2095193) and in part by the KAUST Baseline Research Fund. An early version of this paper can be found in [25].
Abstract: In this paper, we propose a novel algorithm called the neuron-wise parallel subspace correction method for the finite neuron method, which approximates numerical solutions of partial differential equations (PDEs) using neural network functions. Despite extremely extensive research activity in applying neural networks to numerical PDEs, there is still a serious lack of effective training algorithms that can achieve adequate accuracy, even for one-dimensional problems. Based on recent results on the spectral properties of linear layers and on the analysis of single neuron problems, we develop a special type of subspace correction method that optimizes the linear layer and each neuron in the nonlinear layer separately. An optimal preconditioner that resolves the ill-conditioning of the linear layer is presented for one-dimensional problems, so that the linear layer is trained in a uniform number of iterations with respect to the number of neurons. In each single neuron problem, a local minimum is found by a superlinearly convergent algorithm. Numerical experiments on function approximation problems and PDEs demonstrate better performance of the proposed method than other gradient-based methods.
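A simplified sketch of the neuron-wise splitting described above, not the authors' algorithm: the paper uses an optimal preconditioner for the linear layer and a superlinearly convergent local solver, whereas this stand-in solves the linear (output) layer exactly by least squares and updates each hidden neuron with a few plain gradient steps. The network form, step sizes, and iteration counts are illustrative assumptions.

```python
import numpy as np

# One-hidden-layer ReLU network u(x) = sum_i c_i * relu(w_i * x + b_i),
# fitted to a target function by block-coordinate (subspace) sweeps.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
target = np.sin(2.0 * np.pi * x)

m = 20                      # number of hidden neurons
w = rng.normal(size=m)
b = rng.normal(size=m)
c = np.zeros(m)
relu = lambda t: np.maximum(t, 0.0)

def features():
    return relu(np.outer(x, w) + b)      # 200 x m matrix of neuron outputs

for sweep in range(30):
    # subspace 1: the linear layer, solved exactly (least squares in c)
    c, *_ = np.linalg.lstsq(features(), target, rcond=None)
    # subspaces 2..m+1: each hidden neuron (w_i, b_i) updated separately
    for i in range(m):
        for _ in range(3):               # a few gradient steps per neuron
            z = np.outer(x, w) + b
            r = relu(z) @ c - target     # pointwise residual
            mask = (z[:, i] > 0.0)       # relu'(z_i)
            w[i] -= 0.1 * 2.0 * c[i] * np.mean(r * mask * x)
            b[i] -= 0.1 * 2.0 * c[i] * np.mean(r * mask)

c, *_ = np.linalg.lstsq(features(), target, rcond=None)
mse = np.mean((features() @ c - target) ** 2)
```

Solving the linear layer and each neuron in separate subproblems is the essential structural idea; the paper's preconditioning and local solver make each subproblem far cheaper and more robust than the plain gradient steps used here.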
Funding: Supported by the National Natural Science Foundation of China (No. 10971058) and the Key Project of the Chinese Ministry of Education (No. 309023).
Abstract: This paper gives a new subspace correction algorithm for nonlinear unconstrained convex optimization problems, based on the multigrid approach proposed by S. Nash in 2000 and the subspace correction algorithm proposed by X. Tai and J. Xu in 2001. Under some reasonable assumptions, we obtain convergence as well as a convergence rate estimate for the algorithm. Numerical results show that the algorithm is effective.
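The core building block of such algorithms can be illustrated as follows; this is a generic sketch, not the cited method, which additionally organizes the subspaces into a multigrid hierarchy. A smooth convex objective is minimized by sweeping over coordinate subspaces, with an inexact local solve (a single gradient step) in each; the objective, step size, and block partition are illustrative assumptions.

```python
import numpy as np

def f(x, A, b):
    r = A @ x - b
    return 0.5 * r @ r + 0.25 * np.sum(x ** 4)   # smooth, strictly convex

def grad(x, A, b):
    return A.T @ (A @ x - b) + x ** 3

def block_coordinate_descent(A, b, blocks, n_sweeps=300, lr=0.1):
    """Successive subspace correction: sweep over blocks, one gradient
    step per block as an inexact local solve."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for idx in blocks:
            g = grad(x, A, b)
            x[idx] -= lr * g[idx]
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10)) / np.sqrt(30)
b = rng.normal(size=30)
blocks = [np.arange(0, 5), np.arange(5, 10)]
x = block_coordinate_descent(A, b, blocks)
```

Replacing the flat list of blocks with nested coarse-to-fine subspaces is what turns this into a multigrid-style scheme of the kind the paper analyzes.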
Funding: Supported by the School Youth Foundation Project of Anqing Teacher's College (KJ201108).
Abstract: In this paper, a modified additive Schwarz finite difference algorithm is applied to a compact difference scheme for the heat conduction equation. The algorithm is based on domain decomposition and subspace correction. The basic idea is to introduce a partition of unity and a reasonable distribution of the corrections over the overlapping regions. A residual correction is carried out on each subspace, and the computations are fully parallel. Theoretical analysis shows that the method is completely parallel.
Abstract: The convergence analysis of general iterative methods for symmetric and positive semidefinite problems is presented in this paper. First, refined necessary and sufficient conditions for the energy norm convergence of iterative methods are formulated, together with some illustrative examples. The sharp convergence rate identity for the Gauss-Seidel method on semidefinite systems is obtained relying only on pure matrix manipulations, which guides us to the convergence rate identity for general successive subspace correction methods. The convergence rate identity for successive subspace correction methods is obtained under the new condition that the local correction schemes possess local energy norm convergence. A convergence rate estimate is then derived in terms of the exact subspace solvers and the parameters that appear in the conditions. The uniform convergence of the multigrid method for a model problem is proved by the convergence rate identity. The work can be regarded as a unified and simplified analysis of the convergence of iterative methods for semidefinite problems [8, 9].
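The semidefinite setting the abstract addresses can be seen on a small example; this sketch is not from the paper. The graph Laplacian of a path is symmetric positive semidefinite with the constant vector in its kernel, yet Gauss-Seidel still drives the residual to zero whenever the right-hand side lies in the range of the matrix, which is the kind of convergence the paper characterizes.

```python
import numpy as np

# Neumann-type path Laplacian: singular (A @ ones = 0) but PSD
n = 10
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = 1.0
b = np.zeros(n)
b[0], b[-1] = 1.0, -1.0      # zero-sum right-hand side: b lies in range(A)

def gauss_seidel_sweep(A, b, x):
    """One forward Gauss-Seidel sweep, in place."""
    for i in range(len(b)):
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

x = np.zeros(n)
residuals = []
for _ in range(300):
    x = gauss_seidel_sweep(A, b, x)
    r = b - A @ x
    residuals.append(r @ r)
```

If `b` had a nonzero mean (i.e., a component outside the range of `A`), the system would be inconsistent and the residual could not vanish; the compatibility of the right-hand side is precisely what the necessary and sufficient conditions in the paper encode.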