Funding: The research of this author was supported by the National Natural Science Foundation of China, the Jiangsu Province Natural Science Foundation, the Jiangsu Province "333 Engineering" Foundation and the Jiangsu Province "Qinglan Engineering" Foundation.
Abstract: A symmetric linear system allows many simplifications and offers the possibility of adapting the computations to the computer at hand in order to achieve better performance. The aim of this paper is to consider block bidiagonalization methods derived from a symmetric augmented formulation of multiple linear systems and to compare them with the block GMRES and block biconjugate gradient methods.
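For orientation, one standard way to embed a (possibly nonsymmetric) multiple right-hand side problem AX = B in a symmetric system is the augmented block form below; the Golub-Kahan bidiagonalization of A is equivalent to the symmetric Lanczos process applied to this augmented matrix. Whether this is precisely the augmentation used in the paper is an assumption made only for illustration.

    \begin{pmatrix} 0 & A \\ A^{\mathsf T} & 0 \end{pmatrix}
    \begin{pmatrix} Y \\ X \end{pmatrix}
    =
    \begin{pmatrix} B \\ 0 \end{pmatrix}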
Abstract: Many problems in science and engineering require solving large consistent linear systems. This paper presents a relaxed greedy block Kaczmarz method (RGBK) and an accelerated greedy block Kaczmarz method (AGBK) for solving large consistent linear systems. The RGBK algorithm extends the greedy block Kaczmarz (GBK) algorithm of Niu and Zheng [1] by introducing a relaxation parameter into the GBK iteration formula, and the AGBK algorithm uses different iterative update rules to reduce the running time. The convergence of RGBK is proved and a method for determining an optimal parameter is provided. Several examples are presented to show the effectiveness of the proposed methods for overdetermined and underdetermined consistent linear systems with dense and sparse coefficient matrices.
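As a rough sketch of what a relaxed greedy block Kaczmarz step can look like, the Python fragment below selects a block of rows by a residual-based greedy rule (a common rule, assumed here; the exact GBK rule of [1] may differ) and applies the block projection scaled by a relaxation parameter omega, as described in the abstract.

    import numpy as np

    def rgbk_step(A, b, x, omega=1.0, theta=0.5):
        # One relaxed greedy block Kaczmarz step (illustrative sketch).
        # Greedy rule (assumed): pick the rows whose weighted squared residuals
        # are within a fraction theta of the largest one; omega is the
        # relaxation parameter introduced by RGBK.
        r = b - A @ x
        scores = r**2 / np.sum(A**2, axis=1)                # row-wise weighted residuals
        tau = np.where(scores >= theta * scores.max())[0]   # greedy block of row indices
        A_tau, r_tau = A[tau, :], r[tau]
        # relaxed block projection: x <- x + omega * pinv(A_tau) @ r_tau
        step = np.linalg.lstsq(A_tau, r_tau, rcond=None)[0]
        return x + omega * step

    # tiny consistent example (hypothetical sizes)
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    b = A @ rng.standard_normal(50)
    x = np.zeros(50)
    for _ in range(300):
        x = rgbk_step(A, b, x, omega=1.2)
    print(np.linalg.norm(A @ x - b))   # residual decreases toward zero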
Funding: the National Natural Science Foundation of China under Grant Nos. 61273311 and 61803247.
Abstract: This paper proposes a two-parameter block triangular splitting (TPTS) preconditioner for general block two-by-two linear systems. The eigenvalues of the corresponding preconditioned matrix are proved to cluster around 0 or 1 under mild conditions. Limited numerical results show that the TPTS preconditioner is more efficient than the classic block-diagonal and block-triangular preconditioners when applied within the flexible generalized minimal residual (FGMRES) method.
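For reference, the fragment below applies the classic block lower-triangular preconditioner that the TPTS preconditioner is compared against; the exact form of the TPTS preconditioner is not reproduced here, and dense solves with an exact Schur complement are used purely for clarity.

    import numpy as np

    def apply_block_lower_triangular_prec(A, B, C, D, r1, r2):
        # Classic block lower-triangular preconditioner for the 2-by-2 system
        #   [[A, B], [C, D]] [x1; x2] = [b1; b2]:
        #   P = [[A, 0], [C, S]],  S = D - C A^{-1} B  (Schur complement).
        # Baseline preconditioner only, not the TPTS preconditioner itself.
        z1 = np.linalg.solve(A, r1)
        S = D - C @ np.linalg.solve(A, B)
        z2 = np.linalg.solve(S, r2 - C @ z1)
        return z1, z2

Within FGMRES, a map of this kind would be applied to the current residual block [r1; r2] at every (flexible) iteration.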
Abstract: In this paper, we provide a new preconditioner for saddle point linear systems whose (1,1) blocks have high nullity. The preconditioner is block triangular diagonal with two variable relaxation parameters and is an extension of the results in [1] and [2]. Theoretical analysis shows that all eigenvalues of the preconditioned matrix are strongly clustered. Finally, numerical tests confirm our analysis.
Abstract: Iterative methods that take advantage of efficient block operations and block communications are a popular research topic in parallel computation. These methods are especially important on Massively Parallel Processors (MPPs). This paper presents a block variant of the GMRES method for solving general unsymmetric linear systems. It is shown that the new algorithm with block size s, denoted BVGMRES(s,m), is theoretically equivalent to the GMRES(s·m) method. The numerical results show that this algorithm can be more efficient than the standard GMRES method on a cache-based single-CPU computer with optimized BLAS kernels. Furthermore, the gain in efficiency is more significant on MPPs, due to both efficient block operations and efficient block data communications. Our numerical results also show that, in comparison to the standard GMRES method, the more PEs that are used on an MPP, the more efficient the BVGMRES(s,m) algorithm is.
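As a minimal illustration of why block operations pay off, the sketch below times s separate matrix-vector products against a single matrix-times-block product of the same arithmetic cost; with optimized BLAS the fused (BLAS-3) form is typically faster because of better cache reuse. The sizes are arbitrary illustrative values.

    import time
    import numpy as np

    n, s = 4000, 8                        # illustrative sizes
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    V = rng.standard_normal((n, s))

    t0 = time.perf_counter()
    for i in range(s):                    # s separate matrix-vector products (BLAS-2)
        _ = A @ V[:, i]
    t1 = time.perf_counter()
    _ = A @ V                             # one matrix-times-block product (BLAS-3)
    t2 = time.perf_counter()

    print(f"{s} matvecs: {t1 - t0:.3f} s   one block product: {t2 - t1:.3f} s")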
Abstract: In this paper, we investigate the block Lanczos algorithm for solving large sparse symmetric linear systems with multiple right-hand sides, show how to incorporate deflation to drop converged linear systems using a natural convergence criterion, and present an adaptive block Lanczos algorithm. We also propose a block version of Paige and Saunders' MINRES method for the iterative solution of symmetric linear systems, and describe important implementation details. We establish a relationship between the block Lanczos algorithm and the block MINRES algorithm, and compare the numerical performance of applying the Lanczos algorithm and the MINRES method to a sequence of right-hand sides with that of applying the block Lanczos and block MINRES algorithms to the multiple linear systems simultaneously.
Funding: Supported by the National Natural Science Foundation of China (10962008).
Abstract: In this paper, we provide a generalized block-by-block method for constructing block-by-block systems to solve systems of linear Volterra integral equations of the second kind, and then deduce some special cases. Numerical examples are given to compare the method with the expansion method and with He's homotopy perturbation method, respectively, and to verify its effectiveness. The results show that the block-by-block method is very effective, simple, and highly accurate for solving systems of linear Volterra integral equations of the second kind.
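To fix the problem setting, the sketch below solves a single linear Volterra integral equation of the second kind, u(t) = f(t) + int_0^t K(t,s) u(s) ds, with a plain trapezoidal-rule scheme; this simple step-by-step discretization is shown only for context and is not the block-by-block method of the paper.

    import numpy as np

    def volterra2_trapezoid(f, K, T, N):
        # Solve u(t) = f(t) + int_0^t K(t, s) u(s) ds on [0, T] with the
        # trapezoidal rule on N + 1 equally spaced nodes (NOT the
        # block-by-block method of the paper).
        h = T / N
        t = np.linspace(0.0, T, N + 1)
        u = np.empty(N + 1)
        u[0] = f(t[0])                    # the integral term vanishes at t = 0
        for n in range(1, N + 1):
            s = 0.5 * K(t[n], t[0]) * u[0] + sum(K(t[n], t[j]) * u[j] for j in range(1, n))
            u[n] = (f(t[n]) + h * s) / (1.0 - 0.5 * h * K(t[n], t[n]))
        return t, u

    # example: u(t) = 1 + int_0^t u(s) ds has the exact solution u(t) = exp(t)
    t, u = volterra2_trapezoid(lambda t: 1.0, lambda t, s: 1.0, 1.0, 200)
    print(abs(u[-1] - np.exp(1.0)))       # small discretization error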
Funding: Supported by the National Basic Research Program (No. 2005CB321702), the China NNSF Outstanding Young Scientist Foundation (No. 10525102), and the National Natural Science Foundation (No. 10471146), P.R. China.
Abstract: The restrictively preconditioned conjugate gradient (RPCG) method is further developed to solve large sparse systems of linear equations of block two-by-two structure. The basic idea of this new approach is to apply the RPCG method to the normal-residual equation of the block two-by-two linear system and to construct each required approximate matrix by making use of incomplete orthogonal factorizations of the involved matrix blocks. Numerical experiments show that the new method, called the restrictively preconditioned conjugate gradient on normal residual (RPCGNR) method, is more robust and effective than either the known RPCG method or the standard conjugate gradient on normal residual (CGNR) method when used for solving large sparse saddle point problems.
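For reference, the fragment below implements the standard conjugate gradient on normal residual (CGNR, also known as CGLS) method that the abstract uses as a baseline; the RPCGNR method additionally applies a restrictive preconditioner built from incomplete orthogonal factorizations of the matrix blocks, which is not shown here.

    import numpy as np

    def cgnr(A, b, x0=None, tol=1e-10, maxit=1000):
        # Standard CGNR / CGLS: solves A^T A x = A^T b without forming A^T A.
        # Baseline method only; the restrictive preconditioning of RPCGNR
        # is not included.
        x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
        r = b - A @ x                     # residual of the original system
        z = A.T @ r                       # residual of the normal equations
        p = z.copy()
        zz = z @ z
        for _ in range(maxit):
            w = A @ p
            alpha = zz / (w @ w)
            x += alpha * p
            r -= alpha * w
            z = A.T @ r
            zz_new = z @ z
            if np.sqrt(zz_new) < tol:
                break
            p = z + (zz_new / zz) * p
            zz = zz_new
        return x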
Abstract: In this paper, we present a unified approach to decomposing a special class of block tridiagonal matrices K(α, β) into block diagonal matrices by similarity transformations. The matrices K(α, β) ∈ R^{pq×pq} are of the form K(α, β) = block-tridiag[βB, A, αB] for three special pairs (α, β): K(1,1), K(1,2) and K(2,2), where A and B ∈ R^{p×p} are general square matrices. The decomposed block diagonal matrices for the three cases are all of the form D_1(α, β) ⊕ D_2(α, β) ⊕ ⋯ ⊕ D_q(α, β), where D_k(α, β) = A + 2cos(θ_k(α, β)) B and the angles θ_k(α, β), k = 1, 2, ..., q, depend on the values of α and β. Our decomposition method is closely related to the classical fast Poisson solver based on Fourier analysis. Unlike the fast Poisson solver, our approach decomposes K(α, β) into q diagonal blocks instead of p blocks. Furthermore, the proposed approach does not require the matrices A and B to be symmetric or to commute, and it employs only the eigenvectors of the tridiagonal matrix T(α, β) = tridiag[βb, a, αb], where a and b are scalars, in block form. The transformation matrices, their inverses, and the explicit form of the decomposed block diagonal matrices are derived in this paper. Numerical examples and experiments are also presented to demonstrate the validity and usefulness of the approach. Due to the decoupled nature of the decomposed matrices, this approach lends itself to parallel and distributed computation for solving both linear systems and eigenvalue problems on multiprocessors.
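The K(1,1) case can be checked numerically in a few lines. The sketch below assembles K(1,1) = block-tridiag[B, A, B] for random nonsymmetric, non-commuting A and B and verifies that the block similarity transformation built from the standard sine eigenvectors of tridiag[1, 0, 1] produces the q diagonal blocks A + 2cos(kπ/(q+1)) B; the angles θ_k for the (1,2) and (2,2) cases differ and are not reproduced here.

    import numpy as np

    p, q = 4, 6                                      # block size p, number of blocks q (illustrative)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((p, p))                  # general square blocks: not symmetric,
    B = rng.standard_normal((p, p))                  # and A, B do not commute

    # K(1,1) = block-tridiag[B, A, B] = I_q (x) A + tridiag[1, 0, 1] (x) B
    T = np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
    K = np.kron(np.eye(q), A) + np.kron(T, B)

    # orthonormal eigenvectors of tridiag[1, 0, 1]: the discrete sine basis
    idx = np.arange(1, q + 1)
    Q = np.sqrt(2.0 / (q + 1)) * np.sin(np.outer(idx, idx) * np.pi / (q + 1))

    # block similarity transformation (Q (x) I_p)^T K (Q (x) I_p)
    S = np.kron(Q, np.eye(p))
    D = S.T @ K @ S

    # diagonal blocks equal A + 2 cos(k*pi/(q+1)) B; off-diagonal blocks vanish
    for k in idx:
        blk = D[(k - 1) * p:k * p, (k - 1) * p:k * p]
        assert np.allclose(blk, A + 2.0 * np.cos(k * np.pi / (q + 1)) * B)
        D[(k - 1) * p:k * p, (k - 1) * p:k * p] = 0.0
    assert np.abs(D).max() < 1e-10
    print("K(1,1) decomposed into q diagonal blocks of the form A + 2cos(k*pi/(q+1)) B")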