Funding: supported by the National Basic Research Program of China (Grant No. 2011CB302400) and the National Natural Science Foundation of China (Grant No. 11371219).
Abstract: LSQR, a Lanczos bidiagonalization based Krylov subspace iterative method, and its mathematically equivalent conjugate gradient method for least squares problems (CGLS), applied to the system of normal equations, are commonly used for large-scale discrete ill-posed problems. It is well known that LSQR and CGLS have regularizing effects, where the number of iterations plays the role of the regularization parameter. However, it has long been unknown whether these regularizing effects are good enough to find best possible regularized solutions. Here a best possible regularized solution means one that is at least as accurate as the best regularized solution obtained by the truncated singular value decomposition (TSVD) method. We establish bounds for the distance between the k-dimensional Krylov subspace and the k-dimensional dominant right singular space. These bounds show that the Krylov subspace captures the dominant right singular space better for severely and moderately ill-posed problems than for mildly ill-posed problems. Our general conclusions are that LSQR has better regularizing effects for the first two kinds of problems than for the third kind, and that a hybrid LSQR with additional regularization is generally needed for mildly ill-posed problems. Exploiting the established bounds, we derive an estimate for the accuracy of the rank-k approximation generated by Lanczos bidiagonalization. Numerical experiments illustrate that the regularizing effects of LSQR are good enough to compute best possible regularized solutions for severely and moderately ill-posed problems, a result stronger than our theory predicts, but not for mildly ill-posed problems, where additional regularization is needed.
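As a minimal sketch of the idea that the iteration count acts as the regularization parameter, the following Python snippet compares truncated LSQR iterates with TSVD solutions on a small synthetic discrete ill-posed problem. The test problem, decay rate, and noise level are illustrative assumptions, not the paper's experiments.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Small synthetic ill-posed problem: orthogonal factors, rapidly decaying
# singular values, a smooth exact solution, and noisy data (assumed setup).
rng = np.random.default_rng(0)
n = 200
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.9 ** np.arange(n)                      # decaying singular values
A = U @ np.diag(s) @ V.T
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-4 * rng.standard_normal(n)

def tsvd_solution(k):
    """TSVD regularized solution: keep only the k largest singular triplets."""
    return V[:, :k] @ ((U[:, :k].T @ b) / s[:k])

def lsqr_iterate(k):
    """LSQR iterate after k steps; k plays the role of the regularization parameter."""
    return lsqr(A, b, iter_lim=k)[0]

for k in (5, 10, 20, 40):
    err_lsqr = np.linalg.norm(lsqr_iterate(k) - x_true)
    err_tsvd = np.linalg.norm(tsvd_solution(k) - x_true)
    print(f"k={k:3d}  LSQR error={err_lsqr:.3e}  TSVD error={err_tsvd:.3e}")
```

Comparing the two error curves over k is one simple way to check, for a given problem, whether the iterates of LSQR reach the accuracy of the best TSVD solution.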
Funding: supported in part by the National Science Foundation of China under Grant No. 10771116 and the Doctoral Program of the Ministry of Education under Grant No. 20060003003.
Abstract: It is well known that many Krylov solvers for linear systems, eigenvalue problems, and singular value decomposition problems have very simple and elegant formulas for residual norms. These formulas not only allow us to further understand the methods theoretically, but can also be used as cheap stopping criteria without forming approximate solutions and residuals at each step before convergence takes place. LSQR for large sparse linear least squares problems is based on the Lanczos bidiagonalization process and is a Krylov solver. However, there has not yet been an analogously elegant formula for its residual norms. This paper derives such a formula. In addition, the author obtains some other properties of LSQR and its mathematically equivalent CGLS.
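To illustrate the general point that residual norms can be monitored from a solver's internal recurrences rather than by forming each iterate explicitly, the sketch below compares the residual norm reported by SciPy's LSQR with the norm computed directly from the returned iterate. This is only a generic demonstration on an assumed random test problem; it does not reproduce the particular formula derived in the paper.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Random overdetermined least squares problem (assumed test data).
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 80))
b = rng.standard_normal(500)

# Run LSQR for a fixed number of iterations; r1norm is the residual norm
# maintained by the solver's internal recurrence, while `explicit` is
# ||b - A x_k|| computed directly from the returned iterate.
for k in (5, 20, 80):
    x, istop, itn, r1norm, *_ = lsqr(A, b, iter_lim=k)
    explicit = np.linalg.norm(b - A @ x)
    print(f"iter_lim={k:3d}  reported ||r||={r1norm:.6e}  explicit ||r||={explicit:.6e}")
```

The agreement of the two values is what makes recurrence-based residual norms usable as cheap stopping criteria.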
Funding: the work is supported by the National Natural Science Foundation of China under Grant No. 10961010.
Abstract: The orthogonal nonnegative matrix factorization (ONMF) has many applications in a variety of areas such as data mining, information processing, and pattern recognition. In this paper, we propose a novel initialization method for the ONMF based on Lanczos bidiagonalization and the nonnegative approximation of a rank-one matrix. Numerical experiments are given to show that our initialization strategy is effective and efficient.
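As a hedged, generic sketch of this kind of SVD-based nonnegative initialization (an NNDSVD-style heuristic, not necessarily the authors' exact algorithm; the helper names and the use of SciPy's partial SVD routine in place of a dedicated Lanczos bidiagonalization code are assumptions), one can build nonnegative factors from the nonnegative parts of the leading rank-one SVD terms:

```python
import numpy as np
from scipy.sparse.linalg import svds

def nonneg_rank_one_part(u, v):
    """Nonnegative rank-one approximation of u v^T, choosing between the
    (+,+) and (-,-) sign patterns (NNDSVD-style heuristic)."""
    up, un = np.maximum(u, 0), np.maximum(-u, 0)
    vp, vn = np.maximum(v, 0), np.maximum(-v, 0)
    if np.linalg.norm(up) * np.linalg.norm(vp) >= np.linalg.norm(un) * np.linalg.norm(vn):
        return up, vp
    return un, vn

def onmf_init(A, r):
    """Nonnegative initial factors W (m x r) and H (r x n) from a partial SVD."""
    # svds returns the r largest singular triplets via a Lanczos-type iteration.
    U, S, Vt = svds(A, k=r)
    order = np.argsort(S)[::-1]              # svds does not sort descending
    U, S, Vt = U[:, order], S[order], Vt[order, :]
    W = np.zeros((A.shape[0], r))
    H = np.zeros((r, A.shape[1]))
    for i in range(r):
        wi, hi = nonneg_rank_one_part(U[:, i], Vt[i, :])
        W[:, i] = np.sqrt(S[i]) * wi
        H[i, :] = np.sqrt(S[i]) * hi
    return W, H

# Example: initialize factors for a random nonnegative 100 x 60 matrix.
A = np.abs(np.random.default_rng(2).standard_normal((100, 60)))
W, H = onmf_init(A, r=5)
print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))
```

Such a nonnegative, SVD-seeded starting point is typically used to warm-start an ONMF iteration instead of a purely random initialization.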