Abstract: We discuss estimates for the rate of convergence of the method of successive subspace corrections in terms of condition number estimates for the method of parallel subspace corrections. We provide upper bounds and, in a special case, a lower bound for preconditioners defined via the method of successive subspace corrections.
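To illustrate the setting, here is a minimal sketch of the method of successive subspace corrections (multiplicative Schwarz) for a symmetric positive definite system, with subspaces given by overlapping index blocks. The matrix, block choice, and sweep count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ssc_sweep(A, b, x, blocks):
    """One sweep of successive subspace corrections (multiplicative
    Schwarz): for each subspace, given here by an index block, solve the
    restricted residual equation and apply the correction immediately."""
    for idx in blocks:
        r = b - A @ x                      # residual at the current iterate
        Ai = A[np.ix_(idx, idx)]           # restriction of A to the subspace
        x[idx] += np.linalg.solve(Ai, r[idx])
    return x

# Usage: SPD system, two overlapping index blocks covering all unknowns
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)                # symmetric positive definite
b = rng.standard_normal(6)
x = np.zeros(6)
for _ in range(100):
    x = ssc_sweep(A, b, x, [np.arange(0, 4), np.arange(2, 6)])
```

Because the blocks overlap and together cover every unknown, the sweep is a product of A-orthogonal-projection-like corrections and converges linearly for SPD systems.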
Funding: supported by NSFC Grant 10601043, NCET, XMU, and SRF for ROCS, SEM; by RGC Grant 201508 and HKBU FRGs; and by the Hong Kong Research Grants Council.
Abstract: This paper presents a coordinate gradient descent approach for minimizing the sum of a smooth function and a nonseparable convex function. We find a search direction by solving a subproblem obtained by taking a second-order approximation of the smooth function and adding a separable convex function. Under a local Lipschitzian error bound assumption, we show that the algorithm possesses global and local linear convergence properties. We also give some numerical tests (including image recovery examples) to illustrate the efficiency of the proposed method.
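A minimal sketch of the coordinate gradient descent idea follows. The paper treats a general nonseparable convex term; this sketch instead uses the separable ℓ₁ norm so that each coordinate subproblem (quadratic model of the smooth part plus the convex term) has a closed-form soft-thresholding solution. Problem data and sweep count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def coordinate_descent(A, b, lam, n_sweeps=100):
    """Cyclic coordinate gradient descent for 0.5*||Ax - b||^2 + lam*||x||_1:
    each coordinate step minimizes a quadratic model of the smooth part plus
    the separable convex term, solved in closed form by soft-thresholding."""
    n = A.shape[1]
    L = (A ** 2).sum(axis=0)               # per-coordinate curvature ||A_j||^2
    x = np.zeros(n)
    r = A @ x - b                          # running residual Ax - b
    for _ in range(n_sweeps):
        for j in range(n):
            g = A[:, j] @ r                # partial gradient of the smooth part
            x_new = soft_threshold(x[j] - g / L[j], lam / L[j])
            r += A[:, j] * (x_new - x[j])  # keep the residual consistent
            x[j] = x_new
    return x

# Usage: small synthetic least-squares problem with l1 regularization
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x = coordinate_descent(A, b, lam=0.1)
```

Each coordinate update is an exact minimization in that coordinate, so the objective is monotone nonincreasing sweep by sweep.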
Funding: supported by the National Natural Science Foundation of China (No. 11901368).
Abstract: In this paper, we propose a variable metric extrapolation proximal iterative hard thresholding (VMEPIHT) method for the nonconvex ℓ₀-norm sparsity regularization problem, which has wide applications in signal and image processing, machine learning, and so on. The VMEPIHT method is based on the forward-backward splitting (FBS) method, and a variable metric strategy is employed in the extrapolation step to speed up the algorithm. The proposed method's convergence, linear convergence rate, and superlinear convergence rate are shown under appropriate assumptions. Finally, we conduct numerical experiments on compressed sensing and CT image reconstruction problems to confirm the efficiency of the proposed method, compared with other state-of-the-art methods.
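The following sketch shows extrapolated proximal iterative hard thresholding in its simplest form: an identity metric and a fixed extrapolation weight stand in for the paper's variable metric strategy, and the problem sizes and weight β are illustrative assumptions.

```python
import numpy as np

def hard_threshold(z, k):
    """Projection onto {x : ||x||_0 <= k}: keep the k largest-magnitude entries."""
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-k:]
    out[keep] = z[keep]
    return out

def extrapolated_iht(A, b, k, n_iter=300, beta=0.5):
    """Extrapolated proximal iterative hard thresholding for
    min 0.5*||Ax - b||^2 subject to ||x||_0 <= k: a forward (gradient)
    step at the extrapolated point, then a backward (hard-threshold) step."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = x_prev = np.zeros(A.shape[1])
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)        # extrapolation
        x_prev = x
        x = hard_threshold(y - A.T @ (A @ y - b) / L, k)
    return x

# Usage: compressed sensing with a Gaussian sensing matrix
rng = np.random.default_rng(2)
m, n, k = 50, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # roughly unit-norm columns
x_true = np.zeros(n)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = extrapolated_iht(A, b, k)
```

The hard-threshold step is the proximal (backward) map of the ℓ₀ constraint, which is why the scheme fits the forward-backward splitting framework mentioned in the abstract.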
Funding: supported in part by the National Natural Science Foundation of China under Grant Nos. 62103003, 72171171, 62073035, and 61973002, and in part by the Anhui Provincial Natural Science Foundation under Grant No. 2008085J32.
Abstract: This paper develops distributed algorithms for solving Sylvester equations. The authors transform solving Sylvester equations into a distributed optimization problem, unifying all eight standard distributed matrix structures. The authors then propose a distributed algorithm to find the least squares solution and achieve an explicit linear convergence rate. These results are obtained by carefully choosing the step-size of the algorithm, which requires particular information about the data and Laplacian matrices. To avoid these centralized quantities, the authors further develop a distributed scaling technique using local information only. As a result, the proposed distributed algorithm, together with the distributed scaling design, yields a universal method for solving Sylvester equations over a multi-agent network with the constant step-size freely chosen from configurable intervals. Finally, the authors provide three examples to illustrate the effectiveness of the proposed algorithms.
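A centralized sketch of the least-squares formulation underlying this approach: gradient descent on 0.5·||AX + XB − C||²_F with a step-size from a Lipschitz bound. The paper distributes this computation over a multi-agent network and derives step-size intervals; that machinery is not reproduced here, and the matrices below are illustrative assumptions.

```python
import numpy as np

def sylvester_lsq(A, B, C, n_iter=3000):
    """Gradient descent on the least-squares formulation
    0.5*||AX + XB - C||_F^2 of the Sylvester equation AX + XB = C.
    The step-size 1/L uses a Lipschitz bound for the gradient."""
    L = (np.linalg.norm(A, 2) + np.linalg.norm(B, 2)) ** 2
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(n_iter):
        R = A @ X + X @ B - C              # residual
        X -= (A.T @ R + R @ B.T) / L       # gradient step
    return X

# Usage: spectra of A and -B are disjoint, so the solution is unique
rng = np.random.default_rng(3)
MA = rng.standard_normal((4, 4))
MB = rng.standard_normal((3, 3))
A = MA @ MA.T / 4 + np.eye(4)              # SPD, eigenvalues >= 1
B = MB @ MB.T / 3 + np.eye(3)              # SPD, eigenvalues >= 1
C = rng.standard_normal((4, 3))
X = sylvester_lsq(A, B, C)
```

Since the Sylvester operator X ↦ AX + XB has full rank here (eigenvalues λᵢ(A) + λⱼ(B) ≥ 2), the iteration converges linearly to the unique solution.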
Abstract: A Quasi-Newton method in Infinite-dimensional Spaces (QNIS) for solving operator equations is presented, and the convergence of a sequence generated by QNIS is also proved in the paper. Next, we suggest a finite-dimensional implementation of QNIS and prove that the sequence defined by the finite-dimensional algorithm converges to the root of the original operator equation, provided that the latter exists and that the Fréchet derivative of the governing operator is invertible. Finally, we apply QNIS to an inverse problem for a parabolic differential equation to illustrate the efficiency of the finite-dimensional algorithm.
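As a finite-dimensional analogue of such a scheme, here is a sketch of Broyden's quasi-Newton method for a nonlinear system F(x) = 0: an approximate Jacobian is maintained and corrected by a rank-one secant update instead of being recomputed. The finite-difference initialization and the test problem are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def broyden(F, x0, n_iter=30, h=1e-6):
    """Broyden's quasi-Newton method for F(x) = 0: maintain an approximate
    Jacobian J, take the step solving J s = -F(x), and update J with the
    rank-one secant correction so that J_new s = F(x_new) - F(x)."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    # initialize J by forward finite differences (one Jacobian estimate only)
    I = np.eye(x.size)
    J = np.column_stack([(F(x + h * I[:, i]) - Fx) / h for i in range(x.size)])
    for _ in range(n_iter):
        if np.linalg.norm(Fx) < 1e-12:     # converged; avoid a zero step below
            break
        s = np.linalg.solve(J, -Fx)        # quasi-Newton step
        x = x + s
        Fx_new = F(x)
        J += np.outer(Fx_new - Fx - J @ s, s) / (s @ s)  # secant update
        Fx = Fx_new
    return x

# Usage: intersect the circle x^2 + y^2 = 2 with the line x = y
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]])
root = broyden(F, [1.5, 1.2])
```

The secant update costs only a rank-one correction per step, which is the practical appeal of quasi-Newton methods when the (Fréchet) derivative is expensive to form.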