Funding: The research has been supported by the Natural Science Foundation of China under Grant No. 61872422 and the Natural Science Foundation of Zhejiang Province, China under Grant No. LY19F020028.
Abstract: Given that the concurrent L1-minimization (L1-min) problem is often required in real applications, we investigate how to solve it in parallel on GPUs. First, we propose a novel self-adaptive warp implementation of the matrix-vector multiplication (Ax) and a novel self-adaptive thread implementation of the matrix-vector multiplication (A^Tx) on the GPU. Vector-operation and inner-product decision trees are adopted to choose the optimal vector-operation and inner-product kernels for vectors of any size. Second, based on the proposed kernels, the iterative shrinkage-thresholding algorithm is used to build two concurrent L1-min solvers, one organized around GPU streams and the other around thread blocks, and their performance is optimized using newer GPU features such as the shuffle instruction and the read-only data cache. Finally, we design a concurrent L1-min solver on multiple GPUs. The experimental results validate the effectiveness and performance of the proposed methods.
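The solvers above are built on the iterative shrinkage-thresholding algorithm (ISTA), which alternates a gradient step on the least-squares term with an elementwise soft-thresholding step; the two matrix-vector products Ax and A^Tx are exactly the kernels accelerated on the GPU. Below is a minimal single-problem NumPy sketch of that iteration, offered only as an illustration; the problem sizes, the regularization weight lam, and the step-size rule are assumptions, and this is not the authors' GPU implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    # Elementwise shrinkage: sign(v) * max(|v| - tau, 0).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, num_iters=500):
    # Minimizes 0.5*||Ax - b||_2^2 + lam*||x||_1 by iterative shrinkage-thresholding.
    L = np.linalg.norm(A, 2) ** 2          # step size 1/L, L bounds the gradient Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)           # the Ax and A^T x products the GPU kernels accelerate
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true
print(np.linalg.norm(ista(A, b, lam=0.01) - x_true))
```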
Abstract: Compressive sensing (CS) is an emerging methodology in computational signal processing that has recently attracted intensive research activity. At present, the basic CS theory covers recoverability and stability: the former quantifies the central fact that a sparse signal of length n can be exactly recovered from far fewer than n measurements via l1-minimization or other recovery techniques, while the latter specifies the stability of a recovery technique in the presence of measurement errors and inexact sparsity. So far, most analyses in CS rely heavily on the Restricted Isometry Property (RIP) of matrices. In this paper, we present an alternative, non-RIP analysis of CS via l1-minimization. Our purpose is three-fold: (a) to introduce an elementary and RIP-free treatment of the basic CS theory; (b) to extend the current recoverability and stability results so that prior knowledge can be utilized to enhance recovery via l1-minimization; and (c) to substantiate a property called uniform recoverability of l1-minimization, that is, for almost all random measurement matrices recoverability is asymptotically identical. With the aid of two classic results, the non-RIP approach enables us to quickly derive from scratch all basic results for the extended theory.
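As a concrete illustration of the recoverability statement above (a sparse signal of length n recovered exactly from far fewer than n measurements), the sketch below solves the l1-minimization (basis pursuit) problem min ||x||_1 subject to Ax = b as a linear program with SciPy. The variable split x = u - v and the problem sizes are illustrative choices, not material from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    # min ||x||_1 s.t. Ax = b, written as an LP with x = u - v, u, v >= 0.
    n = A.shape[1]
    c = np.ones(2 * n)                  # objective: sum(u) + sum(v) = ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(1)
n, m, k = 200, 60, 5                    # signal length, measurements, sparsity
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = basis_pursuit(A, A @ x_true)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```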
Abstract: Recently, finding the sparsest solution of an underdetermined linear system has become an important requirement in many areas such as compressed sensing, image processing, statistical learning, and sparse data approximation. In this paper, we study some theoretical properties of the solutions to a general class of l0-minimization problems, which can be used to deal with many practical applications. We establish some necessary conditions for a point to be the sparsest solution to this class of problems, and we also characterize the conditions for the multiplicity of the sparsest solutions to the problem. Finally, we discuss certain conditions for the boundedness of the solution set of this class of problems.
Abstract: The generalized l1 greedy algorithm was recently introduced and used to reconstruct medical images in computerized tomography within the compressed sensing framework via total variation minimization. Experimental results showed that this algorithm is superior to the reweighted l1-minimization and l1 greedy algorithms in reconstructing these medical images. In this paper, the effectiveness of the generalized l1 greedy algorithm in finding random sparse signals from underdetermined linear systems is investigated. A series of numerical experiments demonstrates that the generalized l1 greedy algorithm is superior to the reweighted l1-minimization and l1 greedy algorithms in the successful recovery of randomly generated Gaussian sparse signals from data generated by Gaussian random matrices. In particular, the generalized l1 greedy algorithm performs extraordinarily well in recovering random sparse signals with small nonzero entries. The stability of the generalized l1 greedy algorithm with respect to its parameters and the impact of noise on the recovery of Gaussian sparse signals are also studied.
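For reference, the reweighted l1-minimization baseline named above is commonly implemented as the iteratively reweighted scheme sketched below, in which each weighted l1 problem is re-solved with weights inversely proportional to the current coefficient magnitudes. The smoothing constant eps, the number of reweighting rounds, and the test sizes are assumptions, and the generalized l1 greedy algorithm itself is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    # min sum_i w_i*|x_i|  s.t.  Ax = b, as an LP with x = u - v, u, v >= 0.
    n = A.shape[1]
    res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=(0, None))
    return res.x[:n] - res.x[n:]

def reweighted_l1(A, b, rounds=4, eps=1e-3):
    # Iteratively reweighted l1-minimization: start from plain basis pursuit,
    # then reweight each coordinate by 1/(|x_i| + eps).
    x = weighted_l1_min(A, b, np.ones(A.shape[1]))
    for _ in range(rounds):
        x = weighted_l1_min(A, b, 1.0 / (np.abs(x) + eps))
    return x

rng = np.random.default_rng(2)
m, n, k = 50, 150, 8
A = rng.standard_normal((m, n))                 # Gaussian random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 0.1 * rng.standard_normal(k)  # small nonzeros
x_hat = reweighted_l1(A, A @ x_true)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```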
Funding: Supported by the National Natural Science Foundation of China (Nos. 12171496, 12171490, 11971491 and U1811461) and the Guangdong Basic and Applied Basic Research Foundation (No. 2024A1515012057).
Abstract: The one-bit compressed sensing problem is of fundamental importance in many areas, such as wireless communication and statistics. However, the optimization of the one-bit problem constrained to the unit sphere lacks an algorithm with a rigorous mathematical proof of convergence and validity. In this paper, an iterative algorithm based on the difference-of-convex algorithm is established for the one-bit compressed sensing problem constrained to the unit sphere, with an iteration of the form ■, where C is the convex cone generated by the one-bit measurements and η_1 > η_2 > 1/2. The new algorithm is proved to converge as long as the initial point lies on the unit sphere and is consistent with the measurements, and convergence to the global minimum point of the l_1 norm is discussed.
Funding: Supported by the Engineering and Physical Sciences Research Council of the UK (Grant No. EP/K00946X/1).
Abstract: Recently, 1-bit compressive sensing (1-bit CS) has been studied in the field of sparse signal recovery. Since the amplitude information of sparse signals is not available in 1-bit CS, it is often the support or the sign of a signal that can be exactly recovered with a decoding method. We first show that a necessary assumption (which has been overlooked in the literature) should be made for some existing theories and discussions of 1-bit CS. Without such an assumption, the solution found by some existing decoding algorithms might be inconsistent with the 1-bit measurements. This motivates us to pursue a new direction and develop uniform and nonuniform recovery theories for 1-bit CS with a new decoding method that always generates a solution consistent with the 1-bit measurements. We focus on an extreme case of 1-bit CS, in which the measurements capture only the sign of the product of a sensing matrix and a signal. We show that the 1-bit CS model can be reformulated equivalently as an l0-minimization problem with linear constraints. This reformulation naturally leads to a new linear-program-based decoding method, referred to as the 1-bit basis pursuit, which is remarkably different from existing formulations. It turns out that the uniqueness condition for the solution of the 1-bit basis pursuit yields the so-called restricted range space property (RRSP) of the transposed sensing matrix. This concept provides a basis for developing sign recovery conditions for sparse signals from 1-bit measurements. We prove that if the sign of a sparse signal can be exactly recovered from 1-bit measurements with 1-bit basis pursuit, then the sensing matrix must admit a certain RRSP, and that if the sensing matrix admits a slightly enhanced RRSP, then the sign of a k-sparse signal can be exactly recovered with 1-bit basis pursuit.
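The extreme 1-bit model discussed above retains only the sign of each linear measurement, and the decoding method is required to return a solution consistent with those signs. The short sketch below generates such measurements and checks sign consistency of a candidate signal; it illustrates the consistency requirement only and is not the 1-bit basis pursuit decoder from the paper.

```python
import numpy as np

def one_bit_measurements(A, x):
    # y_i = sign((Ax)_i): only the sign of each linear measurement is retained.
    return np.sign(A @ x)

def is_consistent(A, y, x_candidate):
    # A decoded signal is consistent with the 1-bit data if it reproduces every sign.
    return np.array_equal(np.sign(A @ x_candidate), y)

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 100))
x = np.zeros(100)
x[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
y = one_bit_measurements(A, x)
print(is_consistent(A, y, x))    # True: the true signal reproduces its own signs
print(is_consistent(A, y, -x))   # False: negating the signal flips every sign
```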
Funding: Supported by the National Natural Science Foundation of China (Grant No. 10701035), the Chen Guang Project of Shanghai Education Development Foundation (Grant No. 2007CG33), the Research Grants Council of Hong Kong, and a Faculty Research Grant from Hong Kong Baptist University.
Abstract: Large dimensional predictors are often introduced in regressions to attenuate possible modeling bias. We consider stable direction recovery in single-index models, in which we assume only that the response Y is independent of the diverging-dimensional predictors X given β_0^τ X, where β_0 is a p_n × 1 vector and p_n → ∞ as the sample size n → ∞. We first explore sufficient conditions under which the least squares estimate β_n0 recovers the direction β_0 consistently even when p_n = o(√n). To enhance model interpretability by excluding irrelevant predictors from the regression, we suggest an l1-regularization algorithm with a quadratic constraint on the magnitude of the least squares residuals to search for a sparse estimate of β_0. Not only can the solution β_n of the l1-regularization recover β_0 consistently, it also produces sufficiently sparse estimators that enable us to select "important" predictors, facilitating model interpretation while maintaining prediction accuracy. Further analysis by simulations and an application to the car price data suggests that the proposed estimation procedures have good finite-sample performance and are computationally efficient.
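The suggested estimator minimizes an l1 penalty subject to a quadratic constraint on the magnitude of the least squares residuals. A hedged CVXPY sketch of that optimization is given below; the synthetic single-index data, the link function, and the residual budget tau are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
n, p, s = 200, 50, 4
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:s] = 1.0 / np.sqrt(s)                          # sparse unit-norm direction
Y = np.sin(X @ beta0) + 0.1 * rng.standard_normal(n)  # assumed single-index link

# Residual budget: slightly above the full least-squares residual sum of squares.
resid = Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]
tau = 1.1 * float(resid @ resid)

beta = cp.Variable(p)
# min ||beta||_1  subject to  ||Y - X beta||_2^2 <= tau
problem = cp.Problem(cp.Minimize(cp.norm1(beta)),
                     [cp.sum_squares(Y - X @ beta) <= tau])
problem.solve()
print("selected predictors:", np.flatnonzero(np.abs(beta.value) > 0.05))
```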
Funding: Partially supported by the NSFC (Nos. 11771347 and 11871392), the Major Projects of the NSFC (Nos. 91730306, 41390450 and 41390454), and the National Science and Technology Major Project (Nos. 2016ZX05024-001-007 and 2017ZX050609).
Abstract: Based on sparse information recovery, we develop a new method for locating multiple multiscale acoustic scatterers. Firstly, using prior information on the scatterers' shapes, we reformulate the location identification problem as a sparse information recovery model, which brings the power of sparse recovery methods to this type of inverse scattering problem. Specifically, the new model can determine whether candidate scatterers are present and, at the same time, the number and location of each existing scatterer. Secondly, as is well known, the core model in sparse information recovery (l0-minimization) is an NP-hard problem. According to the characteristics of the proposed sparse model, we present a new substitute method and give a detailed theoretical analysis of the new substitute model. Relying on the properties of the new model, we construct a basic algorithm and an improved one. Finally, we verify the validity of the proposed method through two numerical experiments.
Abstract: A system receives shocks at successive random points of discrete time, and each shock causes a positive integer-valued random amount of damage that accumulates on the system. The system is subject to failure, and it fails once the total cumulative damage first exceeds a fixed threshold. Upon failure the system must be replaced by a new and identical one, and a cost is incurred. If the system is replaced before failure, a lower cost is incurred. Under some assumptions, we specify a replacement rule that minimizes the long-run (expected) average cost per unit time and possesses the control-limit property. Finally, an algorithm is discussed for a special case.
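A control-limit rule of the kind described above replaces the system as soon as the accumulated damage reaches a chosen limit at or below the failure threshold, and by the renewal reward theorem the long-run average cost equals the expected cost per cycle divided by the expected cycle length. The Monte Carlo sketch below estimates that ratio under assumptions not taken from the paper: geometrically distributed damage per shock and illustrative failure and preventive replacement costs.

```python
import numpy as np

def average_cost(control_limit, failure_threshold, p_damage=0.4,
                 cost_failure=10.0, cost_preventive=3.0,
                 n_cycles=20_000, seed=0):
    # Long-run average cost per unit time of a control-limit policy, estimated
    # by renewal reward: replace as soon as cumulative damage >= control_limit;
    # if the damage has also exceeded failure_threshold, the higher failure cost applies.
    rng = np.random.default_rng(seed)
    total_cost, total_time = 0.0, 0
    for _ in range(n_cycles):
        damage, t = 0, 0
        while damage < control_limit:
            damage += rng.geometric(p_damage)  # assumed integer damage per shock
            t += 1
        total_cost += cost_failure if damage > failure_threshold else cost_preventive
        total_time += t
    return total_cost / total_time

# Compare a few control limits against a fixed failure threshold of 20.
for z in (5, 10, 15, 20):
    print(z, round(average_cost(z, failure_threshold=20), 3))
```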
Abstract: In this paper, we introduce the definition of (m, n)-regularity in Γ-semigroups. We investigate and characterize the (2, 0)-regular class of Γ-semigroups using Green's relations. Extending and generalizing Croisot's theory of decomposition for Γ-semigroups, we introduce and study absorbent and regular absorbent Γ-semigroups. We approach this problem by examining quasi-ideals using Green's relations.