Wavelets, a powerful tool for signal processing, can be used to approximate a target function. To enhance the sparsity of wavelet approximation, a new algorithm was proposed using wavelet-kernel Support Vector Machines (SVMs), which converge to the minimum error with better sparsity. Wavelet functions are first used to construct an admissible SVM kernel according to Mercer's theorem; an SVM with this kernel is then used to approximate the target function with better sparsity than wavelet approximation itself. Simulation results show the feasibility and validity of wavelet-kernel support vector machines.
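As a minimal illustration of the kernel construction (not the paper's implementation), the sketch below builds the common Morlet-type wavelet kernel, a product over input dimensions of the translated mother wavelet h(u) = cos(1.75u)·exp(-u²/2), which is known to satisfy Mercer's condition, and fits a 1-D target with kernel ridge regression as a lightweight stand-in for SVM training. The dilation a, the ridge value, and the sinc test target are illustrative assumptions.

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """Translation-invariant wavelet kernel from the Morlet-type mother
    wavelet h(u) = cos(1.75*u) * exp(-u**2 / 2), multiplied over dimensions."""
    D = (X[:, None, :] - Y[None, :, :]) / a          # pairwise differences
    return (np.cos(1.75 * D) * np.exp(-0.5 * D ** 2)).prod(axis=-1)

rng = np.random.default_rng(0)
X = np.linspace(-3.0, 3.0, 80)[:, None]
y = np.sinc(X).ravel() + 0.01 * rng.standard_normal(80)

# Kernel ridge regression as a lightweight stand-in for SVM training:
K = wavelet_kernel(X, X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(80), y)    # regularized fit
y_hat = K @ alpha
rmse = float(np.sqrt(np.mean((y_hat - y) ** 2)))
```

Because the kernel is a product of one-dimensional translation-invariant factors, the Gram matrix is symmetric with unit diagonal, and the small ridge term keeps the solve well posed even when the spectrum decays quickly.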
One of the open problems in the field of forward uncertainty quantification (UQ) is the ability to form accurate assessments of uncertainty with only incomplete information about the distribution of random inputs. Another challenge is to make efficient use of limited training data for UQ predictions of complex engineering problems, particularly with high-dimensional random parameters. We address these challenges by combining data-driven polynomial chaos expansions with a recently developed preconditioned sparse approximation approach for UQ problems. The first task in this two-step process is to employ the procedure developed in [1] to construct an "arbitrary" polynomial chaos expansion basis using a finite number of statistical moments of the random inputs. The second step is a novel procedure to effect sparse approximation via l1 minimization in order to quantify the forward uncertainty. To enhance the performance of the preconditioned l1 minimization problem, we sample from the so-called induced distribution, instead of using Monte Carlo (MC) sampling from the original, unknown probability measure. We demonstrate on test problems that induced sampling is a competitive and often better choice compared with sampling from asymptotically optimal measures (such as the equilibrium measure) when we have incomplete information about the distribution. We demonstrate the capacity of the proposed induced sampling algorithm via sparse representation with limited data on test functions, and on a Kirchhoff plate bending problem with random Young's modulus.
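The l1-minimization machinery this abstract builds on can be sketched with a plain (unpreconditioned, non-induced) basis pursuit solve: minimize ||c||_1 subject to Ac = b, via the standard linear-programming split c = u - v with u, v >= 0. The matrix, sizes, and SciPy solver choice below are assumptions for illustration only; the paper's method additionally preconditions and samples from the induced distribution.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||c||_1 subject to A c = b, via the LP split c = u - v, u, v >= 0."""
    m, n = A.shape
    cost = np.ones(2 * n)                       # sum(u) + sum(v) = ||c||_1
    A_eq = np.hstack([A, -A])                   # A (u - v) = b
    res = linprog(cost, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(1)
n, m = 40, 20                                   # ambient dim, measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)
c_true = np.zeros(n); c_true[[3, 11, 27]] = [1.5, -2.0, 0.8]
b = A @ c_true                                  # noiseless "solver outputs"
c_hat = basis_pursuit(A, b)
err = float(np.linalg.norm(c_hat - c_true))
```

With 20 Gaussian measurements of a 3-sparse coefficient vector in 40 dimensions, basis pursuit recovers the sparse modes essentially exactly.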
The l1 norm is the tightest convex relaxation of the l0 norm and has been successfully applied to recovering sparse signals. However, for problems with fewer samples than required for accurate l1 recovery, one needs to apply nonconvex penalties such as the lp norm. As one method for solving lp minimization problems, iteratively reweighted l1 minimization updates the weight for each component based on the value of that component at the previous iteration: it assigns large weights to components that are small in magnitude and small weights to components that are large in magnitude. The set of weights is not fixed, which makes the analysis of this method difficult. In this paper, we consider a weighted l1 penalty with a fixed set of weights, where the weights are assigned based on the ranking of all components in magnitude: the smallest weight is assigned to the largest component in magnitude. This new penalty is called nonconvex sorted l1. We then propose two methods for solving nonconvex sorted l1 minimization problems, iteratively reweighted l1 minimization and iterative sorted thresholding, and prove that both converge to a local minimizer of the nonconvex sorted l1 minimization problem. We also show that the two methods generalize iterative support detection and iterative hard thresholding, respectively. Numerical experiments demonstrate the better performance of assigning weights by rank rather than by value.
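The rank-matched thresholding at the heart of iterative sorted thresholding can be sketched in a few lines. Because the largest component in magnitude receives the smallest weight, subtracting the matched thresholds preserves the magnitude ordering, so elementwise soft-thresholding after sorting is well defined. The weight schedule and the denoising objective below (f(x) = 0.5||x - y||², whose unit gradient step lands on y) are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def sorted_l1_threshold(z, step, weights):
    """Soft-threshold each entry of z with the weight matched to its
    magnitude rank: largest entry in magnitude gets the smallest weight."""
    order = np.argsort(-np.abs(z))        # indices from largest to smallest
    w = np.empty_like(z)
    w[order] = weights                    # weights supplied in ascending order
    return np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)

# One iterative-sorted-thresholding step on a toy denoising problem:
rng = np.random.default_rng(0)
y = 0.01 * rng.standard_normal(10)        # small background noise
y[1], y[4] = 3.0, -2.5                    # two genuine components
weights = np.linspace(0.2, 1.0, 10)       # fixed, assigned by rank
x = sorted_l1_threshold(y, 1.0, weights)
```

The two large components survive with light shrinkage (thresholds 0.2 and 0.2 + 0.8/9) while every noise entry falls below its larger rank-matched threshold and is zeroed.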
Considerable attempts have been made at removing the crosstalk noise in simultaneous-source data using the popular K-means Singular Value Decomposition (KSVD) algorithm. Several hybrids of this method have been designed and successfully deployed, but the complex nature of blending noise makes it difficult to manipulate easily. One challenge of the KSVD approach is obtaining an exact KSVD for each data patch, which is believed to yield better output. In this work, we propose a learnable architecture capable of data training, while retaining the essence of KSVD, to deblend simultaneous-source data.
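For readers unfamiliar with the KSVD essence the abstract refers to, here is a sketch of a single K-SVD atom update (the dictionary-update stage only, not the deblending architecture): restrict the residual to the signals that actually use atom k, then replace the atom and its coefficients with the leading singular pair of that residual. The synthetic dictionary and sparse codes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
D = rng.standard_normal((20, 5))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
X = rng.standard_normal((5, 40)) * (rng.random((5, 40)) < 0.3)  # sparse codes
X[2, 0] = 1.5                                  # ensure atom 2 is used somewhere
Y = D @ X + 0.01 * rng.standard_normal((20, 40))  # noisy sparse mixtures

k = 2
users = np.nonzero(X[k])[0]                    # signals whose code uses atom k
err_before = float(np.linalg.norm(Y[:, users] - D @ X[:, users]))

# Residual with atom k's contribution added back, then best rank-1 refit:
E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
U, s, Vt = np.linalg.svd(E, full_matrices=False)
D[:, k] = U[:, 0]                              # updated (unit-norm) atom
X[k, users] = s[0] * Vt[0]                     # matching coefficients

err_after = float(np.linalg.norm(Y[:, users] - D @ X[:, users]))
```

Because the leading singular pair is the best rank-1 approximation of E, the restricted reconstruction error can never increase, which is what makes the per-atom update safe to iterate.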
Sparse recovery algorithms formulate the synthetic aperture radar (SAR) imaging problem in terms of the sparse representation (SR) of a small number of strong scatterers' positions among a much larger number of potential scatterer positions, and provide an effective approach to improving SAR image resolution. Based on the attributed scattering center model, several experiments were performed under different practical considerations to evaluate the performance of five representative SR techniques: sparse Bayesian learning (SBL), fast Bayesian matching pursuit (FBMP), the smoothed l0 norm method (SL0), sparse reconstruction by separable approximation (SpaRSA), and the fast iterative shrinkage-thresholding algorithm (FISTA). The parameter settings of the five SR algorithms were discussed, as were their performances in different situations. Through comparison of the MSE and failure rate in each algorithm's simulations, FBMP and SpaRSA are found suitable for problems in SAR imaging based on the attributed scattering center model. Although SBL is time-consuming, it consistently performs better in terms of failure rate at high SNR.
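Of the five algorithms compared, FISTA is the simplest to state. The sketch below is a generic FISTA for the lasso problem 0.5||Ax - b||² + λ||x||₁ (Beck and Teboulle's scheme), not the SAR-specific setup of the paper; the problem sizes, λ, and iteration count are assumptions.

```python
import numpy as np

def fista(A, b, lam, n_iter=500):
    """FISTA for min 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ z - b)                # gradient step at momentum point
        x_new = z - g / L
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - lam / L, 0.0)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60); x_true[[5, 17, 42]] = [2.0, -1.5, 1.0]
b = A @ x_true                               # noiseless measurements
x_hat = fista(A, b, lam=0.01)
err = float(np.linalg.norm(x_hat - x_true))
```

On this easy 3-sparse instance the iterate lands close to the true scatterer amplitudes; in the paper's comparison the interesting regime is noisy data and closely spaced scatterers, where the algorithms diverge in failure rate.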
In order to solve the problem of the invalidation of thermal parameters and optimal running, we present an efficient soft-sensor approach based on sparse online Gaussian processes (GP), which combines a Bayesian online algorithm with a sequential construction of a relevant subsample of the data to specify the prediction of the GP model. By an appealing parameterization and projection techniques that use the reproducing kernel Hilbert space (RKHS) norm, recursions for the effective parameters and a sparse Gaussian approximation of the posterior process are obtained. The sparse representation of Gaussian processes makes the GP-based soft sensor practical for large datasets and real-time application. The proposed thermal-parameter soft sensor is of importance for the economical running of the power plant.
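The payoff of sparsity in GP prediction can be seen with the simplest sparse scheme, subset-of-data: keep only m representative points and predict from the reduced kernel system. This is a stand-in for the paper's online RKHS-projection construction, and the RBF kernel, length scale, and noise level are assumptions.

```python
import numpy as np

def rbf(X, Y, ell=1.0):
    """Squared-exponential kernel between two point sets."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

rng = np.random.default_rng(3)
X = np.linspace(0.0, 2.0 * np.pi, 200)[:, None]
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(200)

# Subset-of-data "sparse" GP: 20 retained points instead of all 200.
Xm, ym = X[::10], y[::10]
K = rbf(Xm, Xm) + 0.05 ** 2 * np.eye(len(Xm))     # kernel + noise jitter
alpha = np.linalg.solve(K, ym)
mean = rbf(X, Xm) @ alpha                          # predictive mean everywhere

rmse = float(np.sqrt(np.mean((mean - np.sin(X).ravel()) ** 2)))
```

The predictive cost drops from O(n³) to O(m³) with m = 20, while the mean still tracks the underlying signal closely; the paper's online subsample selection chooses the retained points adaptively rather than by fixed striding.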
In this paper, we propose a sparse overcomplete image approximation method based on the ideas of overcomplete log-Gabor wavelets, mean shift, and energy concentration. The proposed method selects the necessary wavelet coefficients with a mean-shift-based algorithm and concentrates energy on the selected coefficients. It can sparsely approximate the original image, and converges faster than the existing local-competition-based method. We then propose a new compression scheme based on this approximation method. The scheme has compression performance similar to JPEG 2000, and the images decoded with it appear more pleasant to the human eye than those from JPEG 2000.
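The mean-shift idea used for coefficient selection is easiest to see in one dimension: repeatedly replace a point by the Gaussian-weighted mean of the samples around it, so the point climbs the kernel density estimate to a local mode. The sketch below shows only that core update on synthetic data (the bandwidth and cluster layout are assumptions), not the paper's coefficient-selection pipeline.

```python
import numpy as np

def mean_shift_1d(x, data, bw=0.5, iters=50):
    """Iterate x <- Gaussian-weighted mean of samples near x (mode seeking)."""
    for _ in range(iters):
        w = np.exp(-0.5 * ((data - x) / bw) ** 2)
        x = float(np.sum(w * data) / np.sum(w))
    return x

# Two well-separated clusters; starts on either side converge to the
# nearer density mode.
rng = np.random.default_rng(8)
samples = np.concatenate([rng.normal(0.0, 0.3, 200),
                          rng.normal(5.0, 0.3, 200)])
mode_left = mean_shift_1d(1.0, samples)
mode_right = mean_shift_1d(4.0, samples)
```

In the paper's setting the "samples" are wavelet coefficient magnitudes, and the modes found this way indicate which coefficients are necessary for the approximation.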
In this paper, motivated by results in compressive phase retrieval, we study the robustness properties of dimensionality reduction with Gaussian random matrices having arbitrarily erased rows. We first study the robustness against erasure of the almost-norm-preservation property of Gaussian random matrices by obtaining the optimal estimate of the erasure ratio for a small given norm distortion rate. As a consequence, we establish the robustness of the Johnson-Lindenstrauss lemma and of the restricted isometry property with corruption for Gaussian random matrices. Secondly, we obtain a sharp estimate for the optimal lower and upper bounds of the norm distortion rates of Gaussian random matrices under a given erasure ratio. This allows us to establish the strong restricted isometry property with almost optimal restricted isometry property (RIP) constants, which plays a central role in the study of phaseless compressed sensing. As a byproduct, we also establish the robustness of Gaussian random finite frames under erasure.
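The phenomenon being quantified is easy to check numerically: a Gaussian map scaled by 1/√m nearly preserves the norm of a fixed vector, and erasing a small fraction of rows (with rescaling of the survivors) keeps the distortion small. This is a sanity-check simulation under assumed sizes and erasure ratio, not the paper's sharp bounds.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 500, 400
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                      # fixed unit vector
A = rng.standard_normal((m, n)) / np.sqrt(m)

norm_full = float(np.linalg.norm(A @ x))    # should concentrate near 1

keep = int(0.9 * m)                         # erase 10% of the rows
A_erased = A[:keep] * np.sqrt(m / keep)     # rescale surviving rows
norm_erased = float(np.linalg.norm(A_erased @ x))
```

Here the erased rows are the last 10%; the paper's point is that comparable norm preservation holds for arbitrary (even adversarial) erasure patterns, with optimal trade-offs between erasure ratio and distortion rate.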
The method of data-driven tight frames has been shown to be very useful in image restoration problems. In this paper we extend this important technique by incorporating L_(1) data fidelity into the original data-driven model, for removing impulsive noise, a very common and basic type of noise in image data. The model contains three variables and can be solved through an efficient iterative alternating minimization algorithm in a patch implementation, where the tight frame is dynamically updated. It constructs a tight frame system from the input corrupted image adaptively, and then removes impulsive noise with the derived system. We also show that the sequence generated by our algorithm converges globally to a stationary point of the optimization model. Numerical experiments and comparisons demonstrate that our approach performs well for various kinds of images. This benefits from its data-driven nature: the tight frames learned from input images adaptively capture richer image structures.
In quantitative susceptibility mapping (QSM), background field removal is an essential data acquisition step because it has a significant effect on restoration quality, as it generates a harmonic incompatibility in the measured local field data. Even though the sparsity-based first-generation harmonic incompatibility removal (1GHIRE) model has achieved performance gains over traditional approaches, it must be further improved, as there is a basis mismatch underlying the numerical solution of Poisson's equation for background removal. In this paper, we propose the second-generation harmonic incompatibility removal (2GHIRE) model to reduce the basis mismatch, inspired by the balanced approach in tight-frame-based image restoration. Experimental results show the superiority of the proposed 2GHIRE model in both restoration quality and computational efficiency.
This paper presents a new method for estimating the injection state and power factor of distributed energy resources (DERs) using voltage magnitude measurements only. A physics-based linear model is used to develop estimation heuristics for the net injections of real and reactive power at a set of buses under study, allowing a distribution engineer to form a robust estimate of the operating state and power factor of the DERs at those buses. The method demonstrates and exploits a mathematical distinction between the voltage sensitivity signatures of real and reactive power injections for a fixed power system model. Case studies on various test feeders for a model of the distribution circuit, together with statistical analyses, demonstrate the validity of the estimation method. The results can be used to improve the limited information about inverter parameters and operating state during renewable planning, which helps mitigate the uncertainty inherent in their integration.
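A hypothetical LinDistFlow-style sketch of the estimation idea (an assumed linear model, not the paper's exact formulation): voltage-magnitude deviations respond linearly to injections, dv ≈ R p + X q, with distinct sensitivity matrices for real and reactive power. If a power factor is assumed (q = t·p), the injection state follows from voltage measurements alone. The matrices R and X below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8                                         # buses under study
# Synthetic, diagonally dominant sensitivity matrices (illustrative only):
R = np.abs(rng.standard_normal((n, n))); R = 0.5 * (R + R.T) + n * np.eye(n)
X = np.abs(rng.standard_normal((n, n))); X = 0.5 * (X + X.T) + 2 * n * np.eye(n)

p_true = rng.uniform(0.1, 1.0, n)             # real injections
q_true = 0.3 * p_true                         # power factor about 0.96
dv = R @ p_true + X @ q_true                  # "measured" voltage deviations

t = 0.3                                       # assumed tan(phi) for q = t * p
p_hat = np.linalg.solve(R + t * X, dv)        # injection estimate from voltages
err = float(np.max(np.abs(p_hat - p_true)))
```

Because R and X enter dv with distinct signatures, an incorrect power-factor assumption leaves a structured residual; the paper turns that distinction into a joint estimate of the injection state and the power factor itself.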
In this paper, a novel approach for quantifying the parametric uncertainty associated with a stochastic problem output is presented. As with Monte Carlo and stochastic collocation methods, only point-wise evaluations of the stochastic output response surface are required, allowing the use of legacy deterministic codes and precluding the need for any dedicated stochastic code to solve the uncertain problem of interest. The new approach differs from these standard methods in that it is based on ideas directly linked to the recently developed theory of compressed sensing. The technique retrieves the modes that contribute most significantly to the approximation of the solution using a minimal amount of information. The generation of this information, via many solver calls, is almost always the bottleneck of an uncertainty quantification procedure. If the stochastic model output has a reasonably compressible representation in the retained approximation basis, the proposed method makes the best use of the available information and retrieves the dominant modes. Uncertainty quantification of the solution of both a 2-D and an 8-D stochastic shallow water problem is used to demonstrate the significant performance improvement of the new method, which requires up to several orders of magnitude fewer solver calls than the usual sparse-grid-based polynomial chaos (Smolyak scheme) to achieve comparable approximation accuracy.
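The non-intrusive, point-wise-evaluation idea can be sketched as follows: treat the solver as a black box, evaluate it at random parameter samples, and fit polynomial chaos coefficients from those evaluations only. The stand-in "solver" and plain least-squares fit below are assumptions; in the paper's regime of scarce samples, the least-squares step would be replaced by an l1 (compressed sensing) solver to recover the dominant modes.

```python
import numpy as np
from numpy.polynomial import legendre

def solver(xi):
    """Stand-in 'legacy code': output is a sparse combination of Legendre
    modes of the random parameter, here P2(xi) + 0.5 * P5(xi)."""
    return legendre.legval(xi, [0, 0, 1.0, 0, 0, 0.5])

rng = np.random.default_rng(9)
xi = rng.uniform(-1.0, 1.0, 40)                   # 40 solver calls
V = legendre.legvander(xi, 9)                     # 10 candidate chaos modes
coef, *_ = np.linalg.lstsq(V, solver(xi), rcond=None)
```

The fitted coefficient vector is itself sparse, recovering the two active modes and leaving the other eight at numerical zero, which is exactly the compressibility the compressed-sensing variant exploits when far fewer than 40 evaluations are affordable.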
To improve the performance of sound source localization based on distributed microphone arrays in noisy and reverberant environments, a sound source localization method is proposed. The method exploits the inherent spatial sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. A two-step discrete cosine transform (DCT)-based feature extraction covers both the short-time and long-time properties of the signal and reduces the dimensions of the sparse model. Moreover, an online dictionary learning (DL) method dynamically adjusts the dictionary to match changes in the audio signals, so that the sparse solution better represents the location estimates. In addition, we propose an improved approximate l0 norm minimization algorithm to enhance reconstruction performance for sparse signals at low signal-to-noise ratio (SNR). The effectiveness of the proposed scheme is demonstrated by simulation results in which the locations of multiple sources are obtained in noisy and reverberant conditions.
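The baseline that the improved algorithm builds on, smoothed-l0 (SL0) minimization, can be sketched as follows: replace the l0 count with a Gaussian surrogate, shrink along its gradient, project back onto the feasible set {x : Ax = b}, and anneal the smoothing width downward. The step size, annealing schedule, and problem sizes are illustrative assumptions, and this is the generic SL0 idea rather than the paper's improved variant.

```python
import numpy as np

def sl0(A, b, sigma_min=1e-3, decay=0.6, inner=20, mu=1.0):
    """Smoothed-l0 sketch: Gaussian surrogate for the l0 norm, shrinkage
    steps, and projection back onto the affine set {x : A x = b}."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                               # minimum-l2 feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            x = x - mu * x * np.exp(-x ** 2 / (2.0 * sigma ** 2))  # shrink
            x = x - A_pinv @ (A @ x - b)         # restore feasibility
        sigma *= decay                           # anneal the smoothing width
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((25, 50))                # measurement/dictionary matrix
x_true = np.zeros(50); x_true[[4, 19, 33]] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = sl0(A, b)
feas = float(np.linalg.norm(A @ x_hat - b))
err = float(np.linalg.norm(x_hat - x_true))
```

In the localization setting, the nonzero entries of the recovered vector index candidate source positions in the learned dictionary; the annealed surrogate is what makes the l0-like objective tractable at low SNR.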
Funding acknowledgments

Uncertainty quantification with data-driven polynomial chaos: supported by the NSF of China (No. 11671265); partially supported by NSF DMS-1848508; partially supported by the NSF of China (grant numbers 11688101, 11571351, and 11731006), the Science Challenge Project (No. TZ2018001), and the Youth Innovation Promotion Association (CAS); supported by the National Science Foundation under Grant No. DMS-1439786 and the Simons Foundation Grant No. 50736.

Nonconvex sorted l1 minimization: partially supported by the European Research Council, the National Natural Science Foundation of China (No. 11201079), the Fundamental Research Funds for the Central Universities of China (Nos. 20520133238 and 20520131169), and the National Science Foundation of the United States (Nos. DMS-0748839 and DMS-1317602).

Deblending of simultaneous-source data: supported by the State Key Research and Development Program of China (No. 2018YFC0310104) and the National Natural Science Foundation of China (Nos. 41974163, 4213080).

SAR imaging via sparse recovery: Project 61171133 supported by the National Natural Science Foundation of China; Project 11JJ1010 supported by the Natural Science Fund for Distinguished Young Scholars of Hunan Province, China; Project 61101182 supported by the National Natural Science Foundation for Young Scientists of China.

Robustness of Gaussian random matrices under erasure: supported by the Natural Sciences and Engineering Research Council of Canada (Grant No. 05865); Zhiqiang Xu was supported by the National Natural Science Foundation of China (Grant Nos. 11422113, 91630203, 11021101 and 11331012) and the National Basic Research Program of China (973 Program) (Grant No. 2015CB856000).

Data-driven tight frames for impulsive noise removal: supported by NSF of China grants 11531013 and 11871035.

Harmonic incompatibility removal in QSM: the research of the first author is supported in part by the NSFC Youth Program 11901338; the second author by Hong Kong Research Grants Council (HKRGC) GRF 16306317 and 16309219; the third author by the NSFC Youth Program 11901436 and the Fundamental Research Program of the Science and Technology Commission of Shanghai Municipality (20JC1413500); the fourth author by NSFC grant 11831002; the fifth author by the National Natural Science Foundation of China Youth Program grant 11801088 and the Shanghai Sailing Program (18YF1401600).

Estimation of DER injection state and power factor: based upon work supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) under Solar Energy Technologies Office (SETO) Agreement Number 34226. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DENA0003525.

Compressed-sensing-based uncertainty quantification: supported by the French National Agency for Research (ANR) under projects ASRMEI JC08 #375619 and CORMORED ANR-08-BLAN-0115, and by GdR MoMaS.

Sound source localization with distributed microphone arrays: supported by the Doctoral Program of Higher Education of China (20133207120007), the National Natural Science Foundation of China (61405094), the Open Research Fund of the Jiangsu Key Laboratory of Meteorological Observation and Information Processing (KDXS1408), and the Science and Technology Support Project of Jiangsu Province-Industry (BE2014139).