In view of the composition analysis and identification of ancient glass products, L1 regularization, K-Means cluster analysis, the elbow rule and other methods were comprehensively used to build logistic regression, cluster analysis and hyper-parameter test models, and SPSS, Python and other tools were used to obtain the classification rules of glass products under different fluxes, the sub-classification under different chemical compositions, and a test and rationality analysis of the hyper-parameter K. This research can provide theoretical support for the protection and restoration of ancient glass relics.
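As a hedged illustration of the clustering step described above, the following sketch uses scikit-learn to run K-Means for a range of candidate K values and applies the elbow rule to the within-cluster sum of squares. It is not the authors' code; the feature matrix X_chem, standing for a table of chemical-composition percentages, is a hypothetical placeholder.

```python
# Minimal sketch: K-Means plus the elbow rule (assumed workflow, not the paper's code).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

X_chem = np.random.rand(60, 8)          # hypothetical composition table (60 samples, 8 oxides)
X = StandardScaler().fit_transform(X_chem)

inertias = []
for k in range(1, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias.append(km.inertia_)        # within-cluster sum of squares for this K

# Elbow rule: choose the K after which the decrease in inertia levels off.
print("inertia per K:", np.round(inertias, 2))
print("marginal decrease:", np.round(np.diff(inertias), 2))
```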
We discuss freezing of quantum imaginarity based on the ℓ_(1)-norm. Several properties of an ℓ_(1)-norm-based quantity of imaginarity are revealed. For a qubit (2-dimensional) system, we characterize the structure of real quantum operations that allow for freezing the quantity of imaginarity of any state. Furthermore, we characterize the structure of local real operations which can freeze the quantity of imaginarity of a class of N-qubit quantum states.
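For orientation only, the sketch below evaluates one common ℓ1-type quantifier of imaginarity, namely the sum of the absolute imaginary parts of the density-matrix entries in a fixed basis. This definition is an assumption here; the paper's exact quantity and its freezing conditions may differ.

```python
# Hedged sketch: a common l1-type imaginarity quantifier (assumed definition).
import numpy as np

def l1_imaginarity(rho: np.ndarray) -> float:
    """Sum of |Im(rho_jk)| over all entries of the density matrix."""
    return float(np.abs(rho.imag).sum())

# Example qubit state with a complex off-diagonal element.
rho = np.array([[0.6, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.4]])
print(l1_imaginarity(rho))   # 0.2 = |Im(0.2-0.1j)| + |Im(0.2+0.1j)|
```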
By defining fuzzy-valued simple functions and giving L1(μ) approximations of fuzzy-valued integrably bounded functions by such simple functions, the paper analyses, in the L1(μ)-norm, the approximation capability of four-layer feedforward regular fuzzy neural networks for a fuzzy-valued integrably bounded function F : R^n → FcO(R). That is, if the transfer function σ : R → R is non-polynomial and integrable on each finite interval, F may be approximated in the L1(μ)-norm by such fuzzy-valued network functions to any degree of accuracy. Finally, some real examples demonstrate the conclusions.
With the extensive application of large-scale array antennas, the increasing number of array elements leads to an increasing dimension of the received signals, making it difficult to meet the real-time requirement of direction of arrival (DOA) estimation due to the computational complexity of the algorithms. Traditional subspace algorithms require estimation of the covariance matrix, which has high computational complexity and is prone to producing spurious peaks. In order to reduce the computational complexity of DOA estimation algorithms and improve their estimation accuracy for large arrays, this paper proposes a DOA estimation method based on the Krylov subspace and a weighted l_(1)-norm. The method uses multistage Wiener filter (MSWF) iterations to solve for a basis of the Krylov subspace as an estimate of the signal subspace, then uses a measurement matrix to reduce the dimensionality of the signal-subspace observation, constructs a weighting matrix, and combines sparse reconstruction to establish a convex optimization function based on the residual sum of squares and the weighted l_(1)-norm to solve for the target DOA. Simulation results show that the proposed method has high resolution under large-array conditions, effectively suppresses spurious peaks, reduces computational complexity, and has good robustness in low signal-to-noise ratio (SNR) environments.
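To make the final optimization step concrete, here is a hedged numpy sketch of weighted ℓ1-regularized least squares solved by iterative soft-thresholding (ISTA). The dictionary A (a steering-vector grid), the weights w and the solver choice are hypothetical stand-ins for the measurement matrix, weighting matrix and convex program described above, not the paper's construction.

```python
# Sketch: weighted l1-regularized least squares via ISTA (assumed formulation).
import numpy as np

def weighted_l1_ista(A, y, w, lam=0.5, n_iter=500):
    """Minimize 0.5*||A x - y||_2^2 + lam * sum_i(w_i * |x_i|)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the quadratic term's gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        z = x - A.conj().T @ (A @ x - y) / L
        x = np.exp(1j * np.angle(z)) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

# Toy example: 8-element array, 60-point angle grid, two sources.
rng = np.random.default_rng(0)
grid = np.linspace(-np.pi / 3, np.pi / 3, 60)
A = np.exp(1j * np.pi * np.outer(np.arange(8), np.sin(grid)))
x_true = np.zeros(60, dtype=complex)
x_true[[20, 35]] = [1.0, 0.8]
y = A @ x_true + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
x_hat = weighted_l1_ista(A, y, w=np.ones(60))
print(np.argsort(np.abs(x_hat))[-2:])      # indices of the two largest recovered peaks
```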
A neural network with a feed-forward topology and a Bayesian regularization training algorithm is used to predict the austenite formation temperatures (Ac1 and Ac3) from the percentages of alloying elements in the chemical composition of steel. The database used here involves a large variety of steel types such as structural steels, stainless steels, rail steels, spring steels, high-temperature creep-resisting steels and tool steels. Scatter diagrams and the mean relative error (MRE) statistical criterion are used to compare the performance of the developed neural network with the results of Andrews' empirical equations and with a feed-forward neural network trained by the "gradient descent with momentum" algorithm. The results showed that the Bayesian regularization neural network has the best performance. Also, due to the satisfactory results of the developed neural network, it was used to investigate the effect of the chemical composition on the Ac1 and Ac3 temperatures. The results are in accordance with materials science theories.
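The abstract above relies on Bayesian regularization, which scikit-learn does not provide for neural networks; as a loose, clearly substituted stand-in, the sketch below fits a small feed-forward regressor with a fixed L2 weight penalty (the alpha parameter) and reports the mean relative error. The composition features and the synthetic target are hypothetical.

```python
# Loose stand-in sketch: feed-forward regression with a fixed L2 weight penalty
# (not MacKay-style Bayesian regularization, which adapts the penalty automatically).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0, 2, size=(200, 6))      # hypothetical wt.% of C, Mn, Si, Cr, Ni, Mo
y = 720 - 15 * X[:, 0] + 10 * X[:, 3] + rng.normal(0, 5, 200)   # synthetic Ac1-like target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-2, max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)
mre = np.mean(np.abs((net.predict(X_te) - y_te) / y_te))        # mean relative error (MRE)
print(f"MRE on held-out data: {mre:.3%}")
```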
Bioluminescence tomography (BLT) is an important noninvasive optical molecular imaging modality in preclinical research. To improve image quality, reconstruction algorithms have to deal with the inherent ill-posedness of the BLT inverse problem. The sparse spatial distribution of bioluminescent sources has been widely exploited in BLT, and many L1-regularized methods have been investigated due to the sparsity-inducing properties of the L1 norm. In this paper, we present a reconstruction method based on L_(1/2) regularization to enhance the sparsity of the BLT solution, and we solve the nonconvex L_(1/2)-norm problem by converting it into a series of weighted L1 homotopy minimization problems with iteratively updated weights. To assess the performance of the proposed reconstruction algorithm, simulations on a heterogeneous mouse model are designed to compare it with three representative sparse reconstruction algorithms, namely the weighted interior-point, L1 homotopy, and Stagewise Orthogonal Matching Pursuit algorithms. Simulation results show that the proposed method yields stable reconstructions under different noise levels. Quantitative comparison results demonstrate that the proposed algorithm outperforms the competitor algorithms in location accuracy, multiple-source resolving and image quality.
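The reweighting idea can be sketched generically: each outer pass solves a weighted L1 problem whose weights come from the previous solution, here w_i = 1/(sqrt(|x_i|) + eps). This is a plain iteratively reweighted L1 scheme under that assumed weight rule, with an ISTA inner solver, not the paper's homotopy-based algorithm.

```python
# Generic sketch: L_{1/2}-style recovery via iteratively reweighted L1 (assumed weight rule).
import numpy as np

def weighted_ista(A, y, w, lam, n_iter=1000):
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

def irl1_half(A, y, lam=0.5, outer=5, eps=1e-3):
    w = np.ones(A.shape[1])
    for _ in range(outer):
        x = weighted_ista(A, y, w, lam)
        w = 1.0 / (np.sqrt(np.abs(x)) + eps)   # reweighting aimed at the L_{1/2} penalty
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 120))
x_true = np.zeros(120)
x_true[[10, 57, 99]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = irl1_half(A, y)
print(np.nonzero(np.abs(x_hat) > 0.5)[0])     # indices with large recovered amplitude
```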
Tomographic synthetic aperture radar (TomoSAR) imaging exploits antenna array measurements taken at different elevation apertures to recover the reflectivity function along the elevation direction. In recent years, for sparse elevation distributions, compressive sensing (CS) has been developed as a favorable technique for high-resolution elevation reconstruction in TomoSAR by solving an L_(1) regularization problem. However, because the elevation distribution in forested areas is nonsparse, if we want to use CS in the recovery, some basis, such as a wavelet basis, should be exploited to sparsely represent the elevation reflectivity function. This paper presents a novel wavelet-based L_(1/2) regularization CS-TomoSAR imaging method for forested areas. In the proposed method, we first construct a wavelet basis that can sparsely represent the elevation reflectivity function of the forested area, and then reconstruct the elevation distribution by using the L_(1/2) regularization technique. Compared to wavelet-based L_(1) regularization TomoSAR imaging, the proposed method can improve the quality of the recovered elevation efficiently.
On a compact Riemann surface X with finitely many punctures P_(1),…,P_(k), we define toric curves as multivalued, totally unramified holomorphic maps to P^(n) with monodromy in a maximal torus of PSU(n+1). Toric solutions to SU(n+1) Toda systems on X\{P_(1),…,P_(k)} are recognized by the associated toric curves. We introduce character n-ensembles as tuples of meromorphic one-forms with simple poles and purely imaginary periods, generating toric curves on X minus finitely many points. On X, we establish a correspondence between character n-ensembles and toric solutions to the SU(n+1) system with finitely many cone singularities. Our approach not only broadens the seminal solutions with two cone singularities on the Riemann sphere, as classified by Jost-Wang (Int. Math. Res. Not., 2002, (6): 277-290) and Lin-Wei-Ye (Invent. Math., 2012, 190(1): 169-207), but also advances beyond the limits of Lin-Yang-Zhong's existence theorems (J. Differential Geom., 2020, 114(2): 337-391) by introducing a new solution class.
Bayesian empirical likelihood is a semiparametric method that combines parametric priors and nonparametric likelihoods; that is, it replaces the parametric likelihood function in Bayes' theorem with a nonparametric empirical likelihood function, which can be used without assuming the distribution of the data. It can effectively avoid the problems caused by model misspecification. In variable selection based on Bayesian empirical likelihood, the penalty term is introduced into the model in the form of a parameter prior. In this paper, we propose a novel variable selection method: L_(1/2) regularization based on Bayesian empirical likelihood. The L_(1/2) penalty is introduced into the model through a scale mixture of uniforms representation of the generalized Gaussian prior, and the posterior distribution is then sampled using an MCMC method. Simulations demonstrate that the proposed method can have better predictive ability when the error violates the zero-mean normality assumption of the standard parametric model, and can perform variable selection.
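As a simplified illustration of posterior sampling under an L_(1/2)-type penalty, the sketch below runs a plain random-walk Metropolis sampler on a Gaussian-likelihood regression whose log-prior is proportional to -λ·Σ|β_j|^(1/2). It bypasses both the empirical likelihood and the scale-mixture-of-uniforms construction used in the paper, so it is only a conceptual stand-in.

```python
# Conceptual sketch: random-walk Metropolis for a regression posterior with an
# L_{1/2}-type log-prior (Gaussian likelihood substituted for the empirical likelihood).
import numpy as np

rng = np.random.default_rng(0)
n, p, lam, sigma = 80, 5, 2.0, 1.0
X = rng.standard_normal((n, p))
beta_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0])
y = X @ beta_true + rng.standard_normal(n)

def log_post(beta):
    resid = y - X @ beta
    return -0.5 * resid @ resid / sigma**2 - lam * np.sum(np.sqrt(np.abs(beta)))

beta, samples = np.zeros(p), []
for it in range(20000):
    prop = beta + 0.05 * rng.standard_normal(p)          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(beta):
        beta = prop                                       # accept
    if it >= 5000:                                        # discard burn-in
        samples.append(beta.copy())

print(np.mean(samples, axis=0).round(2))  # posterior means of the coefficients
```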
In this paper, polynomial fuzzy neural network classifiers (PFNNCs) are proposed by means of density fuzzy c-means and L2-norm regularization. The overall design of PFNNCs is realized by means of fuzzy rules that come in the form of three parts, namely a premise part, a consequence part and an aggregation part. The premise part is developed by density fuzzy c-means, which helps determine the apex parameters of the membership functions, while the consequence part is realized by means of two types of polynomials, linear and quadratic. L2-norm regularization, which can alleviate the overfitting problem, is exploited to estimate the parameters of the polynomials that constitute the aggregation part. Experimental results on several data sets demonstrate that the proposed classifiers show higher classification accuracy in comparison with some other classifiers reported in the literature.
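The L2-regularized estimation of the polynomial parameters amounts to ridge regression: for a design matrix Φ of polynomial terms, θ = (ΦᵀΦ + λI)⁻¹Φᵀy. The sketch below shows this closed form on a hypothetical quadratic design matrix; it omits the fuzzy-rule weighting used by the actual classifiers.

```python
# Sketch: closed-form L2-regularized (ridge) estimate of quadratic polynomial parameters.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(100, 2))
Phi = np.column_stack([np.ones(100), x, x**2, x[:, :1] * x[:, 1:]])  # 1, x1, x2, x1^2, x2^2, x1*x2
y = 0.5 + 2 * x[:, 0] - x[:, 1] ** 2 + 0.1 * rng.standard_normal(100)

lam = 1e-2
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
print(theta.round(2))   # regularized coefficients of the quadratic polynomial
```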
In this paper, we build upon the estimating primaries by sparse inversion (EPSI) method. We use the 3D curvelet transform and modify the EPSI method into a sparse inversion with biconvex optimization and L1-norm regularization, and we use alternating optimization to directly estimate the primary reflection coefficients and the source wavelet. The 3D curvelet transform is used as a sparseness constraint when inverting the primary reflection coefficients, which avoids the prediction-subtraction process of the surface-related multiple elimination (SRME) method. The proposed method not only reduces the damage to the effective waves but also improves the elimination of multiples. It is also a wave-equation-based method for the elimination of surface multiple reflections, which effectively removes surface multiples under complex submarine conditions.
Seismic data regularization is an important preprocessing step in seismic signal processing. Traditional seismic acquisition methods follow the Shannon–Nyquist sampling theorem, whereas compressive sensing (CS) provides a fundamentally new paradigm to overcome limitations in data acquisition. Besides the sparse representation of the seismic signal in some transform domain and the L1-norm reconstruction algorithm, the seismic data regularization quality of CS-based techniques strongly depends on the random undersampling scheme. For 2D seismic data, discrete-uniform-based methods have been investigated, where seismic traces are randomly sampled with equal probability. However, in theory and practice, seismic traces should be sampled with different probabilities to satisfy the assumptions of CS. Therefore, designing new undersampling schemes is imperative. We propose a Bernoulli-based random undersampling scheme and its jittered version to determine the regular traces that are randomly sampled with different probabilities, while both schemes comply with the Bernoulli process distribution. We performed experiments using the Fourier and curvelet transforms and the spectral projected gradient reconstruction algorithm for the L1-norm (SPGL1), with ten different random seeds. According to the signal-to-noise ratio (SNR) between the original and reconstructed seismic data, the detailed experimental results on 2D numerical and physical simulation data show that the proposed schemes perform overall better than the discrete uniform schemes.
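The sampling step can be sketched as one independent Bernoulli decision per trace. The per-trace probabilities p below and the block-wise jittered variant, which guarantees at least one sampled trace per block, are illustrative assumptions rather than the paper's exact schemes.

```python
# Sketch: Bernoulli-based trace undersampling and a simple jittered variant (assumed details).
import numpy as np

rng = np.random.default_rng(0)
n_traces = 200
p = np.clip(0.3 + 0.2 * np.sin(np.linspace(0, np.pi, n_traces)), 0, 1)  # non-uniform probabilities

# Plain Bernoulli scheme: keep trace i with probability p[i].
mask_bernoulli = rng.uniform(size=n_traces) < p

# Jittered variant: within each block of traces, guarantee at least one sampled trace.
block = 10
mask_jittered = mask_bernoulli.copy()
for start in range(0, n_traces, block):
    sl = slice(start, start + block)
    if not mask_jittered[sl].any():
        mask_jittered[start + rng.integers(0, block)] = True

print(mask_bernoulli.sum(), mask_jittered.sum())   # number of traces kept by each scheme
```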
The L1-norm method is one of the widely used matching filters for adaptive multiple subtraction. When the primaries and multiples are mixed together, the L1-norm method might damage the primaries, leading to poor lateral continuity. In this paper, we propose a constrained L1-norm method for adaptive multiple subtraction by introducing a lateral continuity constraint on the estimated primaries. We measure the lateral continuity using prediction-error filters (PEF). We illustrate our method with the synthetic Pluto dataset. The results show that the constrained L1-norm method can simultaneously attenuate the multiples and preserve the primaries.
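For context, an unconstrained L1-norm matching filter can be sketched with iteratively reweighted least squares (IRLS): the filter f minimizing ||d - Mf||_1 is approached by repeatedly solving a weighted least-squares problem with weights 1/(|r| + eps). The PEF-based lateral-continuity constraint of the paper is deliberately omitted, so this is only the baseline that the paper improves on.

```python
# Sketch: L1-norm matching filter via IRLS (baseline only; no lateral-continuity constraint).
import numpy as np

def l1_matching_filter(d, m, flen=7, n_iter=20, eps=1e-6):
    """Find filter f so that filtering the multiple model m matches d in the L1 sense."""
    M = np.column_stack([np.roll(m, k) for k in range(flen)])   # shifted copies of m
    f = np.zeros(flen)
    for _ in range(n_iter):
        r = d - M @ f
        w = 1.0 / (np.abs(r) + eps)              # IRLS weights approximating the L1 norm
        Mw = M * w[:, None]
        f = np.linalg.solve(M.T @ Mw, Mw.T @ d)  # weighted least-squares update
    return f

rng = np.random.default_rng(0)
m = rng.standard_normal(200)                                # predicted multiples
d = 0.8 * np.roll(m, 2) + 0.05 * rng.standard_normal(200)   # data dominated by shifted, scaled multiples
print(l1_matching_filter(d, m).round(2))                    # filter peaks near lag 2 with value ~0.8
```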
Based on an exact penalty function, a new neural network for solving the L1-norm optimization problem is proposed. In comparison with Kennedy and Chua's network (1988), it has better properties. Based on Bandler's fault location method (1982), a new nonlinearly constrained L1-norm problem is developed. It can be solved in less computing time through a single optimization process. The proposed neural network can be used to solve the analog diagnosis L1 problem. The validity of the proposed neural networks and of the L1 fault location method is illustrated by extensive computer simulations.
We propose an ℓ_(1) regularized method for numerical differentiation using empirical eigenfunctions. Compared with traditional methods for numerical differentiation, the output of our method can be considered directly as the derivative of the underlying function. Moreover, our method can produce sparse representations with respect to the empirical eigenfunctions. Numerical results show that our method is quite effective.
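A stripped-down version of the idea: expand the unknown derivative in a basis, relate its coefficients to the noisy samples through integration, and solve an ℓ1-regularized least-squares problem. The cosine basis below stands in for the empirical eigenfunctions of the paper, so the sketch conveys only the structure of the approach.

```python
# Sketch: l1-regularized numerical differentiation, with a cosine basis standing in
# for empirical eigenfunctions (structural illustration only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.01 * rng.standard_normal(x.size)   # noisy samples of f

# Model f'(t) = sum_k c_k cos(k*pi*t); then f(x) - f(0) = sum_k c_k sin(k*pi*x) / (k*pi).
K = 30
A = np.column_stack([np.sin(k * np.pi * x) / (k * np.pi) for k in range(1, K + 1)])
coef = Lasso(alpha=1e-4, max_iter=50000).fit(A, y - y[0]).coef_   # sparse basis coefficients

deriv = np.column_stack([np.cos(k * np.pi * x) for k in range(1, K + 1)]) @ coef
print(np.max(np.abs(deriv - 2 * np.pi * np.cos(2 * np.pi * x))))  # error against the true derivative
```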
Funding (freezing of quantum imaginarity): supported by the National Natural Science Foundation of China (Grant No. 12271325) and the Natural Science Basic Research Plan in Shaanxi Province of China (Grant No. 2020JM-294).
Funding (fuzzy neural network approximation): supported by the National Natural Science Foundation of China (No. 69872039).
Funding (Krylov-subspace DOA estimation): supported by the National Basic Research Program of China.
Funding (austenite formation temperature prediction): financial support of the Mechanical Engineering Center of Excellence at Roudbar Azad University.
Funding (L_(1/2)-regularized bioluminescence tomography): supported by the National Natural Science Foundation of China (Nos. 61401264 and 11574192), the Natural Science Research Plan Program in Shaanxi Province of China (No. 2015JM6322), and the Fundamental Research Funds for the Central Universities (No. GK201603025).
Funding (wavelet-based CS-TomoSAR imaging): supported in part by the Fundamental Research Funds for the Central Universities (NE2020004), the National Natural Science Foundation of China (61901213), the Natural Science Foundation of Jiangsu Province (BK20190397), the Aeronautical Science Foundation of China (201920052001), the Young Science and Technology Talent Support Project of the Jiangsu Science and Technology Association, and the Foundation of the Graduate Innovation Center in Nanjing University of Aeronautics and Astronautics (kfjj20200419).
Funding (toric curves and Toda systems): supported by the National Natural Science Foundation of China (11931009, 12271495, 11971450 and 12071449), the Anhui Initiative in Quantum Information Technologies (AHY150200), and the Project of Stable Support for Youth Team in Basic Research Field, Chinese Academy of Sciences (YSBR-001).
Funding (polynomial fuzzy neural network classifiers): supported in part by the National Natural Science Foundation of China under Grant 61673295, the Natural Science Foundation of Tianjin under Grant 18JCYBJC85200, and the National College Students' Innovation and Entrepreneurship Project under Grant 201710060041.
Funding (curvelet-based EPSI): supported by the National Science and Technology Major Project (No. 2011ZX05023-005-008).
Funding (Bernoulli-based seismic data regularization): financially supported by the 2011 Prospective Research Project of SINOPEC (P11096).
Funding (constrained L1-norm adaptive multiple subtraction): sponsored by the National Natural Science Foundation of China (No. 40874056), the Important National Science & Technology Specific Projects (2008ZX05023-005-004), and the NCET Fund. Acknowledgements: the authors are grateful to Liu Yang and Zhu Sheng-wang for their constructive remarks on this manuscript.
Funding (neural network for L1-norm optimization): supported by the Doctoral Special Fund of the State Education Commission and the National Natural Science Foundation of China (Grant No. 59477001 and No. 59707002).
Funding (ℓ_(1)-regularized numerical differentiation): supported by the National Natural Science Foundation of China (Grant Nos. 11301052, 11301045, 11271060, 11601064 and 11671068), the Fundamental Research Funds for the Central Universities (Grant No. DUT16LK33), and the Fundamental Research of Civil Aircraft (Grant No. MJ-F-2012-04).